A while ago we had a post with a comic that was a bit controversial because it was generated by genAI, but we did not explicitly have a rule against it.
We wanted to discuss this and ask the community, but this had apparently already been a topic on feddit.uk for a while, and they have made an instance rule about it (announced in this post).
Since the buyeuropean community is hosted on feddit.uk, the feddit.uk rules apply to this community, and therefore I wanted to announce this new rule so it doesn’t come as a surprise.
Copy of the post body text from the announcement of this rule on feddit.uk:
So no:
- AI-generated memes or images
- AI-generated answers to questions
edit: this applies to feddit.uk communities; we won’t block AI art communities on other instances or sanction our users for posting on them.
because LLMs can’t differentiate between fact and fiction, which is quite important when trying to determine the origin of a product? and because AI-generated content is, most of the time, low-effort garbage?
So basically we should stop funding Le Chat by Mistral AI and miss out on a market, just like we did with smartphones?
You think that’s a good idea buddy? Not supporting our own products?
oh no, not missing out on the technology that hallucinates false information and makes fake people with six fingers, at the meagre cost of half the Amazon rainforest per prompt! the horror!
Are you too young to have lived through past innovations? Did you not use the internet in 2002? YouTube was laughably bad when it started. Microsoft was just a basic company back then.
You don’t know that AI will be improved upon? Are you this ignorant?
The Belgian government has already made it law to use Peppol invoices. That’s so that AI can automate bookkeeping and governments will have all the information they need to tax correctly.
Damn fools on this platform
What if the text on an image is factual but the accompanying stock photo is AI-generated? what’s the harm, and/or who cares?
if you use an AI-generated header for your article, then I’m going to assume the text has been AI-generated, too. and I’m not going to bother reading something that no one could be bothered to write.
People have tried so damn hard to be objective. To take their own subjectivity out of their writing.
But that’s impossible.
AI can do just that. It can analyse far more data than you can even imagine.
It’s the future.
AI is never objective. It’s always influenced by its training set and its parameters. What data is it going to analyse? Where does that data come from? And even if it were: choosing to write about one thing instead of another is also bias.
Humans are also never objective. Which is good. I’d rather know the biases of the author instead of some fake objectivity.
Funnily, the best explanation in this thread was just me copy-pasting it from Le Chat by Mistral.
It simply gave a good explanation of how it works, and why it can’t be objective.
It’s removed though.
Objectivity is the wrong word then. I seek to know multiple angles all at once.
Nobody in this thread is pro-AI, but that’s insane, as it’s one of the fastest-growing markets. So there’s a lot of information lacking here.
“AI” doesn’t have a mind of its own to formulate an “objective” opinion; it just regurgitates whatever it’s being fed, and what it’s being fed is our biases.
It objectively states a summary of all of our combined biases. Which is valuable.
What else are you going to do? Humans are always going to search for information that supports their own bias.
AI forces them to read through bullet points that go against their own bias. It lowers the effect of polarisation if this is done at large scale.
nope, it doesn’t have a way of telling what’s objective and what isn’t.
nope
https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market#%3A~%3Atext=The+global+artificial+intelligence+market+size+was+estimated+at+USD%2CUSD+1%2C811.75+billion+by+2030.
It’s too big to stop anyways
oh cool so let’s just give up then
what an irrelevant thing to say.
In that case the rules should be a) no wrong information and b) no low effort garbage, if you ask me.
so “no AI-generated content”, but with more words
This reply was first written by me, using the spellcheck and auto-complete features of my keyboard, and then run through an LLM to optimize it for readability, with explicit instructions not to change the tone. It does not contain any incorrect information and is obviously not low effort; however, per the rules, this comment should not be allowed.
Even then, AI models (be it text or image) are generally unethically trained (i.e. without consent of the authors/artists of the training material) and have a significant energy consumption, even for single prompts.
And I do have to ask: to what degree was running your comment through an LLM actually beneficial? You say it improved readability, but how unreadable was your original comment, really, that it required fixing via an external tool?
I get your sarcasm, but since I believe your comment holds a serious conviction, I want to ask: have you never seen law books? Clarity is good, but not at all costs.