A while ago we had a post with a comic that was a bit controversial because it was generated by genAI, but we did not explicitly have a rule against it.
We wanted to discuss this and ask the community, but this had apparently already been a topic on feddit.uk for a while, and they have made an instance rule about it (announced in this post).
Since the buyeuropean community is hosted on feddit.uk, the feddit.uk rules apply to this community, and therefore I wanted to announce this new rule so it doesn’t come as a surprise.
Copy of the post body text from the announcement of this rule on feddit.uk:
So no:
- AI-generated memes or images
- AI-generated answers to questions
edit: this applies to feddit.uk communities; we won’t block AI art communities on other instances or sanction our users for posting on them.
What if the text on an image is factual but the accompanying stock photo is AI-generated? What’s the harm, and who cares?
If you use an AI-generated header for your article, then I’m going to assume the text has been AI-generated too, and I’m not going to bother reading something that no one could be bothered to write.
People have tried so damn hard to be objective, to take their own subjectivity out of their writing.
But that’s impossible.
AI can do just that. It can analyse far more data than you can even imagine.
It’s the future.
AI is never objective. It’s always influenced by its training set and its parameters. What data is it going to analyse? Where does that data come from? And even if it were objective: choosing to write about one thing instead of another is also a bias.
Humans are also never objective. Which is good. I’d rather know the biases of the author instead of some fake objectivity.
Funnily enough, the best explanation in this thread was just me copy-pasting it from Le Chat (Mistral).
It simply gave a good explanation of how it works and why it can’t be objective.
It’s been removed, though.
Objectivity is the wrong word then. I seek to know multiple angles all at once.
Nobody in this thread is pro-AI, but that’s insane, since it’s one of the fastest-growing markets. So there’s a lot of information lacking here.
“AI” doesn’t have a mind of its own to formulate an “objective” opinion; it just regurgitates whatever it’s being fed, and what it’s being fed is our biases.
It objectively states a summary of all of our combined biases, which is valuable.
What else are you going to do? Humans are always going to search for information that supports their own bias.
AI forces them to read through bullet points that go against their own bias. It lowers the effect of polarisation if this is done at a large scale.
nope, it doesn’t have a way of telling what’s objective and what isn’t.
nope
https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market#%3A~%3Atext=The+global+artificial+intelligence+market+size+was+estimated+at+USD%2CUSD+1%2C811.75+billion+by+2030.
It’s too big to stop anyways
oh cool so let’s just give up then
what an irrelevant thing to say.
Removed by mod