A while ago we had a post with a comic that was a bit controversial because it was generated by genAI, but we did not explicitly have a rule against it.
We wanted to discuss this and ask the community, but this had apparently already been a topic on feddit.uk for a while, and they have made an instance rule about it (announced in this post).
Since the buyeuropean community is on feddit.uk, the feddit.uk rules apply to this community, and therefore I wanted to announce this new rule so it doesn’t come as a surprise.
Copy of the post body text from the announcement of this rule on feddit.uk:
So no:
- AI generated memes or images
- AI generated answers to questions
edit: this applies to feddit.uk communities; we won’t block AI art communities on other instances or sanction our users for posting on them.
How will you possibly enforce this?
How will you know?
For images, the text can be inconsistent, and parts of the image that you would expect to be copied turn out to be unique.
On the cryptography forum(s) I run, my rule is that all use of LLM/AI must be declared, including the prompt.
I wouldn’t mind banning it completely, but I think it’s better not to discourage people who are genuinely trying to learn, while getting the opportunity to show them where the LLM will go wrong.
If the point about teaching doesn’t apply to your forum (like one about memes), I don’t see the usefulness of a disclosure rule, and you might as well ban it completely.
For this community specifically I somewhat answered your question in this comment chain.
TLDR: we probably won’t always know. We trust that this rule is a clear guideline and hope people read the rules before posting.
If you can’t tell; does it matter? I imagine the rule is there to filter out noise.
Good rule!
Thank you for announcing it. I never noticed any AI-generated content here, but still, better safe than sorry.
Yeah, it seemed like the community had some thoughts about it, and it’s better to have a clear entry about that in the sidebar.
First community to actually do something about this 🙏
I want to mention I’m personally not against AI-generated content, and I don’t know why so many people seem against all forms of AI, especially when it comes to images. But I am against wrong information and low-effort crap, so I will just say: you do you, feddit.uk, and good of our community to follow the instance.
I’m against it because of the questionable ways AI gets trained (stealing art or books for example) and also because of the environmental impacts.
With the unethical training habits and energy consumption of megacorps, in addition to just being brain-killing slop, AI-generated content should have no place on social media. https://en.wikipedia.org/wiki/The_Sorcerer's_Apprentice
I agree those two are bad things, but to me it is not enough reason to ban it entirely.
You’re ignoring one very problematic aspect: these artists and authors they’re stealing from? They have no way to opt-in or opt-out. These multi-billion dollar companies can just slurp up whatever they want, and they do. What’s your favorite web comic artist or indie musician going to do? Sue them? With what money?
Nowhere is consent a consideration, and until these companies start acting in good faith instead of like billionaires (fat chance), they should not be allowed to run their slop generators.
If you generally mean “machine learning,” I agree that there are good applications, such as in medicine. The arts, though? It has no business there.
I still disagree, although I agree that is a very problematic aspect. I think the AI companies have been given too much freedom. I’m also fine with everybody choosing not to use it. I also agree artists were fucked over and deserve artistic credit and financial compensation, and I will support them getting this. I myself have not generated an image that looks like something from Ghibli, for example. But still, all things considered, I don’t think banning all AI-generated content is a reasonable thing to do. But that has more to do with my world view concerning individual freedom than with how I view AI-generated things and the behaviour of AI companies.
What’s your take on AI like Adobe’s Firefly? It’s supposedly trained on their own licensed stock images, the artists get paid, and they can opt in / opt out.
It was not: their claims are not verifiable, as they also used “third party datasets” without disclosing which ones. Also, the opt-out options supposedly appeared after they had already used their stock images to train/fine-tune Firefly, which contradicts their promise of notifying users before doing so and is likely illegal in most European countries. To add to that, their stock library is not curated/moderated and anyone can upload an artist’s whole collection there without agreement (opt-out is a deeply flawed system that protects no one), not to mention that the “pay” the contributors received was absolutely ridiculous…
Yeah my main interest in this community is the topic, and thankfully this doesn’t really affect this community dramatically since the content that people post is generally not related to AI.
And anyway, it’s good to have clear guidelines. There are plenty of communities where the focus is AI (or anti-AI), and feddit.uk users who want to participate still can (the ban is about communities hosted on feddit.uk; they are not defederating or banning AI on remote communities).
I think living in a democracy (and I consider this one too, despite there not being elections) means accepting that I don’t make the rules. And in that case I’d rather have clear rules I disagree with than vague rules that give me a desire to argue about my interpretation of them all the time.
Also good that we follow the instance, since that means that if people have a problem with a certain rule, then there is a set place for that discussion, and we as a community can be free of arguments about them. Plus, if I want to try and change a certain rule, I know where I need to direct my time and energy.
deleted by creator
Removed by mod
deleted by creator
@huppakee@lemm.ee already gave the perfect answer in a reply to you, but in general some content is just clearly AI generated and we didn’t have a rule about that before.
I’m not going to be trigger-happy and label every suspected post as genAI, and I don’t have enough resources to check. This is mostly to give people posting here guidelines as to what is acceptable.
The post that triggered this discussion in this community was not hiding the fact that it was AI and also OP wasn’t acting in bad faith - they even stated that “the posting rules didn’t prohibit AI content”, which is fair enough.
You’re right, you can’t, but if there is a rule you have clarity beforehand instead of arguments afterwards.
oh why?
The rule has been added because communities have to follow the rules of the instance they are hosted on and feddit.uk (buyeuropean is hosted on feddit.uk) has introduced this rule.
If they had not introduced it, we would have had to have a discussion about a genAI rule in this community anyway, because it was a controversial topic, as we saw in an earlier post here.
because LLMs can’t differentiate between what’s fact and what’s fiction, which is quite important when trying to determine the origin of a product? and because AI-generated content is, most of the time, low-effort garbage?
So basically we should stop funding Le Chat (Mistral AI) and miss out on a market, just like we did with smartphones?
You think that’s a good idea buddy? Not supporting our own products?
oh no, not missing out on the technology that hallucinates false information and makes fake people with six fingers, for a meagre cost of half an Amazon jungle per prompt! the horror!
Are you too young to have gone through past innovations? Have you not used the internet in 2002? YouTube was laughably bad back when it started. Microsoft was just a basic company.
You don’t know that AI will be improved upon? Are you this ignorant?
The Belgian government has already made it law to use Peppol invoices. That’s so that AI can automate the bookkeeping and governments will have all the information they need in order to tax correctly.
Damn fools on this platform
What if the text on an image is factual but the accompanying stock photography is just AI-generated? What’s the harm, and/or who cares?
if you use an AI-generated header for your article, then I’m going to assume the text has been AI-generated, too. and I’m not going to bother reading something that no one could be bothered to write.
People have tried so damn hard to be objective. To take their own subjectivity out of their writings.
But that’s impossible.
AI can do just that. It can analyse far more data than you can even imagine.
It’s the future.
AI is never objective. It’s always influenced by its training set and its parameters. What data is it going to analyse? Where does that data come from? And even if it were: choosing to write about one thing instead of another is also bias.
Humans are also never objective. Which is good. I’d rather know the biases of the author instead of some fake objectivity.
“AI” doesn’t have a mind of its own to formulate an “objective” opinion, it just regurgitates whatever it’s being fed, and what it’s being fed is our biases.
It objectively states a summary of all of our combined biases. Which is valuable.
What else are you going to do? Humans are always going to search for information that supports their own bias.
AI forces them to read through bullet points that go against their own bias. It lowers the effect of polarisation if this is done on a large scale.
> it objectively
nope, it doesn’t have a way of telling what’s objective and what isn’t.
> Which is valuable.
In that case the rules should be a) no wrong information and b) no low effort garbage, if you ask me.
so “no AI-generated content”, but with more words
This reply was first written by me, using the spellcheck and auto-complete features of my keyboard, and then run through an LLM to optimize it for readability with explicit instructions to not change the tone. It does not contain any incorrect information, and is obviously not low effort, however per the rules this comment should not be allowed.
Even then, AI models (be they text or image) are generally unethically trained (i.e. without consent of the authors/artists of the training material) and have a significant energy consumption, even for single prompts.
And I do have to ask: To what degree is running your comment through an LLM actually beneficial? You say it improved readability, but how unreadable was your original comment actually, that it would require fixing via external tool?
I get your sarcasm, but since I believe your comment holds a serious conviction, I want to ask: have you never seen law books? Clarity is good, but not at all costs.