• hendrik@palaver.p3x.de · 5 months ago

    Their interpretation of California law currently is that it is not specifically illegal, because it doesn’t involve an image of an actual child […]

    Though, taking an actual image of a classmate and face-swapping it onto a naked body or “enhancing” it definitely involves an image of an actual child, so it should already be illegal. I think that line of argument is bullshit; it only applies to completely fabricated images. Those shouldn’t be mixed up with deepfakes of actual people, which I think is a much more pressing issue and shouldn’t be slowed down by arguing about everything at the same time. The really bad stuff could be addressed right away, and laws are already in place.

    • DarkThoughts@fedia.io · 5 months ago

      The thing with those AI-generated images is that they aren’t real. Yes, you can photoshop a real child’s face onto a naked body, but that still does not make the naked body a real naked child. It just makes it a fake, computer-drawn body with a real face on it. That’s about as real as classic face swapping or people drawing or modeling fan art (R34), regardless of the age. Hence why I’d say “sexual abuse” is very misleading. Someone generating a bunch of fake images of real people isn’t really sexually abusing anyone, just like someone fantasizing about real people being naked or doing sexual things isn’t sexually abusing anyone. Those people wouldn’t even know it happened at that point.

      What matters more is what that person does with those images. If they end up using them for things like bullying, blackmail, disinformation, etc., then I’d agree there should definitely be legal consequences. But with the current media hysteria around the “AI” topic, we’ll likely see some very draconian laws due to uninformed and ignorant public pressure.

      • hendrik@palaver.p3x.de · 5 months ago

        I disagree. They used the word “involved”, and that term means something has a part somewhere in the process. That applies if you feed it a real picture of a face, or even just use one as a template. A real image was “involved”, regardless of whether the result looks real or is real. However it gets processed or becomes part of the final thing, it was involved nonetheless.

        The issue with that is that it’s a very nasty form of bullying. Kids are afraid to tell anyone because of the nature of it; they’re being blackmailed or whatever, and it’s difficult to cope with. I think I even read that it has led to teens attempting suicide. Which isn’t nice, if true.

        But I certainly agree that there is too much hysteria, mixing everything together, spreading FUD, and exploiting child abuse to push for mass surveillance or other political agendas.

        Ultimately, I think a phrase with “involved” is a good choice. It makes clear that it’s not okay if a real child is made part of the process. We can argue about whether purely fictional drawings are alright; they seem to be legal in California and, for example, in Japan. But (ab)using real people, no matter where in the process, would definitely cross the line for me.

      • pete_the_cat@lemmy.world · 5 months ago

        I was just talking with a friend who is a software dev (I’m a Linux engineer, so I do software as part of my job, just not as my main focus), and we were commiserating about how 75–80% of the world doesn’t understand that “AI” is just regurgitating information it has collected; it’s not like Jarvis or Skynet, and it doesn’t think for itself.

        I agree that the term “sexual abuse” is definitely misleading; I think “sexual exploitation” is better. I agree with you that it’s no different from face swapping, but the difference is that it’s a lot easier for the general public to do now than it was 5 or 10 years ago. It’s also pretty fucked that a fake image of you could potentially put you in hot water years down the road, and you have zero control over it.

        While I definitely hate the “AI bubble” that has grown tremendously over the past 2–3 years, we need to figure out how to place limits on it before shit really gets out of hand in another year or two. The problem is that hardly anyone who knows anything about this stuff works in or for the government. The woman in the article who said that this needs to be regulated at every point doesn’t work in tech, of course; she works for some rights organization 🤦‍♂️