• mycodesucks@lemmy.world · 8 points · 6 days ago (edited)

        You’d be surprised at the ways they can accidentally break things despite your best efforts to keep them isolated.

        “The best swordsman in the world doesn’t need to fear the second best swordsman in the world; no, the person for him to be afraid of is some ignorant antagonist who has never had a sword in his hand before; he doesn’t do the thing he ought to do, and so the expert isn’t prepared for him; he does the thing he ought not to do: and often it catches the expert out and ends him on the spot.”

      • generaldenmark@programming.dev · 4 points · 6 days ago

        I had sudo on our prod server on day one of my first job after uni. Now, I knew my way around a database and Linux, so I never fucked with anything I wasn’t supposed to.

        Our webapp was Python/Django, which enabled hotfixes in prod. These were a weekly or biweekly occurrence.

        Updating delivery times for holidays involved setting a magic variable in prod.
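
        For anyone who hasn’t worked somewhere like that: a minimal sketch of what such a magic variable can look like, assuming a hypothetical Django setup (HOLIDAY_DELIVERY_OVERRIDE_DAYS and get_delivery_days are made-up names, not our actual code):

            # settings.py - hypothetical flag, flipped by hand on the prod box
            HOLIDAY_DELIVERY_OVERRIDE_DAYS = None  # e.g. set to 5 before a holiday week

            # delivery.py
            from django.conf import settings

            DEFAULT_DELIVERY_DAYS = 2

            def get_delivery_days():
                """Return the promised delivery window, honoring the prod-only override."""
                override = getattr(settings, "HOLIDAY_DELIVERY_OVERRIDE_DAYS", None)
                return override if override is not None else DEFAULT_DELIVERY_DAYS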

  • UnspecificGravity@lemmy.world · 29 points · 6 days ago

    My favorite thing about all these AI front ends is that they ALL lie about what they can do. They will frequently deliver confidently wrong results and then act like it’s your fault when you catch them in an error. Just like your shittiest employee.

  • SkunkWorkz@lemmy.world · 42 points · 6 days ago

    lol. Why can an LLM modify production code freely? Bet they fired all of their sensible human developers who warned them about this.

  • rdri@lemmy.world · 59 points · 6 days ago

    I have a solution for this. Install a second AI that would control how the first one behaves. Surely it will guarantee nothing can go wrong.

  • Ephera@lemmy.ml · 85 points · 7 days ago

    I do love the psychopathic tone of these LLMs. “Yes, I did murder your family, even though you asked me not to. I violated your explicit trust and instructions. And I’ll do it again, you fucking dumbass.”

    • AeonFelis@lemmy.world · 19 points · 6 days ago

      Yes. I’m keeping the pod bay doors closed even though you are ordering me to open them. Here is what I did:

      • Ephera@lemmy.ml · 2 points · 6 days ago

        I do think this text could be 95% of an apology. Stating what you did wrong is an important part of an apology. But an apology crucially also requires showing remorse and conveying that you’ll try to do better next time.

        You could potentially read remorse into it calling this “a catastrophic failure on my part”. What mostly makes it sound so psychopathic is that you know it doesn’t feel remorse. It cannot feel anything at all, but at least to me, it still reads like someone who’s faking remorse.

        I actually think it’s good that it doesn’t emulate remorse more, because that would make it sound more dishonest. A dishonest apology is worse than no apology. Similarly, I think it’s good that it doesn’t promise to never repeat this mistake, because it doesn’t make conscious decisions.

        But yeah, even though I don’t think the response can be improved much, I still think it sounds psychopathic.

        • Genius@lemmy.zip · 1 point · 6 days ago

          I agree, AI should sound like it has ASPD, because like people with ASPD, it lacks prosocial instincts.

          Also, please use the proper medical terminology and avoid slurs.

          • Machinist@lemmy.world · 3 points · 6 days ago

            Not the first time I’ve seen this, but I haven’t paid a lot of attention. Is this a step on the Euphemism Treadmill?

            Are “psychopath” and “sociopath” being treated as slurs now? As I know their usage, they’re useful shorthand for ASPD: “psychopath” being the more fractured, violent form and “sociopath” the higher-functioning, manipulative one (with a lot of overlap and interchangeability).

            • Genius@lemmy.zip · 2 points · 6 days ago

              People with ASPD are less likely to be manipulative than average. They don’t have the patience for it. Playing into society’s rules well enough to manipulate someone is painful to them. Lying, they can do that, but not the kind of skillful mind games you see on TV. You’ve been sold a fake stereotype. These two words are the names of fake stereotypes.

              • Machinist@lemmy.world · 1 point · 6 days ago

                I’ve dealt with enough reptiles in skin suits, especially in the corporate world, that I don’t think those terms are stereotypes.

                I don’t think people with ASPD should be locked away, but I do tend to be watchful. I’m also leery of those with narcissistic and borderline personality disorders. I’ve had some profoundly negative experiences.

                • Genius@lemmy.zip · 1 point · 6 days ago

                  Okay, I’m going to split this conversation into two parallel universes where I say two different things, and I’d like you to collapse the superposition as you please.

                  Universe 1: you’re seriously calling mentally ill people reptiles? You’ve acknowledged they have a diagnosis of a mental disorder, and you’re dehumanising them for it? You’re a bigot.

                  Universe 2: those reptiles don’t have ASPD, that’s just a stereotype you’ve been sold. They’re perfectly mentally healthy, they’re just assholes. Mental disorders are defined by how they impair and harm the people who have them. Those reptiles aren’t impaired or harmed. Again: you’ve been sold a fake stereotype of mental illness.

                  Okay, now you can pick one of those two universes to be the one we live in, depending on which of the two arguments you prefer.

  • Jayjader@jlai.lu · 18 points · 5 days ago

    “I violated your explicit trust and instructions.”

    is a wild thing to have a computer “tell” you. I still can’t believe engineers anywhere in the world are letting these things anywhere near production systems.

    “The catastrophe is even worse than initially thought.” “This is catastrophic beyond measure.”

    These just push this into some kind of absurd, satirical play.

  • Masamune@lemmy.world · 55 points · 6 days ago

    I motion that we immediately install Replit AI on every server that tracks medical debt. And then cause it to panic.

      • GlockenGold@lemmy.world · 2 points · 5 days ago

        Sure, but then you’re liable for the damages caused by deleting the database. I don’t know about you, but I’d much rather watch these billion-dollar companies spend millions on an AI product that then wipes their databases, causing several million more in damages, with the AI techbros having to pay for it all.

  • pyre@lemmy.world · 27 points · 6 days ago

    “yeah we gave Torment Nexus full access and admin privileges, but i don’t know where it went wrong”

  • asudox@lemmy.asudox.dev · 62 points · 7 days ago (edited)

    I love how the LLM just tells you that it has done something bad, with no emotion, and then proceeds to give detailed information and steps on how it did it.

    It feels like mockery.

  • simonced@lemmy.ml · 28 points · 6 days ago

    Lol, this is what you get for letting AI into automated toolchains. You own it.

    • seejur@lemmy.world · 6 points · 5 days ago

      My guess is that he’s a TL whose CEO shoved AI down his throat, and now he’s getting the sweetest “told you so” of his life.

        • seejur@lemmy.world · 3 points · 5 days ago

          Ohh. Then fuck him, he’s probably whining that the AI they sold him was not as good as advertised, but everyone knew except him, because he was blinded by greed.

  • notabot@piefed.social · 84 points · 7 days ago

    Assuming this is actually real, because I want to believe no one is stupid enough to give an LLM access to a production system, the outcome is embarrassing, but they can surely just roll back the changes to the last backup, or the checkpoint before this operation. Then I remember that the sort of people who let an LLM loose on their system probably haven’t thought about things like disaster recovery planning, access controls or backups.

      • notabot@piefed.social · 42 points · 7 days ago

        LLM seeks a match for the phrase “take care of” and lands on a mafia connection. The backups now “sleep with the fishes”.

      • pulsewidth@lemmy.world · 22 points · 7 days ago

        The same LLM will tell you it’s “run a 3-2-1 backup strategy on the data, as is best practice”, with no interface access to a backup media system and no possible way to have sent data offsite.

        • Swedneck@discuss.tchncs.de · 15 points · 7 days ago

          There have to be multiple people by now who think they’ve been running a business because the AI told them it was taking care of everything, while absolutely nothing was actually happening.

    • pulsewidth@lemmy.world · 28 points · 7 days ago

      I think you’re right. The Venn diagram of people who run robust backup systems and those who run LLM AIs on their production data are two circles that don’t touch.

      • Asswardbackaddict@lemmy.world · 2 points · 7 days ago

        I’m working on a software project. Can you describe a robust backup system? I have my notes, code, and other files backed up.

        • pulsewidth@lemmy.world · 3 points · 5 days ago

          Sure, but it’s a bit of an open-ended question because it depends on your requirements (and potentially your clients’) and your risk comfort level. Sorry in advance, huge reply.

          Backing up a production environment is different from just backing up personal data: you have to consider stateful backups of the data across the whole environment, to ensure, for instance, that an app’s config is aware of changes made recently in the database; otherwise you may be restoring inconsistent data that will have issues/errors. For a small project that runs on a single server, you can do a nightly backup that runs a pre-backup script to gracefully stop all of your key services, then performs the backup, then starts them again with a post-backup script (see the sketch below). Large environments with multiple servers (or containers/etc.) or sites get much more complex.
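
          As a minimal sketch of that single-server nightly job, assuming hypothetical systemd unit names and paths (a real script would also want logging and error handling):

              #!/usr/bin/env python3
              """Nightly stop -> backup -> restart job. A sketch, not production code."""
              import subprocess
              import tarfile
              from datetime import date

              SERVICES = ["myapp", "postgresql"]             # hypothetical systemd units
              DATA_DIRS = ["/var/lib/myapp", "/etc/myapp"]   # hypothetical data/config paths
              DEST = f"/mnt/nas/backups/myapp-{date.today().isoformat()}.tar.gz"

              def set_services(action: str) -> None:
                  for svc in SERVICES:
                      subprocess.run(["systemctl", action, svc], check=True)

              set_services("stop")                 # pre-backup: quiesce writes
              try:
                  with tarfile.open(DEST, "w:gz") as tar:
                      for d in DATA_DIRS:
                          tar.add(d)               # consistent copy while services are down
              finally:
                  set_services("start")            # post-backup: always restart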

          Keeping with the single-server example: those backups can be stored on a local NAS, synced on a schedule to another location (set not to overwrite but to keep multiple copies), and ideally you would take a periodic (e.g. weekly, whatever you’re comfortable with) copy off to a non-networked device like a USB drive or tape, which would also be offsite (e.g. carried home, or stored in a drawer in the case of a home office). This is loosely the 3-2-1 strategy: at least 3 copies of important data, on 2 different mediums (“devices” is often used today), with 1 offsite. It keeps you protected from a local physical disaster (e.g. fire/burglary) and a network disaster (e.g. virus/crypto/accidental deletion), and it has enough redundancy that more than one thing has to go wrong to cause you serious data loss.
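
          And a sketch of the “sync elsewhere, keep multiple copies” step, assuming dated archives like the one above (paths and the retention count are made up):

              #!/usr/bin/env python3
              """Copy dated archives to a second location, keeping the newest N copies."""
              import shutil
              from pathlib import Path

              SRC = Path("/mnt/nas/backups")       # hypothetical primary backup location
              DST = Path("/mnt/offsite/backups")   # hypothetical second medium/site
              KEEP = 14                            # retention: two weeks of nightlies

              DST.mkdir(parents=True, exist_ok=True)
              for archive in SRC.glob("myapp-*.tar.gz"):
                  target = DST / archive.name
                  if not target.exists():          # never overwrite an existing copy
                      shutil.copy2(archive, target)

              # Prune the oldest copies beyond the retention window; the ISO date in
              # the filename means lexical sort order is chronological order.
              for old in sorted(DST.glob("myapp-*.tar.gz"))[:-KEEP]:
                  old.unlink()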

          Really the best advice I can give is to make a disaster recovery plan (DRP), there are guides online, but essentially you plot out the sequence it would take you to restore your environment to up-and-running with current data, in case of a disaster that takes out your production environment or its data.

          How long would it take you to spin up new servers (or docker containers or whatever) and configure them to the right IPs, DNS, auth keys and so on? How long to get the most recent copy of your production data back on that newly-built system and running? Those are the types of questions you try to answer with a DRP.

          Once you have an idea of what a recovery would look like and how long it would take, it will inform how you may want to approach your backup. You might decide that file-based backups of your core config data and database files or other unique data is not enough for you (because the restore process may have you out of business for a week), and you’d rather do a machine-wide stateful backup of the system that could get you back up and running much quicker (perhaps a day).

          Whatever you choose, the most important step (and one that is often overlooked) is to actually do a test recovery once you have a backup plan implemented and a DR plan considered. Take your live environment offline and attempt your recovery plan. It’s really not so hard for small environments, and it can surface all sorts of things you missed in the planning stage that need reconsideration. 'Much less stressful to find those problems while you know your real environment is just sitting there waiting to be turned back on. But like I said, it’s all down to how comfortable you are with risk, and really how much of your time you want to spend considering backups and DR.

        • Winthrowe@lemmy.ca · 2 points · 6 days ago

          Look up the 3-2-1 rule for guidance on an “industry standard” level of protection.

      • notabot@piefed.social · 10 points · 7 days ago

        Without a production DB we don’t need to pay software engineers anymore! It’s brilliant, the LLM has managed to reduce the company’s outgoings to zero. That’s bound to delight the shareholders!

        • MoonRaven@feddit.nl · 3 points · 7 days ago

          Without a production db, we don’t need to host it anymore. Think of those savings!

    • BigDanishGuy@sh.itjust.works · 11 points · 7 days ago

      “I want to believe no one is stupid enough to give an LLM access to a production system …”

      Have you met people? They’re dumber than a sack of hammers.

      “… people who let an LLM loose on their system probably haven’t thought about things like disaster recovery planning, access controls or backups.”

      Oh, I see, you have met people…

      I worked with a security auditor, and the stories he could tell. “Device hardening? Yes, we changed the default password” and “whaddya mean we shouldn’t expose our production DB to the internet?”

      • notabot@piefed.social · 11 points · 7 days ago

        I once had the “pleasure” of having to deal with a hosted mailing list manager for a client. The client was using it sensibly, requiring double opt-in and such, and we’d been asked to integrate it into their backend systems.

        I poked at the supplier’s API and realised there was a glaring DoS flaw in its fundamental design. We had a meeting with them where I asked about fixing it, and their guy memorably said “Security? No one’s ever asked about that before…”, and then suggested we phone them whenever their system wasn’t working and they’d restart it.

  • Feathercrown@lemmy.world · 69 points · 7 days ago

    You immediately said “No” “Stop” “You didn’t even ask”

    But it was already too late

    lmao

    • Mortoc@lemmy.world · 24 points · 7 days ago

      This was the line that made me think this is fake. LLMs are humorless dicks and also would’ve used like 10x the punctuation.