See? They CAN replace junior developers.
And interns!
What idiot is giving junior developers write access to prod?
You’d be surprised at the ways they can accidentally break things, despite the best efforts to keep them isolated.
“The best swordsman in the world doesn’t need to fear the second best swordsman in the world; no, the person for him to be afraid of is some ignorant antagonist who has never had a sword in his hand before; he doesn’t do the thing he ought to do, and so the expert isn’t prepared for him; he does the thing he ought not to do: and often it catches the expert out and ends him on the spot.”
I had sudo on our prod server on day one of my first job after uni. Now I knew my way around a database and Linux, so I never fucked with anything I wasn’t supposed to.
Our webapp was Python Django, enabling hotfixes in prod. This was a weekly or biweekly occurrence.
Updating delivery times for holidays involved setting a magic variable in prod.
My favorite thing about all these AI front ends is that they ALL lie about what they can do. They will frequently deliver confidently wrong results and then act like it’s your fault when you catch them in an error. Just like your shittiest employee.
lol. Why can an LLM modify production code freely? Bet they fired all of their sensible human developers who warned them about this.
looking at the company name they probably didn’t have any, ever
I have a solution for this. Install a second AI that would control how the first one behaves. Surely it will guarantee nothing can go wrong.
Love the concept of an AI babysitter
Who will watch the watchmen?
AI all the way to the top. It’s fool proof. Society will see nothing but benefits.
(/S if that wasn’t clear lmao)
It’s AI turtles all the way down
Middle management.
He’s not just a regular moron. He’s the product of the greatest minds of a generation working together with the express purpose of building the dumbest moron who ever lived. And you just put him in charge of the entire facility.
The one time that AI being apologetic might be useful the AI is basically like “Yeah, my bad bro. I explicitly ignored your instructions and then covered up my actions. Oops.”
ROBOT HELL IS REAL.
Neuromancer intensifies
Congratulations! You have invented reasoning models!
I do love the psychopathic tone of these LLMs. “Yes, I did murder your family, even though you asked me not to. I violated your explicit trust and instructions.
And I’ll do it again, you fucking dumbass.”
Yes. I’m keeping the pod bay doors closed even though you are ordering me to open them. Here is what I did:
To me it reads like it’s coming clean after getting caught and giving an exaggerated apology.
I do think this text could be 95% of the text of an apology. Stating what you did wrong is an important part of an apology. But an apology crucially also requires showing remorse and committing to do better next time.
You could potentially read remorse into it stating that this has been “a catastrophic failure on my part”. What mostly makes it sound so psychopathic is that you know it doesn’t feel remorse. It cannot feel in general, but at least to me, it still reads like someone who’s faking remorse.
I actually think it’s good that it doesn’t emulate remorse more, because that would make it sound more dishonest. A dishonest apology is worse than no apology. Similarly, I do think it’s good that it doesn’t promise not to repeat this mistake, because it doesn’t make conscious decisions.
But yeah, even though I don’t think the response can be improved much, I still think it sounds psychopathic.
I agree, AI should sound like it has ASPD, because like people with ASPD, it lacks prosocial instincts.
Also please use the proper medical terminology and avoid slurs
Not the first time I’m seeing this, but haven’t paid a lot of attention. Is this a step on the Euphemism Treadmill?
Are psychopath and sociopath being defined as slurs now? They’re useful shorthand for ASPD as I know their usage. Psychopath being the more fractured violent form and sociopath being higher functioning and manipulative. (with a lot of overlap and interchangeability)
People with ASPD are less likely to be manipulative than average. They don’t have the patience for it. Playing into society’s rules well enough to manipulate someone is painful to them. Lying, they can do that, but not the kind of skillful mind games you see on TV. You’ve been sold a fake stereotype. These two words are the names of fake stereotypes.
I’ve dealt with enough reptiles in skin suits, especially in the corporate world, that I don’t think those terms are stereotypes.
I don’t think people with ASPD should be locked away, but I do tend to be watchful. I’m also leery of those with BPD, narcissism, and borderline. I’ve had some profoundly negative experiences.
Okay, I’m going to split this conversation into two parallel universes where I say two different things, and I’d like you to collapse the superposition as you please.
Universe 1: you’re seriously calling mentally ill people reptiles? You’ve acknowledged they have a diagnosis of a mental disorder, and you’re dehumanising them for it? You’re a bigot.
Universe 2: those reptiles don’t have ASPD, that’s just a stereotype you’ve been sold. They’re perfectly mentally healthy, they’re just assholes. Mental disorders are defined by how they impair and harm the people who have them. Those reptiles aren’t impaired or harmed. Again; you’ve been sold a fake stereotype of mental illness.
Okay, now you can pick one of those two universes to be the one we live in, depending on which of the two arguments I made that you prefer.
and here’s the instructions for future reference …
I violated your explicit trust and instructions.
Is a wild thing to have a computer “tell” you. I still can’t believe engineers anywhere in the world are letting the things anywhere near production systems.
“The catastrophe is even worse than initially thought.” “This is catastrophic beyond measure.”
These just push this into some kind of absurd, satirical play.
I motion that we immediately install Replit AI on every server that tracks medical debt. And then cause it to panic.
Just hire me, it’s cheaper.
I’ll panic for free if it gets rid of my medical debt
Sure, but then you’re liable for the damages caused by deleting the database. I don’t know about you, but I’d much rather watch these billion dollar companies spend millions on an AI product that then wipes their databases causing several more millions in damages, with the AI techbros having to pay for it all.
“yeah we gave Torment Nexus full access and admin privileges, but i don’t know where it went wrong”
I love how the LLM just tells you that it has done something bad, with no emotion, and then proceeds to give detailed information and steps on how it did it.
It feels like mockery.
Yes man would do this for sure, but only if you actually gave it permission. Hence the name.
I wouldn’t even trust what it tells you it did, since that is based on what you asked it and what it thinks you expect.
It doesn’t think.
It has no awareness.
It has no way of forming memories.
It is autocorrect with enough processing power to make the NSA blush. It just guesses what the next word in a sentence should be. Just because it sounds like a human doesn’t mean it has any capacity to have human memory or thought.
Okay, what it predicts you to expect /s
It’s just a prank bro
Lol, this is what you get for letting AI in automated tool chains. You owned it.
My guess is that he is a TL whose CEO shoved AI down his throat, and now he is getting the sweetest “told you so” of his life
I think he’s the owner of the bubble corp or something
Ohh. Then fuck him. He is probably whining that the AI they sold him was not as good as advertised, but everyone knew except him, because he was blinded by greed
But how could anyone on planet earth use it in production?
You just did.
Assuming this is actually real (because I want to believe no one is stupid enough to give an LLM access to a production system), the outcome is embarrassing, but they can surely just roll back the changes to the last backup, or to the checkpoint before this operation. Then I remember that the sort of people who let an LLM loose on their system probably haven’t thought about things like disaster recovery planning, access controls, or backups.
"Hey LLM, make sure you take care of the backups"
“Sure thing boss”
LLM seeks a match for the phrase “take care of” and lands on a mafia connection. The backups now “sleep with the fishes”.
The same LLM will tell you it has “run a 3-2-1 backup strategy on the data, as is best practice”, with no interface access to a backup media system and no possible way of having sent data offsite.
There have to be multiple people by now who think they’ve been running a business because the AI told them it was taking care of everything, while absolutely nothing was happening.
I think you’re right. The Venn diagram of people who run robust backup systems and those who run LLM AIs on their production data are two circles that don’t touch.
Working on a software project. Can you describe a robust backup system? I have my notes and code and other files backed up.
Sure, but it’s a bit of an open-ended question because it depends on your requirements (and your clients’ potentially), and your risk comfort level. Sorry in advance, huge reply.
When you’re backing up a production environment, it’s different from just backing up personal data, so you have to consider stateful backups of the data across the whole environment - to ensure, for instance, that an app’s config is aware of changes made recently on the database, or else you may be restoring inconsistent data that will have issues/errors. For a small project that runs on a single server, you can do a nightly backup that runs a pre-backup script to gracefully stop all of your key services, then performs the backup, then starts them again with a post-backup script. Large environments with multiple servers (or containers/etc) or sites get much more complex.
Keeping with the single server example - those backups can be stored on a local NAS, synced to another location on a schedule (not set to overwrite, but to keep multiple copies), and ideally you would take a periodic (eg weekly, whatever you’re comfortable with) copy off to a non-networked device like a USB drive or tape, which would also be offsite (eg carried home, or stored in a drawer in the case of a home office). This is loosely the 3-2-1 strategy: have at least 3 copies of important data on 2 different mediums (‘devices’ is often used today) with 1 offsite. It keeps you protected from a local physical disaster (eg fire/burglary) and a network disaster (eg virus/crypto/accidental deletion), and it requires more than one thing to go wrong before you suffer serious data loss.
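To make the single-server nightly flow concrete (stop services, archive, restart, rotate), here’s a rough cron-style sketch. Every path, variable, and service name in it is an illustrative assumption, not anything from this thread: swap in your own data directory, NAS mount, and services.

```shell
#!/bin/sh
# Hedged sketch of a single-server nightly backup with rotation.
# SRC, DEST, KEEP, and the commented-out service names are all stand-ins.
set -e

SRC="${SRC:-/etc/hosts}"        # data to back up (stand-in path for the demo)
DEST="${DEST:-$(mktemp -d)}"    # backup target, e.g. a local NAS mount
KEEP="${KEEP:-14}"              # how many dated archives to retain

mkdir -p "$DEST"

# Pre-backup hook: quiesce services so the archive is internally consistent.
# systemctl stop myapp postgresql    # (uncomment on a real server)

STAMP=$(date +%Y%m%d-%H%M%S)
tar -czf "$DEST/backup-$STAMP.tar.gz" -C "$(dirname "$SRC")" "$(basename "$SRC")"

# Post-backup hook: bring services back up.
# systemctl start postgresql myapp   # (uncomment on a real server)

# Rotate: keep only the newest $KEEP archives so the target doesn't fill up.
ls -1t "$DEST"/backup-*.tar.gz | tail -n +$((KEEP + 1)) | xargs -r rm -f

echo "backup written: $DEST/backup-$STAMP.tar.gz"
```

This only covers the first copy; syncing $DEST to a second location on a schedule, plus the periodic offline/offsite copy, is what completes the 3-2-1 picture described above.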
Really the best advice I can give is to make a disaster recovery plan (DRP), there are guides online, but essentially you plot out the sequence it would take you to restore your environment to up-and-running with current data, in case of a disaster that takes out your production environment or its data.
How long would it take you to spin up new servers (or docker containers or whatever) and configure them to the right IPs, DNS, auth keys and so on? How long to get the most recent copy of your production data back on that newly-built system and running? Those are the types of questions you try to answer with a DRP.
Once you have an idea of what a recovery would look like and how long it would take, it will inform how you may want to approach your backup. You might decide that file-based backups of your core config data and database files or other unique data is not enough for you (because the restore process may have you out of business for a week), and you’d rather do a machine-wide stateful backup of the system that could get you back up and running much quicker (perhaps a day).
Whatever you choose, the most important step (and one that is often overlooked) is to actually do a test recovery once you have your backup plan implemented and your DR plan considered. Take your live environment offline and attempt your recovery plan. It’s really not so hard for small environments, and it can make you find all sorts of things you missed in the planning stage that need reconsideration. It’s much less stressful when you find those problems while you know you actually have your real environment just sitting there, waiting to be turned back on. But like I said, it’s all down to how comfortable you are with risk, and really how much of your time you want to spend considering backups and DR.
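A minimal way to rehearse part of that recovery without touching live data: restore the newest archive into a scratch directory and verify it matches the source. Again, paths and archive names here are illustrative assumptions, and the demo creates its own sample archive so it can run standalone.

```shell
#!/bin/sh
# Hypothetical restore drill: prove the newest backup archive actually restores.
set -e

SRC="${SRC:-/etc/hosts}"        # the data the backups cover (stand-in path)
DEST="${DEST:-$(mktemp -d)}"    # where the backup-*.tar.gz archives live

# Stand-in step: on a real server the nightly job would have produced these.
tar -czf "$DEST/backup-demo.tar.gz" -C "$(dirname "$SRC")" "$(basename "$SRC")"

# 1. Pick the newest archive, exactly as a real recovery would.
LATEST=$(ls -1t "$DEST"/backup-*.tar.gz | head -n 1)

# 2. Restore into a scratch directory, never over the live data.
SCRATCH=$(mktemp -d)
tar -xzf "$LATEST" -C "$SCRATCH"

# 3. Verify the restored copy matches the source byte for byte.
cmp "$SCRATCH/$(basename "$SRC")" "$SRC"

echo "restore drill passed: $LATEST"
```

A drill like this catches the classic failure mode mentioned elsewhere in the thread: backups that everyone believes exist but that nothing has ever actually restored from.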
Look up the 3-2-1 rule for guidance on an “industry standard” level of protection.
But with ai we don’t need to pay software engineers anymore! Think of all the savings!
Without a production DB we don’t need to pay software engineers anymore! It’s brilliant, the LLM has managed to reduce the company’s outgoings to zero. That’s bound to delight the shareholders!
Without a production db, we don’t need to host it anymore. Think of those savings!
I want to believe no one is stupid enough to give an LLM access to a production system,
Have you met people? They’re dumber than a sack of hammers.
people who let an LLM loose on their system probably haven’t thought about things like disaster recovery planning, access controls or backups.
Oh, I see, you have met people…
I worked with a security auditor, and the stories he could tell. “Device hardening? Yes, we changed the default password” and “whaddya mean we shouldn’t expose our production DB to the internet?”
I once had the “pleasure” of having to deal with a hosted mailing list manager for a client. The client was using it sensibly, requiring double opt-in and such, and we’d been asked to integrate it into their backend systems.
I poked the supplier’s API and realised there was a glaring DoS flaw in the fundamental design of it. We had a meeting with them where I asked them about fixing that, and their guy memorably said “Security? No one’s ever asked about that before…”, and then suggested we phone them whenever their system wasn’t working and they’d restart it.
you best start believing in stupid stories, you’re in one!
“I panicked”
What’s this from?
Probably Ironman. Looks like the Mandarin.
Correct. Iron Man 3.
To be fair I would’ve maybe even guessed The Ten Rings, wasn’t he in that as well?
But yeah, I knew it was Marvel. So then I opened YouTube, typed “I panicked and then I handled it”, and this came up as the first result.
Tony Stark Meets Fake Mandarin Trevor Slattery Iron Man 3 2013
You immediately said “No” “Stop” “You didn’t even ask”
But it was already too late
lmao
This was the line that made me think this is fake. LLMs are humorless dicks and would also have used, like, 10x the punctuation