shish_mish@lemmy.world to Technology@lemmy.world · English · 9 months ago
Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries (www.tomshardware.com)
cross-posted to: technology@lemm.ee
vamputer@infosec.pub · English · 9 months ago
And then, in the case of it explaining how to counterfeit money, the AI gets so excited about solving the puzzle, it immediately disregards everything else and shouts the word in all-caps just like a real idiot would. It’s so lifelike…