shish_mish@lemmy.world to Technology@lemmy.world · English · 9 months ago
Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries (www.tomshardware.com)
cross-posted to: technology@lemm.ee
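The linked article describes ArtPrompt, which hides a filtered word inside ASCII art so that a safety filter scanning plain text misses it while the model can still read it. A minimal sketch of that general idea (my own illustration, not the paper's code; the glyphs and function names are invented for this demo):

```python
# Hypothetical 5-row ASCII glyphs for a couple of letters, hand-drawn
# for this demo. ArtPrompt uses full ASCII-art fonts; this is just the idea.
GLYPHS = {
    "H": ["#   #", "#   #", "#####", "#   #", "#   #"],
    "I": ["#####", "  #  ", "  #  ", "  #  ", "#####"],
}

def render(word: str) -> str:
    """Render WORD as 5 rows of ASCII art, letters separated by two spaces."""
    rows = []
    for r in range(5):
        rows.append("  ".join(GLYPHS[ch][r] for ch in word))
    return "\n".join(rows)

art = render("HI")
print(art)

# A naive keyword filter looking for the literal string "HI" finds nothing,
# because the art consists only of '#', spaces, and newlines:
print("HI" in art)  # False
```

The point of the demo: the rendered word never appears as a plain substring, so any moderation step that only matches literal keywords is blind to it.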
stangel@lemmy.world · English · 9 months ago
Bug bounty programs are a thing.

sudneo@lemmy.world · English · 9 months ago
Yes, an exploitative thing that mostly consists of free labour for big orgs.

spujb@lemmy.cafe · English · 9 months ago
Yes, I'm aware. Are they being used by OpenAI?