Lugh@futurology.today to Futurology@futurology.today · English · 10 months ago
Two-faced AI language models learn to hide deception - ‘Sleeper agents’ seem benign during testing but behave differently once deployed. And methods to stop them aren’t working.
www.nature.com
Possibly linux@lemmy.zip · 10 months ago
Sorry, too late for that

mateomaui@reddthat.com · 10 months ago
Alright, I’ll be out back digging the bomb shelter.

Possibly linux@lemmy.zip · edited · 10 months ago
It’s too late for that, honestly

mateomaui@reddthat.com · 10 months ago
Alright, I’ll switch to digging holes for the family burial ground.