ThefuzzyFurryComrade@pawb.social to Fuck AI@lemmy.world · 4 days ago
On AI Reliability (pawb.social)
11 comments
𝕸𝖔𝖘𝖘@infosec.pub · 4 days ago
Unless something improved, they’re wrong more than 60% of the time, but at least they’re confident.

jsomae@lemmy.ml · 4 days ago
This is why LLMs should only be employed in cases where a 60% error rate is acceptable. In other words, almost none of the places where people are currently being hyped to use them.

henfredemars@infosec.pub · 4 days ago
This is an excellent exploit of the human mind. AI being convincing and AI being correct are two very different things.

davidgro@lemmy.world · 4 days ago
And they are very specifically optimized to be convincing.

friend_of_satan@lemmy.world · 3 days ago
Haha, yeah, I was going to say: a 40% success rate is way more impressive than the results I get.