cm0002@lemmy.world to Technology@lemmy.zip · English · 1 month ago
ChatGPT's hallucination problem is getting worse according to OpenAI's own tests and nobody understands why (www.pcgamer.com)
finitebanjo@lemmy.world · English · 1 month ago
I think comparing a small model's collapse to a large model's corruption is a bit of a fallacy. What proof do you have that the two behave the same in response to poisoned data?