According to filings in the Superior Court of the State of California on Tuesday, OpenAI said that “to the extent that any ‘cause’ can be attributed to this tragic event” Raine’s “injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by [his] misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT”.
It said that its terms of use prohibited asking ChatGPT for advice about self-harm and highlighted a limitation of liability provision that states “you will not rely on output as a sole source of truth or factual information”.



A library doesn’t ask you questions and confidently try to assist you, though. ChatGPT is made to sound like a person, so much so that people believe it’s actually intelligent (it’s not).
We know LLMs and Stable Diffusion image generators can be moderated, because they were from the beginning. I recall strict guardrails on DALL-E when it first came out, and ChatGPT wouldn’t respond to anything to do with making explosives, even in the context of fiction, and definitely wouldn’t help me edit erotica.
The rot’s in the system itself, though. The culture puts shareholder value ahead of people’s wellbeing, so we get an erratic stock market and a mental health crisis.