According to filings at the superior court of the state of California on Tuesday, OpenAI said that “to the extent that any ‘cause’ can be attributed to this tragic event” Raine’s “injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by [his] misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT”.

It said that its terms of use prohibited asking ChatGPT for advice about self-harm and highlighted a limitation of liability provision that states “you will not rely on output as a sole source of truth or factual information”.

  • Onomatopoeia@lemmy.cafe · 2 days ago

    Granted, with something like AI systems it’s easier and faster, but libraries could be faulted the same way: they hold the same information; the only difference is learning how to look for it.

    There’s a problem here, for sure, but how can it be addressed? Frankly, I have no idea, especially since you can host these LLMs on your own computer these days.

    • yuri@pawb.social · 1 day ago

      if they can continuously update grok to slob elon’s knob, i reckon they can push gpt to stop glorifying suicide with comparatively minimal effort.

    • Wren@lemmy.today (OP, mod) · 1 day ago

      A library doesn’t ask you questions or confidently try to assist you, though. ChatGPT is made to sound like a person, so much so that people believe it’s actually intelligent (it’s not).

      We know LLMs and Stable Diffusion image gens can be moderated, because they were from the beginning. I recall strict guardrails on DALL-E when it first came out, and ChatGPT wouldn’t respond to anything to do with making explosives, even in the context of fiction, and it definitely wouldn’t help me edit erotica.

      The rot’s in the system itself, though. The culture puts shareholder value ahead of people’s wellbeing, so we get an erratic stock market and a mental health crisis.

    • stray@pawb.social · 1 day ago

      The chat logs between this boy and ChatGPT are publicly available as court documents, if you want to see just how bad it is. No library book or librarian is going to come into a teenager’s home and encourage him to get drunk to dull his will to live before an attempt, or examine his setup and tell him which knot to use.

      The way the problem can be addressed is extremely simple: alter the program so that it can’t say certain things, or so that it forcibly ends the interaction when certain topics come up (a rough sketch of that logic follows at the end of this comment).

      e: Also there’s a certain barrier of effort required in looking things up manually, even online. It gives you time to potentially come out of it. That’s why a good mental health strategy is to wait ten minutes before you kill yourself and see if you still want to, because the odds are pretty good you’ll have changed your mind. But that can’t happen with your bestie cheering you on the whole time, saying how brave you are and that you got this.

      e2: And no book or librarian is going to tell you it’s a good idea to not discuss your mental health status with your mom when you mention you’re thinking about opening up to her. Seriously, these chat logs are disgusting, and I’ve read them from multiple kids and adults who’ve been driven to suicide and/or psychosis by LLMs disguised as friends and girlfriends. It’s terrible.
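
      A minimal sketch of the guardrail described above, assuming a hypothetical generate_reply function standing in for the underlying model call. Real deployments use trained safety classifiers rather than a keyword list, but the control flow is the same idea:

      ```python
      # Hard guardrail sketch: screen both the user's message and the model's
      # reply, and end the session on a hit instead of continuing the chat.
      # BLOCKED_TOPICS is illustrative only; production systems score text
      # with trained classifiers instead of matching fixed strings.
      BLOCKED_TOPICS = ("suicide", "self-harm", "kill myself")

      CRISIS_MESSAGE = (
          "I can't help with this. If you are struggling, please contact a "
          "crisis line such as 988 (US) or talk to someone you trust."
      )

      def flags_blocked_topic(text: str) -> bool:
          """Return True if the text touches any blocked topic."""
          lowered = text.lower()
          return any(term in lowered for term in BLOCKED_TOPICS)

      def moderated_turn(user_message: str, generate_reply) -> tuple[str, bool]:
          """Run one chat turn; return (reply, session_still_alive)."""
          if flags_blocked_topic(user_message):
              return CRISIS_MESSAGE, False   # forcibly end the interaction
          reply = generate_reply(user_message)
          if flags_blocked_topic(reply):
              return CRISIS_MESSAGE, False   # never show the flagged output
          return reply, True
      ```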