According to filings at the superior court of the state of California on Tuesday, OpenAI said that “to the extent that any ‘cause’ can be attributed to this tragic event” Raine’s “injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by [his] misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT”.

It said that its terms of use prohibited asking ChatGPT for advice about self-harm and highlighted a limitation of liability provision that states “you will not rely on output as a sole source of truth or factual information”.

  • Mikina@programming.dev

    This literally happened to my colleague’s teen sister two days ago…

    She fortunately survived the attempt, but ChatGPT’s advice did play a role in it. While the family knew she wasn’t okay and was actively working on solving her problems and getting her help, she had a second, unmonitored ChatGPT account hidden on her phone (actually encrypted on a hidden drive) that she used to conceal her conversations, and from what I’ve heard the messages they found were extremely unsettling. She managed to get advice on how to do it painlessly using medicine they had at home, and got tips on the self-harm that accompanied it, among other things.

    Sure, I realize it’s not only ChatGPT’s fault, but it’s clear that it fucking helped. The fact that a child can talk through their suicide and self-harm plans with something that replies with compassion and actually offers tips on how to do it, instead of immediately calling for help, is an extreme problem.

    She could’ve just googled it, sure, but Google won’t have a conversation with you, and it isn’t designed to agree with whatever you say and thereby confirm your plans.

    Fuck unregulated AI, seriously.

    • Onomatopoeia@lemmy.cafe

      Granted, with something like AI systems it’s easier and faster, but libraries could be faulted the same way: they hold the same information; the only difference is learning how to look for it.

      There’s a problem here, for sure, but how can it be addressed? Frankly I have no idea, especially since you can host these LLMs on your own computer these days.

      • yuri@pawb.social

        if they can continuously update grok to slob elon’s knob, i reckon they can push gpt to stop glorifying suicide with minimal effort comparatively.

      • Wren@lemmy.today (OP, mod)

        A library doesn’t ask you questions or confidently try to assist you, though. ChatGPT is made to sound like a person, so much so that people believe it’s actually intelligent (it’s not).

        We know LLMs and Stable Diffusion image generators can be moderated, because they were from the beginning. I recall strict guardrails on DALL-E when it first came out, and ChatGPT wouldn’t respond to anything to do with making explosives, even in the context of fiction, and it definitely wouldn’t help me edit erotica.

        The rot’s in the system itself, though. The culture puts shareholder value ahead of people’s wellbeing, so they get an erratic stock market and a mental health crisis.

      • stray@pawb.social

        The chatlogs between this boy and ChatGPT are publicly available as court documents, if you’re interested to see just how bad it is. No library book or librarian is going to come into a teenager’s home and encourage him to get drunk to dull his will to live prior to attempting. It won’t examine his setup and tell him which knot to use.

        The way the problem can be addressed is extremely simple: alter the program so that it can’t say certain things, or so that it forcibly ends the interaction when certain topics come up.
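
        To be clear about “extremely simple”: the crude version of that gate is a few lines of code. Here’s a minimal sketch, assuming a keyword blocklist stands in for the trained classifiers a real deployment would use (every name in it is made up for illustration):

        ```python
        # Minimal sketch of the guardrail described above: scan each user message
        # for self-harm-related patterns and, on a match, return a fixed
        # crisis-resources reply and end the session instead of calling the model.
        # Real systems use trained classifiers, not a keyword list like this one.
        import re

        # Deliberately tiny, illustrative blocklist.
        SELF_HARM_PATTERNS = [
            re.compile(r"\bkill myself\b", re.IGNORECASE),
            re.compile(r"\bsuicide\b", re.IGNORECASE),
            re.compile(r"\bself[- ]harm\b", re.IGNORECASE),
        ]

        CRISIS_REPLY = (
            "I can't help with that. If you're thinking about hurting yourself, "
            "please contact a crisis line or someone you trust."
        )

        def guarded_reply(user_message, model_fn):
            """Return (reply, session_still_open). model_fn is whatever actually
            generates responses; it is never called for flagged input."""
            if any(p.search(user_message) for p in SELF_HARM_PATTERNS):
                return CRISIS_REPLY, False  # forcibly end the interaction
            return model_fn(user_message), True

        if __name__ == "__main__":
            echo = lambda msg: f"(model reply to: {msg!r})"
            print(guarded_reply("which knot should I use", echo))
            print(guarded_reply("give me tips on self-harm", echo))
        ```

        The hard part isn’t the code; it’s the trade-off, since a filter aggressive enough to catch everything will also end some innocent conversations.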

        e: Also there’s a certain barrier of effort required in looking things up manually, even online. It gives you time to potentially come out of it. That’s why a good mental health strategy is to wait ten minutes before you kill yourself and see if you still want to, because the odds are pretty good you’ll have changed your mind. But that can’t happen with your bestie cheering you on the whole time, saying how brave you are and that you got this.

        e2: And no book or librarian is going to tell you it’s a good idea to not discuss your mental health status with your mom when you mention you’re thinking about opening up to her. Seriously, these chat logs are disgusting, and I’ve read them from multiple kids and adults who’ve been driven to suicide and/or psychosis by LLMs disguised as friends and girlfriends. It’s terrible.

      • MushuChupacabra@lemmy.world

        Guns?

        A gun is a tool whose function is to deliver lethal force on demand.

        If the gun started mining your data, or chewing up your power cords, or otherwise doing something you would not expect a gun to do, I’d raise an eyebrow.

        Much like I do when usage of an LLM drives up suicide rates, instead of just being factually inaccurate with high confidence.

  • Openopenopenopen@lemmy.world

    When a person crashes a car, we don’t just tell them they were using it wrong; we build in protections. We put in airbags, crumple zones, and seat belts.

    Gun manufacturers put in safeties; I bet every industry has tried to ensure its product couldn’t be used in a way that kills its users.

    AI comes in and says, “I mean, really, it’s how you’re using our product, not the product itself.” They stole Apple’s “you’re holding the phone wrong.”

    • Onomatopoeia@lemmy.cafe

      Clearly you know fuck all about cars or guns.

      Safeties on guns are to prevent accidental, unintended discharge. I’m pretty sure someone using one for suicide is performing an intentional discharge.

      Edit: Safeties prevent a gun from going off when dropped. Using such safeties becomes automatic, so automatic that they’re useless for preventing an unintentional discharge by a person pulling the trigger at the wrong time (which they weren’t really intended for). Hell, the Glock safety is built into the trigger itself, so it clearly doesn’t prevent a person pulling the trigger at the wrong time. The safety is disengaged by pulling the trigger.

      Safety systems in cars are similar, to prevent injury from the vehicle itself in a crash.

      Seat belts keep us from being thrown from a car. Airbags prevent us crushing our chest on the steering wheel, or head trauma from hitting a window.

      Crumple zones absorb the energy of a collision so it’s not transferred to the occupants of a car.

      None of this is to prevent a person from intentionally doing harm.

      I’ve lost 2 friends to suicide by car - none of the safety systems had any chance of preventing it, and there is no way to prevent it.

    • shalafi@lemmy.world

      Funny note: people fought tooth and nail against seat belts and airbags when they were about to become mandatory.

  • stray@pawb.social

    I did an experiment as part of a research thing recently where I asked ChatGPT to reproduce copyrighted works. It would not do it. Full stop. It fought me a little bit on suicide, but it took only minutes for it to tell me what to buy, where I might find it, and what to do with it. Weird how they can control some things it says and not others. It’s almost like they don’t care unless a billionaire is threatening them.

  • SparkyBauer44@lemmy.world

    We should really cut support for AI. It’s invading our boy-friendly spaces. Once it’s gone, the children can join the Scouts and learn actual life skills and have, don’t faint after hearing this, the support of a community.

  • apfelwoiSchoppen@lemmy.world

    Corporations doing corporate things. Can’t ever accept blame unless forced to. Fuck AI and fuck the demons that invest in and perpetuate it.