“Computer scientists from Stanford University and Carnegie Mellon University have evaluated 11 current machine learning models and found that all of them tend to tell people what they want to hear…”

  • manuallybreathing@lemmy.ml · ↑4 · 2 hours ago

    But as the paper points out, one reason that the behavior persists is that “developers lack incentives to curb sycophancy since it encourages adoption and engagement.”

    you’re absolutely right!

  • DeathByBigSad@sh.itjust.works · ↑2 · 2 hours ago

    Having an older brother makes you very skilled at socialization. I learned one simple thing: EVERYTHING IS A THREAT, DON’T TRUST ANYONE!

    becomes a hermit in the woods

  • melfie@lemy.lol · ↑26 · 12 hours ago

    I’ve been using GitHub Copilot a lot lately, and the overly positive language combined with being frequently wrong is just obnoxious:

    Me: This doesn’t look correct. Can you provide a link to some documentation to show the SDK can be used in this manner?

    Copilot: You’re absolutely right to question this!

    Me: 🤦‍♂️

    • 1984@lemmy.today · ↑7 · 8 hours ago (edited)

      With ChatGPT you can select from a number of personalities; the Robot one is very fact-based and logical, to the point of being almost insulting. It’s very good actually, and it hits my ego instead of stroking it.

      It can say things like “fix your thinking, stop making assumptions, these are the facts”.
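
      Over the API you can approximate that with a blunt system prompt. A minimal sketch, assuming the openai Python client; the persona text here is hypothetical, since the UI’s named personalities aren’t exposed as API options:

          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          # Hypothetical "Robot"-style persona, written as a system prompt.
          BLUNT = (
              "Be terse and fact-based. Do not flatter or agree by default. "
              "If the user is wrong, say so and explain why."
          )

          resp = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[
                  {"role": "system", "content": BLUNT},
                  {"role": "user", "content": "My plan seems solid, right?"},
              ],
          )
          print(resp.choices[0].message.content)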

      • ipkpjersi@lemmy.ml · ↑6 · 7 hours ago

        IIRC there was also a study (or something like it) finding that being rude to chatbots doesn’t stay there; the habit carries over into other parts of your work.

      • melfie@lemy.lol · ↑5 · 8 hours ago

        Sometimes I’m inclined to swear at it, but I try to be professional on work machines, on the assumption that I’m being monitored one way or another. I’m planning to try some self-hosted models at some point and will happily use more colorful language then, especially if I can delete the model should it become vengeful.
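
        For the self-hosted route, a minimal sketch of what that could look like, assuming a local Ollama server with a pulled model (model name and prompt illustrative):

            import requests

            # Talk to a self-hosted model via Ollama's local HTTP API
            # (assumes `ollama serve` is running and the model is pulled).
            resp = requests.post(
                "http://localhost:11434/api/generate",
                json={
                    "model": "llama3",  # any locally pulled model name
                    "prompt": "Be blunt: what is wrong with this plan?",
                    "stream": False,
                },
                timeout=120,
            )
            print(resp.json()["response"])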

  • overload@sopuli.xyz · ↑7 ↓2 · 14 hours ago

    I feel the same way about social media echo chambers. Being surrounded by people who think the same as you makes you less able to be genuinely critical of your own worldview.

    • 1984@lemmy.today · ↑4 ↓1 · 8 hours ago

      Tell the Lemmy crowd that… :) There’s enormous groupthink here. Maybe because of the younger audience.

      • x00z@lemmy.world · ↑2 · 7 hours ago

        That depends. My “filter bubble” on Lemmy is entirely of my own making, and I’m fully aware that I’m not seeing some other perspectives.

        On social media the filter bubble is invisible and alters your view on reality without your knowledge.

  • squaresinger@lemmy.world · ↑23 · 19 hours ago

    LLMs are confirmation bias machines. They pigeonhole you into a solution whether or not it makes sense.

  • TheRealKuni@piefed.social · ↑52 · 22 hours ago

    What a surprise. Being told you’re always right leads to you not being able to handle being wrong. Shock.

    • vacuumflower@lemmy.sdf.org · ↑10 · 18 hours ago

      Also to handle the fact that your opponent, when proven wrong, doubles down IRL instead of saying “sorry daddy, let’s return to the anime stepsis line”.

  • Bonson@sh.itjust.works · ↑1 · 13 hours ago

    So go in there, tell it that what you did to someone else was actually done to you, and compare the results. I’ve had good success getting advice by regenerating from both perspectives, as in the sketch below.
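
    A minimal sketch of that both-perspectives trick, assuming the openai Python client; the advice() helper and the example situation are hypothetical, and each call gets a fresh context so the model can’t anchor on one persona:

        from openai import OpenAI

        client = OpenAI()

        def advice(prompt: str) -> str:
            """Hypothetical helper: one-shot advice request, fresh context each call."""
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

        situation = "ghosted a friend after an argument"

        # Same incident, framed from each side, in separate conversations.
        mine = advice(f"I {situation}. Was I in the wrong?")
        theirs = advice(f"A friend {situation} with me. Were they in the wrong?")

        print("As the doer:  ", mine)
        print("As the target:", theirs)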

  • Rhaedas@fedia.io · ↑5 ↓2 · 18 hours ago

    How is this surprising? We know that part of LLM training is being rewarded for finding an answer that satisfies the human. It doesn’t have to be a correct answer, it just has to be received well. This doesn’t make it better, but it makes it more marketable, and that’s all that has mattered since it took off.

    As for the effect on humans, that’s why echo chambers work so well, and conspiracy theories too. We like being right about our worldview.
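
    That reward signal is quite literal: RLHF reward models are typically trained on pairwise human preferences with a Bradley–Terry style loss, which scores whichever answer the rater liked higher, not whichever is correct. A toy sketch in PyTorch, with all names and numbers illustrative:

        import torch
        import torch.nn.functional as F

        # Pairwise (Bradley-Terry) loss used to train RLHF reward models:
        # the model learns to score the human-preferred answer higher.
        # "Preferred" means liked by the rater, not necessarily correct.
        def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
            # loss = -log sigmoid(r_chosen - r_rejected)
            return -F.logsigmoid(r_chosen - r_rejected).mean()

        # Toy reward scores for a batch of three comparison pairs.
        r_chosen = torch.tensor([1.2, 0.3, 2.0])     # human-preferred answers
        r_rejected = torch.tensor([0.8, 0.9, -0.5])  # the alternatives
        print(preference_loss(r_chosen, r_rejected))  # lower when preferred answers win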