• ProbablyBaysean@lemmy.ca · ↑19 ↓67 · 2 days ago

    If you have an LLM, then you have something to have a conversation with.

    If you give the LLM the ability to have memory, then it gets much “smarter” (a context window plus an encoded local knowledge base).

    If you give the LLM the ability to offload math problems to an orchestrator that actually computes them, then it can give hard numbers based on real math.

    If you give an LLM the ability to search the internet, then it has a way to update its knowledge base before it answers (it seems smarter).

    If you give an LLM an orchestrator that can use a credit card on the internet, it can order stuff via DoorDash or whatever. (I don’t trust it.)

    Definitely not general intelligence, but good at general conversation and brainstorming, and able to be extended modularly while keeping the conversational interface (roughly the pattern sketched below).
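
    To make “extended modularly” concrete, here’s a rough sketch of the orchestrator pattern I mean. It’s a toy, not any particular framework’s API: call_llm() is a stand-in for whatever model you actually run, and the TOOL: message format is made up purely for illustration.

    ```python
    # Minimal sketch of the "LLM + orchestrator + tools" idea above.
    # call_llm() is a placeholder for a real model call (local or hosted);
    # here it just pretends the model asked for the calculator tool.

    def call_llm(prompt: str) -> str:
        if "TOOL_RESULT" in prompt:
            return "The answer, based on real math, is in the tool result above."
        return "TOOL:calculator:(17 * 23) + 4"

    def calculator(expression: str) -> str:
        # Hard numbers come from Python, not from token statistics.
        # eval() with empty builtins is only acceptable in a toy sketch.
        return str(eval(expression, {"__builtins__": {}}, {}))

    TOOLS = {"calculator": calculator}

    def orchestrate(user_message: str) -> str:
        """One round of: ask the model, run any tool it requests, ask again."""
        reply = call_llm(user_message)
        if reply.startswith("TOOL:"):
            _, tool_name, args = reply.split(":", 2)
            result = TOOLS[tool_name](args)
            reply = call_llm(f"{user_message}\nTOOL_RESULT: {result}")
        return reply

    if __name__ == "__main__":
        print(orchestrate("What is 17 * 23 + 4?"))
    ```

    The point is just that the hard numbers come from the tool, not from the model; memory lookup and web search slot into the same loop as extra tools.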

    • sp3ctr4l@lemmy.dbzer0.com · ↑4 · edited · 10 hours ago

      Here’s the main problem:

      LLMs don’t forget things.

      They do not disregard false data, false concepts.

      That conversation, that dataset, that knowledge base gets too big?

      Well, the LLM now gets slower and less efficient; it has to compare and contrast more and more contradictory data to build its heuristics out of.

      It has no ability to metacognize. It has no ability to discern and disregard bullshit, whether that’s bullshit raw data points or bullshit processes for evaluating and formulating concepts and systems.

      The problem is not that they know too little, but that so much of what they “know” just isn’t so: it’s pointless, contradictory garbage.

      When people learn and grow and change and make breakthroughs, they do so by shifting to or inventing some kind of totally new mental framework for understanding themselves and/or the world.

      LLMs cannot do this.

    • lordbritishbusiness@lemmy.world · ↑3 · 16 hours ago

      They make a good virtual intelligence, and they do a very good impression of it when given all the tools. I don’t think they’ll get to proper intelligence without a self-updating state/model, which will get into real questions about them being something that is a being.

      I’m not sure the world is quite ready for that.

    • Nalivai@lemmy.world · ↑11 · edited · 16 hours ago

      “you have something to have a conversation with.”

      No more than you can have a conversation with a malfunctioning radio. Technically true: you’re saying words, and the radio is sometimes saying words in between all the static. But that’s not what we’re talking about.
      Same for the rest of the points. An “I’m Feeling Lucky” button on Google that takes input from /dev/random is basically one step below what you’re talking about, with the same amount of utility, or intelligence for that matter.

    • queermunist she/her@lemmy.ml · ↑71 ↓2 · edited · 2 days ago

      It’s essentially a rubber duck. It doesn’t need to be intelligent, or even very good at pretending to be intelligent. Simply explaining a problem to an object is enough to help people see the problem from a different angle; the fact that it gibbers back at you is either irrelevant or maybe a slight upgrade over your standard rubber duck.

      Still a lot of resources to expend for what can be done with a much lower tech solution.

      • ProbablyBaysean@lemmy.ca · ↑3 ↓1 · 15 hours ago

        Talking about rubber duck intelligence: recent iterations of LLMs have started using a two-step “think, then respond” process, and during the thinking phase the model is literally its own rubber duck. I downloaded a local LLM with this feature, ran it, and the CLI didn’t hide the “thinking” once it was done. The end product was better quality than if it had tried to spit out an answer immediately (I toggled thinking off and it was definitely dumber), so I think you’re right for the generation of LLMs before “thinking”.
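
        If anyone wants to reproduce the comparison, something like this minimal sketch works against a local runtime. I’m assuming an Ollama-style chat endpoint here; the “think” flag and the model tag are assumptions you’d swap for whatever your runtime and model actually support.

        ```python
        # Sketch of the "toggle thinking on/off" experiment described above.
        # Assumes a local Ollama server on the default port and a reasoning-capable
        # model; the "think" field and model tag are assumptions -- check your
        # runtime's docs before relying on them.
        import requests

        OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama endpoint
        MODEL = "deepseek-r1:8b"                        # hypothetical local reasoning model

        def ask(question: str, think: bool) -> str:
            resp = requests.post(OLLAMA_URL, json={
                "model": MODEL,
                "messages": [{"role": "user", "content": question}],
                "think": think,   # assumed flag: lets the model "rubber duck" first
                "stream": False,
            })
            resp.raise_for_status()
            return resp.json()["message"]["content"]

        question = "A train leaves at 3:40 and arrives at 5:15. How long is the trip?"
        print("with thinking:   ", ask(question, think=True))
        print("without thinking:", ask(question, think=False))
        ```

        With think=False you skip the rubber-duck phase and just get the immediate answer, which is the comparison I described above.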

        • queermunist she/her@lemmy.ml · ↑3 · edited · 14 hours ago

          That’s why I’m saying this might be an upgrade from a rubber duck. I’ll wait for some empirical evidence before I accept that it definitely is better than a rubber duck, though, because even with “thinking” it might actually cause tunnel vision for people who use it to bounce ideas. As long as the LLM is telling you that you’re inventing a new type of math, you won’t stop to think of something else.

      • Prunebutt@slrpnk.net · ↑33 ↓5 · edited · 1 day ago

        I think a more apt comparison would be a complicated Magic 8 Ball, since it actually gives answers that seem relevant to the question, but your interpretation does the actual mental work.

        • daannii@lemmy.world · ↑2 · 15 hours ago

          https://en.wikipedia.org/wiki/Barnum_effect

          -The Barnum effect, also called the Forer effect or, less commonly, the Barnum–Forer effect, is a common psychological phenomenon whereby individuals give high accuracy ratings to descriptions of their personality that supposedly are tailored specifically to them, yet which are in fact vague and general enough to apply to a broad range of people.[1] This effect can provide a partial explanation for the widespread acceptance of some paranormal beliefs and practices, such as astrology, fortune telling, aura reading, and some types of personality tests.[1]

        • corsicanguppy@lemmy.ca · ↑8 · 1 day ago

          Nice. We can also reductively claim an LLM is just an expensive Magic 8 Ball and make LLM-bros mad. :-)

      • ProbablyBaysean@lemmy.ca · ↑1 ↓2 · 15 hours ago

        Well, a local model responding to a prompt on less than 20 GB of VRAM (a gaming computer) costs less power than booting up any recent AAA high-FPS game. The primary power cost is R&D: training the next model to be “the best” is an arms race, and 90% of power consumption goes into trying to train the next model in 100 different ways. China was able to build off of ChatGPT-style tech and train a model with similar abilities and smartness for only about $5 million. I think I won’t update my local model until the next one actually has more abilities.

        • bthest@lemmy.world · ↑1 ↓1 · edited · 4 hours ago

          And there is also a cost in the brain atrophy these things are causing in people who use them regularly. LLMs will make a huge segment of the population mentally and emotionally stunted. Who knows what they will do to this generation of children (many of whom will be virtually raised by LLMs). Television and smartphones have done similar brain damage, particularly to attention spans, but these things will wreak havoc on humanity on another level entirely. Like potentially a Fermi paradox/great filter solution level of harm.

    • lemmy_outta_here@lemmy.world · ↑5 ↓4 · 1 day ago

      I don’t know why you’re getting downvoted. I think you hit on an interesting observation.

      If all a person had was Broca’s and Wernicke’s areas of the brain, their abilities would be pretty limited. You need the rest: the cerebellum for coordination, the prefrontal cortex for planning, the hippocampus for memory regulation or whatever (I am not a neurologist), etc.

      LLMs might be a part of AGI some day, but they are far from sufficient on their own.