It’s essentially a rubber duck. It doesn’t need to be intelligent, or even very good at pretending to be intelligent. Simply explaining a problem to an object is enough to help people see it from a different angle; the fact that it gibbers back at you is either irrelevant or maybe a slight upgrade over your standard rubber duck.
Still a lot of resources to expend for what can be done with a much lower tech solution.
Speaking of rubber duck intelligence, recent iterations of LLMs have started using a two-step “think, then respond” process. It is literally a rubber duck during the thinking phase. I downloaded a local LLM with this feature and ran it; the CLI did not hide the “thinking” once it was done. The end product was better quality than if it had tried to spit out an answer immediately (I toggled thinking off and it was definitely dumber, so I think you are right for the generation of LLMs before “thinking”).
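The two-pass pattern I’m describing can be sketched in a few lines. `generate` here is a hypothetical stand-in for whatever local-model call you’d actually make (it’s stubbed, not a real API), but the control flow is the point: one pass to rubber-duck out loud, a second pass conditioned on that reasoning.

```python
def generate(prompt: str) -> str:
    # Hypothetical stub standing in for a real local-LLM call;
    # it just echoes the start of the prompt so the sketch runs.
    return f"[model output for: {prompt[:40]}...]"

def answer(question: str, thinking: bool = True) -> str:
    if thinking:
        # Pass 1: have the model reason out loud (the "rubber duck" phase).
        scratchpad = generate(f"Think step by step about: {question}")
        # Pass 2: condition the final answer on the visible reasoning.
        return generate(f"Question: {question}\nReasoning: {scratchpad}\nAnswer:")
    # Thinking toggled off: one-shot answer, which in my experience was dumber.
    return generate(f"Question: {question}\nAnswer:")
```

With a real model the second pass sees its own scratchpad, which is why hiding or showing the “thinking” afterward is purely a CLI presentation choice.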
That’s why I’m saying this might be an upgrade from a rubber duck. I’ll wait for some empirical evidence before I accept that it’s definitely better than a rubber duck, though, because even with “thinking” it might actually cause tunnel vision for people who use it to bounce ideas. As long as the LLM is telling you that you’re inventing a new type of math, you won’t stop to think of something else.
I think a more apt comparison would be a complicated Magic 8 Ball, since it actually gives answers that seem relevant to the question, but your interpretation does the actual mental work.
The Barnum effect, also called the Forer effect or, less commonly, the Barnum–Forer effect, is a common psychological phenomenon whereby individuals give high accuracy ratings to descriptions of their personality that supposedly are tailored specifically to them, yet which are in fact vague and general enough to apply to a broad range of people. This effect can provide a partial explanation for the widespread acceptance of some paranormal beliefs and practices, such as astrology, fortune telling, aura reading, and some types of personality tests.
https://en.wikipedia.org/wiki/Barnum_effect
Nice. We can also reductively claim an LLM is just an expensive Magic 8 Ball and make LLM-bros mad. :-)