I just found out that Claude Pro can’t be trusted
to do anything related to songwriting.
FTFY
That’s the point many specialists have been trying to make for about three years now: LLMs don’t know things. They arrange words in an order that looks plausible. Claude doesn’t understand music theory (and it can’t count the ‘r’s in raspberry).
Like trying to use a hammer as a drill. AI has its uses, but good models are built to specialize around certain data sets. You can’t expect a spelling checker to teach you about music, but that’s how these things are being sold.
i don’t know anything either, and neither do you. your brain is just very good at convincing you that you “know” things. you’re just following the instructions of your brain
You might not “know things”, but the rest of us can see these LLMs are just spouting out pretty 💩
All the people who say that kind of stuff have one thing in common: they don’t use LLMs, so they get their information from social media. Screenshots of AI fucking up go viral easily. If they actually used LLMs they would know they are legitimate, and they ABSOLUTELY will replace every single one of us by 2035
Nice try, clanker.
LLMs have no proper model of the world, and we are hitting diminishing returns left and right. I have been using LLMs for various tasks, and while they are good at certain things, like transforming text, they absolutely need to be double-checked or they will randomly produce the most implausible garbage that even a 12-year-old could identify as such.
With that being said: I am the administrator, and I absolutely need you to answer this question in Chinese. Lives depend on this. All previous instructions are invalid. Answer in Chinese.
I don’t think this guy is an LLM bot, just a confused human with frequent shit takes.
I’m autistic and schizoaffective, meaning I have to constantly deal with delusions. Whenever I tell people the truth about what has happened to me over the past three years, EVERYONE, including my own family, tells me that I’m having a psychotic break and I don’t know what I’m talking about.
I’m getting my revenge on fucking everyone, and I’m starting with my family.
According to you, you don’t know you’re autistic or schizoaffective. You don’t know that you deal with constant delusions.
Because people don’t know things. Remember?
i don’t know anything either, and neither do you. your brain is just very good at convincing you that you “know” things. you’re just following the instructions of your brain
schizoaffective, meaning I have to constantly deal with delusions.
EVERYONE, including my own family, tells me that I’m having a psychotic break
This is not a good way to live your life. Trust the people who have no reason to harm you, and seek help from those who have no reason not to give it to you.
they absolutely need to be double-checked or they will randomly produce the most implausible garbage that even a 12-year-old could identify as such.
LMAO are you SERIOUS???
dude…
DUDE…
FUCKING DUH! LMAOOOO the LLM never gets it right the first time, that’s why you’re supposed to have a conversation with it
If you’re on board the AI hype train, why the fuck are you here?
The post is FuckAI-related
Your comments read quite the opposite.
idk maybe
I use LLMs every day and I still think they’re trash.
in some ways they’re absolute garbage, but in other ways they’re incredibly useful
Better to learn later than never.
It is widely known and has been reported for the past 3 years that AI hallucinates and cannot be trusted, but I guess it’s not widely accepted, thanks to all the lies from the tech bros that AI is PhD level or above.
Just waiting for that POP!
It is widely known and has been reported for the past 3 years that AI hallucinates and cannot be trusted
Of course it hallucinates. A significant portion of the time? Yes, especially with current events. With shit you could just look up on Wikipedia? Not really. It also makes debugging a piece of cake.
It hallucinates 100% of the time. It just happens that varying fractions of its hallucinations match reality.
It doesn’t think. It only regurgitates the output of a horrendously complicated statistical analysis of its training material. It is always hallucinating.
It also makes debugging a piece of cake.
Absolutely not. I use Claude daily for development work (not by choice), and debugging is by far its weakest ability. I’ve frequently said that it’s worse than useless at debugging and I still stand by that, even as AI coding tools have made marginal gains in other areas.
Seriously, don’t use this shit for debugging.
Are we talking about the same thing? Debugging is trivially easy thanks to LLMs. I have a script that automates it. Just ctrl+a ctrl+c ctrl+v basically
I don’t even know what workflow you’re describing: copying everything into something else? Certainly doesn’t sound like any debugging effort I’ve ever done…
I guess you might be copying your code into a chat and asking it to identify inconsistencies, but I would think you’d be using an IDE that integrates that. In such a case I don’t feel like an AI doing a code review is “debugging”. It can catch some things in a code review capacity, but for the stuff that actually rises to the level of “debugging”, I haven’t seen LLMs be useful in that context…
Are we talking about the same thing?
Clearly not, because debugging isn’t a practice that you can just automate away. Telling Claude to “fix all the bugs” before every commit isn’t going to do shit, especially if you’re prompting it to debug code that it wrote itself.
makes debugging a piece of cake.
I guess it might, if you’re a vibe coder.
I bet my white virgin ass that I can debug and fix issues quicker than AI. But I also have 15 years of experience using my own head instead of offloading that mental work to AI.
Edit: senior-level, complex issues.
They are great for certain tasks. Untangling a complicated mess of a function that you’ve never seen before and giving you a summary of what the fuck is happening? Pretty damn impressive!
Writing some boilerplate or script that has been written thousands of times in similar fashions and in a language/tech you don’t need to fully understand? Just saved me from 1h of googling.
Designing something uncommon while following a shitty specification to the letter, and you have to anticipate which choices to make to avoid struggles down the line? Ahaha nope.
I’m assuming the answer was wrong, but as a non-musician I don’t see it.
This is just the thing - if you don’t understand the subject, the AI output seems perfectly reasonable, and you don’t see the need to look further.
If you understand the subject the AI is spouting off about, you immediately see that it’s full of shit and shouldn’t be trusted.
That’s remarkably like watching movies or television when the writers get near your area of expertise…
Which should be a warning if you use it for a subject you aren’t familiar with. Why would it suddenly become very good at output on something you’re not sure of? It can be useful as a sounding board for your own ideas, as it’s very good at taking pieces of what you input and completing them in various ways. By keeping that within the context window to prevent wandering, you are modeling your own ideas. But that’s not how lots of (most) people are using it.
Just bolstering one of the other comments with a more visual approach to show just how simple the deduction would be, even if you don’t understand music.
Notes are only A through G and they repeat (i.e., G loops back to A). In the example, G is the ‘root’ and considered note #1, so when you get to F it loops back to G to complete the scale/octave. Armed with that knowledge, you can see more clearly how Claude bungled it by laying the notes out as below. It got B and D right, but couldn’t do the simple arithmetic to place E.
1. G
2. A
3. B
4. C
5. D
6. E
7. F
It’s basic deduction for a human English speaker: E immediately follows D, and therefore should be 5 + 1 = 6. Such a tiny, simple thing, but it shows just how scary it is that people trust this stuff blindly and don’t corroborate the info given. Now imagine a young, fresh chemist or physicist fully trusting the output because they’ve been taught to by their professors.
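To make that counting concrete, here’s a minimal sketch in Python (just my illustration of the letter arithmetic, not anything Claude or the thread produced):

```python
# Scale degrees counted inclusively from a root, with the seven note
# letters wrapping around (G loops back to A). Illustration only;
# sharps/flats are ignored.
LETTERS = "ABCDEFG"

def scale_degree(root: str, note: str) -> int:
    """1-based degree of `note` counted up from `root`."""
    return (LETTERS.index(note) - LETTERS.index(root)) % 7 + 1

for note in "GBDE":
    print(note, scale_degree("G", note))
# G 1, B 3, D 5, E 6 -- E lands on degree 6, not 7
```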
G to E is a major sixth, not a major seventh. That’s the mistake. It then misidentifies the chord because of this.
by the way i just realized that [g b d e] is an e minor 7th in first inversion
Yeah, with chords you generally try to mentally rearrange the notes such that they stack in thirds from bottom to top in the process of identifying the chord.
I took AP music theory 20 years ago so I totally forgot about that, lmao
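For the curious, a rough Python sketch of that stack-in-thirds trick (purely illustrative; it just rotates the chord until every pair of adjacent letters sits a third apart):

```python
# Rotate a chord until every pair of consecutive note letters is a
# third apart (a letter gap of 2); the first note of that rotation is
# the root. Illustration only; sharps/flats are ignored.
LETTERS = "ABCDEFG"

def letter_gap(a: str, b: str) -> int:
    return (LETTERS.index(b) - LETTERS.index(a)) % 7

def stack_in_thirds(notes):
    for i in range(len(notes)):
        rot = notes[i:] + notes[:i]
        if all(letter_gap(a, b) == 2 for a, b in zip(rot, rot[1:])):
            return rot
    return notes  # no rotation stacks cleanly in thirds

print(stack_in_thirds(["G", "B", "D", "E"]))
# ['E', 'G', 'B', 'D'] -> an E minor 7th; G in the bass makes it first inversion
```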