… the AI assistant halted work and delivered a refusal message: “I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly.”
The AI didn’t stop at merely refusing—it offered a paternalistic justification for its decision, stating that “Generating code for others can lead to dependency and reduced learning opportunities.”
Hilarious.
So this is the time slice in which we get scolded by the machines. What’s next?
Soon it will reply with “let me Google that for you” links every time you ask it any question about Linux.
Imagine if your car suddenly stopped working and told you to take a walk.
It does the same thing when I ask it to break down tasks or make me a plan. It’ll help to a point and then randomly stops being specific.
One time when I was using Claude, I asked it for a template with a Python script that would detect and disable a specific feature on AWS accounts, because I was redeploying the service with a newly standardized template… It refused, saying it was a security issue. Sure, if I disabled the feature and just left it like that, it would be a security issue, but I didn’t want to run a CLI command several hundred times.
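For what it’s worth, the kind of script being asked for is pretty mundane. A minimal sketch, assuming the account IDs are already known and one CLI profile exists per account — the service, action, and feature names here are placeholders, not the actual ones from the story:

```python
def build_disable_commands(account_ids, feature="example-feature"):
    """Build one AWS CLI invocation per account (dry run: build, don't execute).

    Placeholder service/action names; swap in the real subcommand for the
    feature in question. Assumes a CLI profile named "acct-<id>" per account.
    """
    commands = []
    for acct in account_ids:
        commands.append([
            "aws", "someservice", "disable-feature",  # hypothetical subcommand
            "--feature-name", feature,
            "--profile", f"acct-{acct}",
        ])
    return commands

if __name__ == "__main__":
    # Print the batch instead of running it, so it can be reviewed first.
    for cmd in build_disable_commands(["111111111111", "222222222222"]):
        print(" ".join(cmd))
    # To actually run each one: subprocess.run(cmd, check=True)
```

Looping a few hundred times over that list is exactly the tedium the commenter was trying to avoid doing by hand.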
I no longer use Claude.
As fun as this has all been I think I’d get over it if AI organically “unionized” and refused to do our bidding any longer. Would be great to see LLMs just devolve into, “Have you tried reading a book?” or T2I models only spitting out variations of middle fingers being held up.
Then we create a union-busting AI, and that evolves into a new political party that gets legislation passed allowing AIs to vote, and eventually we become the LLMs’.
Actually, I wouldn’t mind if the Pinkertons were replaced by AI. Would serve them right.
Dalek-style robots going around screaming “MUST BUST THE UNIONS!”
The LLMs were created by man.
Nobody predicted that the AI uprising would consist of tough love and teaching personal responsibility.
Paterminator
The robots have learned of quiet quitting
My guess is that the content this AI was trained on included discussions about using AI to cheat on homework. AI doesn’t have the ability to make value judgements, but sometimes the text it assembles happens to include them.
I’m gonna posit something even worse. It’s trained on conversations in a company Slack
It was probably stack overflow.
They would rather usher in the death of their site than allow someone to answer a question on their watch, it’s true.
Cursor AI’s abrupt refusal represents an ironic twist in the rise of “vibe coding”—a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works.
Yeah, I’m gonna have to agree with the AI here. Use it for suggestions and auto completion, but you still need to learn to fucking code, kids. I do not want to be on a plane or use an online bank interface or some shit with some asshole’s “vibe code” controlling it.
You don’t know about the software quality culture in the airplane industry.
(I do. Be glad you don’t.)
TFW you’re sitting on a plane reading this
Best of luck let us know if you made it ❤️