Already on ollama.
I’ve found Qwen preferable to DeepSeek for coding, so I can’t wait to try this out.
I’ve not used Qwen yet, but I have noticed DeepSeek, specifically R1, is kind of a lazy coder. Lots of ‘step 5: draw the rest of the owl’ type responses.
Unrelated, but does anyone else’s internet speed come to a screeching halt when trying to download models from ollama? I swear I’m being throttled by Xfinity.
That might just be LLMs in general. ChatGPT does the same. Copilot is a little better tuned, but I really only ever have it do boilerplate.
I’ve had really good luck with ChatGPT 4o, and, to be fair, I have teased some decent responses out of DeepSeek V3 (iirc). Different ways of expanding on the basic principle of asking it to ‘step back and visualize different options before moving forward and fully implementing them with all necessary code, following best practices, etc.’ tend to get pretty good results.
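For anyone who wants to try that “plan before you implement” prompting style against a local model, here’s a rough sketch using the ollama Python client. The model name and the exact prompt wording are just placeholders I made up, not anything the thread specifically recommends; swap in whatever you’ve pulled locally.

```python
# Rough sketch: "step back and plan first" prompting via the ollama Python client.
# The model name is a placeholder; use whatever you have pulled locally.
import ollama

PLANNING_PROMPT = (
    "Step back and outline a few different implementation options first. "
    "Compare their trade-offs, pick one, then implement it fully with all "
    "necessary code, following best practices. Do not leave steps as TODOs."
)

def ask_for_code(task: str, model: str = "qwen2.5-coder") -> str:
    """Send the task with the planning-style system prompt and return the reply."""
    response = ollama.chat(
        model=model,
        messages=[
            {"role": "system", "content": PLANNING_PROMPT},
            {"role": "user", "content": task},
        ],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    print(ask_for_code("Write a function that deduplicates a list while preserving order."))
```

In my experience the system prompt does most of the work here; the rest is just plumbing.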