While I'm still personally skeptical of these tools' ability to produce a GOOD software engineer, it's something I should probably consider testing in a limited capacity.
I've noticed DeepSeek has a few integrations, both official and hobbyist, with coding tools like Claude Code. Plus, I'd rather not pay £20/mo for any of this stuff, let alone to any AI company NOT linked to the CPC.
I might consider a locally hosted model, but the upfront cost of anything that can run one decently fast at high parameter counts is quite prohibitive. My home server isn't really set up for good cooling!


Honestly, I haven't been super impressed with Kimi K2. Maybe the thinking variant is better, but in my experience GLM has been much stronger. I'll still give it a shot, though.
Do you remember what their setup was? My guess would be CPU inference with a metric fuckton of RAM if they were running it at full precision, which could work but would be pretty slow. For $6k, though, it'd be impossible to buy enough VRAM to run it at full precision on GPUs.
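For context, here's a rough back-of-the-envelope sketch of the memory maths. It assumes a Kimi K2-class model at roughly 1T total parameters, and the ~$800-per-used-24GB-card price is a hypothetical figure for illustration, not a quote:

```python
# Rough weight-memory estimate for a ~1T-parameter MoE model at
# different precisions, vs. how much VRAM a ~$6k GPU budget buys.
# Parameter count and card price are approximate assumptions.

PARAMS = 1.0e12  # ~1T total parameters (Kimi K2 is in this ballpark)

BYTES_PER_PARAM = {
    "fp16/bf16": 2.0,
    "fp8/int8": 1.0,
    "4-bit quant": 0.5,
}

for fmt, bytes_pp in BYTES_PER_PARAM.items():
    gb = PARAMS * bytes_pp / 1e9
    print(f"{fmt:>12}: ~{gb:,.0f} GB just for the weights (no KV cache)")

# Hypothetical consumer-GPU budget: ~$800 per used 24 GB card.
budget, price_per_card, vram_per_card = 6000, 800, 24
cards = budget // price_per_card
print(f"~${budget} buys ~{cards} cards = ~{cards * vram_per_card} GB of VRAM")
```

Even at an aggressive 4-bit quant you're looking at hundreds of GB of weights, versus well under 200 GB of VRAM for that budget, which is why people fall back on CPU inference: bulk server RAM is far cheaper per GB than VRAM, at the cost of much lower tokens/sec.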