While I'm still personally skeptical of these tools' ability to produce a GOOD software engineer, it's something I should probably test in a limited capacity.
I've noticed Deepseek has a few integrations, both official and hobbyist, with coding tools like Claude Code. Plus, I'd rather not pay £20/mo for any of this stuff, let alone to any AI company NOT linked to the CPC.
I might consider a locally hosted model, but the upfront cost of hardware that can run a high-parameter model at decent speed is prohibitive. My home server isn't really set up for good cooling!


I've only ever really used LLMs through ollama, and ChatGPT when Google's SEO crap was too much. I have experimented with different Neovim AI plugins, with mixed results.
Most AI text editor plugins basically just put the chat window in a sidebar and help you copy code between buffers.
Then there's Cursor and editors that support "tooling", meaning limited access to execute commands on your computer. That gets fucking nuts because it will crap out something functional: it will scaffold all the boilerplate for starting an Express server with a database connection and slap a React UI on it.
Running the simpler editor plugins against ollama is perfectly functional. You don't need super high parameter counts; a 16B model will work fine. The problem is the startup time, because ollama unloads the model after five minutes of inactivity by default. I have not gotten ollama to work with any Cursor-like editors yet.
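That five-minute unload is configurable. As I understand it, the ollama server reads an `OLLAMA_KEEP_ALIVE` environment variable, and recent versions of the CLI also take a per-run flag; a sketch of both (the model name here is just an example, and flag availability depends on your ollama version):

```shell
# Keep loaded models resident for an hour instead of the default 5 minutes.
# Set before starting the server, since it's read at startup.
export OLLAMA_KEEP_ALIVE=1h
ollama serve

# Or per-invocation via the CLI (a value of -1 keeps the model loaded
# indefinitely; "deepseek-r1:14b" is just an example model tag):
ollama run deepseek-r1:14b --keepalive 1h
```

This trades idle RAM usage for not paying the model-load cost on every request, which is usually the right call if the machine isn't doing much else.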
Also, when it comes to running ollama, RAM is the real bottleneck. I've found my laptop runs larger ollama models better than my desktop because it's using system RAM with integrated graphics. It's not as fast as a dedicated card, but a 64GB MacBook will run a DeepSeek 70B model.
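The arithmetic behind that checks out with a rough rule of thumb: weight memory is roughly parameter count times bytes per weight, ignoring KV cache and runtime overhead. A quick sketch (my own estimate, not an ollama-provided figure):

```python
# Back-of-envelope memory estimate for quantized model weights.
# Rule of thumb only: real usage adds KV cache and runtime overhead.

def model_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in decimal GB for a given quantization."""
    bytes_total = params_billion * 1e9 * (bits_per_weight / 8)
    return bytes_total / 1e9

# A 70B model at the common 4-bit quantization needs ~35 GB for weights
# alone, which is why it fits in 64 GB of unified memory; unquantized
# fp16 would need ~140 GB and wouldn't.
print(model_memory_gb(70, 4))   # ~35 GB
print(model_memory_gb(70, 16))  # ~140 GB
```

The same estimate says a 16B model at 4-bit is only ~8 GB of weights, which is why it runs comfortably on much more modest hardware.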