Marius (windsheep):

The switch from #claudecode to open-source tools and models works fine for me.

1. The CLI client I use is llxprt, a fork of the #gemini CLI:

https://github.com/acoliver/llxprt-code

The desktop client is #anythingllm:

https://anythingllm.com/

Both have MCP support and work on macOS and Linux.

2. For large code bases I use ck search in my prompts and have llxprt run it via Bash tool calls (sketch at the end of this post):

https://github.com/BeaconBay/ck

My experience with claude-context is that it's expensive and not effective for the cost (embeddings):

https://github.com/zilliztech/claude-context

I stopped using it.

3. I mostly use #OpenRouter (API sketch below):

gpt-oss-120b as the coding model.

kimi-k2-0905 or glm-4.5 for selected tasks, like finding a function or summarizing simple test output.

These models are on par with #sonnet or #opus:

https://artificialanalysis.ai/models/glm-4.5?model-filters=open-source

4. I mostly optimize my prompts, tools, and workflows.

I write my own MCP tools (tool sketch below) and use specific RAG approaches to establish the context.

5. I self-host more and more, but I haven't found my stack here yet. I'd actually like to run gpt-oss, but that's too expensive.

6. #openai codex CLI is a mystery to me:

https://github.com/openai/codex

Yes, it's slow. But in numerous instances the output is great.
It can be used with the same OpenRouter models (config sketch below).
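
Re 2: this is roughly what the ck call looks like when wrapped as a tool. A minimal Python sketch; the `--sem` flag is how I read the ck README, so verify with `ck --help` before relying on it.

```python
import subprocess

def ck_search(query: str, path: str = ".") -> str:
    """Run a semantic ck search over a code base and return the matches.

    Assumption: `--sem` selects semantic (embedding-based) search,
    per the ck README; check `ck --help` for your build's exact flags.
    """
    result = subprocess.run(
        ["ck", "--sem", query, path],
        capture_output=True,
        text=True,
        check=True,  # raise if ck exits nonzero
    )
    return result.stdout

print(ck_search("where is the retry logic for HTTP requests?", "./src"))
```

llxprt can of course run the same command directly through its Bash tool; the wrapper only shows the shape of the call.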
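Re 3: OpenRouter exposes an OpenAI-compatible endpoint, so the stock OpenAI SDK works against it. A minimal sketch; the model slugs are my assumption of the current IDs, so check https://openrouter.ai/models.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible API
    api_key="sk-or-...",                      # your OpenRouter key
)

# Model slugs below are assumptions; verify the exact IDs on openrouter.ai.
resp = client.chat.completions.create(
    model="openai/gpt-oss-120b",          # coding model
    # model="moonshotai/kimi-k2-0905",    # e.g. finding a function
    # model="z-ai/glm-4.5",               # e.g. test-output summaries
    messages=[{"role": "user", "content": "Summarize this test output: ..."}],
)
print(resp.choices[0].message.content)
```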
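Re 4: a minimal sketch of what "writing my own MCP tools" looks like with the official Python SDK (`pip install mcp`). The TODO collector is a made-up example tool, not one of my actual tools.

```python
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("repo-tools")

@mcp.tool()
def collect_todos(path: str = ".") -> str:
    """Gather TODO comments from Python files as structured context."""
    hits = []
    for f in Path(path).rglob("*.py"):
        for no, line in enumerate(f.read_text(errors="ignore").splitlines(), 1):
            if "TODO" in line:
                hits.append(f"{f}:{no}: {line.strip()}")
    return "\n".join(hits) or "no TODOs found"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```

Both llxprt and AnythingLLM can then attach to the server through their MCP configuration.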
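Re 6: pointing codex at OpenRouter goes through a custom model provider in ~/.codex/config.toml. This is a sketch from memory of the codex docs; treat every key as an assumption and verify against the README linked above.

```toml
# ~/.codex/config.toml (keys per my reading of the codex docs; verify there)
model_provider = "openrouter"
model = "openai/gpt-oss-120b"

[model_providers.openrouter]
name = "OpenRouter"
base_url = "https://openrouter.ai/api/v1"
env_key = "OPENROUTER_API_KEY"   # codex reads the key from this env var
wire_api = "chat"                # OpenRouter speaks the chat completions API
```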