2025-06-07 17:54:14

ChipTuner on Nostr:

I keep seeing people #vibing projects talking about running out of tokens, and then I realized some of my friends have just been building their apps through the online chat interfaces with Anthropic or OpenAI directly.

I have some suggestions. Not everyone likes the rate limits or bleeding-edge quality of GitHub Copilot, but it's $100/year for "unlimited" usage, plus access to something like 10 models in chat, with about 5-6 models available in agent mode, where it basically writes code in your IDE for you. I've hit some rate limits when letting 3 Claude 4 agents cook in the background, just rabidly clicking continue while testing this. That was only for a single model; I was able to switch to Gemini and continue for a while longer. In my regular daily usage, I never run into limits.

I've heard similar results with Cursor, which seems to be more on the bleeding edge, but you have to use their IDE, which is not for me. I also have no idea how their payment system is structured.

If you want to keep using your token-based LLMs, you can use Continue.dev or Cline linked up to most of the big providers you already pay for, right in your IDE. VS Code is the best supported. A rough sketch of the kind of API call these extensions wrap is below.
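
To make the "token-based" part concrete, here's a minimal sketch of the pay-per-token chat API these extensions sit on top of, using the official openai Python package. The model name and environment variable are placeholders, not anything from the extensions themselves; swap in whatever your provider actually gives you.

```python
# Rough sketch of the pay-per-token chat API that tools like Continue/Cline wrap.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; the model name below is a placeholder.
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: use whatever model your plan includes
    messages=[
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Write a function that reverses a string."},
    ],
)

print(response.choices[0].message.content)
# Every request here bills per token, which is exactly how people "run out of tokens".
```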

Finally, you can use Cline or Continue with your own LLM server like Ollama, or Ollama + Open WebUI (OWUI), and they even recommend models to use, but you'll need some serious hardware to get anywhere near the quality of the paid LLMs. My hardware is a little too dated to really use it full time. I love the privacy, but it's not practical for me yet. Once LLM prices go up relative to hardware, I may invest. These two also have far more tuning options than Copilot and the others IMO, they just lack the intelligence.
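
Going local barely changes the client side: Ollama exposes an OpenAI-compatible endpoint on localhost, so the same kind of call from the sketch above can just be pointed at it. The port is Ollama's default and the model name is a placeholder for whatever you've actually pulled.

```python
# Same style of client as above, pointed at a local Ollama server instead of a paid API.
# Assumes Ollama is running on its default port (11434) and a model has been pulled,
# e.g. with `ollama pull llama3.1` -- swap in whichever model you actually use.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # Ollama ignores the key, but the client requires something
)

response = client.chat.completions.create(
    model="llama3.1",  # placeholder: any locally pulled model
    messages=[{"role": "user", "content": "Write a function that reverses a string."}],
)

print(response.choices[0].message.content)
# No per-token billing, but output quality depends entirely on your hardware.
```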

Author Public Key
npub1qdjn8j4gwgmkj3k5un775nq6q3q7mguv5tvajstmkdsqdja2havq03fqm7