JetBrains Plugin, Faster Autocomplete, and Provider Updates
We're expanding Kilo Code to more developer platforms.
Before diving into our usual updates, we wanted to share some exciting news about an alpha release:
Kilo Code for JetBrains - Alpha Program
We started Kilo Code with a simple promise: build a powerful AI tool that actually respects developers. Here's what that means:
Fully Open Source: No black boxes. Our code is your code.
Your Data Stays Yours: We never train on your private code. Ever.
Transparent Pricing: No commission on API usage. You pay exactly what the model providers charge.
Now we're bringing that same philosophy—plus our most powerful features—to the JetBrains ecosystem. This isn't just a product test; it's a collaboration. Your feedback will directly shape what we build.
Ready to join us? Sign up for the alpha in #alpha-jetbrains on our Discord!
And now, back to our regular extension updates.
Our extension updates include a performance improvement for our autocomplete (an experimental feature that's still in beta), a few provider updates, and features merged from Roo Code v3.25.21, v3.25.22, and v3.25.23.
Autocomplete Performance Improvements
We've improved the performance of our (beta) Inline Assist by making it parse streamed chunks individually (thanks @catrielmuller!).
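If you're curious what per-chunk parsing looks like, here's a minimal illustrative sketch. This is not the actual Kilo Code implementation; the `ChunkParser` class and `onToken` callback are hypothetical names we're using for the example. The idea is to do work proportional to each new chunk, carrying over any partial line, rather than re-scanning the full accumulated response on every update:

```typescript
// Hypothetical sketch of per-chunk stream parsing (not Kilo Code's code).
class ChunkParser {
  private carry = ""; // holds a partial line split across chunk boundaries

  constructor(private onToken: (token: string) => void) {}

  // Parse only the newly arrived chunk; keep any incomplete trailing
  // line for the next call instead of re-parsing everything so far.
  push(chunk: string): void {
    const lines = (this.carry + chunk).split("\n");
    this.carry = lines.pop() ?? ""; // the last piece may be incomplete
    for (const line of lines) {
      if (line.trim().length > 0) this.onToken(line);
    }
  }

  // Flush whatever is left once the stream ends.
  end(): void {
    if (this.carry.trim().length > 0) this.onToken(this.carry);
    this.carry = "";
  }
}

// Usage: feed chunks as they arrive from the model stream.
// Output: "token: hello", then "token: world".
const parser = new ChunkParser((t) => console.log("token:", t));
parser.push("hello\nwor");
parser.push("ld\n");
parser.end();
```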
You can enable our experimental autocomplete by going to Settings → Experimental → Inline Assist. After you do this, you'll see a new 'Inline Assist' tab in Settings:
You can now filter by ‘installed’ modes in the Kilo Code marketplace
To do this, just visit the Marketplace section and click on ‘All items’:
Provider updates
We've added a few updates and fixes to the providers we support:
We added dynamic model fetching and prompt caching to the DeepInfra provider (thanks @Thachnh!); see the sketch after this list for what dynamic fetching looks like
We removed the forced override for the context limit of the Ollama API (thanks @mcowger!)
We added the ability to add a custom OAuth credential path for the Qwen-Code provider (thanks @nitinprajwal!)
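To give a feel for what dynamic model fetching means in practice, here's a rough sketch of the DeepInfra case. We're assuming DeepInfra's OpenAI-compatible models endpoint here; the URL and response shape below are our illustration of the pattern, not code lifted from the provider:

```typescript
// Illustrative sketch: fetch the model list at runtime instead of
// shipping a hard-coded list that goes stale. The endpoint URL and
// response shape are assumptions based on OpenAI-compatible APIs.
async function fetchDeepInfraModels(apiKey: string): Promise<string[]> {
  const res = await fetch("https://api.deepinfra.com/v1/openai/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    throw new Error(`Model list request failed with status ${res.status}`);
  }
  // OpenAI-compatible APIs return { data: [{ id: "model-name" }, ...] }.
  const body = (await res.json()) as { data: { id: string }[] };
  return body.data.map((m) => m.id);
}
```

The benefit is that newly released models show up in the provider's model picker without waiting for an extension update.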
In addition, we merged these provider updates from Roo Code v3.25.23:
Added FeatherlessAI as a provider (thanks @DarinVerheijke)
Updated the DeepSeek models context window to 128k (thanks @JuanPerezReal!)
Added DeepSeek v3.1 to the Chutes provider (thanks @dmarkey!)
Added prompt caching support for Kimi K2 on Groq (thanks @daniel-lxs and @benank!)
Ensured subtask results are provided to GPT-5 when using the OpenAI Responses API
Improved context window error handling for OpenAI and other providers (see the sketch after this list)
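For context, "context window error handling" means recognizing when a request failed because the conversation outgrew the model's context window, and saying so clearly instead of surfacing a raw API error. Here's a hedged sketch of the pattern; the error-code check mirrors the "context_length_exceeded" code that OpenAI-compatible APIs commonly return, and the rest is illustrative rather than Kilo Code's actual logic:

```typescript
// Illustrative sketch of context-window error detection (not Kilo Code's code).
interface ProviderError {
  code?: string;
  message?: string;
}

function isContextWindowError(err: ProviderError): boolean {
  return (
    err.code === "context_length_exceeded" ||
    /maximum context length/i.test(err.message ?? "")
  );
}

function friendlyMessage(err: ProviderError): string {
  return isContextWindowError(err)
    ? "This conversation no longer fits in the model's context window. Try condensing the context or starting a new task."
    : err.message ?? "Unknown provider error";
}
```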
Join Our Discord to Stay in the Loop
To test the alpha version of our JetBrains plugin, and to be among the first to hear about new updates, join our Discord server.