It's pretty surprising that they're willing to charge a flat rate rather than by token, but great news for users. It's inevitable that you get annoyed at AI when it consumes tokens and generates a bad answer, or starts reading files that aren't fully relevant. The flat rate takes away that bad taste. The business modelling behind it must be quite intense; I hope this doesn't blow up in JetBrains' face if Junie's usage patterns change over time.
JetBrains are in a great position to do this though, perhaps the best position. Whereas a tool like Claude Code or Aider can give the LLM grep and little else [1], Junie can give the LLM a kind of textual API to the IDE's own static analysis database. If Claude/GPT wants to understand what a function does and how it's used, it could issue a tool call that brings up nicely rendered API docs and nothing else, or a tool call to navigate into it and read just the function body, and so on. And Junie can use the IDE to check whether the code complies with the language rules more or less instantly, without needing a full compile to pick up on hallucinated APIs or syntax errors.
So much potential with this kind of integration; all that stuff is barely the start.
[1] Aider attempts to build a "repo map" using a PageRank over symbols extracted by tree-sitter, but it never worked well for me.
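To make the idea above concrete, here is a purely hypothetical sketch of what such a textual API over the IDE's static analysis index could look like. None of these functions exist in any JetBrains product; they are illustrative stubs, written in Python only for readability.

```python
# Hypothetical tool surface an agent could expose to the LLM, backed by the
# IDE's own index instead of grep. Everything here is illustrative, not a real API.

def get_rendered_docs(symbol: str) -> str:
    """Return only the rendered API docs for a symbol (like Quick Documentation)."""
    return f"(docs for {symbol}, nothing else)"

def get_function_body(qualified_name: str) -> str:
    """Navigate to a function and return just its body, not the whole file."""
    return f"(source of {qualified_name} only)"

def find_usages(symbol: str) -> list[str]:
    """List call sites the IDE already knows about from its index."""
    return [f"(usage of {symbol} at some file:line)"]

def check_code(snippet: str) -> list[str]:
    """Run incremental analysis: unresolved references, syntax errors, etc."""
    return []  # an empty list would mean no hallucinated APIs or syntax errors

if __name__ == "__main__":
    print(get_rendered_docs("json.dumps"))
    # A real implementation would flag 'jsn' as an unresolved reference;
    # this stub always reports a clean result.
    print(check_code("print(jsn.dumps({}))"))
```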
>The business modelling behind it must be quite intense, I hope this doesn't blow up in JetBrains' face
Historically... this tends to work out. Reminds me of Gmail initially allowing massive inbox. YouTube doing free hosting. All the various untethered LAMP hosting...
If necessary they'll add an anti-abuse policy or whatnot to mitigate the heavy users.
The sophisticated modeling is basically "just get going" with a guesstimate and adjust if needed.
I doubt that pricing structure will sink any ships. It's going to be about utility.
> Historically... this tends to work out. Reminds me of Gmail initially allowing massive inbox. YouTube doing free hosting. All the various untethered LAMP hosting...
One difference I see: storage capacity and compute performance aren't increasing like they did in the past, so companies can't rely on these costs dramatically dropping in the future to offset bleeding cash initially to gain market share.
The cost of inference[0] for the same quality has been dropping by nearly 10x year over year. I’m not sure when that trend will slow down, but there’s still been a lot of low-hanging fruit around algorithmic efficiency.
[0] https://www.reddit.com/r/LocalLLaMA/comments/1gpr2p4/llms_co...
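As a rough back-of-the-envelope illustration of what that trend would imply if it held (the starting price below is a made-up placeholder, and the 10x/year figure is just the estimate above, not a guarantee):

```python
# Project per-million-token cost under an assumed 10x/year decline in inference
# cost at constant quality. The starting price is a placeholder, not a quote
# from any provider.
cost_per_million_tokens = 10.00  # hypothetical $ today
for year in range(4):
    print(f"year {year}: ${cost_per_million_tokens:,.4f} per million tokens")
    cost_per_million_tokens /= 10  # assume the trend continues
```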
> One difference I see: storage capacity and compute performance aren't increasing like they had in the past
Companies stopped increasing free storage tiers ~10 years ago or even more, while the cost of storage has dropped significantly in that time period.
Both hard drives and SSDs.
Sure. I agree that usage/demand is likely to outgrow compute performance.
But.. a lot of the other dynamics that make this game winnable still stand. Maybe they will need to go with a meter eventually or some other pricing structure... but it will work out.
The tokens are not unlimited though; the Pro and Ultimate plans seem to differ essentially in the amount of tokens you get. [1]
Anyway I think that for the average developer (i.e. not enterprise customers) this is easier to reason about, so I am fine with that.
[1] https://www.jetbrains.com/ai-ides/buy/?section=personal&bill...
It's odd that they don't seem to let you pay for overages, it looks like you are just shit out of luck past a certain point even on the most expensive plan.
Also, "no credit system" for ReSharper and Android Studio. I am wondering if for the latter Google is footing the bill...
I can't imagine it is healthy to consume anywhere near the limit.
Were you able to figure out what constitutes a "credit"? I initially assumed they were following Cursor's (early) model of 1 prompt = 1 credit, with the tokens used to fulfill the prompt not costing anything. If that's how they're doing it that still leaves a bad taste when you waste a credit on something that doesn't work, but it does remove the need to care about how the tool gets there.
It is by token - you get a quota of tokens that, once used up, disables your cloud integration.
Where are you reading this? I see there's a concept of "credits" but don't see any explanation of what a credit is.
It is token based AFAIK, but we will provide some more information around credits/quota/tokens for the different price tiers in the coming days/weeks.
Token based is a pretty strong downside for me that would be enough to get me to use other tools like Cursor (even though I love JetBrains IDEs). I get actively stressed watching an automated system burn through my money on its own recognizance. If I'm going to have quotas or usage-based pricing I want the metrics used to be things that I have direct control over, not things that the service provider controls.
TANSTAAFL. With flat pricing, companies have incentives to downgrade you to cheaper models - which currently strongly correlates with worse quality of output - or, more likely, to trim context significantly and hope you won't notice.
But yes, there should absolutely be ways to track usage, ideally before the prompt is even submitted for processing (maybe for >N tokens per query, where N can be specified in settings but has a reasonable default).
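A minimal sketch of the kind of pre-flight check described above, assuming the client can count tokens locally before submitting. tiktoken is used here only as a stand-in tokenizer; an IDE would use whatever matches its backing model, and the threshold N would come from settings.

```python
# Warn before submitting any prompt that exceeds a user-configured token budget.
import tiktoken

TOKEN_WARNING_THRESHOLD = 4_000  # the "N" from the comment; user-configurable

def confirm_before_submit(prompt: str) -> bool:
    enc = tiktoken.get_encoding("cl100k_base")
    n_tokens = len(enc.encode(prompt))
    if n_tokens > TOKEN_WARNING_THRESHOLD:
        answer = input(f"Prompt is ~{n_tokens} tokens. Submit anyway? [y/N] ")
        return answer.strip().lower() == "y"
    return True

if __name__ == "__main__":
    print(confirm_before_submit("Refactor this function to use a dataclass. " * 200))
```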
Junie has been amazing for me; it completely replaced my payments for Claude Code and Cursor. And it was free (until today). It's the least aggressive agent I've used, no complete re-writes or even close, and it's able to achieve about 95% of what I ask of it.
The only downside - which might be fixed in the newest release - is that it completely forgets context between messages, even in the same message window. But that feels like both cost cutting and easy to fix.
My biggest issues with Claude Code and Cursor, for what it's worth:
Claude Code: Price, plus when it doesn't get things right, within a few messages it ALWAYS just creates a new file entry point with some demo console.logs that do nothing but show messages, and claims to have succeeded in what I asked.
Cursor: Will break some functionality in my web application while fixing or creating others, about 80% of the time
Cursor results are going to depend heavily on the model; Gemini 2.5 Pro exp seems the overall strongest. You're probably defaulting to 3.7 Sonnet, which is completely unusable; it was good at first, but I am convinced Anthropic "updated" (degraded) it behind the scenes to lower their inference costs. OpenAI did the same with GPT-4o for a bit a while back before making it better again.
3.7 also seems to have converged more on the hybrid reddit user/NPR listener/HR lady tone and manner of speaking that makes me want to punch a wall. Genuinely, people could probably increase LLM usage just by fixing this problem and banning r*ddit from the training set.
I've seen evidence that suggests this is false, and that it's more likely that cursor degraded the experience in their context window to save on costs.
The date stamped models haven't had any evidence of ever changing or degrading, to my knowledge. Aider did a test for this as well.
Is there a way to use Claude within the JetBrains IDEs? I have a JetBrains IDE license, and a Claude subscription, but I couldn't find an integration. To use Claude and have it integrated I need to subscribe to JetBrains AI instead, but then I don't have Claude in the browser anymore.
There are a bunch of wrapper plugins where you can provide your own API key, like proxyAI or Supermaven. Look in the plugins marketplace.
The JetBrains AI Assistant plugin lets you choose which model to use.
Yes, but you need to pay for the license from them. I guess I'll switch; it's just silly that there isn't an official integration.
Hi, from JetBrains here :)
We made the choice to integrate directly with third-party providers for a few reasons, but the major one is to do with how the providers can use our users' data. We have very restrictive agreements which don't allow the providers to use the data for training or any purpose other than validating the requests.
You can add a .noai file to the root of your project to disable AI support.
https://www.jetbrains.com/help/idea/disable-ai-assistant.htm...
Errr, that 404s now. I hope the functionality still works.
Here's the cached version:
https://web.archive.org/web/20250329143832/https://www.jetbr...
> Restrict usage of AI Assistant for a project
> Create an empty file named .noai in the root directory of the project.
> When this file is present, all AI Assistant features are fully disabled for the project. Even if this project is opened in another IDE, the AI Assistant features will not be available.
Working link: https://www.jetbrains.com/help/ai-assistant/disable-ai-assis...
The AI feature is also opt-in, so you have to take steps to enable it.
The .noai file is helpful when you have specific projects that need to be excluded from AI tools.
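For example, here is a small helper a teacher or team lead might run to drop the empty .noai file into every project under a directory, relying on the documented behavior quoted above. The course path is hypothetical.

```python
# Create an empty .noai file in each project under a course directory so that
# AI Assistant features are disabled when those projects are opened in the IDE.
from pathlib import Path

course_root = Path("~/cs101/student-projects").expanduser()  # hypothetical path

for project in sorted(course_root.iterdir()):
    if project.is_dir():
        (project / ".noai").touch()  # an empty file is all that's required
        print(f"AI Assistant disabled for {project.name}")
```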
I teach and absolutely must be able to disable AI for my student projects, otherwise the students learn very little and are led down false paths constantly.
I guarantee your students are going to learn how to delete the .noai file
Of course. As I tell them: I am a good teacher and my job is to teach you how to program computers well. It is possible to cheat and get a good grade in this class. If I catch you, I will report you to the university. But I'm not going to work hard to catch people, that's not my job: my job is to teach. You should learn.
My concern isn't for students who are cheaters; there isn't much I can do about them. Rather, it is for students who don't know any better and start having AI auto-completion thrown at them.
How many students rely on AI? Is that number really going up?
My son is 14 and knows about AI (I'm a researcher in the space, so I've been mentioning advances in it for years). He seems to code with some of his peers, and it seems normal to me (Python scripts, HTML, JS type stuff written the old-fashioned way [written by hand or copy-pasting into a notepad.exe equivalent :P]). I try to be super honest with him, and I tell him AI is incredible, but we also joke about it and I explain the incredible drawbacks of vibe coding, especially while learning.
I wonder overall, though, how LLMs are going to affect CS education. Will students avoid using the tools, or will they be accepted? CS homework projects were always easier to cheat on than, say, fine art, given how easily one can copy and paste code, but AI tools make trivial work of many homework exercises where it would, in theory, be harder to implicate someone.
Yes, it is going up. In my classes the problem is that AI is just there in VSCode or IntelliJ, and they start using it almost by accident. That is what I want to avoid.
Tons of cheaters existed pre-AI too. They'll always exist, whether they use AI or just share copy-pasted code from students who attended the semester before. Probably half of my graduating class couldn't program because they cheated their way through.
Yep.
Sin is its own punishment.
like phones, they will make you take the AI.. it is a collar around your neck!
... also posted today -- Microsoft Copilot can now 'see' what's on your screen in Edge
What? iPhones let you disable all the AI stuff with a single toggle.
I used JetBrains AI for about a year. It was pretty good at basically helping me scaffold things; it felt like instructing a junior developer, which isn't bad, and it saved me time on side projects.
Some observations from the pricing page:
https://www.jetbrains.com/ai-ides/buy/?section=personal&bill...
* They say free for all IDEs except the community version of PyCharm and IntelliJ.
* Looks like if you want to use your own LLM you need to be an enterprise user? None of the lower tiers allow for it. I find this really, really dumb: if I'm paying for compute, why can't I also run my own LLM? Am I misunderstanding this?
* ReSharper and Android Studio don't fall under the credit system? I really would like to know what that means.
From the blog post, it seems to say that the free tier of the AI assistant and Junie both allow using local models. I haven't tried it myself, though.
Ah thank you, I'll have to experiment with this when I get home. I mostly use this stuff for personal projects.
The free tier now supports connecting to local AI models running on LM Studio or Ollama, but it still doesn't actually function without an internet connection.
If you block access to the internet or to their AI API servers [1], it refuses to start a new chat invocation. If you block access halfway through a conversation, the conversation continues just fine, so there's no technical barrier to them actually running offline; they just don't allow it.
Their settings page also says that they can't even guarantee that they implemented the offline toggle properly, a flag that should be the easiest thing in the world to enforce:
>Prevents most remote calls, prioritizing local models. Despite these safeguards, rare instances of cloud usage may still occur.
So you can't even block access to the very servers that they say their faulty offline toggle would leak data to.
[1] https://www.jetbrains.com/help/ai-assistant/disable-ai-assis...
I disconnect from the internet sometimes and noticed this morning that my previous night's chat was invisible. I could only see it once I connected again.
This puts me off a bit from finally trying local models. Anyone know what kind of data is collected in those rare instances of cloud usage?
Hi, here are our data collection policies for the cloud-based LLMs. We've worked out agreements that heavily restrict how third party companies can use your data, including not storing it or using it for model training: https://www.jetbrains.com/help/ai/data-collection-and-use-po...
Summary:
- JetBrains AI tools (AI Assistant and Junie coding agent) are now available under a single subscription starting with version 2025.1.
- There are three tiers: AI Free, AI Pro, and AI Ultimate.
- The free tier offers unlimited code completion, local AI models, and credit-based access to cloud AI assistance and Junie.
- AI Assistant supports Claude 3.7 Sonnet and Google Gemini 2.5 Pro.
- Junie is powered by Anthropic's Claude and OpenAI models.
It's not clear whether AI Free will be available in Community Edition IDEs or not.
Update: From the AI Plans & Pricing page, there's a tooltip that says: "The free tier is not available in the Community Editions of PyCharm and IntelliJ IDEA."
Under "availability in products", they specifically say that the Free tier is not available on Community Edition.
https://www.jetbrains.com/ai-ides/buy/
Good catch. I updated my comment.
"JetBrains Junie is now publicly available to all our IDE users."
Only for IntelliJ, WebStorm and PyCharm, so far.
I've been paying for the basic AI Assistant for months now, and am glad to see a bit more clarity around this.
> The AI Free tier gives you unlimited code completion and access to local AI models
Looking forward to giving this a try.
Work provides me with tooling and requires that I stick to approved AI tools, and my hobby-coding alone is just not important or regular enough to justify a paid subscription.
It's been a little annoying that I can have Ollama running locally, enable Ollama and configure it in my IDE, but still (seemingly?) not be able to make use of it without activating a paid AI Assistant license.
It makes perfect sense that cloud models would require payment, and that JetBrains would make some margin on that.
But I'm already paying for an IDE whose headline features have recently been so AI-focused, and if I'm also providing the compute, then I should really be able to use those features.
You are getting the AI Free tier with any paid license for a JetBrains IDE and, as you stated, it should work with local AI models. I looked through our internal documentation, and I couldn't find anything that stated otherwise. If you run into issues, please open a YouTrack ticket and we can have a better discussion/look at what's going on, but with everything I see, I'd expect it to work the way you think.
The AI Assistant license is now included even in the Free tier, so you should be able to use Ollama without any problem after the 2025.1 release.
Oh hey, they've worked some more on the features: https://www.jetbrains.com/ai-assistant/
And it's available with the all products pack: https://www.jetbrains.com/ai-ides/buy/?section=personal&bill...
I think I'll be more than happy to try it out then cause I have that pack and compare it against the Github Copilot plugin, or the likes of Continue.dev (which was pretty good in VS Code, but kind of buggy in JetBrains IDEs).
We've come a long way with AI Assistant in the last few months. Lots more planned.
That's awesome to hear, best of luck!
> On top of all that, the All Products Pack and dotUltimate subscriptions will now come with AI Pro included.
Well, colour me surprised. I've used JetBrains as an example of a pretty decent company in the past (e.g. the way they remind you your subscription is up for renewal a couple of months in advance, so you have all the time in the world to unsub if you like), but I wasn't expecting them to just add this to the existing subscriptions.
you can read this as no-one was buying the addon
they'll eat the cost for a year or two, then bump up the subscription price to pay for it
I honestly doubt that’s the plan.
They’ve bumped up the price once in all the years I’ve been subscribed to the All Tools pack. When they did that, they gave existing subscribers the option to buy two years’ subscription at the old price.
Yeah, JetBrains has so far been incredibly good to their users, and it sure seems like they know that that has been their primary competitive advantage in a landscape dominated by free editors. Hopefully that calculation stays the same in the AI world.
I did ;) Unsubscribed every second month and resubscribed to see if it got better. Now included, nice!
How does JetBrains AI compare to Cursor?
I think they're getting there but missing big features like a high quality "Apply" workflow and next-edit predictions.
I'm working with two of my friends to fill the missing pieces as a JetBrains plugin: https://docs.sweep.dev/
That's cool that you're looking at those things. I hope we've made progress on "Apply" (and we're doing more). And as a heads-up, as you can imagine, we're looking at NEP.
Separate note...
I don't know when "AI Pro" became bundled with the "all products pack" but I don't think it was there last year. I've been paying for it separately for a bit now, but a couple days ago I noticed 2 licenses showing up in the 'AI Assistant' tab. Was able to just cancel the monthly one, and use the one from 'all products pack'. May look to upgrade again when the Junie stuff becomes available in the other IDEs.
I got confused by the Go in the title; "But their Go IDE is called GoLand?" They should not have capitalized it.
Anyway, has anyone compared Junie with competing products?
I understand the confusion, as I was confused myself for the same reason. I also attributed that to my English language level.
But I think they just used title-case[1]
[1] https://en.m.wikipedia.org/wiki/Title_case
Short, minor words are usually not capitalized, as stated in the wiki article; esp. if they would cause confusion!
As far as I know, that rule doesn't apply to verbs. Do you have a style guide that indicates otherwise? I know it looks like I'm trying to correct someone's grammar online, but I'm legitimately trying to learn. Your comment made me curious so I searched for a bit but couldn't come up with anything.
I'd like to know more about what is powering Junie under the hood.
> According to SWEBench Verified, a curated benchmark of 500 developer tasks, Junie can solve 53.6% of tasks on a single run
That's nice I guess, but why isn't this an entry on the actual https://www.swebench.com/#verified website? (Also: 53% isn't that impressive these days; Claude Sonnet can reach 63%.)
This quote claims it is 'powered by Claude'
“JetBrains and Anthropic share a commitment to transforming how developers work. Developers rely on Claude’s state-of-the-art performance in solving complex, real-world coding tasks. We’re excited to see how Junie, powered by Claude, will help the global developer community create innovative things within the trusted JetBrains IDEs that customers love.” Mike Krieger, Chief Product Officer, Anthropic
Kind of confusingly, in today's release of Rider, 'Junie' is mentioned nowhere I can find. The AI Assistant tab, which was already available (paid), just has options to pick from popular models (4o, o1, o3, Gemini, Claude) or LM Studio / Ollama.
In my experience working on JB extensions, Rider is the most different of the IDEs. Most people think of just IntelliJ, and that's the same code base as e.g. PyCharm. But Rider seems substantially different.
The biggest difference between the language-specific IDEs, in my experience, is how they expose the project structure, with GoLand, PyCharm, etc. providing a much more directory-centric workflow, while Rider by nature has to work around .sln and .*proj files.
But Rider is uniquely weird in its use of ReSharper for code analysis.
Junie isn't available for all of the IDEs yet, so it's not yet available in Rider. As of today it's available for PyCharm, IntelliJ, WebStorm and GoLand: https://www.jetbrains.com/junie/
I have been playing with it for a couple of hours. I have the All Products Pack, which includes the AI Pro tier for free. You can hook up local models easily as well using Ollama or LM Studio. This seems better than both Continue.dev and Copilot. I will probably be cancelling my Copilot subscription before the current year is up.
I set 'codebase off' but it keeps adding random files as attachments!?! This (?) also makes requests VERY slow!
Early days! Code changes break by including meta information, like:
> The provided snippet is a modification of the original code, with some changes …
> Here's the complete code file after applying these changes: ```python
IMO I much prefer to have my dev tools be completely FOSS so I'm not building my career on skills that are tied to a proprietary software provider.
JetBrains has outlasted most of the fully open source IDEs out there. Open source IDEs tend to suck and be abandoned after a while because there's no incentive to keep maintaining it, and it's expensive. See: NetBeans, Eclipse.
AFAIK the only IDEs in wide use today are open-core, like JetBrains' IDEs.
Me too, though I think it's a personal preference rather than an economically rational decision.
> rather than an economically rational decision
Depends on the time horizon
Does JetBrains have an equivalent of Next Edit Suggestions (VSCode) / Edit Prediction (Zed) / "Tab, tab, tab" (Cursor)?
(I'm from JetBrains.) Let's just say, watch this space.
Not yet - but it's fairly straightforward to implement the UI once you get the AI down. I've been working on an MVP here: https://docs.sweep.dev/autocomplete#next-edit-prediction-in-...
No. Junie is decent as an agent, despite being slow (I'd put it between Cursor and Windsurf/Copilot on quality), but the autocomplete is anemic. They have to improve their ability to generate suggestions at all before they can start recommending next edits.
I currently have the $100/yr license for IntelliJ ai integration. I assume nothing changes? (Very happy with it)
It’s a little confusing. I have the all-products pack but a month ago paid $100 for the ai package which is supposed to be included now in the all-products pack. So I should get a $100 credit or bumped up to the next level ai package.
I’ll send a support request out tomorrow.
Any MCP? … Yes and it works great!
(except that it's currently pretty slow probably because servers are overloaded)
Except for Junie. MCP not available for/in Junie?
Correct, MCP is not used in Junie yet, but it's something we are looking into. Comments like this help us better gauge general interest, so highly appreciate this!
I just downloaded IntelliJ community edition yesterday to try Kotlin. I wonder if this applies to it or only to premium version.
Community editions don't get the free tier, only paid editions.
I see. Thanks!
Dumb question: how come in PyCharm, when I go to update, it only offers me v2024.3.5 and not v2025.1?
Where are you updating from? Toolbox, or from the website? I'm seeing 2025.1 in both, so you might have some caching if you're checking the website.
Help > Check for Updates in PyCharm. I got Toolbox, so problem worked around. Thanks.
You're very welcome! Enjoy the new release, there's quite a few fun new features.
Does IntelliJ provide coding models that run offline? Or third party provider should be used?
I don't think they provide any directly, but they let you connect to a local Ollama or LM Studio instance.
https://www.jetbrains.com/ai-ides/buy/
The 'ai models' section indicates this.
You can use Ollama or LM Studio locally. There is also code completion running on local models which is built into the IDE and comes bundled for free with IntelliJ.
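If you go the local route, a quick sanity check like the sketch below (assuming Ollama's default port 11434 and its /api/tags endpoint, which lists the models you have pulled) can confirm the local server is reachable before pointing the IDE's local-model setting at it:

```python
# Check that a local Ollama instance is up and list the available models.
import json
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=3) as resp:
        models = json.load(resp).get("models", [])
    print("Ollama is up. Available models:", [m["name"] for m in models])
except OSError as exc:
    print("Ollama doesn't seem to be running locally:", exc)
```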
Great --now it’s going to use even more memory.
Since the Ukraine war, JetBrains products have gone way downhill. Their best engineers were in Ukraine. The IDEs have become buggy and slow; sad to see this once very polished software lose relevance.
Most of JetBrains' engineers were Russians located in St. Petersburg. Since the war started, JetBrains has claimed they've relocated all of their workforce out of Russia.
There were no JetBrains offices in Ukraine.
That's not true. At least not based on their R&D locations back then. Most of those were in Russia. They quickly - and rightfully so - closed these locations down when the war started and moved their activities elsewhere.
Wah? From my perspective it's only continued to be great, but I only look at IntelliJ and Rider.
Putin is killing one of the best software products out there?!
I was hoping Junie is very slow only because it's overwhelmed by the first wave of users trying it out.
I'd be a lot more excited about this if I weren't paying for a crappy VSCode clone.
Visual Studio Code was first released in 2015; IntelliJ (the original JetBrains IDE) was first released in 2001. Even Atom -- the editor that Microsoft forked to make VSCode -- had its first public release in 2014.
It's safe to say that JetBrains IDEs are something other than "crappy VSCode clones."
I think OP might be hinting at cursor.
I happily pay.
And I continue to use NetBeans, VS Code, Eclipse as needed (or desired).
e.g. Mike Lischke's (awesome) ANTLR4 plugin for VS Code has features not in Terence Parr's (et al.) IntelliJ extension (and vice versa).
Switching back and forth is nothing sauce.
Calling JetBrains IDEs "VS Code clones" is the most batshit insane thing I've read regarding editors and IDEs as a whole.
Happy I never jumped on the JetBrains bandwagon...
I was curious about Z̶i̶g̶ Zed, until they too started adding AI garbage. Sad.
Luckily, I can be fairly confident that my trusty neovim will never add AI garbage unless I specifically go out of my way to install plug-ins.
EDIT: Zed, not Zig
Zig? You mean Zed right?
Sorry, yes, Zed
You don't _have_ to use the AI stuff, personally I've disabled all of it because my fan was spinning like crazy. Maybe in a year or two I'll try it again.
For now, I presume. I guess it's a matter of time until the AI features can no longer be disabled.
I like a good conspiracy, but based on what? JetBrains have no incentive to force that; they make money by providing flexible tools that people will pay for. And their IDEs are desktop apps, so you could always just... not upgrade. Unlike web or cloud-based "IDEs".
You're still stuck paying for the garbage regardless, and funding an organization that thinks this is an acceptable business direction.
I was already a satisfied paying customer. I don't need that new stuff but I understand they have to go where the market goes if they want to stay relevant vs competitors (Microsoft VSCode/Github/Copilot) in the eyes of prospective customers who judge products using comparative feature grids.
If you don't want to use an IDE or pay for your tools that's fine. You don't have to look for reasons to hate on it. No one cares what you don't use.
> I don't need that new stuff but I understand they have to go where the market goes if they want to stay relevant vs competitors
Looking at the people racing to jump off the cliff and saying "let's maybe consider not doing that" can be a competitive advantage, see https://procreate.com/ai
Thanks for the reminder! I was looking for a modern editor without AI stuff (I do like AI things, but sometimes you want an off switch). Didn't notice it had become open source. Nice!
Don’t know about neovim, but the profiler and debugger in IntelliJ are amazing. Couldn’t live without them.
Have you used Avante + MCPHub in Neovim? I would say that it's far from AI garbage.
That looks like exactly the kind of AI garbage that I'm happy isn't included in neovim!
Which Zig are you talking about?
just don’t install the plugin
It's automatically installed and bundled with the IDE. You can disable it, but to uninstall it fully you have to manually delete files from each and every IDE installation.
As mentioned in another comment, you can add a .noai file to the root of your project to disable AI support.
As to deleting files that ship with the app but that aren't used if you disable them... that feels awfully 1980's "gosh disk space is expensive" thinking.
That was a bug we had in an early version of AI Assistant, and it has been fixed. You can now disable and uninstall the plugin normally.