This week, TechCrunch reported that Claude Code users were hit with unexpectedly restrictive usage limits. As one frustrated developer put it, "Just be transparent. The lack of communication just causes people to lose confidence in them."
Sound familiar? It should.
Déjà Vu All Over Again
Just two weeks ago, we wrote about Cursor's pricing disaster, where their "unlimited" plan suddenly became 225 requests for many users. The timeline was predictable: announce generous limits, users get hooked, quietly tighten the screws, face backlash, then scramble to do damage control.
Here's the thing that bugs me: this isn't just about bad communication or growing pains. It's about a fundamental misalignment built into how AI coding tools make money.
The Incentive Problem
The misalignment is structural: these tools sell flat-rate subscriptions (typically $20/month) while relying on expensive language models that bill per token.
The tension:
Users want unlimited access to powerful models
Providers need to minimize AI costs to stay profitable
Result: Constant pressure to throttle usage
Even when the AI company owns the coding tool directly - like Anthropic with Claude Code - the unit economics don't magically disappear. Running frontier Claude models costs real money in compute, and when users pay a flat $20/month but consume hundreds of dollars' worth of inference, someone has to absorb that loss. The service either operates at a loss or finds ways to limit usage to sustainable levels.
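To make the gap concrete, here's a minimal back-of-the-envelope sketch in Python. The per-token prices and usage figures are illustrative assumptions, not any vendor's actual rates; the point is only that heavy daily use blows past a flat monthly fee.

```python
# Back-of-the-envelope unit economics for a flat-rate AI coding plan.
# All prices and usage figures below are illustrative assumptions,
# not actual vendor pricing.

FLAT_FEE_PER_MONTH = 20.00          # what the subscriber pays

# Assumed per-token API prices (dollars per million tokens).
INPUT_PRICE_PER_MTOK = 3.00
OUTPUT_PRICE_PER_MTOK = 15.00

# Assumed usage for one heavy user on a working day.
INPUT_TOKENS_PER_DAY = 2_000_000    # large codebases sent as context add up fast
OUTPUT_TOKENS_PER_DAY = 300_000
WORKING_DAYS_PER_MONTH = 22

daily_cost = (
    INPUT_TOKENS_PER_DAY / 1e6 * INPUT_PRICE_PER_MTOK
    + OUTPUT_TOKENS_PER_DAY / 1e6 * OUTPUT_PRICE_PER_MTOK
)
monthly_cost = daily_cost * WORKING_DAYS_PER_MONTH

print(f"Inference cost per day:   ${daily_cost:,.2f}")
print(f"Inference cost per month: ${monthly_cost:,.2f}")
print(f"Subscriber pays:          ${FLAT_FEE_PER_MONTH:,.2f}")
print(f"Provider's shortfall:     ${monthly_cost - FLAT_FEE_PER_MONTH:,.2f}")
```

Under these assumptions, one heavy user costs the provider roughly ten times what they pay each month, which is exactly the pressure that turns "unlimited" into rate limits.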
The Real Cost of "Unlimited"
The Cursor situation perfectly illustrates this dynamic. They initially promised 500 requests, then moved to "unlimited-with-rate-limits" (which is just marketing speak for "limited"), then quietly switched to per-token pricing when the economics didn't work.
But here's what that user journey actually looks like for developers:
Adoption: You integrate the tool into your workflow
Productivity: You become dependent on it
Throttling: Suddenly you hit limits or get downgraded
Frustration: Your workflow breaks, trust erodes
The TechCrunch article mentions that Claude Code's $200 Max plan users were making over $1,000 worth of API calls daily. While that sounds excessive, it highlights the core problem: there's no sustainable way to offer "unlimited" access to expensive AI models at consumer price points.
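Taking the reported figure at face value (the $1,000/day number is from the article; everything else here is arithmetic), the scale of the subsidy is easy to see:

```python
# Rough scale of the subsidy implied by the reported figure.
REPORTED_API_VALUE_PER_DAY = 1_000   # per the TechCrunch report
MAX_PLAN_PRICE_PER_MONTH = 200
DAYS_PER_MONTH = 30

monthly_value = REPORTED_API_VALUE_PER_DAY * DAYS_PER_MONTH
print(f"API value consumed per month: ${monthly_value:,}")                          # $30,000
print(f"Plan price:                   ${MAX_PLAN_PRICE_PER_MONTH:,}")               # $200
print(f"Ratio:                        {monthly_value / MAX_PLAN_PRICE_PER_MONTH:.0f}x")  # 150x
```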
Bottom Line: The industry needs honest pricing from day one, not bait-and-switch tactics that destroy developer trust.
Beyond Individual Failures
This isn't about picking on Claude Code or Cursor specifically; every AI coding tool faces the same economic reality.
The subscription model worked great for SaaS products where the marginal cost of serving another user was essentially zero. AI tools are different: they carry real, variable costs that scale with how intensively each user works.
What This Means for Developers
Reality Check: $20/month "unlimited" AI coding is too good to be true when serving a single heavy user can cost the provider many times that.
The Honest Pricing Models: Tools that charge for actual usage or let you bring your own API key.
Why it works: You pay for what you use, and providers aren't fighting your productivity to protect their margins.
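The appeal of the bring-your-own-key route is that the meter is visible to you on every request. Here's a rough sketch assuming the Anthropic Python SDK; the model name and per-token prices are placeholders, not current published rates.

```python
# Minimal bring-your-own-key sketch with per-request cost visibility.
# Assumes the Anthropic Python SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY environment variable. The model name and prices
# below are illustrative placeholders - check current pricing.
import anthropic

INPUT_PRICE_PER_MTOK = 3.00    # assumed $/1M input tokens
OUTPUT_PRICE_PER_MTOK = 15.00  # assumed $/1M output tokens

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",   # placeholder model name
    max_tokens=1024,
    messages=[{"role": "user", "content": "Refactor this function to remove duplication: ..."}],
)

usage = response.usage
cost = (
    usage.input_tokens / 1e6 * INPUT_PRICE_PER_MTOK
    + usage.output_tokens / 1e6 * OUTPUT_PRICE_PER_MTOK
)
print(f"{usage.input_tokens} in / {usage.output_tokens} out -> ${cost:.4f} for this request")
```

The design point isn't the specific SDK call; it's that with your own key every request has a visible price, so the provider has no incentive to quietly throttle you.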
For Serious Development Decisions
Usage Patterns: How much AI assistance does your team actually need daily?
Cost Transparency: Can you predict and budget for real usage costs?
Fallback Planning: What's your backup when you hit limits during crunch time?
Bottom Line: Choose tools with aligned incentives. If the provider wins when you use more (not less), you've found sustainable pricing.
The Path Forward
The industry needs usage-based pricing for AI tools. Yes, it's less predictable than flat fees. But it's honest about real costs and aligns incentives.
Until then: promises of unlimited access → inevitable restrictions → user backlash → eroded trust.
The only winners? Marketing teams putting "unlimited***" on pricing pages.
Reader Comments
I'm on the $200 Max plan, and Anthropic has changed the terms multiple times in the past few months without warning or notice.
This article is how I found out. So I tried writing to customer support via their chat. After I told the bot how upset I was with them, it asked if I wanted a human, then replied, "While our Support team is unable to manually reset or work around usage limits, you can learn about best practices here. If you’ve hit a message limit, you’ll need to wait until the reset time, or you can consider purchasing an upgraded plan (if applicable)." It has not responded to anything else since.
No one wants to be hoodwinked, let alone repeatedly. The co-dependent nature of using the Max plan with repeated false promises would be called an abusive relationship in any other situation.
I'm trying Kilo Monday.