Security Blindspots in Closed Source Forks
There may be hidden security costs to closed-source, AI-powered code editors
I've used Cursor quite a bit, and I've been pretty impressed by its AI-assisted coding features that purport to supercharge my workflow. I even wrote about trying "all" of the AI coding assistants out there (really 10 of them), and that led to me joining Kilo Code. So there could be some bias here, though I'm always okay with being accused of bias toward open source, to be honest. But recently, I stumbled across something concerning that I think every developer using closed-source versions of these tools should be aware of.
Here's the deal: Cursor is essentially a closed-source fork of Visual Studio Code with some AI magic sprinkled on top. And while that formula has created a pretty slick developer experience, it's introduced a serious security blind spot that's not getting enough attention.
Python Extension CVE Example
Let me paint the picture with a concrete example. In November 2024, a critical remote code execution vulnerability (CVE-2024-49050) was discovered in VS Code's Python extension. Microsoft quickly patched this in version 2024.20.0 of the extension. Great! Security system working as intended, right?
Not so fast. Months later (as of this writing in May 2025), Cursor users were still stuck on Python extension version 2024.13.0 or earlier, a version confirmed to be vulnerable to this high-severity CVE. Even more concerning, this wasn't a one-off issue. Users across GitHub issues and forums were reporting that multiple extensions in Cursor consistently lagged behind their VS Code counterparts.
This meant that while VS Code users were protected after applying the patch, Cursor users continued to be vulnerable to a known RCE vulnerability. The attack path was documented and public, yet users remained exposed.
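If you want to see where you stand, this particular check is easy to script. Here's a minimal sketch, assuming your fork exposes the standard VS Code-style CLI (Cursor's shell command, when installed, is `cursor`); the extension ID `ms-python.python` and the patched version number come straight from the advisory above:

```python
# Minimal sketch: flag a bundled Python extension that predates the
# CVE-2024-49050 fix. Assumes the editor exposes a VS Code-style CLI
# ("cursor" here; use "code" for stock VS Code).
import subprocess

PATCHED = (2024, 20, 0)  # first ms-python.python release with the fix

out = subprocess.run(
    ["cursor", "--list-extensions", "--show-versions"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if line.startswith("ms-python.python@"):
        version_str = line.split("@", 1)[1]
        installed = tuple(int(p) for p in version_str.split("."))
        verdict = "patched" if installed >= PATCHED else "VULNERABLE"
        print(f"ms-python.python {version_str}: {verdict}")
```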
When "Closed Source" Means "Security Lag"
This problem goes deeper than just one extension or CVE. It highlights a fundamental issue with closed-source forks of open-source projects: security updates don't automatically flow downstream.
In VS Code's open-source model, security fixes are transparent. The community can see when vulnerabilities are discovered, watch how they're fixed, and immediately benefit from patches. But when a company forks the codebase, closes the source, and then sells it as a product, that transparent security model breaks down.
Look, I get it: innovation sometimes requires proprietary additions to open-source projects. But when those additions create barriers to timely security updates, we have a problem. Especially when we're talking about development tools that, by their nature, have access to our source code (and, with MCP servers, a lot more) and often run with elevated permissions.
The "But It Has AI!" Distraction
Don't get me wrong - I'm all for innovation that helps developers be more productive. But when that innovation comes at the cost of security, we need to ask hard questions.
The irony is that many developers adopt tools like Cursor to stay on the cutting edge, only to unknowingly run extensions that are months behind on critical security patches. This creates a peculiar vulnerability window: attackers could specifically target users of these closed-source forks, knowing they're running vulnerable versions of popular extensions.
What makes this particularly dangerous is that once a vulnerability is patched in the open-source upstream, it effectively becomes a zero-day for any fork that hasn't applied the fix: the exploit details are public, but the patch isn't in place. It's the worst of both worlds from a security perspective.
Don't Let Good Get in the Way of Better
I'm reminded of something I wrote about previously: "Don't let good get in the way of better." The current situation with closed-source VS Code forks feels like a case where we've accepted "good" (AI features) at the expense of "better" (security).
Here's what I think needs to happen:
Transparency about extension versions - Closed-source forks should clearly communicate which extension versions they bundle and how they compare to upstream versions. (I'm not the first person to ask for this)
Priority for security patching - When security vulnerabilities are fixed upstream, these patches should be fast-tracked into the fork, even if other features lag.
Manual update options - Users should always have the ability to manually update extensions to newer versions, even if the fork vendor hasn't officially supported them yet.
Independent security audits - If a company is going to maintain a closed-source fork of open-source software, they should invest in regular security audits to catch potential issues that the open-source community might have already fixed.
The Security Debt of Forking
This reminds me of an issue I've seen repeatedly in my career: technical debt that turns into security debt. When you fork an actively maintained project, you're essentially taking on the burden of merging all upstream changes—including crucial security fixes. Missing even one critical patch can leave your users vulnerable to exploits.
The more a fork diverges from its upstream source, the harder it becomes to integrate those fixes. This is especially true for closed-source forks where the community can't help spot issues or contribute fixes.
What's a Developer to Do?
If you're using Cursor or another VS Code fork, here are some practical steps:
Keep an eye on security announcements for VS Code and its popular extensions
Check which extension versions you're running against the latest available in the official VS Code marketplace (see the sketch after this list)
Consider manually installing critical extension updates if your fork is lagging
Apply pressure by raising these issues in community forums and with the vendors directly
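For the version check in particular, here's a rough sketch of how you might automate it. It shells out to the fork's CLI (again assuming a VS Code-style `cursor` command) and queries the Marketplace's gallery endpoint; that endpoint is widely used but unofficial, so treat the payload and response fields below as assumptions that could change:

```python
# Sketch: compare locally installed extensions against the latest
# versions on the official VS Code Marketplace. The gallery endpoint
# and payload follow the commonly documented (but unofficial) API.
import json
import subprocess
import urllib.request

GALLERY_URL = "https://marketplace.visualstudio.com/_apis/public/gallery/extensionquery"

def latest_marketplace_version(extension_id):
    """Ask the Marketplace for an extension's newest published version."""
    payload = {
        "filters": [{"criteria": [{"filterType": 7, "value": extension_id}]}],
        "flags": 0x1,  # request version info in the response
    }
    req = urllib.request.Request(
        GALLERY_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json;api-version=3.0-preview.1",
        },
    )
    with urllib.request.urlopen(req) as resp:
        extensions = json.load(resp)["results"][0]["extensions"]
    # Versions are assumed to be sorted newest-first, per the gallery API.
    return extensions[0]["versions"][0]["version"] if extensions else None

# List what the fork has installed; swap "cursor" for your editor's CLI.
installed = subprocess.run(
    ["cursor", "--list-extensions", "--show-versions"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

for line in installed:
    ext_id, _, local = line.partition("@")
    upstream = latest_marketplace_version(ext_id)
    if upstream and upstream != local:
        print(f"{ext_id}: local {local} vs marketplace {upstream}")
```

And if something critical is lagging, the VS Code CLI (which forks generally inherit) can install a manually downloaded package via `--install-extension path/to/extension.vsix`, though whether the fork's own updater then respects that version is up to the vendor.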
Will this slow down your cutting-edge AI-assisted workflow? Maybe a little. But speaking from experience, dealing with a security incident will slow you down a lot more.
The Open Source Way
The way I see it, this is a reminder of why the open-source model works so well for security in the first place. When many eyes are watching the code and anyone can contribute fixes, vulnerabilities get patched quickly. When we move away from that model, we need to be exceedingly careful about what we give up in the process.
I'm still excited about AI-assisted development tools, and I think platforms like Cursor are onto something important. But as our industry races to integrate AI into every development tool, we can't afford to sacrifice the security foundations that open-source has built for us.
What do you think? Have you encountered security issues with VS Code forks? How do you balance cutting-edge features with security concerns? Let me know in the comments or join our Discord server.