Cursor admits its new coding model was built on top of Moonshot AI’s Kimi

Cursor launched its new Composer 2 model this week with bold claims of “frontier-level coding intelligence.” Within hours, a sharp-eyed developer on X punched a hole in that story: the model appeared to be built directly on top of Kimi K2.5, an open-source model released just two months ago by Moonshot AI, a Chinese startup backed by Alibaba. What happened next reveals something uncomfortable about how AI companies market themselves, and how the global open-source AI ecosystem is quietly reshaping everything.

Key Points

  • Cursor launched Composer 2, calling it “frontier-level coding intelligence” without mentioning its base model
  • An X user exposed the model as built on Kimi K2.5, a Chinese open-source model by Moonshot AI
  • Cursor’s VP of developer education confirmed the Kimi foundation but said 75% of compute came from Cursor’s own training
  • Kimi said the use was fully licensed under an authorized commercial partnership via Fireworks AI
  • Cursor co-founder Aman Sanger called the omission “a miss” and promised transparency with future models
  • Cursor is valued at $29.3 billion and reportedly exceeds $2 billion in annualized revenue

What Cursor Actually Launched

On March 22, Cursor, the AI coding assistant that has become one of Silicon Valley’s most-watched developer tools, published a blog post announcing Composer 2. The post was confident in tone: a new model, frontier performance, built for developers who need serious coding intelligence. No caveats. No credits. No mention of where the model actually came from.

The gap between that framing and reality didn’t last long. An X user named Fynn noticed something in the model’s output: what appeared to be a Kimi model identifier sitting in plain sight. Their post was blunt: Composer 2 was “just Kimi 2.5” with some reinforcement learning on top. “At least rename the model ID,” they wrote. It was the kind of technical detective work that the developer community is exceptionally good at, and the post spread quickly.
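To make the discovery concrete: serving stacks often echo an internal model name back in their API responses, which is exactly the kind of breadcrumb a curious user can spot. The sketch below is purely illustrative; the response shape and the identifier string are assumptions, not Cursor’s actual API.

```python
import json

# Hypothetical captured response body. The field names follow a common
# chat-completions-style layout; the "model" value is an illustrative
# assumption, not what Cursor's service actually returns.
captured_response = json.dumps({
    "id": "chatcmpl-abc123",
    "model": "kimi-k2.5-instruct",  # an identifier like this is what a user could spot
    "choices": [
        {"message": {"role": "assistant", "content": "def add(a, b): return a + b"}}
    ],
})

def extract_model_id(raw: str) -> str:
    """Pull the 'model' field out of a captured JSON response body."""
    return json.loads(raw).get("model", "<unknown>")

print(extract_model_id(captured_response))  # kimi-k2.5-instruct
```

Nothing sophisticated is required: if the identifier is present in plain sight, one `json.loads` away, anyone proxying their own traffic can find it.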

Cursor VP of developer education Lee Robinson responded the same day, acknowledging the foundation but contextualizing it: roughly one-quarter of the compute used to build the final Composer 2 came from the Kimi K2.5 base, with the remaining three-quarters coming from Cursor’s own continued pretraining and reinforcement learning work. Robinson argued that this makes Composer 2’s performance on benchmarks genuinely different from Kimi’s baseline and that the use was fully compliant with Kimi’s license terms.

“Yep, Composer 2 started from an open-source base! Only ~1/4 of the compute spent on the final model came from the base, the rest is from our training.” Lee Robinson, VP of Developer Education, Cursor (via X)

What Is Kimi K2.5, and Why Does This Matter?

Moonshot AI released Kimi K2.5 on January 27, 2026, about two months before this story broke. Moonshot is a Chinese AI startup backed by Alibaba and HongShan (the firm formerly known as Sequoia China), and Kimi K2.5 is its most capable open-source model to date.

Much like DeepSeek when it stunned Western AI labs last year, Kimi K2.5 arrived with benchmark numbers that demanded attention. The model runs on a Mixture-of-Experts architecture with one trillion total parameters but only activates around 32 billion per request, making it efficient enough to run locally. Trained on approximately 15 trillion mixed visual and text tokens, it’s a natively multimodal system that can process text, images, and video inputs together without treating vision as an afterthought grafted onto a language model.
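The efficiency claim comes down to routing: in a Mixture-of-Experts model, each token is sent to only a few experts, so only a small slice of the total parameters does work per request. The toy sketch below illustrates the idea and the article’s numbers; the expert count, top-k value, and random router are illustrative assumptions, not Moonshot’s implementation.

```python
import random

# Figures from the article: ~1T total parameters, ~32B active per token.
TOTAL_PARAMS = 1_000_000_000_000
ACTIVE_PARAMS = 32_000_000_000

# Router configuration below is made up for the sketch.
NUM_EXPERTS = 64
TOP_K = 2  # each token is routed to only a few experts

def route_token(token_id: int, num_experts: int = NUM_EXPERTS, k: int = TOP_K) -> list:
    """Pick the top-k experts for a token (random scores stand in for a learned router)."""
    rng = random.Random(token_id)  # deterministic per token, for the sketch
    scores = [(rng.random(), expert) for expert in range(num_experts)]
    top = sorted(scores, reverse=True)[:k]
    return sorted(expert for _, expert in top)

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"experts chosen for token 42: {route_token(42)}")
print(f"active parameters per token: {active_fraction:.1%}")  # 3.2%
```

The punchline is the last line: roughly 3% of the weights are touched per token, which is what makes a trillion-parameter model plausible to serve (and, with quantization, even run locally).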

What really set Kimi K2.5 apart was its agentic architecture. The model features an “Agent Swarm” capability that can coordinate up to 100 specialized sub-agents working in parallel; Moonshot’s research claimed this approach cut execution time by 4.5x compared to single-agent execution on complex tasks. In coding benchmarks, the model posted an 80.9% resolution rate on SWE-Bench Verified, outperforming Google’s Gemini 3 Pro. On video understanding benchmarks, it beat both GPT-5.2 and Claude Opus 4.5.
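The speedup claim is essentially about fan-out: if sub-tasks are independent, dispatching them concurrently amortizes their latency instead of summing it. The minimal sketch below shows that fan-out pattern with a thread pool; the sub-agent function and timings are made up and stand in for real sub-agent work, this is not Moonshot’s code.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def sub_agent(task: str) -> str:
    """Stand-in for one sub-agent handling an independent sub-task."""
    time.sleep(0.01)  # simulated latency of the sub-agent's work
    return f"done: {task}"

tasks = [f"subtask-{i}" for i in range(100)]

start = time.perf_counter()
# Fan out: all 100 sub-tasks run concurrently instead of back-to-back.
with ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(sub_agent, tasks))
parallel_time = time.perf_counter() - start

# Sequentially this would take ~100 * 0.01s = 1s; concurrently it is
# bounded by the slowest sub-task plus scheduling overhead.
print(f"{len(results)} subtasks finished in {parallel_time:.3f}s")
```

Real agent swarms add the hard parts this sketch omits: deciding how to split the task, resolving conflicts between sub-agents, and merging their results, which is where the reported 4.5x figure would actually be earned or lost.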

Critically, all of this was released under a Modified MIT License, the kind of open-source terms that explicitly allow commercial use and modification. That’s what made Cursor’s use technically legal. It’s also what made the omission so puzzling to many developers watching the story unfold.

A $29 Billion Company That Didn’t Build Its Own Model

Here’s the context that makes this story more than a simple attribution failure. Cursor raised a $2.3 billion round in fall 2025 at a $29.3 billion valuation, one of the most eye-catching funding rounds of the year. The company is reportedly now exceeding $2 billion in annualized revenue, a number that makes it one of the fastest-growing developer tools in history.

For a company at that scale and valuation, launching a model that’s built substantially on top of a two-month-old open-source release without saying so creates a perception problem even if the technical work on top is genuine and significant. Developers, especially the type who use Cursor every day, pay close attention to what’s happening under the hood. The community’s reaction to the Fynn post wasn’t just about attribution ethics; it was about trust between a tool and the people who depend on it.

What’s worth noting is that this kind of foundation-model building is actually standard practice across the AI industry. Dozens of companies take open-source base models (Llama, Mistral, now Kimi) and apply further training to specialize them. The technical contribution can be very real even if the weights didn’t start from scratch. What’s not standard is announcing that model as if it were built entirely in-house, particularly when the company involved has raised billions in part on its model development capabilities.

Kimi’s Graceful Response and the Fireworks AI Connection

What was perhaps most interesting about the aftermath was how Moonshot AI’s Kimi account handled it on X. Rather than expressing frustration over the lack of attribution, Kimi posted a congratulatory message confirming that Cursor had used the model “as part of an authorized commercial partnership” with Fireworks AI, a model inference platform that serves as the commercial intermediary.

“We are proud to see Kimi-k2.5 provide the foundation. Seeing our model integrated effectively through Cursor’s continued pretraining and high-compute RL training is the open model ecosystem we love to support.” Kimi (@Kimi_Moonshot) on X

This response was strategically smart for Moonshot. Having a company valued at nearly $30 billion build a product on your open-source model is a form of validation that money can’t buy. It demonstrates that Kimi K2.5’s architecture is production-worthy at the highest levels of the industry. The framing also signals that Moonshot sees itself as an open-source platform play the kind of position that turns a model into infrastructure.

The Geopolitics Underneath the Story

There’s a reason Cursor’s initial announcement didn’t mention Kimi. Beyond the question of wanting to appear self-sufficient in model development, there’s a geopolitical sensitivity that’s hard to ignore. The US-China dynamic in AI has become one of the central frames through which the industry is discussed in Washington and Silicon Valley alike. The panic that rippled through the Valley when DeepSeek released its competitive models last year wasn’t purely technical; it was existential anxiety about Chinese labs closing the gap.

Building a flagship product feature on top of a Chinese open-source model, however legitimately licensed, is the kind of detail that some investors, enterprise customers, and policymakers might scrutinize. Cursor didn’t create a legal problem; Kimi’s Modified MIT license was clear. But from a narrative standpoint, for a company that operates entirely in the US developer market and has attracted billions in American venture capital, the optics of a Chinese model powering its headline new feature were probably something the team weighed carefully before announcing.

This is the tension at the heart of the open-source AI moment we’re in. Models like Kimi K2.5 are genuinely excellent, released under permissive licenses, and available to anyone who wants to build on them. But “available to anyone” increasingly includes competitors, geopolitical rivals, and startups building products that will compete directly with the original lab’s own commercial offerings. The ecosystem benefits everyone until it creates uncomfortable headlines.

⚠ Still Unconfirmed

Cursor has not published a detailed technical breakdown comparing Composer 2’s benchmark performance against the Kimi K2.5 baseline. The claim that 75% of compute came from Cursor’s own training is based on a public statement from the company, not independently verified evaluation data. Actual performance differentiation from the base model remains to be fully established.

What This Tells Us About the AI Coding Race

The broader context here is that the AI coding tool market is moving extraordinarily fast, and the pace is starting to create pressure on companies to ship faster than their internal research timelines allow. Anthropic’s Claude Code reportedly crossed $1 billion in ARR within months of launch. Cursor is already past $2 billion. The revenue numbers in this space are almost comically large given how young the category is.

That competitive heat creates real incentives to cut time-to-ship, and building on top of a strong open-source base is a rational way to do that. If Cursor’s additional training genuinely improved Composer 2’s performance beyond Kimi K2.5’s baseline, then there’s a real product here, just one that borrowed its foundations from a very good starting point. The question is whether transparency about that foundation changes the value proposition for users. Honestly, for most developers, it probably shouldn’t. What matters is whether the model writes better code. The provenance of its base weights is secondary to that.

The pattern here also tells you something about what’s happening in the open-source model ecosystem. Labs like Moonshot are deliberately releasing powerful models under permissive licenses as a strategy knowing that US companies will build on them, that use cases will proliferate, and that this creates both commercial relationships and validation that helps in fundraising and market positioning. It’s the open-source playbook, applied to frontier AI. And it’s working. As we’ve covered, how developers use AI tools increasingly involves stacking multiple AI systems not just relying on one model from one provider.

Sanger’s Admission, and What Comes Next

Cursor co-founder Aman Sanger took the most direct stance of anyone from the company. His X post was short and direct: it was “a miss” not to mention the Kimi base in the original announcement, and they would fix that with future models. It’s the right thing to say, and it matters that the co-founder said it rather than a PR account.

The question going forward is whether Cursor treats this as a policy change or an isolated correction. The AI coding space is going to see many more models built on top of open-source foundations: from Kimi, from Llama, from whatever comes next from labs in China, France, or anywhere else. The teams that build the best products on these foundations have real technical value to offer. But claiming you’ve built frontier intelligence without acknowledging whose shoulders you’re standing on is the kind of thing that erodes trust with the developer community, exactly the community Cursor depends on most.

“It was a miss to not mention the Kimi base in our blog from the start. We’ll fix that for the next model.” Aman Sanger, Co-founder, Cursor (via X)

For what it’s worth, the open-source AI ecosystem came out of this episode looking strong. Moonshot AI released a genuinely impressive model under permissive terms, a major US company used it to ship a product, and the community’s detective work forced the kind of transparency that leads to better norms across the industry. As OpenAI consolidates its products into a unified desktop superapp, and as the competition in AI tooling intensifies, expect more of these moments and more pressure on companies to be upfront about what they’re actually building and where it came from.