
There’s an old management saw: What you measure matters. And, usually, you get more of whatever you measure.
Software engineers have been debating productivity metrics for decades, starting with lines of code. But now that the new generation of AI coding agents produces more code than ever before, what engineering managers should measure is less clear.
Large token budgets — essentially, the amount of AI processing power a developer is allowed to consume — have become a badge of honor among Silicon Valley developers, but that’s a strange way to think about productivity. Measuring a process input is pointless if you care about the output. It might make sense if you’re trying to encourage more AI adoption (or sell tokens), but not if you’re trying to be more efficient.
Consider the evidence from a new class of companies operating in the “developer productivity insight” space. They have found that developers using tools like Claude Code, Cursor, and Codex are getting more code accepted than ever before. But they also see that engineers have to come back and change that accepted code more often than before, undercutting claims of increased productivity.
Alex Circei, the CEO and founder of Waydev, builds an intelligence layer to track these dynamics; his company works with 50 different customers that employ more than 10,000 software engineers. (Circei has contributed to TechCrunch in the past, but this reporter has never met him.)
He said engineering managers see code acceptance rates of 80% to 90% — meaning the portion of AI-generated code that developers approve and keep — but they miss the churn that occurs when engineers must change that code in the next few weeks, pushing the real-world acceptance rate to between 10% and 30% of generated code.
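The arithmetic behind that adjustment can be sketched in a few lines. This is an illustrative calculation, not Waydev’s actual methodology, and the numbers are hypothetical:

```python
# Illustrative sketch (not Waydev's methodology): how a headline
# acceptance rate shrinks once later churn is factored in.

def effective_acceptance(accepted_rate: float, churn_rate: float) -> float:
    """Share of generated code that is accepted AND survives later rewrites.

    accepted_rate: fraction of AI-generated code initially approved
    churn_rate: fraction of that accepted code rewritten within weeks
    """
    return accepted_rate * (1 - churn_rate)

# With a 90% headline acceptance rate, churning roughly 67% to 89% of the
# accepted code brings the surviving share down into the 10%-30% range.
print(f"{effective_acceptance(0.90, 0.78):.0%}")  # → 20%
```

The point of the sketch is that two metrics looking at the same code can tell opposite stories, depending on whether you measure at approval time or a few weeks later.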
The rise of AI coding tools led Waydev, founded in 2017 to provide developer analytics, to completely rework its platform over the last six months. Today, the company is releasing new tools that track metadata created by AI agents, offering analytics on the quality and cost of their code and giving engineering managers more insight into both AI adoption and effectiveness.
While analytics companies have an incentive to highlight the problems they find, evidence is growing that large organizations are still working out how to use AI tools effectively. Big companies are taking notice: Atlassian acquired DX, another engineering intelligence startup, for $1 billion last year to help its customers understand the return on investment in coding agents.
Data from across the industry tell a consistent story: A lot of code is written, but a disproportionate amount of it doesn’t stick.
GitClear, another company in this space, published a report in January that found AI tools increase productivity, but its data also showed that “regular AI users averaged 9.4x higher code churn than their non-AI counterparts” – more than double the productivity the tools provided.
Faros AI, an engineering analytics platform, draws on two years of customer data for its March 2026 report. The finding: code churn – lines of code removed vs. lines added – increased 861% under high AI adoption.
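The churn metric Faros AI describes — lines of code removed relative to lines added — can be sketched with hypothetical numbers to show what an 861% increase means in practice:

```python
# Hypothetical sketch of the churn ratio Faros AI describes:
# lines of code removed relative to lines added over a period.
# All figures below are invented for illustration.

def churn_ratio(lines_added: int, lines_removed: int) -> float:
    """Removed-to-added ratio; higher means more code is being undone."""
    if lines_added == 0:
        raise ValueError("no lines added in period")
    return lines_removed / lines_added

baseline = churn_ratio(lines_added=10_000, lines_removed=1_500)   # 0.15
high_ai = churn_ratio(lines_added=10_000, lines_removed=14_415)   # ~1.44

# An 861% increase means the ratio grew to 9.61x the baseline:
print(f"{(high_ai / baseline - 1):.0%}")  # → 861%
```

At that level, a team is removing more code than it adds over the period, which is the pattern the report associates with high AI adoption.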
Jellyfish, which bills itself as an intelligence platform for AI-integrated engineering, collected data on 7,548 engineers in the first quarter of 2026. The company found that engineers with the largest token budgets made the most pull requests (proposed changes to a shared codebase), but productivity did not scale with spend: they achieved roughly double the throughput for 10 times the tokens. In other words, the tools produce quantity, not value.
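The efficiency gap in Jellyfish’s finding is easy to make concrete. In this illustrative sketch (the PR and token counts are invented, only the 2x-throughput-for-10x-tokens ratio comes from the report), the biggest spenders get a fifth of the output per token:

```python
# Illustrative math for the Jellyfish finding: 10x the tokens for ~2x the
# throughput means marginal efficiency drops sharply. Numbers are hypothetical.

def throughput_per_token(prs: float, tokens: float) -> float:
    """Pull requests produced per token consumed."""
    return prs / tokens

low_spender = throughput_per_token(prs=10, tokens=1_000_000)
high_spender = throughput_per_token(prs=20, tokens=10_000_000)

# The heavy spender produces one fifth as many PRs per token:
print(f"{high_spender / low_spender:.1f}")  # → 0.2
```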
These statistics ring true when you talk to developers, who report that code reviews and technical debt are growing even as they enjoy the freedom the new tools offer. A common observation is the gap between senior and junior engineers, with the latter accepting more AI-generated code and facing more rewrites as a result.
Still, even as developers work to understand what their agents are actually delivering, they don’t expect to go back to the old way of working anytime soon.
“It’s a new era in software development, and you have to adapt, and you’re forced to adapt as a company,” Circei told TechCrunch. “It’s not like it’s a cycle that’s going to pass.”