Airbnb says AI writes 60% of its code. The other 40% is the whole story.
During Airbnb's Q1 2026 earnings call, CEO Brian Chesky dropped a number that made headlines: AI now generates 60% of all new code written by Airbnb's engineers. Revenue is up 18% year-over-year. The customer support bot resolves 40% of issues without escalating to a human. Everyone's been writing variations of the same take: AI is working.
That 60% figure deserves a harder look. Airbnb is an extreme case (a mature tech company with hundreds of engineers and years of AI investment), but the shift it describes shows up at smaller companies too, just with different numbers. And if you're running a team that builds software, the more interesting number is the 40% that humans still write.
The 60% is the easy part
When AI handles 60% of code, the reasonable assumption is that it's handling the more predictable work: boilerplate, integrations, repetitive logic, standard implementations. Chesky's framing supports this: he described AI as giving engineers leverage on "supervised tasks," work where an engineer sets direction and agents execute. Airbnb hasn't published a breakdown of what specifically AI writes versus what humans write, but the language points to routine execution.
That's a real productivity gain. Airbnb is building more tools for API partners, expanding capabilities, moving faster. Some of that 60% probably represents things they couldn't have shipped at all before. AI lowering the cost to execute expands what's possible, not just what's cheaper. The leverage is real.
But here's what the headline hides: the code AI writes well is the code that already has a clear answer. The 40% humans write is where the hard questions live. What should we build? What's the right trade-off? What do we leave out? What does the customer actually need?
The bottleneck has shifted
Before AI coding tools, the constraint in software development was often execution. Writing code took time. Implementation had friction. You needed more people to ship more things.
That constraint is gone for the routine work. What remains is judgment: knowing what to build, when to stop, and how to evaluate what gets generated. An engineer who can rapidly produce code but can't evaluate it is a liability in this environment. An engineer who can set direction, review AI output critically, and make sound architectural decisions is worth considerably more than before.
Chesky described it as going from "a team of 20 engineers" to one engineer supervising agents. Take that as directional, not literal. Every tech CEO says something like this on earnings calls. But even at a more modest ratio, the underlying shift is real: the work that required execution headcount is getting absorbed by tooling, and the work that requires judgment is what's left.
The customer support number tells the same story
Airbnb's AI bot now handles 40% of support issues without escalating to a human, up from around 33% earlier this year. That's a real operational improvement.
Look at what the other 60% still requires, though: the issues too complex, too ambiguous, or too high-stakes for AI to handle alone. When AI removes the routine work, the people who remain deal with harder problems, more often. The floor for what a competent support agent handles has gone up, not down.
The same logic applies to engineering. AI raises the floor for what your developers are expected to handle, because the floor now starts above the work AI already does well.
What Chesky said that got less attention
Buried in the earnings call: "I do not think anyone has figured out AI for travel or e-commerce yet." He listed four specific failure modes in customer-facing AI: too much text output, no direct manipulation, poor comparison tools, and single-player design in what is often a group decision.
This is the honest version of the AI story. Internal productivity tools are working. Customer-facing AI is harder than it looks. The gap between generating code efficiently and building products that actually solve customer problems is where most businesses are stuck right now.
What this means if you're running a team
The gains are real. If your team isn't using AI coding tools yet, they're slower than they need to be, and the gap will compound.
The skills that matter have shifted. This doesn't mean you should stop caring whether engineers can write code. You can't evaluate AI output if you don't understand code yourself. But the weight you give to raw coding speed in interviews is worth reconsidering. The ability to evaluate, direct, and make good trade-offs matters more now than it did two years ago. A few engineering leaders have started explicitly restructuring their interview loops around this; it's worth watching.
And don't expect AI to eliminate judgment problems. Airbnb's customer support bot handles 40% of issues. The other 60% still need a person. AI works where the answer is clear. It struggles where context, stakes, and nuance are required. Knowing the difference in your own operations is the work that doesn't get automated.
The 60% figure is impressive. The 40% is where your business actually lives.