The labor story inside BOND Capital’s May 2025 AI report is easy to flatten into a slogan. Either AI replaces people, or it “augments” them. Both readings miss the harder part.
What the report really points to is a change in baseline. In a growing number of firms, AI use is no longer treated as a clever side skill or a private productivity trick. It is starting to look the way spreadsheet literacy once did: part of the expected operating standard.
The shift is not from humans to machines. It is from optional AI use to mandatory AI fluency.
That sounds abstract until it lands in ordinary places: hiring, performance reviews, workflow design, headcount decisions, and the quiet difference between workers who can turn AI into output and workers who still treat it as an experiment. Once that difference becomes legible inside the firm, the argument changes.
| Common framing | What the 2025 evidence suggests instead |
|---|---|
| AI is mainly a productivity add-on | AI is becoming part of the firm’s operating baseline |
| All workers benefit in roughly the same way | Gains are uneven and often strongest where the gap to competence is still large |
| Buying tools is the main decision | Redesigning the workflow is where much of the value appears |
| The big question is replacement | The nearer-term question is who begins to look like higher-output labor inside the same org chart |
## What BOND Sees Sooner Than Most
BOND’s report matters because it gathers signals that are often discussed separately: reasoning gains, enterprise adoption, agentic software, labor-market churn, and the spread of specialized tools. Read together, they suggest that the future of work is not waiting for some clean break. It is already entering the building through process changes and expectations.
That is why the old debate feels stale. “Will AI take jobs?” is not useless, but it is now too blunt. A more useful question is this: what part of work is already being judged by speed, coverage, and adaptability rather than by the old prestige signals around effort?
- Measured productivity gains are now well documented in some settings.
- Firm-level operating language is shifting from experimentation to expectation.
- Specialized workflow tools are starting to matter as much as general-purpose chat interfaces.
That last point is easy to miss. General-purpose models get the attention, but much of the economic pressure builds one layer down, in the software that turns raw model capability into repeatable work. That is part of why CV3 has already argued that specialized tools are where some of the more durable value may collect.
## Productivity Is Real, but Not Even
The cleanest number in this conversation still comes from the well-known customer-support study summarized by Stanford HAI. Workers using generative AI saw productivity rise by 14% on average. That figure travels widely for a reason. It is concrete.
But the more interesting part is the unevenness. The gains were strongest among less experienced and lower-skilled workers. In plain language, AI often helps most where the climb toward competence is still steep. It can shorten the distance between average and good more easily than it can shorten the distance between good and exceptional.
That matters because it changes how firms think. If AI lifts the floor faster than it raises the ceiling, then the managerial question is not simply whether to buy access. It becomes a question of training design, task design, review systems, and supervision. Put more simply: the tool matters, but the process around the tool matters more.
Who benefits most from AI: the expert exercising judgment, or the worker still climbing toward competence? That tension runs through almost every serious workplace study now.
The 2025 BCG work report sharpens the point. The biggest returns do not come from scattering assistants across the company and hoping habits change by themselves. They appear when firms rework the sequence of labor: what gets drafted by software, what gets checked by humans, what gets escalated, and where judgment still lives. A lot of organizations say they are adopting AI. Fewer have actually rebuilt the workflow.
| Signal | What it means in practice |
|---|---|
| Stanford/MIT productivity evidence | AI can raise output now, especially in structured work with clear feedback loops |
| BCG workflow findings | Tool access alone is weaker than process redesign |
| Microsoft’s “Frontier Firm” framing | Organizations are moving from assistants toward software that can run bounded chunks of work |
| PwC’s jobs data | AI exposure is showing up in productivity, wages, and skill premiums rather than simple collapse |
## The New Baseline Inside the Firm
This is where the article stops being about tools and becomes about labor formation. In 2025, some firms stopped talking about AI as a discretionary experiment and started treating it as part of how work is supposed to happen.
Microsoft’s 2025 Work Trend Index describes a move from assistants toward “digital colleagues” and agent-like systems that can handle bounded tasks inside larger business processes. The phrasing is corporate, but the implication is plain enough. Once software is expected to carry pieces of the workflow, the human role changes with it.
The clearest public signals came from company memos. At Shopify, AI use was framed as a baseline expectation rather than a curiosity. At Duolingo, the “AI-first” memo made the shift even more visible, tying AI use to hiring, performance review, and staffing logic, as reported by The Verge. Once a company says that out loud, something important has happened. The question is no longer whether AI belongs in the workflow. The question is who can operate inside the new one.
There is an awkward reality here. A worker can remain smart, diligent, and experienced, yet still begin to look slower in a system that has quietly changed its time assumptions. That is one reason the word “augmentation” can feel too soft. It hides the fact that norms are being rewritten at the same time.
When a firm says “AI-first,” is it changing tools, or changing headcount logic? Often it is doing both, even if it speaks only about one.
This is also why some older debates about job survival need to be updated. The more useful distinction may no longer be manual versus cognitive work, or creative versus routine work, but work that can be restructured into machine-readable stages versus work that resists tidy decomposition. That is a different map from the one many people still carry around. For a related angle, see why certain professions will survive the AI takeover.
## Where Value Starts to Collect
Once AI fluency becomes part of the baseline, the value question moves. It does not vanish into a vague story about smarter workers. It starts to collect where work is routinized, audited, and packaged into repeatable software surfaces.
The macro evidence points in the same direction. In its 2025 Global AI Jobs Barometer, PwC found faster productivity growth, wage premiums, and stronger demand in roles exposed to AI. That does not mean all workers benefit equally, and it certainly does not mean every firm has figured it out. It means the pressure is already visible in the data.
BOND also points to a rise in AI-labeled job titles and a widening premium on AI-adjacent capability. Titles are not destiny, but they are a clue. They tell you what organizations want to signal, what they are budgeting for, and which capabilities are being made legible to management.
The ownership angle enters quietly here. If the firm that redesigns work first can get more output from the same payroll base, that matters. If the vendor that becomes the workflow surface can turn model capability into recurring software spend, that matters too. Not every page on CV3 needs to become a capital-markets note. But ignoring value capture altogether would miss the point of what is happening.
There is a temptation to treat all of this as a story about general intelligence becoming cheap. That is too neat. A lot of value is likely to sit in the messy middle layer: the interfaces, the approvals, the domain-specific prompts, the audit trail, the integration work, the trust architecture, the awkward places where one system hands off to another. For a stack-level companion to that argument, see why data pipelines are the new oil rigs of AI.
Normal competence is a moving threshold. The unsettling part is not that machines are getting better at isolated tasks. It is that firms are beginning to price that change into what counts as ordinary work.
### What does BOND’s AI report say about jobs?
It points away from a simple replacement story and toward a more immediate workplace shift: AI use becoming normal inside the workflow, with gains showing up unevenly across roles, tasks, and firms. See also: [why certain professions will survive the AI takeover](https://cv3.com/why-certain-professions-will-survive-the-ai-takeover/).
### Is AI really making workers more productive already?
Yes, in some settings the evidence is strong enough to take seriously. The better question is where those gains appear, and for whom. The early data suggests that structured work and workers still climbing toward competence may see the clearest lift.
### What does “AI-first” mean inside a company?
Usually it means more than buying licenses. It means the firm is starting to assume AI will draft, search, summarize, or execute parts of work by default, and that workers will be assessed inside that new pattern.
### Why do specialized AI tools matter more than generic chatbots in many firms?
Because the hard part is often not model access but workflow fit. Specialized tools package prompts, approvals, context, and auditability into something a company can actually rely on day after day.
### Does this change who captures value from AI?
Very likely. Value tends to gather where capability turns into repeatable process: the workflow owner, the software layer that becomes hard to replace, and the organization that redesigns labor before others do.
