Intro

Hi everyone,

This week I've been getting feedback on an AI adoption framework we're building. We're helping organizations stop waiting for the perfect plan and start doing the work, step by step, with enough structure to learn from it.

What keeps coming up in every conversation is how much of the challenge isn't technical. It's organizational. Who decides what to pursue. Who's accountable when something goes wrong. Who's measuring whether any of it worked. That's what this issue is about.

This Week

Patterns from this week's work and reading that matter for operations leaders.

  1. CEOs grabbed the AI wheel. Now they need a co-pilot. BCG's 2026 AI Radar found that half of CEOs believe their job stability depends on getting AI right. The CEOs who treat AI as a top priority and invest in organizational upskilling outperform cautious adopters by 2-3x on every measured dimension. But as a CFO told me this week: the CEO sets the direction; the CFO is the one demanding cost structure, measurement, and accountability before anything ships. The mandate works when someone in the room can translate ambition into operational proof. (BCG AI Radar 2026)

  2. 80% of agentic AI work isn't the AI. It's the data layer underneath. MIT Sloan's research: for every hour spent on the model, four go to data engineering, stakeholder alignment, and governance. Three separate conversations this week confirmed it. Companies want to deploy agents, but their data lives in siloed systems without business context. Skip the data work and you get plausible demos that break in production. (MIT Sloan, VentureBeat)

  3. Your biggest AI risk isn't a breach. It's agents making decisions nobody audits. Microsoft warned this week that ungoverned AI agents could become corporate "double agents," leaking data and executing unauthorized actions across organizational boundaries. Most companies don't have an inventory of what AI agents are running in their environment, what data those agents can access, or who's accountable when something goes wrong. (VentureBeat)

  4. AI readiness is becoming a financing requirement, not just an operational one. Investment bankers are now demanding AI strategy narratives in growth-stage rounds. One CFO this week said every banker in his process asked for it. The EU AI Act high-risk deadline hits August 2. 38 states passed 100 AI bills last year. No federal framework exists. Investors and regulators are converging in the same quarter. (KPMG)

Adoption at Machine Speed

Every AI conversation this year starts the same way. The board wants AI. The CEO is demanding a strategy. Teams are running experiments nobody approved. What none of them have done is answer the question that determines whether any of it goes anywhere: what does adoption actually look like in this organization, and how do you get from pilots to something that runs at scale?

Most organizations I talk to are stuck in one of two places. Either they're waiting for a comprehensive framework before trying anything, which means they're not trying anything. Or they've jumped in without structure and now have AI tools scattered across teams with nobody tracking what's working, what's redundant, or what's ready to be killed. Paralysis costs time you can't get back. Scatter costs credibility when the CFO asks what the company has to show for the last six months.

The way through is lighter than either extreme.

First, define what you're actually talking about

Before anything else, answer a question most skip: what AI are we talking about?

An employee using ChatGPT to draft customer emails is AI adoption. An engineering team building autonomous agents that process claims is AI adoption. Your CRM vendor embedding AI into a platform you already bought is AI adoption. These are three fundamentally different things with different risks, different requirements, and different paths to value. Until you distinguish between employee tools, internal builds, and vendor purchases, every meeting ends with people agreeing in principle and diverging in practice.

Gartner predicts over 40% of agentic AI projects will be abandoned by 2027 before they deliver value. A significant part of that failure starts here: teams pursuing initiatives without a shared definition of what they're adopting or why.

Then build the decision process, not the strategy deck

Ask any department to list AI initiatives they'd like to pursue. You'll get ten in an hour. That's the easy part. The hard part is deciding which ones to fund, in what order, with what success criteria. Who makes that call?

The same BCG survey identified three CEO archetypes: Trailblazers (15%), Pragmatists (70%), and Followers (15%). The headlines focus on Trailblazers committing 73% of their transformation budgets to AI. But the 70% in the middle are the story that matters. Pragmatists are "excited and confident about AI but only invest when they see evident value and low risk." They want to move. They just won't move without proof. That's not indecision. That's a rational response to a market full of hype and short on operational evidence.

A steering committee that meets quarterly to review a slide deck is too slow for a technology that moves in weeks. The decision cadence has to match the pace of the technology. Get a small group of the right people in a room. Define how you'll evaluate initiatives: what criteria matter, who has authority, how often you meet. A clear process that runs every two weeks beats an elaborate framework that runs once a quarter.

Start light, earn complexity

The instinct is to build the comprehensive framework first. Define every policy. Map every regulation. Stand up the monitoring platform. Then start. That instinct is wrong.

Start with what you can manage today. Basic inventory: what initiatives exist, who owns them, what they're supposed to deliver, and when someone will check. Your second initiative should have more structure than your first, your tenth more than your fifth. Not because someone mandated it, but because you learned what controls actually matter from experience.
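
In practice, that inventory can be as small as a shared spreadsheet. If your team prefers structured data, here's a minimal sketch in Python; the field names and status values are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIInitiative:
    """One row in the lightest-weight AI inventory that still works."""
    name: str               # what the initiative is
    owner: str              # a person, not a committee
    expected_outcome: str   # what it's supposed to deliver
    review_date: date       # when someone will check
    status: str = "pilot"   # e.g., pilot | scaling | stalled | killed

inventory = [
    AIInitiative(
        name="Invoice reconciliation agent",
        owner="jane.doe",
        expected_outcome="Cut manual reconciliation time in half",
        review_date=date(2026, 4, 1),
    ),
]

# The review is the point: surface anything past its check-in date.
overdue = [i for i in inventory if i.review_date < date.today()]
```

Four fields and a date. If that feels too light, remember what most teams actually have instead: nothing.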

MIT research found that 95% of AI projects never make it past pilot. The failure isn't in the models. It's in everything around them: missing data infrastructure, absent ownership, and teams that were never aligned on what success looks like.

This is where intentionality matters. AI initiatives don't manage themselves. Someone has to track what's running, what's working, what's stalled, and what needs to be killed. That's a dedicated focus, not a standing agenda item in a meeting that covers twelve other topics. When nobody owns AI adoption as their actual job, initiatives drift, teams duplicate effort, and experiments run indefinitely without anyone deciding whether they delivered.

The walls you'll hit when you try to scale

A successful pilot is not a success story. It's a starting point. Three walls keep showing up when teams try to scale.

The first is technical. The pilot ran on one engineer's setup. Now you need infrastructure that supports multiple teams. Every initiative needs a technical review to make sure you're building a repeatable layer, not a collection of disconnected experiments.

The second is legal. The regulatory landscape is moving as fast as the technology, and the fight over who writes the rules has drawn more than $200M in PAC spending, with Anthropic and OpenAI on opposite sides of whether states or the federal government should set the standards. For operations leaders, the compliance surface area is expanding and fragmenting at the same time. But the answer isn't to freeze. Match the control to the scope. If your first experiment is automating invoice reconciliation, you need to know what data the agent accesses and who reviews the output. You don't need a full regulatory compliance program on day one.
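
Concretely, "match the control to the scope" fits in a dozen lines for a single experiment. A hypothetical sketch for that invoice example; the names are illustrative, not a compliance standard:

```python
# Controls for one scoped experiment, not a company-wide program.
# All names are illustrative; map them to systems you actually run.
invoice_agent_controls = {
    "data_access": ["erp.invoices", "bank.statements"],  # what the agent may read
    "write_access": [],  # read-only until it earns more
    "human_review": "AP lead approves every reconciliation for the first 90 days",
    "audit_log": "every agent action recorded with its input and output",
    "kill_switch": "AP lead can disable the agent without filing a ticket",
    "expand_when": "add controls before write access or a second process",
}
```

When the experiment scales, the controls scale with it. That's earning complexity in the compliance lane.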

The third is organizational. Nobody defined who is accountable when an autonomous agent makes a decision that costs money or violates a policy. Ambiguity is what slows teams down, not structure. The rules just have to be simple enough to remember and specific enough to follow.

Where to start

Pick one process that's painful, repetitive, and measurable. Define who owns the experiment, how you'll know if it worked, and when you'll decide. Run it. Then add the next layer of structure based on what you learned, not what you feared.

The companies that scale AI successfully won't be the ones that started with the most comprehensive governance program. They'll be the ones that learned fastest and added structure only when they earned the need for it.

📊 The fastest-growing open-source project on GitHub is an AI agent

OpenClaw: 250,000 GitHub stars in under four months. A personal AI agent that runs on your laptop, connects to your messaging apps, and executes tasks autonomously. The CEO of LangChain, whose infrastructure powers it, banned it from company laptops. Your employees didn't.

📡 The Wire

Only 18% of professional services firms measure AI ROI
Thomson Reuters surveyed 1,500+ professionals across 27 countries. AI usage nearly doubled to 40% in 2026. But only 18% track ROI on AI tools, and even those mostly measure internal operational metrics, not business outcomes. The adoption is real. The measurement isn't.

Microsoft Copilot ignored sensitivity labels twice in eight months
A code bug let Copilot summarize confidential emails for four weeks despite DLP policies. Six months earlier, a zero-click exploit (CVSS 9.3) exfiltrated enterprise data through the RAG pipeline. Configuration is not enforcement. You only discover a control failed when something slips through.

Frontier providers are making security a platform feature
OpenAI acquired Promptfoo, the open-source red-teaming tool used by 25%+ of Fortune 500, and embedded it in its enterprise platform. Anthropic's Claude found 22 zero-day vulnerabilities in Firefox in two weeks, 14 of them high severity. The companies building the models are acquiring the companies testing them. Security is becoming native infrastructure, not a third-party concern.

Shadow AI agents are already on your employees' laptops
Glean CEO Arvind Jain: "The question isn't whether your employees are already spinning up agents. They likely are. It's whether your organization will get ahead of it, or wake up one day to find your most sensitive workflows running on infrastructure you never approved, can't audit, and can't turn off." Shadow IT had a decade-long ramp. Shadow AI agents arrive in months. The difference: shadow IT accessed data. Shadow AI makes decisions with it.

📚 What I'm consuming

AI will make engineering more human, not less. The Rundown AI interviews Rajeev Rajan, CTO of Atlassian. They built a "one click, do it all" AI coding agent, and their own engineers refused to use it. Too much magic, not enough transparency. They scrapped it and rebuilt with inspectable agent sessions and steering controls. "Earn your complexity" in action at one of the biggest dev tools companies in the world.

OWASP's top 10 ways to attack LLMs. IBM's Jeff Crume walks through the updated OWASP Top 10 for LLMs. Every item on this list is a governance control that should exist before an LLM goes to production. Most companies have zero of them in place.

🌙 After Hours

2001: A Space Odyssey

Arthur C. Clarke, 1968 | 297 pages | ★★★★★

A timely re-read. I first read this as a kid and was fascinated with the worlds it described. Coming back to it now, in the middle of daily conversations about AI governance and agentic systems, it hit differently.

The book is a fast read. Clarke doesn't waste a sentence. The science is detailed without being heavy, the pacing pulls you through millions of years of evolution without pausing for breath, and the whole thing is tighter than most modern thrillers twice its length.

But the subplot that still sticks with me is HAL's. He doesn't turn evil. He's given contradictory instructions: be transparent, but hide the mission's true purpose. His breakdown is the logical result. The most dangerous AI failure is a design flaw nobody catches until it's too late.

That felt like science fiction in 1968. It feels like a Tuesday in 2026.

🧪 Quanta Lab

The "harness" matters more than the model

Remember AutoGPT? It held the fastest-growth record on GitHub until OpenClaw broke it. Same core architecture: an LLM running in a loop, calling tools. Nobody talks about AutoGPT anymore.

Harrison Chase, the LangChain CEO who banned OpenClaw from his company's laptops (VentureBeat), breaks down why the same idea failed in 2023 and works in 2026. Half of it is smarter models. The other half is what he calls the "harness": the orchestration infrastructure wrapping the model. Four things every working agent has that AutoGPT didn't, sketched in code after the list:

  1. Planning tools that let the model track its own work. The difference between staying on task for 30 minutes and drifting after 3.

  2. Sub-agents with focused context windows. Each gets a clean context for a specific task, goes deep, reports back.

  3. File system access so the model manages its own context: read, write, offload.

  4. Prompting that's actually architecture. Claude Code's system prompt is 2,000 lines. That's a specification, not a prompt.
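
To make the four concrete, here's a minimal sketch of what a harness adds around the bare loop. The model call is a stand-in for any LLM API, and every name here is illustrative; this is the shape of the idea, not anyone's production code:

```python
from pathlib import Path
from typing import Callable

# Stand-in for any LLM API: (system_prompt, task) -> response.
Model = Callable[[str, str], str]

class Harness:
    """Minimal orchestration wrapper: plan tracking, sub-agents, file offload."""

    def __init__(self, model: Model, workdir: str = "agent_workspace"):
        self.model = model
        self.plan: list[str] = []      # 1. planning tool: the agent's own todo list
        self.workdir = Path(workdir)   # 3. a file system to offload context into
        self.workdir.mkdir(exist_ok=True)

    def run_subagent(self, task: str) -> str:
        # 2. sub-agent: a fresh, focused context for one task, so the main
        #    loop's window doesn't fill up with its working details
        return self.model("You do exactly one task, then report back.", task)

    def offload(self, name: str, content: str) -> str:
        # 3. write long intermediate results to disk; keep only a pointer
        path = self.workdir / name
        path.write_text(content)
        return f"[saved to {path}]"

    def run(self, goal: str, max_steps: int = 20) -> list[str]:
        # 4. prompting as architecture: the system prompt would define tools
        #    and rules here, not a persona (real ones run thousands of lines)
        self.plan = [goal]
        pointers: list[str] = []
        for _ in range(max_steps):
            if not self.plan:
                break  # plan exhausted: done
            step = self.plan.pop(0)
            result = self.run_subagent(step)
            pointers.append(self.offload(f"step_{len(pointers)}.txt", result))
            # a real harness would have the model revise self.plan here
        return pointers

# A trivial fake model, just to show the loop runs end to end:
harness = Harness(model=lambda system, task: f"done: {task}")
print(harness.run("reconcile last month's invoices"))
```

Strip any one of the four out and you're back to AutoGPT: a loop that drifts, forgets, and floods its own context.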

Stop debating which model to buy. The model is increasingly commoditized. The orchestration layer and context pipeline are where reliability lives.

🎙️ Listen

Prefer to listen? Quanta Bits is also available on Apple Podcasts and Spotify.

How this gets made

I collaborate with Spock, my AI agent. He researches extensively: scanning, filtering, and surfacing what's relevant across my business. I read, listen, and watch what resonates, and decide what matters. I provide direction, we draft together. The editorial judgment is mine. He'd tell you the same. Most logical. 🖖
