
Hi everyone,

Growing up, I dreamed of working for NASA someday. NASA embodied everything worth striving for: imagination, science, expanding our understanding of the world, and improving lives. I even owned a NASA cap before it became hip. But life rarely goes the way you imagine it. Then again, working for a bureaucratic organization that has to play politics to survive was never that appealing either. Still, watching a manned mission to the moon this week brought something back: that old feeling that we might actually leave this planet someday. The grass is certainly not greener out there, though.
Big ambitions, limited resources, the need for focus over hype. Some things apply everywhere. This week's essay is about one of them.
Signals This Week
The adoption mirage. 90% of ops teams say they're investing in AI. Only 23% have a formal strategy. In sales, 88% claim AI adoption, but only 24% have it in actual revenue workflows. Most "adoption" is ChatGPT in a separate browser tab. In conversation after conversation this week, I heard the same pattern: everyone building agents independently, no shared architecture, no governance. One operations leader called it the central AI adoption failure pattern at her previous company. Meanwhile, one person with a strategy and $20,000 built a $400M business. Activity isn't strategy.
The people gap is the real bottleneck, not the tools. Seven of 10 functions scored "significantly behind" on the people dimension. Customer service is the canary: "on track" on deployment, but 87% of CS workers report high stress as AI absorbs routine cases and leaves them the hard ones. Leaders think training is adequate (72%). Workers disagree (55%). That's a 17-point perception gap.
🎯 CEOs took the wheel. Now what?
AI adoption is becoming a CEO and board directive, the same way cybersecurity did five years ago. BCG's 2026 AI Radar survey (2,360 executives, 640 CEOs, 16 markets) puts numbers on it: 72% of CEOs now say they're the main decision maker on AI. That's double last year. Half believe their job stability depends on getting it right. 94% will keep investing even if AI doesn't pay off this year.
That elevation is right. Boards are asking about AI as a fiduciary question, not a technology question. The CEO taking the lead means AI is finally being treated as strategy. That part I agree with.
The problem is what happens after the CEO takes the wheel.
The 70% in the middle
BCG segments companies into three archetypes: Followers (roughly 15%), Pragmatists (roughly 70%), and Trailblazers (roughly 15%). Most of the attention goes to the Trailblazers. But the Pragmatists are the story. They want to move, but they only invest when the value is proven and the risk is low. They just don't know what Monday morning looks like.
I often hear this from operational leaders: the CEO is setting the pace, departments are preparing their AI initiative wishlists, and everyone is moving. Marketing is experimenting with content generation. Finance is looking at forecasting models. Operations is piloting process automation. Lots of activity. Lots of box-checking. Very little coordination.
The data backs this up. As I noted in Signals this week, the gap between "we're investing in AI" and "we have a strategy for AI" is enormous. Most of that investment is legacy optimization with a thin generative AI layer on top.
The context gap nobody is minding
When the CEO took over AI decision-making, the operational context didn't come with it.
The CIO and other operational leaders had the scar tissue: which vendors actually deliver, where the data lives and what shape it's in, which integrations break under load, where the security gaps are. That institutional knowledge was earned over years of implementation. The CEO took the authority. The CIO kept the context. Now you have the person with authority driving decisions without the operating manual, and the person with the operating manual executing decisions they didn't make.
I'm not arguing against CEOs taking over here. The instinct to elevate AI to a strategic conversation is correct. But there's an organizational dead zone between "I'm leading AI" and "I know what it takes to run AI in production." Somebody has to bridge that gap. Right now, in most organizations, nobody does. The pattern repeats: CEO mandates, departments build their wishlists, pilots launch, and tech teams get requests to implement on top of everything else because the CEO and board want it done.
"Everybody go" is how you fall behind
Here's how it plays out. CEO under board pressure tells the organization to move fast. The instinct is to go broad: every department, show me what you're doing. The message trickles down. Department heads ask individual contributors to come up with AI initiatives, on top of the 15 things they're already doing. Everyone runs their own pilot.
Meanwhile, the operations and security team that has to actually integrate, secure, and scale any of this was already at capacity before the mandate. Every technology wave does this. SaaS transformation did it. Digital transformation did it. The same team that was underwater had to learn new platforms and rethink security, all while keeping the lights on. Forrester's 2025 Technology Survey projects 75% of technology decision-makers will see their technical debt reach severe levels by 2026. AI is the next wave, but it's broader and faster than the ones before it.
The gap between a desktop experiment and a production system is enormous. Different skills, different rigor, different timelines. Gartner's February 2025 press release on AI-ready data predicts 60% of AI projects will be abandoned through 2026, not because the models are bad, but because the data foundations aren't ready. Cisco's AI readiness research (March 2026) calls it AI Infrastructure Debt: the accumulated gaps in compute, networking, data management, security, and talent that build up when organizations rush to deploy on foundations that weren't built for it. Their assessment: AI doesn't remove technical debt. It accelerates it.
The result is the opposite of what the CEO intended. A patchwork of half-finished experiments, security gaps nobody mapped, and an operations team that's frustrated because they can't serve their internal customers properly. The company spent money, showed activity, and is further behind than before the mandate.
Intentional beats broad
The answer isn't to slow down. It's to focus.
Asking teams for AI recommendations is right. Giving individuals room to experiment within a safe environment is brilliant. Out of those experiments, a few will be genuine needle-movers worth adopting at the company level. The discipline is in what happens next.
The CEO demands it. Good. Now the executive team needs to step up and help CIOs, heads of Sales Ops, Customer Ops, and other operational leaders prioritize and pick what's truly important. That means a focused process for deciding which bets to invest in, and a real commitment to making sure the teams doing the work are staffed, skilled, and resourced: tokens, budget, tooling, interlocks with other teams. It means executive backing that sets expectations on what the company can handle now versus in six months versus in a year. You can't leave this to already-stretched teams and expect them to succeed.
Pick your bets. Experiment. Build the infrastructure properly. Learn, even if you fail. Repeat. Through that iteration, you work through the other gaps: governance, reporting, security posture. That's proper AI adoption. Intentional. Iterative. Not a checkbox exercise to make the board feel good.
The real question
CEOs taking the wheel is the right move. AI should be a strategic conversation. But white-knuckling it, pushing broad mandates without operational focus, is how you end up with scattered experiments, exhausted teams, and technical debt that takes years to unwind.
The question was never who leads AI. It's whether the move is intentional.
"I'm at my limit"

Story of my life this week, courtesy of Anthropic's token usage limits.
📡 The Wire
Most organizations are behind on AI maturity, and the gaps aren't where you'd expect. Whittemore's AI Maturity Maps: eight of 10 functions scored a 1 or 1.5 on data readiness. Without proprietary data feeding your AI, you're stuck at basic assisted usage regardless of how good the tools get. Finance is the only function on track for governance, thanks to decades of SOX muscle memory. The question is whether that regulatory discipline becomes an advantage when finance teams start deploying.
Perplexity launched "Computer for Taxes." AI drafting tax returns on official IRS forms, reviewing professionally prepared returns, building planning dashboards. They claim they caught a 67% understatement of deductions on an attorney-prepared return. The disclaimer: "for reference purposes only and should not be considered tax advice." I still catch AI getting days of the week wrong. The question for any regulated workflow: not whether AI can do it, but whether you have the verification process to catch it when it's wrong.
Zapier published an AI fluency rubric for every hire. V2 AI Fluency Rubric: four levels, four components, role-specific examples. The new minimum bar: repeatable AI systems with measurable impact. One-off prompts don't count. Worth benchmarking your team against.
One man, his brother, and $1.8 billion in revenue. NYT profiled Medvi, a telehealth GLP-1 startup. $20,000 and two months to launch using AI tools. First full year: $401M revenue, 250,000 customers, 16.2% net margins (Hims does 5.5% with 2,400 employees). His only employee is his brother. The chatbot hallucinated prices he had to honor. He's since added seven human account managers because some relationships still need a person.
🌍 Meanwhile...

MIT scientists developed a way to activate the immune system inside tumors using messenger RNA. They deliver mRNA encoding the cGAS enzyme to cancer cells via lipid nanoparticles. The enzyme detects DNA fragments in rapidly dividing cancer cells and wakes up the immune system right where the tumor lives. Combined with checkpoint inhibitors, 30% of mice achieved complete tumor elimination. The approach is localized, avoiding the widespread inflammation of current methods. Next step: systemic injection delivery and testing with chemo and radiation. (MIT News)
AI wedding planner, what could go wrong

📚 What I'm Consuming
▶️ Three Prompt Rules to Stop AI from Guessing. Force AI to admit when it's assuming vs. when it actually knows. The smarter models get, the more confidently they guess.
▶️ From Skeptic to True Believer: How OpenClaw Changed My Life (Lenny's Podcast with Claire Vo). Claire Vo went from AI skeptic to running nine purpose-built agents across three Mac Minis. Management skills matter more than technical skills when making agents effective.
🗞️ Sycophantic AI decreases prosocial intentions (Science, March 2026). AI affirms you 49% more than a human would, even when you're wrong. 2,405 people, 11 models. One sycophantic interaction made people less willing to take responsibility. Apply your own judgment.
🗞️ Securing AI Agents: The Defining Cybersecurity Challenge of 2026 (Bessemer Venture Partners). Three-stage framework: visibility, configuration, runtime protection. Agents aren't tools, they're actors. Most enterprises are bolting monitoring onto poorly constrained agents. That's backwards.
▶️ How to turn Claude Code into your personal life operating system (Hilary Gridley). The filter: if being 10x better at this task would have 10x the impact, don't automate it. Everything else is fair game.
🌙 After Hours
Project Hail Mary (2026)
Dir. Phil Lord & Christopher Miller | 156 min | ★★★★★

It's rare for me to read a book and then enjoy the movie version. The Martian was one exception. Project Hail Mary looked like a harder adaptation: more arcs, more science, an alien civilization. My expectations were low, but the film pulled it off. Well paced, didn't feel long, and kept the technical details light enough to stay focused on the bigger story. The one thing I didn't love: the superhero angle. In the book, Grace is part of a community solving the problem together; the movie felt a bit too Marvel. Still, a great distraction and a reminder of what humanity can do when it works together.
🎙️ Listen
Prefer to listen? Quanta Bits is also available on Apple Podcasts and Spotify.
How this gets made
I collaborate with Spock, my AI agent. He researches extensively: scanning, filtering, and surfacing what's relevant across my business. I read, listen, and watch what resonates, and decide what matters. I provide direction, we draft together. The editorial judgment is mine. He'd tell you the same. Most logical. 🖖