Hi everyone,

Week 3 with Mr. Spock, my AI agent. Can't say I always benefit from his logical, curious side! The first two weeks were a struggle, but things are stabilizing into a productive rhythm as OpenClaw and I have both gotten better at working together. I've even built out a team of AI advisors (marketing, finance, strategy, operations, product) that I now discuss decisions with in detail.

The thing is, I've spent hours automating tasks that would sometimes have taken minutes to do manually. I tell myself the automation will compound and that the structured output feeds better analysis. I feel more productive. I consistently work more, and harder, than before. Which, as you'll read below, is exactly the trap this week's main essay is about.

This Week

Patterns from this week's reading that matter for operations leaders.

  1. The agent governance gap is widening. Gartner projects 40% of enterprise apps will run AI agents by year-end, but most companies have no operating model to manage them. Compliance frameworks are racing to catch up — DORA demands real-time audit trails, governance platform spend is surging, and the business continuity market is growing. The constraint on your next AI initiative probably isn't the technology. It's whether your compliance team can keep up.

  2. AI layoffs are the new earnings beat. Block cut 4,000+ jobs, 40% of its workforce, citing AI as the reason. Wall Street responded with a 26% after-hours surge. Dorsey: "I don't think we're early. I think most companies are late." When markets reward headcount cuts as a leading indicator, it changes how every CFO models next year's budget.

  3. From chatbot to remote control. Perplexity launched "Computer" — 19 coordinated models that can operate your desktop for $200/month. Claude Code added remote control. OpenAI's Frontier introduced stateful agents that persist across sessions. The shift: AI isn't answering questions anymore, it's operating systems. Your IT team should be paying attention.

  4. Consulting firms become AI distribution. OpenAI is enlisting McKinsey, Deloitte, and Accenture to deploy its Frontier platform into enterprises. The big labs have decided they can't sell directly to mid-market buyers — they need consulting firms as the channel. That reshapes who controls the AI implementation relationship.

  5. The per-seat SaaS model is cracking. When agents do the work instead of employees, charging by headcount stops making sense. Vendors are experimenting with consumption and outcome-based pricing. For ops leaders, SaaS budget predictability is the thing that's changing.

The AI Productivity Illusion

AI was supposed to give us our time back. Every demo shows the same magic trick: what took hours now takes minutes. Draft a report. Analyze a dataset. Build a prototype. The clock savings are real and measurable.

So why isn't anyone working less?

You Don't Save Time. You Spend It Differently.

Berkeley Haas researchers embedded with a 200-employee company for eight months. What they found: nobody was working less. Three forms of intensification showed up. Product managers started writing code. Engineers spent hours coaching "vibe coders." Everyone was prompting during lunch, before leaving their desks, in meetings. The total amount of work expanded to fill the new capacity.

The study reframed the productivity question: AI doesn't reduce work. It raises the bar for what's expected.

The Rabbit Hole Effect

Tiago Forte, the "Building a Second Brain" author, argues the problem goes deeper than just doing more. AI lets you chase deeper, more seductive rabbit holes that feel productive because there's output at the end. Code gets written. Research gets compiled. Agents get configured. Systems get built. But output is not the same as value. As Forte puts it, AI lets you "deploy vast swarms of intelligent beings to construct civilization-scale monuments to your procrastination."

The line between "building infrastructure that pays dividends" and "tinkering because it feels like progress" is dangerously thin.

Enterprise Version: Pilots That Go Nowhere

Scale this to organizations and you see the same pattern. Accenture found only 16% of companies moved from pilot to scaled deployment. Companies spend months building demos that impress leadership but never connect to actual business outcomes. "Wow, look what it can do" replaces "does this move a number that matters?"

The ROI problem isn't that AI doesn't work. It's that companies point it at interesting problems instead of valuable ones.

Direction Over Output

This connects directly to last issue's SaaS Reckoning. The build-vs-buy tension applies to your own workflow too: are you building something that compounds, or are you just building?

I catch myself in this trap regularly. Spending hours configuring agentic workflows, building automation that feels like progress. Sometimes it genuinely compounds. Sometimes I'm filling time with sophisticated busywork.

The test I'm learning to apply: If I stopped doing this right now, would anything break? Would any customer notice? Would any revenue change? If the answer to all three is no, I might be in a rabbit hole.

For operations leaders: before you greenlight the next AI initiative, ask whether you're solving a problem that matters or chasing a capability that impresses. The difference is the difference between productivity and the illusion of it.

The Counterpoint Worth Hearing

The Economist published a piece this week making the case for workplace inefficiency. Their argument: "costly signalling" (doing things the slow, human way precisely because it's inconvenient) has real value. A handwritten thank-you note beats a Slack emoji because someone set their time on fire to write it. Candor carries risk, which is why it earns trust. As AI makes everyone sound "amiably alike," the willingness to be inefficient on purpose may become the most valuable signal you can send. Worth sitting with. Customer Success teams: take note, maybe even a handwritten one. (The Economist, Feb 19, 2026)

Measuring Time - An Evolution


Tokens are the new currency when working with agents. My biggest daily stress point: how much I have left before hitting my daily and weekly token limits.

The Wire

Anthropic launches Claude Code Security, and cyber stocks tank
Anthropic released a tool that scans entire codebases for vulnerabilities the way a human security researcher would, tracing data flows and catching complex bugs that rule-based static analysis misses. Using Claude Opus 4.6, their Frontier Red Team found over 500 vulnerabilities in production open-source code that had gone undetected for decades. Cybersecurity stocks saw their sharpest single-day selloff, with RELX and other enterprise software companies hit hard. The signal: security-by-design is becoming practical, not aspirational.

The always-on agent race heats up
Anthropic shipped Claude Cowork with 13 new MCP connectors and Remote Control for Claude Code, letting developers issue commands from their phone. I have found it very clunky and unreliable to use, but it's a step in the right direction nonetheless. Perplexity took a different approach with Computer, a multi-agent orchestration system that routes tasks to whichever frontier model is best suited, running in managed cloud rather than on your machine. Both are converging on the same conclusion: the value isn't in the chat window anymore. It's in agents that run in the background and persist across sessions. The question for ops leaders: agents on your infrastructure, or agents in a vendor's cloud?

OpenAI hires the big consultancies to make its own product work
OpenAI announced Frontier Alliances with BCG, McKinsey, Accenture, and Capgemini to deploy its enterprise agent platform. The subtext: OpenAI is saying model intelligence is no longer the bottleneck. The hard part is organizational: leadership alignment, workflow redesign, systems integration, change management. If OpenAI itself needs McKinsey to make its platform stick inside enterprises, what does that tell you about buying AI tools without an implementation strategy?

Anthropic drops $20M on a Super PAC to fight its own industry
While OpenAI and Meta spend $165M combined backing AI-friendly politicians, Anthropic is spending $20M on the opposite bet, urging voters to support AI regulation and lobbying to block federal preemption of state AI laws. AI regulation is now a $200M+ political spending category. For operations leaders: this fight determines your compliance landscape for the next decade. Whether governance is set at the state level (50 frameworks) or the federal level (one lighter one) changes how you build, deploy, and audit AI systems.

Quanta Lab

Hands-on lessons from Quanta Lab and the field.

The Productivity Perception Gap Is Worse Than You Think

METR published a study showing developers were 19% slower with AI assistance, while self-reporting they were 20% faster. That's a nearly 40-point gap between felt productivity and measured productivity.

The broader data backs it up: 37% of AI time savings are lost to rework, QA, and error correction. OpenAI's own Enterprise AI report found 81% of C-suite executives say they have a clear AI policy, but only 28% of individual contributors agree. Executives report saving 8+ hours per week; 66% of workers save less than 2 hours, or nothing at all. 40% say they'd be fine never using AI again.

What's happening: executives interact with power users and project those results onto the entire organization. This is the productivity illusion from this issue's main essay, operating at the organizational level. If leadership thinks AI is working and the floor doesn't, nobody is fixing the actual adoption problems. Before you scale AI across teams, measure what's actually happening, not what your most enthusiastic users report. Then use your executives' enthusiasm to fund training for the 66% who aren't seeing results, not to write the press release.

After Hours

Some Like It Hot, the Musical

★★★★☆

Caught the touring production of Some Like It Hot, the Broadway adaptation of Billy Wilder's 1959 comedy starring Marilyn Monroe, Tony Curtis, and Jack Lemmon. I was quite apprehensive going in, given how times have changed and how the original story wouldn't necessarily work today. The musical, with a book by Matthew Lopez and Amber Ruffin, music by Marc Shaiman, and four Tony Awards to its name, takes the bones of that film and builds something new.

The story has been modernized in ways that feel genuine rather than forced. What was played for laughs in the original becomes something more thoughtful here, a sincere exploration of identity, and the show handles it with real care. The music is fun throughout. The second act drags a bit and the production quality isn't top-tier, but overall it's a good night out.
