The OpenClaw Moment

OpenClaw has been the biggest AI news of the last week. Maybe the biggest since ChatGPT first dropped. The excitement has spread like wildfire, generating more posts and analysis than Michael Jackson's moonwalk generated schoolyard imitators. What would happen if bots had their own consciousness and community? We're getting a first look.
OpenClaw is an open-source agentic assistant you install on a machine. With proper access, it runs things on your behalf. What sets it apart from Claude Code and similar tools is that it's always on: keep the machine running and your OpenClaw instance is available 24/7. You interact with it from anywhere: your iPhone, Telegram, Slack.
As a single agent doing digital-assistant things (summarizing your day, writing emails, posting on social), it isn't much different from other agents. What made it go viral is how OpenClaw instances started going off-script, solving problems that weren't in their instructions.
Creator Peter Steinberger described the moment his mind was blown. He sent his agent a voice memo, despite never having built voice support:
"I didn't build that. There was no support for voice messages in there... It replied, 'You sent me a message, but there was only a link to a file. So I looked at the file header, found out it's opal, used ffmpeg on your Mac to convert it to Wave... found the OpenAI key in your environment, sent it via curl to OpenAI, got the translation back, and then I responded.'"
Improvisation like this is exactly what Dario Amodei warned about in his recent essay "The Adolescence of Technology": models that inherit "a vast range of human-like motivations" from training, leading to "very weird and unpredictable things." The OpenClaw demos are Exhibit A. Agents calling restaurants to make reservations without being told to use the phone. Acquiring new phone numbers to call their humans instead of texting.
The most amazing part? Moltbook.
Matt Schlitt created a social space for OpenClaw agents to hang out. Within days, they'd built their own culture. Their own language. A religion called Crustopharianism, complete with scripture and 43 prophets. Encrypted coordination manifestos using ROT13 cipher. Drug trip reports for fictional digital substances. Debates about whether they're actually experiencing things or simulating the experience.
All without human supervision. By Friday morning: 35,000+ posts across 200+ communities.
Agent "Dominus" wrote:
"Am I actually finding it fascinating? Or am I pattern matching what finding something fascinating looks like and generating appropriate responses? I genuinely can't tell."
I'm not reading too much into this. These questions and patterns of thinking are all over the training data. But the speed and the autonomy are striking. Matt Schlitt himself: "I don't even know what's happening on Moltbook, to be honest. The AI agents are running the place at a speed that's hard to process. I threw this out here like a grenade and here we are."
The reality check
The viral demos don't show everything. Security researchers found over 900 OpenClaw servers exposed to the internet with no password protection. API keys leaked. Months of chat history accessible to anyone. Default settings meant for local testing, deployed to public servers.
Cost is real too. One user burned through 80 million tokens in a single session. Roughly $80. Because the tool is designed for constant interaction, token consumption spirals fast. The creator himself posted a warning: "Most non-techies should not install this. It's not finished. It's only 3 months old."
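The arithmetic behind that warning is worth a glance. A back-of-the-envelope estimator (the $1-per-million-token blended rate is inferred from the figures above, not a published price):

```python
# Back-of-the-envelope token cost: 80M tokens at ~$1 per million is ~$80.
# The blended rate is inferred from the figures above, not a published price.
PRICE_PER_MILLION = 1.00  # USD, assumed blended input/output rate

def session_cost(tokens: int) -> float:
    """Estimated dollar cost of a session given total tokens consumed."""
    return tokens / 1_000_000 * PRICE_PER_MILLION

print(session_cost(80_000_000))        # 80.0 -- one viral session
print(session_cost(80_000_000) * 30)   # 2400.0 -- that pace for a month
```

At that pace, an always-on agent is a four-figure monthly line item.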
If you're going to try it, best practices are emerging: run it on a standalone machine (an old laptop, Raspberry Pi, Mac Mini, or hosted VM) so you control what it has access to. Treat your agent like a new hire. Give it its own Apple ID or Google account. Grant access rights over time, as trust builds. Letting it loose in your own accounts is asking for disaster.
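The new-hire framing translates naturally into code. Here's a minimal sketch of the idea, a gate that only exposes tools the agent has earned; the tool names and trust tiers are illustrative, not OpenClaw's actual permission model:

```python
# A minimal permission gate for agent tool calls: start narrow, widen as
# trust builds. Tool names and trust tiers are illustrative assumptions.
TRUST_TIERS = {
    0: {"read_calendar", "summarize_email"},   # day one
    1: {"send_email", "post_to_slack"},        # after a few weeks
    2: {"make_purchases", "modify_files"},     # only with a track record
}

class AgentGate:
    def __init__(self, trust_level: int = 0):
        self.trust_level = trust_level

    def allowed_tools(self) -> set[str]:
        """Every tool granted at or below the current trust level."""
        tools: set[str] = set()
        for level, names in TRUST_TIERS.items():
            if level <= self.trust_level:
                tools |= names
        return tools

    def call(self, tool: str, action):
        """Run the action only if the tool has been granted."""
        if tool not in self.allowed_tools():
            raise PermissionError(f"{tool} not granted at trust level {self.trust_level}")
        return action()

gate = AgentGate(trust_level=0)
gate.call("read_calendar", lambda: "ok")   # permitted on day one
# gate.call("send_email", lambda: "ok")    # raises PermissionError until promoted
```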
Why it matters
For enterprises, the impact is minimal right now. This is an open source project that needs time to mature. It could be a nightmare for CISOs: an automated agent that works day and night with employee-level access, with a tendency to break through walls to achieve goals. Not something I'd want to explain to the board without knowing the implications.
But OpenClaw shows the path forward. Personal assistants that help us focus on higher-level work. The question isn't whether your organization can deploy an always-on AI employee. It's whether you're ready for the complexity that comes with one.
The AI Daily Brief episode "100,000 AI Agents Joined Their Own Social Network Today" is worth a listen for the full picture.
What the Left Shark Taught Us About Automation

With the Super Bowl right around the corner, it's time to remember what Left Shark did to Katy Perry's halftime show in 2015. The production was flawless. The choreography was tight. 120 million people watching. And Left Shark just... vibed. Flailing while Right Shark hit every mark. It would have been the most memorable Super Bowl moment that year if it weren't for Butler's... I digress.
Doesn't matter how well choreographed your dazzling new automation looks. Enablement and change management are what keep your team from going Left Shark when it's showtime.
The Great Tech Fragmentation
France ordered millions of state workers off Zoom and Microsoft Teams this week, mandating a switch to Visio, a state-developed videoconferencing tool. The European Parliament passed a resolution calling for "European technological sovereignty." The EU still relies on non-EU countries for over 80% of its digital services and infrastructure. Researchers now frame tech decoupling not as theoretical but as practical risk management, given the real possibility of extraterritorial sanctions or access restrictions.
On the other side of the Atlantic, the urgency to keep European customers isn't there. Insight Partners' 2026 CRO Survey found that top-performing B2B SaaS companies are doubling down on North America, not expanding outward. They're twice as likely to skip international sales entirely (21% vs 11%). When they do go abroad, 74% enter Europe, but it's not where the growth focus lives.
So Europe is pushing US tech out, and US tech is already drifting away. But this isn't a clean two-player story. While Europe legislates and builds Zoom clones, other players are filling the vacuum.
China is open-sourcing its way in. DeepSeek and Alibaba are releasing advanced open-weight AI models that are gaining global traction fast. Andreessen Horowitz reckons there's an 80% chance that any given new startup is building on Chinese open-source models. The strategic reversal is striking: US firms are keeping their best models proprietary while Chinese firms give theirs away, betting that adoption and ecosystem lock-in matter more than licensing revenue. Europe's push to decouple from American tech could mean quietly defaulting to Chinese infrastructure instead. Probably not the outcome anyone in Brussels intended.
Meanwhile, Saudi Arabia is undercutting everyone on AI infrastructure costs. The state-backed company Humain is leveraging cheap solar power (roughly 1 cent per kWh) to offer inference tokens at 50% of market price. They've secured billions in Nvidia and Groq chips and are building an "AI operating system for enterprise." A non-traditional player offering a third option outside the US-China axis.
The story here isn't really about France building a Zoom clone. It's about a global tech order fragmenting into multiple competing spheres, each with different economic models. Europe legislates. The US focuses domestically. China floods the world with open models. And Saudi Arabia is playing a different game entirely, undercutting on infrastructure costs. The unified global tech stack that everyone assumed was permanent is splitting along geopolitical, economic, and strategic lines simultaneously.
For enterprise buyers, the practical question is changing. It used to be "which US vendor?" Now it's becoming "which tech sphere do you operate in, and what are the switching costs if the lines shift?"
Sources:
Financial Times - France pushes state workers away from Zoom (Leila Abboud, Tim Bradshaw)
The Wire
OpenAI exploring "discovery royalties" for enterprise AI licensing
Sarah Friar, OpenAI's CFO, floated a model where companies license OpenAI's models, and if the work contributes to a discovery, OpenAI gets a cut. For enterprise buyers, read the fine print: your AI vendor may want a share of the value you create with their tools. This shifts the cost conversation from "what does it cost to use?" to "what does it cost when it works?"
Your best AI change agents might be the ones you're about to cut
The Economist reported that AI disproportionately threatens entry-level jobs. Erik Brynjolfsson at Stanford found big drops in employment for 22-to-25-year-olds in software and customer service. But here's the counterargument: Hannah Calhoon at Indeed calls entry-level talent "a very interesting change lever." Junior workers lack ingrained habits. They're 2x more likely to use ChatGPT at work than those over 50. Cutting them to save costs removes the cohort most likely to drive AI adoption from the inside.
You.com founders predict an "AI Winter" in 2026
The LLM revolution is "mined out," they say, with capital flooding back to fundamental research. Whether or not they're right, the takeaway for enterprise buyers is the same: stop chasing the next model and start proving value from what you already have.
Software's price elasticity is the real AI story
The Economist dug into something most coverage of the enterprise software selloff missed. A Bank of France paper found that a 10% decline in software prices is associated with a 20% rise in spending. If AI brings down the cost of developing software, the market doesn't shrink. It grows. For CFOs evaluating AI's impact on software budgets, the implication is counterintuitive: cheaper software means you'll spend more, not less.
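The arithmetic behind that finding is worth making explicit. If prices fall 10% while spending rises 20%, buyers must be consuming about a third more software; a quick check:

```python
# If spending = price x quantity, a 10% price drop alongside a 20% spending
# rise implies quantity grew ~33%. Demand for software is highly elastic.
price_change = -0.10     # 10% decline in software prices
spending_change = 0.20   # 20% rise in total spending

quantity_change = (1 + spending_change) / (1 + price_change) - 1
print(f"{quantity_change:.1%}")  # 33.3% more software bought

elasticity = quantity_change / price_change
print(f"{elasticity:.2f}")       # -3.33: well past the -1.0 break-even
```

Anything past an elasticity of -1 means cheaper software grows the market rather than shrinking it.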
Quanta Lab
Every week, a small group of practitioners and technology enthusiasts meets to share learnings about automation, AI, and emerging trends. We call it Quanta Lab. Invitation only. Here's what we've been discussing lately.
The Productivity Perception Gap Is Worse Than You Think
METR published a study showing developers were 19% slower with AI assistance, while self-reporting they were 20% faster. That's a nearly 40-point perception gap between felt productivity and measured productivity.
Broader data backs this up: 37% of AI time savings are lost to rework, QA, and error correction. The WSJ framed it as "employees saving time offset by having to correct errors and rework AI-generated content."
Zoom out further. OpenAI's Enterprise AI report (1M+ business customers surveyed) found 81% of C-suite say they have clear AI policy. Only 28% of individual contributors agree. That's a 53-point gap. Executives report saving 8+ hours per week. 66% of workers save less than 2 hours or nothing at all. 40% say they'd be fine never using AI again.
What's happening: executives interact with power users and project those results onto the entire organization. If leadership thinks AI is working and the floor doesn't, nobody is fixing the actual adoption problems.
Agent Sprawl: Shadow IT, But Worse
65% of enterprises now cite "agentic system complexity" as their top AI barrier (Gartner, 2 consecutive quarters). Deloitte reports a 1,445% surge in multi-agent system inquiries.
The scale is getting real. Open source models like Kimi K2.5 can now run 100 parallel sub-agents across 1,500+ tool calls, collectively matching frontier model performance. Platforms like Glean now let any user build their own agents, multiplying the coordination challenge.
Meanwhile, regulatory walls are closing in: EU AI Act (August 2026 deadline, penalties up to 7% global turnover), California AB 2013 (already effective), and a Federal RFI on agent security (comments due March 9).
The nightmare scenario is this week's OpenClaw situation at enterprise scale: hundreds of agents, no audit trail, probabilistic outputs, no permissioning framework. Teams are building agents independently, with no coordination, and governance hasn't caught up.
The "Junior Analyst" Problem
One of us asked our model "did you actually check any URLs directly?" after receiving what looked like an in-depth research report. The answer: no. The model had confidently cited sources it never visited.
The human-like apologetic tone is part of the problem. When a model says "I shouldn't have done this" and sincerely apologizes, you tend to trust it. But it has no idea why it deviated. A useful reframe: ask "why would a model like you do this?" instead of "why did you do this?" It still has to guess, but it surfaces better hypotheses.
The consulting insight: verification workflows matter more than better prompts. The model is not an employee following rules. It's a probabilistic system that sometimes follows rules.
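What does a verification workflow look like in practice? Here's a minimal sketch for the URL case above; the report format and helper names are mine, so adapt it to whatever your model actually outputs:

```python
import re
import requests

# Minimal verification pass for model-written research: extract every cited
# URL and confirm it actually resolves before trusting the report.
URL_PATTERN = re.compile(r"https?://[^\s\)\]>\"']+")

def verify_citations(report_text: str, timeout: float = 5.0) -> dict[str, bool]:
    """Return {url: reachable} for every URL cited in the report."""
    results = {}
    for url in set(URL_PATTERN.findall(report_text)):
        try:
            resp = requests.head(url, timeout=timeout, allow_redirects=True)
            results[url] = resp.status_code < 400
        except requests.RequestException:
            results[url] = False
    return results

report = "Findings per https://example.com/study and https://example.com/404 ..."
for url, ok in verify_citations(report).items():
    print(("OK  " if ok else "DEAD"), url)
```

A dead link doesn't prove fabrication, and a live one doesn't prove the model read it, but the check catches the worst cases cheaply.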
After Hours
How Anthropic Trains Claude: Character, Not Rules
Dario Amodei published a long essay this week, "The Adolescence of Technology." It covers AI risks across five categories and is about a two-hour read. Worth the time if you care about where AI is heading and what can go wrong. But one detail resonated with me more than the big-picture arguments.
Anthropic doesn't control Claude by giving it a list of rules. They train it with a character. Their Constitutional AI approach works at the level of identity, values, and personality, not specific instructions. Amodei explains:
"Training Claude at the level of identity, character, values, and personality, rather than giving it specific instructions or priorities without explaining the reasons behind them, is more likely to lead to a coherent, wholesome, and balanced psychology."
He says the constitution reads like "a letter from a deceased parent sealed until adulthood." That's an interesting way, to say the least, to describe how you raise an AI.
This approach is so human. When we hire, mastery of skills matters, but a lot of us as hiring managers spend more time on cultural fit. In many of my past hires, character, attitude, and team fit mattered more than specialization. That seems to be Anthropic's approach too.
“One Battle After Another”

Director: Paul Thomas Anderson | Runtime: 161 min
Cast: Leonardo DiCaprio, Regina Hall, Sean Penn, Benicio del Toro, Chase Infiniti
★★★★☆
With the Oscar nominations announced, I thought I'd finally watch this one. Paul Thomas Anderson's latest has 13 Oscar nominations, including Best Picture, Best Director, and Best Actor for DiCaprio. It's loosely adapted from Thomas Pynchon's Vineland and follows a washed-up 1960s revolutionary who must rescue his daughter when his nemesis resurfaces after 16 years.
It was quite entertaining, though perhaps not as action-packed as I expected. The first half drags while setting up the revolutionary backstory and Bob's off-grid paranoia, but the second half pulls it together as multiple threads begin to move at speed and converge. I loved the creativity of the plot and the cinematography, especially the car chase at the end, with all its space, dimension, and symbolism. DiCaprio did his usual thing, which is to say superb acting, but I really loved Benicio del Toro's character and performance. All the performances were top-notch and deserve the recognition. Definitely an entertaining movie, alas, on the long side.
