Welcome back to AI Leverage — your daily five-minute briefing on the AI developments that actually matter. No jargon. No hype. Just the stories you need to understand, explained clearly.
In Today’s Edition
A leaked internal document reveals Anthropic is quietly testing a model it calls “Mythos” that reportedly represents a generational leap in AI capability. OpenAI has pulled the plug on its Sora video generation API after concluding that video AI is not yet economically viable at scale. Amazon and OpenAI have struck a surprising partnership to build persistent memory infrastructure into Amazon’s cloud AI platform. The Model Context Protocol — the emerging standard that lets AI tools talk to each other — has crossed 97 million installs. Google is now feeding Gemini your Gmail, Photos, and YouTube data by default. And Manus has launched a desktop app that lets an AI agent operate directly on your computer.
The Lead
Anthropic’s “Mythos” Model: A Step Change Hidden in a Data Leak
Anthropic, the company behind the Claude family of AI models, is testing a new system internally called “Mythos” — and it was not supposed to be public knowledge yet. A misconfigured content management system left a draft blog post and internal performance data in an unsecured, publicly accessible data lake. By the time the company locked it down, the AI research community had already seen enough.
What the leaked materials describe is not an incremental improvement. Anthropic’s own internal language calls Mythos a “step change” in capability — meaning it does not simply perform a few percentage points better on benchmarks, but represents a qualitatively different level of performance. While the company has not confirmed specific details, early analysis of the leaked benchmarks suggests major advances in reasoning, instruction following, and autonomous task completion.
Why does this matter to you? The practical implication is that AI assistants are about to get meaningfully more capable at handling complex, multi-step work — the kind of tasks that currently require significant human oversight. If you use AI tools in your job, expect the next generation to handle substantially more of the thinking, not just the typing. For businesses evaluating AI strategy, this leak is a signal that the capability ceiling is rising faster than most planning cycles assume.
Five Stories Worth Your Attention
OpenAI Pulls the Plug on Sora’s Public API
OpenAI has announced the discontinuation of the Sora public API, its video generation service, with 30 days' notice to developers. The reason is blunt: the economics do not work. Generating video with AI requires enormous computational resources, and OpenAI concluded that the cost per generated minute is not sustainable at current pricing. This is significant because it forces the entire video AI sector to confront a hard question — can AI-generated video ever be cheap enough to scale commercially? For now, the answer appears to be no.
Amazon and OpenAI Partner on Stateful Runtime for Bedrock
Amazon Web Services and OpenAI have announced a joint effort to build a Stateful Runtime Environment on Amazon Bedrock, Amazon’s cloud AI platform. In plain terms, this means AI models running on Amazon’s cloud will be able to maintain persistent memory and use tools across sessions — rather than starting fresh every time. This partnership positions memory and tool-use infrastructure as the foundation of the next generation of AI applications, and it signals that the competitive landscape is shifting from raw model performance toward the infrastructure that makes AI practically useful.
Model Context Protocol Crosses 97 Million Installs
The Model Context Protocol, or MCP, has crossed 97 million installs as of March 2026. MCP is a standard that allows AI models to connect with external tools, databases, and applications in a consistent way — think of it as a universal adapter that lets any AI system plug into any software. Every major AI provider now ships MCP-compatible tooling. This milestone marks the transition from experimental curiosity to foundational infrastructure. If you build software or manage technology decisions, MCP compatibility is rapidly becoming a requirement, not an option.
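For readers who want a glimpse under the hood: MCP is built on JSON-RPC 2.0, and a tool invocation is just a small, standardized message. Here is an illustrative Python sketch of what a client's "call this tool" request looks like — the tool name and arguments are made up for the example, not part of any real MCP server.

```python
import json

# An MCP client invokes a server-side tool with a JSON-RPC 2.0 request
# using the "tools/call" method. The same message shape works against
# any MCP-compatible server — that consistency is the "universal adapter".
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_files",  # hypothetical tool a server might expose
        "arguments": {"query": "Q1 report", "limit": 5},
    },
}

# In practice this is sent over stdio or HTTP to the MCP server,
# which runs the tool and returns a JSON-RPC response with the result.
print(json.dumps(request, indent=2))
```

The point of the example is the shape, not the specifics: because every provider agrees on this envelope, a tool written once can be used by any AI system that speaks MCP.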
Google Rolls Out Personal Intelligence to All US Users
Google is making its “Personal Intelligence” feature available to all US users, including those on free accounts. This allows Gemini, Google’s AI assistant, to draw on data from your connected Google services — Gmail, Photos, YouTube, and more — to deliver responses that are aware of your personal context. The practical benefit is obvious: ask Gemini about a trip and it can reference your flight confirmation emails and hotel photos. The trade-off is equally obvious. You are granting an AI system deep access to your personal data.
Manus Launches Desktop AI Agent App
Manus, the AI agent startup, has released a desktop application that allows its AI to operate directly on your local computer. Unlike cloud-based AI assistants that work within a browser, the Manus desktop agent can interact with your files, applications, and workflows on your actual machine. This is a meaningful step toward AI that does not just answer questions but actively does work on your behalf. The desktop agent model raises important questions about trust and security, but it also previews a future where AI assistance is deeply integrated into how you use your computer.
What This Means for You
First, the Anthropic leak and the Amazon-OpenAI partnership both point in the same direction: AI systems are becoming more capable and more persistent. If you have been putting off learning how to integrate AI into your workflow, the window for getting ahead of the curve is narrowing.
Second, Google’s Personal Intelligence rollout is a reminder to audit your AI permissions. Check which services are connected to your Google account and decide deliberately what you are comfortable sharing. The convenience is real, but so is the data exposure.
Third, the death of Sora’s API is a useful reality check. Not every AI capability that generates excitement will survive contact with economics. When evaluating AI tools for your work, prioritize those with sustainable business models over those offering the most impressive demos.
Tool Worth Trying
Manus Desktop Agent — Manus has just launched its desktop application, and it is worth exploring if you want to see the future of AI assistance. The app lets an AI agent work directly on your computer, handling tasks like file organization, document editing, and workflow automation. It is best suited for knowledge workers who spend significant time on repetitive computer-based tasks. Download it from the Manus website, start with low-stakes tasks to build trust in the system, and gradually expand its responsibilities as you understand its capabilities and limitations.
The Number
97 million. That is how many times the Model Context Protocol has been installed as of March 2026. To put that in perspective, it took Docker — the technology that revolutionized how software is deployed — roughly four years to reach similar adoption numbers. MCP achieved it in under two. The speed at which AI infrastructure is being adopted is outpacing nearly every precedent in enterprise technology.
Final Word
If this briefing helped you understand today’s AI landscape a little better, forward it to one person who would benefit. The best way to stay informed is to bring your network along.
— Kirubel, AI Leverage
Stay leveraged.