Welcome back to AI Leverage — your daily five-minute briefing on the AI developments that actually matter. No jargon. No hype. Just the stories you need to understand, explained clearly.

In Today’s Edition

GitHub just announced that starting April 24, it will use your Copilot interaction data to train AI models — unless you manually opt out. OpenAI has indefinitely shelved plans for an erotic chatbot after pushback from employees and investors. Apple has quietly secured complete access to Google’s most powerful Gemini AI models inside its own data centers. A humanoid robot from Figure AI stood alongside the First Lady at a White House press conference. And Google has unveiled a breakthrough algorithm that could put powerful AI on your phone without needing the cloud.

The Lead

GitHub dropped a policy update yesterday that every developer and technical professional needs to know about. Starting April 24, 2026, GitHub will begin using interaction data from Copilot Free, Pro, and Pro+ users to train AI models. If you do not actively opt out, your prompts, code suggestions, and associated context from Copilot sessions become training data for future AI models.

Here is what that means in plain English. Every time you use GitHub Copilot — the AI coding assistant built into many developers’ workflows — it generates “interaction data.” That includes the questions you ask, the code it suggests, and the snippets you accept or reject. Until now, GitHub did not use this data for model training on free and individual plans. That changes next month.

The critical detail: this is an opt-out system, not opt-in. If you do nothing, your data flows into the training pipeline. GitHub says the stored contents of your repositories are not affected — only the live interaction data generated while using Copilot. Enterprise and Business plan users are excluded from this change entirely. To protect your data, visit your GitHub settings under the Copilot features page and disable the AI model training toggle before April 24.

Five Stories Worth Your Attention

  1. OpenAI Shelves Erotic Chatbot Plans Indefinitely

     OpenAI has put its planned adult-oriented chatbot feature on hold with no release date in sight, according to the Financial Times. The feature faced mounting opposition from both employees and investors concerned about societal risks — including the potential to foster unhealthy emotional dependence and expose minors to explicit material. The company had also struggled with a technical challenge: eliciting such content from models that had been specifically trained to avoid it. The reversal suggests that even OpenAI recognizes there are lines the market is not ready to cross.

  2. Apple Secures Full Access to Google Gemini in Its Data Centers

     Apple has reportedly locked in complete access to Google’s Gemini AI models, running them inside Apple’s own infrastructure rather than routing data to Google’s servers. Apple controls roughly 1.5 billion active devices worldwide. By running Gemini in its own data centers, Apple can offer powerful AI features while maintaining its privacy positioning. For iPhone and Mac users, this likely means a substantially smarter Siri is coming in upcoming software updates — powered by Google’s technology but governed by Apple’s privacy standards.

  3. Humanoid Robot Appears at White House Press Conference

     First Lady Melania Trump appeared alongside a humanoid robot built by Figure AI at a Wednesday press conference. Figure AI is one of the best-funded robotics startups in the world, backed by billions in investment from major technology firms. This is not theater — it is a signal that humanoid robotics has moved from research labs into the political mainstream. These machines are being designed for physical labor in warehouses, factories, and eventually homes, on a timeline measured in years rather than decades.

  4. Google Unveils TurboQuant for AI Compression

     Google researchers introduced TurboQuant, an algorithm that dramatically shrinks AI models so they can run on smaller, cheaper hardware without significant performance loss. “Quantization” is the technical term for reducing a model’s numerical precision to make it lighter and faster. TurboQuant makes this process more efficient than previous methods. The practical impact: more powerful AI running directly on your phone or laptop, which means faster responses, better privacy, and no dependency on an internet connection.

  5. Meta Cuts Hundreds More Jobs in AI Restructuring

     Meta has initiated another round of significant layoffs, affecting hundreds of employees across multiple departments as it continues restructuring around AI. This follows Amazon’s recent announcement of roughly 16,000 corporate job cuts, citing a similar AI-driven shift. The pattern is clear: major technology companies are replacing certain roles with AI-powered systems while simultaneously hiring for positions that build and manage those systems.
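To make the quantization idea in story 4 concrete, here is a minimal sketch of symmetric int8 weight quantization — the general family of techniques that compression methods like TurboQuant build on. This is an illustration of the basic concept, not Google’s algorithm, and the function names are invented for this sketch.

```python
# A minimal illustration of symmetric int8 quantization -- the general
# idea behind model compression. Not Google's TurboQuant algorithm;
# function names are invented for this sketch.

def quantize_int8(weights):
    """Map float weights onto signed 8-bit integer codes plus one scale."""
    scale = max(abs(w) for w in weights) / 127  # largest weight maps to +/-127
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from the integer codes."""
    return [c * scale for c in codes]

weights = [0.82, -1.27, 0.03, 0.50]
codes, scale = quantize_int8(weights)
approx = dequantize(codes, scale)
# Each weight now needs 1 byte instead of 4 (float32): a 4x size cut,
# at the cost of a small rounding error per weight.
```

The trade-off is exactly what the story describes: a quarter of the memory, a tiny loss of precision — which is why quantized models can fit on phones and laptops.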

What This Means for You

First, if you use GitHub Copilot on a free or individual plan, go to your settings and decide whether you want your interaction data used for training before April 24. The system is opt-out, so inaction means consent.

Second, Apple’s Gemini integration confirms that the AI assistant on your phone is about to get significantly more capable. If you have an iPhone, pay attention to the next software update — the improvements will be substantial.

Third, the Meta and Amazon layoffs are not isolated events. They represent a structural shift in how large companies staff their operations. The professionals who invest time now in understanding AI tools will be better positioned as this restructuring accelerates.

Tool Worth Trying

NotebookLM by Google — a research assistant that lets you upload documents, websites, and notes, then ask questions across all of them simultaneously. It is particularly useful for professionals who need to synthesize information from multiple sources — research reports, meeting notes, technical documentation. Upload your materials, and NotebookLM creates an interactive knowledge base you can query in natural language. It can even generate audio summaries. Free to use at notebooklm.google.com.

The Number

1.05 million — that is the size, in tokens, of the context window in OpenAI’s new GPT-5.4 model, released earlier this month. A “context window” is essentially how much information the AI can hold in its working memory at once. To make that tangible: 1.05 million tokens is roughly equivalent to feeding the AI four full-length novels and asking it questions about all of them simultaneously. Two years ago, the standard was 4,000 tokens. That is roughly a 260-fold increase.
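The arithmetic behind those comparisons is easy to check. Note that the tokens-per-novel figure below is my own rough estimate for a long novel, not a number from the source.

```python
# Sanity-checking the context-window numbers quoted above.
context_now = 1_050_000   # tokens: reported GPT-5.4 window
context_then = 4_000      # tokens: a typical window two years ago

fold_increase = context_now / context_then
print(fold_increase)      # 262.5 -> "roughly 260-fold"

# Assumed: ~260,000 tokens per full-length novel (an estimate,
# not a figure from the source).
tokens_per_novel = 260_000
print(context_now / tokens_per_novel)  # about 4 novels
```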

Final Word

If this briefing helped you understand today’s AI landscape a little better, forward it to one person who would benefit. That is how we grow — one informed reader at a time.

— Kirubel, AI Leverage

Stay leveraged.
