The Week AI Grew a Brain, a Voice, and a Memory

Inside April 2025’s AI Shift — Llama 4, ChatGPT Memory, Nova Sonic & More

Did anyone else feel the ground shift under their feet last week?
Following my post about Q1 2025 AI updates, I thought I should start a tracker and add major AI news to it as it comes. Well, it's been only nine days, and I've already written down seven major advancements. It felt less like another tech update and more like AI collectively woke up — with a brain that reasons across senses, a voice that sounds eerily human, and a memory that remembers you better than you remember it.

In the first week of April 2025, major updates from Meta, Amazon, OpenAI, Google, and Anthropic quietly ushered in the next phase of artificial intelligence: tools that feel more like collaborators than calculators.

I’ve been tracking these developments not just to stay informed — but to make sense of what this all means. Here’s a look at what changed, and why it matters.


🧠 A Brain: Meta’s Llama 4 Raises the Bar on Multimodal AI

Meta dropped two new models — Llama 4 Scout and Maverick — capable of processing text, images, audio, and video. Early benchmarks suggest Maverick outperforms both GPT-4o and Gemini 2.0 Flash in multimodal tasks.

Meta also teased Llama 4 Behemoth, a model with 288 billion active parameters still in training — and clearly designed to power future AI agents.

Why it matters:
This isn’t just a model update — it’s a step toward generalist systems that can think across inputs, like we do. The creative tools of the near future won’t be siloed by media type — they’ll think in sound, sight, and language all at once. If you’re interested in diving deeper into AI thinking (somehow I’ve been really into that lately), check out my article on How LLMs Think and how Anthropic recently uncovered specifics of its models’ thinking process.

🔗 Read Meta’s Llama 4 announcement


🗣️ A Voice — and So Much More: Amazon Nova as a Multimodal Creation Engine

Multimodal capabilities of the Amazon Nova model family

While the spotlight often shines on chatbots, Amazon has been quietly building one of the most versatile AI platforms on the market. With the release of Nova Micro, Lite, and Pro, Amazon’s Nova family now spans a wide range of multimodal capabilities — from content generation to image, video, and agent-based automation.

Unlike models optimized purely for conversation, Nova is built for creation, integration, and execution. It handles text, image, and video generation, supports tool use, and powers retrieval-augmented generation (RAG) and AI agents — all accessible via Amazon Bedrock APIs. Developers can even fine-tune Nova or use Nova Canvas and Nova Reel for visual and media workflows.
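To make the Bedrock angle concrete, here is a minimal sketch of calling a Nova model through the Bedrock Converse API with boto3. Treat the model ID ("amazon.nova-lite-v1:0") and the region as assumptions and check the Bedrock model catalog for the exact identifiers enabled in your account:

```python
import boto3

# Bedrock runtime client; the region is an assumption -- use whichever
# region has Nova models enabled for your account.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# The model ID is an assumption -- swap in the Nova Micro, Lite, or Pro
# identifier listed in your Bedrock model catalog.
response = client.converse(
    modelId="amazon.nova-lite-v1:0",
    messages=[
        {
            "role": "user",
            "content": [{"text": "Draft a three-bullet summary of this week's AI news."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.7},
)

# The Converse API returns the assistant reply under output.message.content.
print(response["output"]["message"]["content"][0]["text"])
```

One nice property of going through Converse is that the same request shape works across Bedrock-hosted models, so moving between Nova variants (or other providers on Bedrock) is largely a matter of changing the modelId.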

Why it matters:
Nova isn’t just an assistant — it’s infrastructure. This is Amazon’s bid to become the back end of the AI internet, offering the building blocks for enterprise tools, creative studios, and autonomous workflows. In a world of chat bubbles, Nova is thinking like an OS.

🔗 Learn more via AWS: Nova Sonic and Multimodal AI


🧬 A Memory: ChatGPT Remembers — For Real This Time

OpenAI rolled out ChatGPT’s memory feature to all users, including the free tier. It now remembers your name, preferences, ongoing tasks, and past conversations. Users can view, update, or disable these memories at any time. If, like me, you use it regularly to generate content, consider sharpening its output to sound more human.

Why it matters:
We’re moving from one-off Q&A sessions to ongoing relationships with AI. The assistant you use today might remember the project you mentioned three weeks ago — and pick up where you left off tomorrow.

🔗 OpenAI’s blog post on memory


📚 A Backbone: NotebookLM Makes Research More Transparent

Discover Sources: look up 10 new sources based on an entry. The workflow for research is getting easier.

Google’s NotebookLM received two major upgrades:

  • Discover Sources — Describe a topic and get curated, linked references
  • Source Snippets — See exactly where the AI pulls its answers from

Why it matters:
NotebookLM is one of the few tools actively working on AI’s “black box” problem. By showing its sources, it builds trust — and becomes far more useful for researchers, students, and professionals who need verifiable info. Don’t sleep on the utility of creating a podcast out of the material to learn on the go or in the car.

🔗 Google’s NotebookLM update


⚠️ A Warning Sign: Claude Pro Launch Comes with Growing Pains

High downtime for Claude in early April 2025

Anthropic introduced Claude Pro, a new subscription tier for Claude 3 Opus. But shortly after launch, users reported degraded performance, latency issues, and uneven results — even on the paid tier.

Why it matters:
It’s a reminder that scaling these models is hard — and that AI’s future might not be evenly distributed. As tools get smarter, the divide between free and premium access may grow.

🔗 Claude Pro announcement (Anthropic)


🔁 TL;DR – April 2025’s Major AI Advancements

| Company | Update | Why It Matters |
| --- | --- | --- |
| Meta | Llama 4 (Scout, Maverick); Behemoth preview | Multimodal reasoning sets a new standard |
| Amazon | Nova multimodal family (Micro, Lite, Pro) released | Creation, agents, and voice become infrastructure |
| OpenAI | ChatGPT memory rolled out to all users | Personalized, persistent AI experiences begin |
| Google | NotebookLM source discovery + citation snippets | Transparent, trustworthy AI research workflows |
| Anthropic | Claude Pro tier; performance issues reported | Raises questions about scalability and fairness |

🧭 Final Thoughts: From Tool to Companion

The theme across all these updates? AI is shifting from utility to relationship.

Whether it’s remembering your name, responding in your tone of voice, or citing its sources — the models aren’t just smarter. They’re more aware. And that changes everything.

I’ll keep tracking. Let’s see how close reality comes to the predictions AI made about itself earlier this year.

Exploring AI together with the help of AI.