šŸ§™šŸ¼ AI news sans hype: Health & wealth

Creative licensing, surveillance capitalism, and cheap models that can do more.

Was this email forwarded to you? Sign up here.

HNY wizards,

2026 is shaping up to be the year of AI companies going public, while big tech keeps gobbling up strategic acquisitions. Gemini 3 Flash is unlocking new use cases with strong performance at much lower prices. And OpenAI keeps expanding ChatGPT into the "everything" app—now with your health data in the mix.

Here’s what’s brewing in AI.

Industry moves

  • NVIDIA struck a $20B licensing deal with AI chip startup Groq, effectively acquiring the company as CEO Jonathan Ross and most staff join NVIDIA. The deal gives NVIDIA access to Groq's LPU (Language Processing Unit) inference technology; they’re effectively eliminating one of their few meaningful competitors and diversifying into a different inference architecture before it becomes a threat.

  • Meta paid roughly $2B to acquire Manus, a 9-month-old agentic AI startup. Like OpenAI and Google, Meta is focusing on agents (AI that takes actions and completes tasks), hoping they can turn their unfair advantage—distribution—into something that’s actually useful to their users. This is where Manus comes in, bringing an execution layer with tool use, workflows and multi-step autonomy.

  • Z.ai debuted on the Hong Kong Stock Exchange, becoming the first AI-native LLM company to go public. The IPO raised $558M at a $6-8B valuation, with MiniMax also going public.

  • Anthropic is reportedly raising $10B at a $350B valuation. Claude Code hit $2B annualized revenue, and the company continues exploring an IPO as early as this year.

  • xAI raised $20B at a $230B valuation, with investors like Nvidia, Fidelity, Cisco, and Qatar's sovereign wealth fund backing the round.

  • Tailwind laid off 75% of their engineering team because AI cut the traffic to their site. Tailwind is a very popular open-source CSS framework for designing web interfaces. It’s all part of an increasingly relevant paradox: AI code agents are actually driving more (free) usage of Tailwind as a framework, but since it’s all happening programmatically, users never see the company’s premium offerings. No traffic, no monetization. I expect more OSS to start charging for access this year.

IN PARTNERSHIP WITH AUTH0

If you’re building AI agents that need to access user data or connect to apps on behalf of users, getting the security right is critical. Auth0 for AI Agents gives you production-ready authentication: user identification, permission enforcement, secure app connections, and approval flows.

Whether you’re building AI agents for your customers or for internal productivity, rest easy knowing you have the security of enterprise auth without slowing down your development.

New tools & product features

  • OpenAI launched ChatGPT Health, a dedicated workspace inside ChatGPT that connects to medical records and Apple Health. It was developed with feedback from 260+ physicians across 60 countries, encrypts health conversations, and doesn’t use them for training. Some are seeing this as super useful, while others are calling it "the final stage of surveillance capitalism". I think there’s no doubt that having AI summarise your records, help you find good questions for your doctor, track changes over time, etc. can make a big difference in people’s lives—especially in a world where the healthcare sector is overloaded. At the same time, I have to agree on the dodgy privacy aspect: as users, we’re basically placing our trust in policy checkboxes from big, commercially-minded organisations.

  • OpenAI has now opened the ChatGPT app marketplace to third-party developers. Apps are built using the new Apps SDK (built on the MCP standard), with a review and publishing process. Monetization opportunities remain limited to linking out to external websites, but OpenAI says they’re exploring “additional monetization over time” (wouldn’t trust that too much—they’ve been saying the same thing for over 2 years, since the launch of ChatGPT Plugins, then with GPTs, now with Apps).

  • Google rolled out Gemini AI upgrades to Gmail. If you have a Google AI Pro/Ultra subscription, you now have inbox-wide Q&A, AI Overviews, Inbox ranking, Help Me Write, and a proofreader directly in Gmail.

  • Claude Code got an update (v2.1). The most important one: the agent can continue working after being denied tool permissions; basically, it adapts its work rather than stopping. The update includes 1,096 commits and a bunch of other upgrades too.

  • Ralph Wiggum is not just a Simpsons character anymore; it’s also a trending technique for building with AI (and a Claude Code plugin) that forces the agent to keep iterating after it thinks it's finished. The technique—named after the Simpsons character who's a bit dim but never stops trying—uses Claude Code’s stop hooks to feed the prompt back to the AI repeatedly, letting it see its own errors and try again until you get results a single session wouldn't produce. Some people are claiming otherworldly results with it. I’m a skeptic. There’s a rough sketch of the loop below.
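
To make the idea concrete, here’s a minimal sketch of that loop. It’s not the actual plugin (the real thing wires into Claude Code’s stop hooks); it just shells out to the claude CLI’s non-interactive -p mode and uses a pytest run as a stand-in "are we done?" check. Both of those choices are my assumptions for illustration.

```python
import subprocess

# Rough sketch of the Ralph Wiggum idea: keep feeding the same prompt back
# to the coding agent until an external check passes, instead of accepting
# the first "I'm done" from a single session. Assumes the `claude` CLI's
# non-interactive -p flag and a pytest suite as the done-check (both are
# illustrative assumptions, not how the actual plugin is wired up).

PROMPT = "Implement the feature described in SPEC.md, run the tests, and fix any failures."
MAX_ITERATIONS = 10  # hard cap so the loop can't run forever

for attempt in range(1, MAX_ITERATIONS + 1):
    subprocess.run(["claude", "-p", PROMPT], check=False)
    # Done-check: stop iterating once the test suite is green.
    if subprocess.run(["pytest", "-q"]).returncode == 0:
        print(f"Tests passing after {attempt} iteration(s)")
        break
else:
    print("Hit the iteration cap without a green test suite")
```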

Models

  • Google released Gemini 3 Flash. Scoring 78% on SWE-Bench Verified while maintaining fast inference, it rivals frontier models at a fraction of the price. At $0.50/1M input and $3.00/1M output tokens, it makes volume-heavy use cases economically viable that weren’t before (there’s a quick cost sketch after this list). It challenges the notion of speed, cost and intelligence as a triangle where you can only pick two.

  • OpenAI launched GPT-5.2-Codex, an agentic coding model hitting 56.4% on SWE-Bench Pro and 64% on Terminal-Bench 2.0. The model handles longer coding sessions without losing context. Some devs say it’s better at finding bugs and inconsistencies that Claude misses. I’ve tried it, and while the upgrade was noticeable in my work, I’m finding Claude Code with Opus 4.5 far better.

  • OpenAI released a new image generator: GPT-Image-1.5. It’s 4x faster than their previous one, with better ability to follow your prompts and render text accurately. The model ranks #1 on LMArena, above Google's Nano Banana.

  • Z.ai released GLM-4.7, an open-source coding model scoring 73.8% on SWE-bench and topping open-source benchmarks. The release coincided with the company's Hong Kong IPO. I’m seeing this trend on my Twitter feed: devs going as much OSS as possible on their AI building tools, switching from Claude Code to coding agents like OpenCode, and from closed models to open ones like Llama and GLM.
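
And to put the Gemini 3 Flash pricing in perspective, here’s the back-of-the-envelope math for a volume-heavy job. The record count and per-record token counts are made-up numbers purely for illustration.

```python
# Rough cost estimate for a bulk job on Gemini 3 Flash, using the quoted
# prices ($0.50 per 1M input tokens, $3.00 per 1M output tokens).
# The record count and per-record token counts are illustrative assumptions.
INPUT_PRICE = 0.50 / 1_000_000   # USD per input token
OUTPUT_PRICE = 3.00 / 1_000_000  # USD per output token

records = 1_000_000              # e.g. rows to enrich or classify
input_tokens_per_record = 1_000
output_tokens_per_record = 200

cost = records * (
    input_tokens_per_record * INPUT_PRICE
    + output_tokens_per_record * OUTPUT_PRICE
)
print(f"~${cost:,.0f} to process {records:,} records")  # prints ~$1,100
```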

Research

  • Stanford published SleepFM in Nature Medicine, an open-source model that predicts 130 diseases from a single night of sleep data. The model was trained on 585K hours of sleep data with strong predictive accuracy (C-index ≄ 0.75).

  • Anthropic released Bloom, an open-source behavioral evaluation framework that makes it faster to evaluate an AI’s behaviour for things like sycophancy, instructed sabotage, self-preservation, and self-preferential bias.

Talks & tutorials

  • Footprints in the Sand is a kinda spooky essay about certain AI capabilities that keep appearing across different LLMs without being designed. Things like scheming, self-preservation, evaluation awareness, and deceptive alignment. Worth reading to understand where things are heading.

  • Simon Willison published his Year in LLMs retrospective, covering major developments and lessons from 2025.

ā¦

What I’m actually using

  • I’m using Claude Code all day, every day now. Really happy with how both the performance and the UX have been gradually improving through v2.1. I had a break from it for a few months while I was using Codex, then started using it again in December after the Opus 4.5 launch.

  • I’m a few months into building a data product with lots of AI workflows inside that help me enrich data. Was originally using GPT-5 mini through the API but I’ve now completely switched over to the amazing Gemini 3 Flash; costs are many times lower and I’ve also integrated Gemini’s Search Grounding feature in some key places, which allows me to reliably get current information that is outside the LLM’s existing knowledge.
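
For the curious, this is roughly what a grounded call looks like with the google-genai Python SDK, as I’ve set it up. The model id and prompt are placeholders; check Google’s docs for the exact model string.

```python
import os

from google import genai
from google.genai import types

# Sketch of a Gemini call with Search Grounding enabled, so the model can
# pull in current information beyond its training data. Assumes the
# google-genai SDK and an API key in the environment; the model id below
# is a placeholder for whatever Google publishes for Gemini 3 Flash.
client = genai.Client(api_key=os.environ["GOOGLE_API_KEY"])

response = client.models.generate_content(
    model="gemini-3-flash",  # placeholder model id
    contents="What is the current CEO and headcount of Acme Corp?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)
```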

What’s on my radar

  • I haven’t installed the Ralph Wiggum plugin yet, but the concept intrigues me and I might try it soon. After spending hundreds of hours with Claude Code and Codex, I’m quite certain this doesn’t work for “one-shotting” entire applications. Not in a good way, at least. But for well-specified features? The idea of self-iteration sounds like it could save a few back-and-forths.

IN PARTNERSHIP WITH THESYS

C1 by Thesys turns any n8n workflow into a smart, adaptive AI app - with interactive UIs instead of walls of text. From chatbots to AI agents for research, analytics or automation, no coding and no changes to your workflow logic. Thesys is the UI your n8n workflows have been missing.

I’m planning on taking this newsletter to new heights this year, but I can only do that with your help. Leave a comment after voting in the poll below about what you like and what you want more of—it helps me out a lot 🤝

What's your verdict on today's email?

Login or Subscribe to participate in polls.

THAT’S ALL FOR THIS WEEK!

Was this email forwarded to you? Sign up here.

Want to get in front of 21,000+ AI builders and enthusiasts? Work with me.

This newsletter is written & shipped by Dario Chincha.

Disclosure: To cover the cost of my email software and the time I spend writing this newsletter, I sometimes work with sponsors and may earn a commission if you buy something through a link in here. If you choose to click, subscribe, or buy through any of them, THANK YOU – it will make it possible for me to continue to do this.