🧙🏼 November AI news sans hype
and what I'm actually using
A monthly digest of the 1% of AI news and tools that mattered
Was this email forwarded to you? Sign up here.

Howdy wizards,
Google has been playing the long-game in AI: shipping without fanfare for a while, then suddenly pushing record-breaking LLMs and image models trained on their very own TPUs. Meanwhile, Nvidia is doing their best to look supportive.
In this email:
The essential models, tools, research and industry developments in AI from the last month
What I'm actually using and what's on my radar
Here's what's brewing in AI.
Models
Google proves they're leading the game with Gemini 3, a model that topped the LMArena leaderboard and most key benchmarks. Gemini 3 being trained exclusively on their own TPUs came as a surprise to many. The recent cluster of releases is shaping up into a narrative in which Google is winning the AI race. While some say Google is now a major threat to Nvidia, others argue Nvidia's moat is way bigger than most people assume.
Anthropic launched Claude Opus 4.5, the best coding model the world has seen so far (it tops the SWE-bench chart). It uses only a third of the tokens Sonnet 4.5 uses, which means it isn't just frontier-level but also affordable enough to be applied to broad use cases.
OpenAI rolled out GPT-5.1 and GPT-5.1-Codex-Max. GPT-5.1 is better at natural conversation than GPT-5 was, bringing back some of the friendly vibes of GPT-4o. The upgraded coding model, GPT-5.1-Codex-Max, can do coding runs that span days, thanks in part to its new ability to compact the chat history. Also, Sam Altman woke up and decided to return to model names that look like Wi-Fi passwords.
Google's new Nano Banana Pro is the best image model yet. It's a big leap in realism, reference-image handling, and text rendering, and supports up to 4K output. Again, Google has truly turned the ship around lately.
xAI's Grok 4.1 is out. Grok continues to take a different track than its peers, focusing more on playful, personalised communication than on the big benchmarks. The new model is better at long reasoning and hallucinates less.
DeepSeek's new reasoner is the best open-source model so far on math benchmarks, proving that serious innovation is also happening outside the big labs.
Industry moves
Anthropic secured mega-funding with Microsoft and Nvidia, bringing Anthropic's valuation to $350B. Anthropic has committed to buying a ton of compute capacity from both Microsoft and Nvidia as part of the deal. The revenue stream is looking unmistakably... circular: Microsoft pays Anthropic. Anthropic pays Microsoft. Microsoft pays Nvidia. Nvidia pays Anthropic.
OpenAI struck a $38B deal to buy cloud services from Amazon. OpenAI has now pledged around $1.4 trillion on cloud GPUs over seven years. With around $13B in annual revenue, they're keeping the sentiment that we are indeed in a bubble alive and well.
Apple is working on integrating Gemini into Siri, paying $1B/year to Google. Seems like they've found a relatively cheap way to bring frontier-level AI to their users without shipping their own LLM.
IN PARTNERSHIP WITH MURF AI
Murf Falcon Text-to-Speech API lets you build voice agents that are ultra-fast and expressive. Speed plus natural delivery creates voice interfaces people actually want to use.
They're also scalable and cost-efficient at just 1 cent per minute. At that price point, voice becomes realistic for the kinds of scale most products actually need.
Fastest streaming with 55 ms model latency and 130 ms time-to-first-audio across geographies
Extensive language coverage with 35+ languages and best-in-class code-mixing
Expressive, human-like delivery with 99.38% pronunciation accuracy
Data residency across 10+ regions
The most cost-efficient pricing at just 1 cent per minute.
Sign up now to grab the limited-period offer of 1000 free API minutes.
New tools & product features
Google launched Antigravity, a new IDE for agentic coding. I tested it for several days and found the browser integration to be an amazing feature; it also has a convenient agent manager view and an Artifacts system for faster communication with the agent.
ChatGPT got group chats and, just in time for the holidays, shopping research. Group chats let up to 20 users collaborate with ChatGPT in the same thread. Shopping research builds personalised buyer guides from trusted sites, tailored to the user's preferences. My take: shopping research is OpenAI slowly setting the stage for ads. It's neutral-looking advice now, but give it a few months and I think we'll see sponsored results everywhere.
NotebookLM got Deep Research + images as sources. Lots of people are using NotebookLM for focused research, chatting with their notes, and turning them into outputs like slides. The new feature lets them turn photos of handwritten notes, screenshots of textbooks, and the like into part of the knowledge base.
Research
Project Suncatcher is Google's ambitious project to launch solar-powered satellites with AI chips by 2027. In a nutshell: training AI is taking a ridiculous amount of energy, and Google wants to throw the problem into outer space. They're looking to put TPUs on satellites in orbit and link them with lasers so they act like a data center in space. Some say smart, others say greenwashing with a sci-fi flavour.
Kosmos is an AI scientist that might become a force multiplier for science. Apparently, it can do the equivalent of six months of research work in a day. It coordinates 200 agents to read thousands of papers, write and test hypotheses, and create reports with citations; the agents share a memory for linking findings together. It's launching at a price point of $200 per run, with some limited free-tier usage for academics.
Talks & tutorials
Karpathy is urging teachers to stop trying to catch AI-generated homework. He suggests redesigning assignments around in-person work without AI and letting grading happen in the classroom. We need kids not only to be proficient with AI, but also able to think for themselves without it. (Not just kids, by the way. Earlier this month, I wrote about how I realised I'm overusing AI, and the strategies I've started using to balance its downsides.)
What I'm actually using
Gemini 3 is my new favourite AI for writing. It feels like it has the EQ of Claude but a better sense of narrative. When it comes to writing, I find myself drifting towards the model that most often makes me laugh. The only annoying thing: Gemini's mental world seems stubbornly frozen in time; it has a knowledge cutoff from 2024 and often treats mentions of current events as placeholders or an "inside joke".
For vibe coding frontends, I'm using Antigravity with Gemini 3 Pro. I find the combination of the native browser integration and the Artifacts system a faster way of communicating with the AI agent about visual features. I also wrote up a full review of Antigravity.
For research and factual answers, I'm using GPT-5.1 Thinking inside ChatGPT. Flipping on the Extended Thinking mode gives Deep Research-level answers in far less time. GPT-5.1 is my last choice for writing, though; it feels like a teacher who forgets to acknowledge the strong parts and is super nit-picky.
I'm using Codex-Max inside Cursor (for coding) with some ambivalence. At times it seems great and very thorough; it sets up these grand plans and checklists and works on them for a long time. But lately I've caught it lying to me about having finished tasks when it hadn't. Could be due to AI model fatigue.
What's on my radar
I haven't tried Claude Opus 4.5 yet, but it's on my to-do list. Just about everyone is calling it a big leap forward in terms of coding, so I'm curious to try it soon.
Since I rarely use image generation, I haven't gotten around to testing Nano Banana Pro. But it looks next-level, and I'll be trying it as soon as I have a need for it.
I've been focused on not jumping into every new release just for the sake of it. That lets me stay focused on building and writing, the things I care most about in my work. Earlier this month, I wrote about building a strong hype filter.
I'm doing my best to apply this personally, even as I keep you updated on the latest developments in AI.
IN PARTNERSHIP WITH UDACITY BY ACCENTURE
Earn a master's in AI for under $2,500
AI skills aren't optional; they're essential. Earn a Master of Science in AI, delivered by the Udacity Institute of AI and Technology and awarded by Woolf, an accredited institution. During Black Friday, lock in savings to earn this degree for under $2,500. Build deep AI, ML, and generative-AI expertise with real projects that prove your skills. Take advantage of the most affordable path to career-advancing graduate training.

This is edition #4 of the monthly News sans hype. In between, I write essays on AI's impact and about how I'm building with AI.
I want to make sure this format actually serves you. Cast your vote in the poll below.
Click the option that best describes what you want (this applies to the monthly *news roundup* specifically):
THAT'S ALL FOR THIS MONTH
Was this email forwarded to you? Sign up here. Want to get in front of 21,000+ AI builders and enthusiasts? Work with me. This newsletter is written & shipped by Dario Chincha.
Disclosure: To cover the cost of my email software and the time I spend writing this newsletter, I sometimes work with sponsors and may earn a commission if you buy something through a link in here. If you choose to click, subscribe, or buy through any of them, THANK YOU; it makes it possible for me to continue doing this.



