The Sunday recap✨

Your weekly AI catch-up is here

Howdy, wizards.

⏪ I send out this weekly recap email on Sundays with all the best links I’ve shared during the week which you might have missed.

🤷🏻‍♂️ Don’t need the Sunday recap? You can update your subscriber preferences at the end of this email to pick what emails you’d like to receive.

Brush the dust off your reading glasses – let’s recap!

TOGETHER WITH NEXLA

According to Gartner, 86% of companies focus AI efforts on resource automation, yet most projects struggle due to complex data landscapes. Join us for a panel discussion exploring how organizations are overcoming top data challenges and building solid data foundations designed to enable and accelerate impactful AI and GenAI solutions.

Our panelists will dive into their top priorities and share actionable insights as they discuss democratization of data, data foundations that fuel AI, scalable AI infrastructure, and strategies for taking projects from concepts to production.

THE SUNDAY RECAP

THE MOST IMPORTANT DEVELOPMENTS IN AI THIS WEEK

1. ChatGPT’s upcoming features

This week some very interesting things were revealed about what’s coming to ChatGPT in the coming months.

  • ChatGPT will soon be able to turn code into usable apps. An X user by the handle TestingCatalog, with a track record of consistently uncovering new ChatGPT features ahead of their release, has shown that OpenAI is testing a feature that allows users to execute code directly within ChatGPT’s interface – similar to how Claude’s Artifacts work. It seems that Python, JavaScript and TypeScript will be the initially supported languages.

    • Wouldn’t be surprised if this is a key part of the “very good releases” Sam Altman has hinted at for later this year. → Read the full newsletter here

  • ChatGPT’s Mac app can soon see what’s on your screen – starting with code editors. OpenAI is letting ChatGPT see your screen – a feature currently in beta for the macOS desktop app. They’re starting with code editors, but will expand the range of compatible apps over time. Right now, the feature, called Work with Apps, is compatible only with VS Code, Xcode and TextEdit, as well as the Terminal.

    • In its current form, Work with Apps is a convenient way to give ChatGPT context about the apps you’re working with; but the bigger idea here is that ChatGPT will be able to see whatever you’re working on – and take actions on your behalf. → Read the full newsletter here

  • OpenAI’s “Operator” is coming in January. OpenAI is working on an AI agent that can take actions on the user’s behalf, such as writing code or booking travel tickets online. The company held a staff meeting on Wednesday where it announced that a new tool, codenamed “Operator”, will launch in research preview and via the API for developers in January. Based on the current info, it seems like it’s going to be something similar to Claude’s Computer Use, aligning with the shift from Q&A chatbots to agentic workflows that can carry out complex, multi-step tasks.

    • Sam Altman recently said that the next thing that will feel like a breakthrough isn’t a model but an agent. This is probably it. → Read the full newsletter here

Varun Godbole and Ellie Pavlick, software engineers at Google DeepMind, shared their LLM Prompt Tuning Playbook – a practical guide with tips to improve prompting skills without requiring deep technical knowledge.

Top tips:

  • Be clear and specific: Clearly define what you want the AI to do, specifying the task, format, tone, and length.

  • Use positive language: Instruct the AI on what to do rather than what not to do to avoid confusion.

  • Keep prompts concise: Avoid overloading the AI with too many instructions at once; break complex tasks into smaller steps.

  • Provide context: Offer relevant background information to guide the AI's response effectively.

  • Iterate and refine: Test your prompts and adjust based on the AI's outputs to improve results.

Beyond quick tips, the full playbook dives into mental models for prompting, not just techniques, and distills years of experience into actionable advice for getting more useful outputs from LLMs.
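To make the tips above concrete, here’s a minimal sketch of what a prompt that follows them might look like in practice. The helper function and the example values are my own illustration, not something from the playbook itself – the point is simply that task, context, format, tone, and length are all stated explicitly, and the instruction is phrased positively.

```python
def build_prompt(task: str, context: str, output_format: str,
                 tone: str, length: str) -> str:
    """Compose a prompt that makes each expectation explicit,
    per the playbook's tips (hypothetical helper for illustration)."""
    return (
        f"Context: {context}\n"          # provide context
        f"Task: {task}\n"                # be clear and specific
        f"Format: {output_format}\n"
        f"Tone: {tone}\n"
        f"Length: {length}\n"
        # positive language: say what to include, not what to avoid
        "Focus on the details most relevant to the reader."
    )

prompt = build_prompt(
    task="Summarize the release notes below for a non-technical audience",
    context="The release adds offline mode and fixes two sync bugs",
    output_format="Three bullet points",
    tone="Friendly and plain-spoken",
    length="Under 80 words",
)
print(prompt)
```

For a complex request, the playbook’s advice would suggest splitting this into smaller prompts (one per step) rather than stuffing every instruction into a single call.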

→ Read the full newsletter here

FROM OUR PARTNERS

Start speaking a new language this fall with Babbel: the language learning app for real conversation. With award-winning lessons, immersive podcasts, addictive games, and bonus content, you can start speaking a new language in as little as three weeks. What’s brewing in AI readers can use this exclusive link to get up to 55% off today!

Source: Learning.google.com // DALL-E

Google has launched a new AI tool that helps you learn about anything. Supposedly, the answers it gives are more pedagogical and grounded in educational research. From what I’ve seen, the responses are more visual and interactive than what you’re used to with ChatGPT-style chatbots, akin to a school textbook (think coloured boxes of info like glossary, common misconceptions, etc.).

Finding the optimal user interface for AI answers is a work in progress at all the major labs. What seems to be happening here is that Google is experimenting with educational queries in an isolated environment, before potentially integrating the results into its broader suite of tools.

→ Read the full newsletter here

Summarizing content while preserving semantic meaning is a challenging task – especially for small models. Apple's new iOS feature, which uses their (small) on-device AI model to summarize notifications, demonstrates these growing pains. The feature aims to distill key details from notifications, such as for active group chats, making them easier to scan. While often helpful and mostly factual, the summaries can miss crucial context. This has led to numerous viral posts, such as this summary of a break-up message.

Could Apple's quirky AI summaries signal the end of notification overload? Though the current version doesn't always hit the mark, many users report that it already helps filter some notification noise – with an unexpected bonus of entertainment value. As a v1.0 release, I'm optimistic about this feature's potential.

→ Read the full newsletter here

AI development is slowing down. Ilya Sutskever, former chief scientist at OpenAI, just went on the record saying that results from scaling AI models that use vast amounts of unlabelled data to understand language patterns and structures have plateaued.

Lack of new training data is a big part of it. Current AI models have scraped all the easily accessible (and not so easily accessible) information on the web, and the effort to use synthetic data as a way to continue improvement has yet to make breakthroughs.

CEOs of big AI companies seemingly disagree. Sam Altman recently posted on X saying simply “there is no wall”. OpenAI's focus on reasoning models like o1, which spend more time thinking at inference rather than relying on more training data, is potentially a sound strategy for continuing to increase AI's performance. Anthropic's CEO Dario Amodei recently said he believes the scaling will continue, based on what he’s seen historically, despite lacking proof that this is the case.

If you’ve been concerned about AI safety, this news might be reassuring. There’s a roadblock, and it’s likely to slow down the speed of development. Maybe it will give us humans a needed breather to adapt to the new reality of having AI – at its current level of intelligence – all around us.

The terminology here matters, though. “AI development” is a bit of a vague term. For example, while Ilya's argument is echoed by Meta's Yann LeCun, he specifies that Deep Learning, the underlying foundation of LLMs, is not hitting a wall. Rather, it’s a case of auto-regressive LLMs (the kind that guesses the next token) hitting a performance ceiling.

And the CEOs touting the notion that we haven’t plateaued? They may very well be right, although they’re so incentivized by fundraising that we probably shouldn’t just take their word for it.

→ Read the full newsletter here

THAT’S ALL FOLKS!

Was this email forwarded to you? Sign up here.

The best way to support me is by checking out today’s sponsors: Nexla and Babbel.

Want to get in front of 13,000 AI enthusiasts? Work with me.

This newsletter is written & curated by Dario Chincha.

What's your verdict on today's email?

Login or Subscribe to participate in polls.

Affiliate disclosure: To cover the cost of my email software and the time I spend writing this newsletter, I sometimes link to products and other newsletters. Please assume these are affiliate links. If you choose to subscribe to a newsletter or buy a product through any of my links then THANK YOU – it will make it possible for me to continue to do this.