🧙🏼 Meta's futuristic AI glasses

Also: The new Llamas on the block

Howdy, wizards.

Today’s food for thought: this statement Sam Altman made 4 months ago about taking shares in OpenAI.

Let’s dive into the most important AI updates from the Meta Connect event, which took place on Sep 25–26.

DARIO’S PICKS – META CONNECT 2024 SPECIAL

Meta’s augmented reality glasses, Orion, were the showstopper at this year’s Meta Connect. While Meta has poured billions of dollars into its development, Orion is still a prototype, not a product. It’s a 3-part system that does eye, hand, and neural tracking.

The Orion prototype features a large holographic display integrated into its lightweight, transparent lenses. It’s powered by Meta AI, which lets users interact with content overlaid on the physical world. You can access apps and games, browse social media, and make video calls – all through the glasses.

Meta has made only around 1,000 pairs of Orion glasses – enough to showcase what it has coming in terms of hardware over the next few years.

ALSO, Meta Ray-Ban smart glasses are getting some AI upgrades. To be clear, these are the glasses that are actually for sale, not the new prototype AR version. They can now see what you see in real time through video, remember things for you, and do live language translation.

Why it matters: Orion shows that it’s possible to create experiences similar to the Apple Vision Pro (though not as advanced or refined) in a radically lighter and more convenient format. Meta readily admits the economics of selling these don’t work out at the moment; they’re simply too expensive to produce. For now, it gives us a peek into where today’s smart glasses are eventually going to end up.

TOGETHER WITH BELAY

The average person spends about six hours every day on email – leaving about 10 hours every week to do actual work.

And that doesn’t include calendar management, expense reporting, travel planning or any other administrative tasks the average person handles on any given day.

Because if you’re not delegating, you are the assistant, travel coordinator, data-entry specialist, schedule coordinator – and more.

With BELAY, you could gain an average of 15 hours every week by delegating those tasks to a U.S.-based Virtual Assistant.

BELAY’s flexible staffing solutions help you reclaim your time with exceptional, highly vetted Virtual Assistants who will step in to handle those frequent, time-consuming tasks.

In as little as one week, you can be intentionally matched with the right Virtual Assistant, so you can stop spending countless hours every week on tasks someone else can do for you. Focus only on your most important projects – and leave the rest to a BELAY Virtual Assistant.

DARIO’S PICKS – META CONNECT 2024 SPECIAL

Image: DALL-E/What’s brewing in AI

Meta also unveiled Llama 3.2, the latest in its series of open-source models. The release includes two multimodal models, at 11B and 90B parameters, that are capable of image understanding tasks. It also features two lightweight text-only models (1B and 3B) that can run locally on mobile devices.

  • The 11B and 90B models can perform image reasoning tasks like document understanding, captioning, and visual grounding.

  • The 1B and 3B text-only models are optimized for on-device use – great for simpler use cases and for security, as your data never needs to be sent to the cloud (see the sketch after this list for how to try one locally).

  • Meta also introduced Llama Stack distributions – standardized APIs and tools that make it easier and more efficient to develop applications on top of their models.
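If you want to try one of the lightweight models yourself, here’s a minimal sketch using Hugging Face’s transformers library. It assumes you’ve installed transformers (plus torch and accelerate), have a Hugging Face token, and have accepted Meta’s license for the Llama 3.2 1B Instruct repo on the Hub – the model ID below is my assumption of the variant you’d want, not something from Meta’s announcement.

  # Minimal sketch: running Llama 3.2 1B Instruct locally with Hugging Face transformers.
  # Assumes: `pip install transformers torch accelerate`, a logged-in Hugging Face token,
  # and accepted access to the meta-llama/Llama-3.2-1B-Instruct repo on the Hub.
  from transformers import pipeline

  generator = pipeline(
      "text-generation",
      model="meta-llama/Llama-3.2-1B-Instruct",
      device_map="auto",  # uses a GPU if one is available, otherwise falls back to CPU
  )

  # Chat-style input: the pipeline applies the model's chat template for you.
  messages = [
      {"role": "user", "content": "Summarize Meta Connect 2024 in one sentence."},
  ]

  output = generator(messages, max_new_tokens=100)
  # The returned conversation includes the assistant's reply as the last message.
  print(output[0]["generated_text"][-1]["content"])

Everything here runs on your own machine – nothing gets sent to a remote API, which is exactly the privacy angle Meta is pitching for the small models.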

Why it matters: This is big news for anyone building AI products on open-source models. Llama 3.2, particularly the 90B version, is probably the best open model so far. And the smaller models that can easily run locally on your phone? Apparently, they’re pretty good too.

FROM OUR PARTNERS

Doing the same boring work again and again is exhausting.

What if you had a personal AI assistant who could do the job for you?

DARIO’S PICKS – META CONNECT 2024 SPECIAL

There are quite a few updates to Meta AI – you can get the full breakdown here. These are my three favourites:

  • Inside Meta AI, things are now multimodal, meaning it can understand and answer questions about photos you share in chats (similar to how it works in ChatGPT). You can also request AI edits to your photos, like adding or removing objects or changing backgrounds.

  • You can now talk to Meta AI using your voice on any of the platforms where Meta AI is available, i.e. Messenger, Facebook, WhatsApp, and Instagram DM. However, it’s not nearly as good as ChatGPT’s Advanced Voice mode, as it’s still text-to-speech rather than natively multimodal.

  • They’re integrating AI into Meta’s social platforms in several new ways. They’re testing a translation tool that automatically translates and lip-syncs the audio of Reels, making content accessible across different languages. Also, when resharing photos to your Instagram Stories, Meta AI can now generate context-aware backgrounds to your posts.

Why it matters: While I wouldn’t call the updates to Meta AI revolutionary at this point, Meta has huge distribution across all its social platforms, which makes these updates all the more impactful.

RECOMMENDED

Love Hacker News but don’t have the time to read it every day?

THAT’S ALL FOLKS!

Was this email forwarded to you? Sign up here.

Want to get in front of 13,000 AI enthusiasts? Work with me.

This newsletter is written & curated by Dario Chincha.

What's your verdict on today's email?


Affiliate disclosure: To cover the cost of my email software and the time I spend writing this newsletter, I sometimes link to products and other newsletters. Please assume these are affiliate links. If you choose to subscribe to a newsletter or buy a product through any of my links then THANK YOU – it will make it possible for me to continue to do this.