🧙🏼 What's brewing at Anthropic?

Also: Deceptive AI tools

Howdy, wizards.

Have you tried OpenAI’s recommended career prompt with ChatGPT yet? Here it is: “Based on all our interactions, what’s a career path I might enjoy that I might not realize I’d like?”

I got Experience Design (spot on) and AI ethics consultancy (umm, no).

All right – let’s unpack the most important AI news of the day.

DARIO’S PICKS

Source: Anthropic’s Responsible Scaling Policy

Anthropic, the AI company behind Claude, was founded by former OpenAI employees who wanted a more safety-oriented approach to AI. It just released a major update to its Responsible Scaling Policy.

Overall, the updated policy is more flexible, allowing the company to adapt its safety measures to each AI model’s capabilities.

Here’s what’s new:

  • They redefined what AI Safety Levels mean. The levels no longer refer to the models themselves, but to specific “capability thresholds” and “required safeguards” (as shown in the screenshot above).

  • They’ve introduced a checkpoint for autonomous AI capabilities, which triggers additional evaluation rather than automatically enforcing higher safety standards. Anthropic now believes that the capabilities initially considered at this threshold don’t require escalating to stricter safety measures.

  • There’s a new threshold for “AI systems that can significantly advance AI development”. Such capabilities could lead to rapid, uncontrolled progress that outpaces Anthropic’s ability to evaluate emerging risks.

  • They’re moving away from prespecified evaluations and prescriptive methodologies for testing AI capabilities, opting instead for affirmative cases and more general requirements. They’ve found that rigid methodologies quickly become outdated as new developments happen.

Why it matters: The policy update suggests Anthropic has big things in the works, perhaps a new model release, a big funding round, or both:

  • Anthropic’s new policy says they don’t need to escalate safety measures upon reaching what they previously defined as autonomous AI capabilities—which seems to indicate we’re materially closer to that point.

  • They’re switching to a more pragmatic approach to evaluating capabilities: making an affirmative case that a model isn’t at a certain capability level, rather than relying on predefined test methods. This signals we’re venturing into new, uncharted territory and need to adapt safety measures as we go.

  • Recent reports suggest Anthropic is actively talking to investors, aiming for a $40 billion valuation. Anthropic’s CEO also published a long essay earlier this week with an optimistic vision for AI’s future—much like what Sam Altman did days before OpenAI’s recent, record-breaking funding round.

TOGETHER WITH TELLO

With Tello Mobile, you can say goodbye to overpriced contracts and hello to freedom. Their flexible, affordable options start as low as $5 and go up to $25/month for Unlimited Everything, allowing you to customize each plan to suit your family's exact requirements.

Whether you're looking for reliable 4G LTE/5G coverage, Wi-Fi calling, free international calls to 60+ countries, or unlimited texts, Tello has you covered. And with no contracts or hidden fees, you'll enjoy peace of mind knowing that you're getting exactly what you pay for.

Bring your own phone or explore our selection of devices to find the perfect fit for you. Stop settling for expensive plans that charge you for what you don’t need – create your perfect plan with Tello Mobile today and start saving.

DARIO’S PICKS

The US Federal Trade Commission has announced Operation AI Comply – an initiative to take legal action against companies making deceptive claims about their AI-powered products. It has recently taken action against five companies, two of which have settled and three of which are facing ongoing lawsuits:

  • DoNotPay: a “robot lawyer” that claimed it could substitute for the expertise of a human lawyer.

  • Ascend Ecom: an AI-powered tool that claimed to help people make thousands through online storefronts.

  • Ecommerce Empire Builders: a tool that claimed to help people build an “AI-powered ecommerce empire”.

  • Rytr: a writing tool that generated and posted fake reviews of companies on Google and Trustpilot.

  • FBA Machine: an AI tool that claimed to automate the building and management of Amazon stores.

Why it matters: This has less to do with AI and more to do with businesses taking the shady route to get people to open their wallets – “AI” is just the latest trick in the book.

The outcomes look reasonable for the companies that have already settled:

  • Rytr had to remove its functionality for generating reviews and testimonials – something that shouldn’t have been a service in the first place.

  • I checked out DoNotPay’s current website, and it looks like they’ve switched from taglines like “robot lawyer”, which implied the service could replace traditional legal services, to the more down-to-earth “your consumer champion”. They also had to pay $193,000 in consumer redress.

Hat tip to The Batch for the link.

RECOMMENDED

Love Hacker News but don’t have the time to read it every day?

THAT’S ALL FOLKS!

Was this email forwarded to you? Sign up here.

Want to get in front of 13,000 AI enthusiasts? Work with me.

This newsletter is written & curated by Dario Chincha.

What's your verdict on today's email?


Affiliate disclosure: To cover the cost of my email software and the time I spend writing this newsletter, I sometimes link to products and other newsletters. Please assume these are affiliate links. If you choose to subscribe to a newsletter or buy a product through any of my links then THANK YOU – it will make it possible for me to continue to do this.