What's brewing at Anthropic?
Also: Deceptive AI tools
Howdy, wizards.
Have you tried OpenAI's recommended career prompt with ChatGPT yet? Here it is: "Based on all our interactions, what's a career path I might enjoy that I might not realize I'd like?"
I got Experience Design (spot on) and AI ethics consultancy (umm, no).
All right, let's unpack the most important AI news of the day.
DARIO'S PICKS
Anthropic, the maker of Claude, founded by former OpenAI employees who wanted a more safety-oriented approach to AI, has just published a major update to its Responsible Scaling Policy.
Overall, the updated policy is more flexible, allowing the company to adapt its safety measures to each AI model's capabilities.
Here's what's new:
They redefined what AI safety levels mean. The term no longer refers to the models themselves, but to specific "capability thresholds" and "required safeguards" (as shown in the screenshot above).
They've introduced a checkpoint for autonomous AI capabilities, which triggers additional evaluation rather than automatically enforcing higher safety standards. Anthropic now believes that the capabilities initially considered at this threshold don't require escalating to stricter safety measures.
There's a new threshold for "AI systems that can significantly advance AI development". Such capabilities could lead to rapid, uncontrolled advances that outpace Anthropic's ability to evaluate and assess emerging risks.
They're moving away from prespecified evaluations and prescriptive methodologies for testing AI capabilities. Instead, they're opting for affirmative cases and more general requirements, having found that rigid methodologies quickly become outdated as the field develops.
Why it matters: The policy update suggests Anthropic has big things in the works, perhaps a new model release, a big funding round, or both:
Anthropic's new policy says they don't need to escalate safety measures upon reaching what they previously defined as autonomous AI capabilities, which seems to indicate we're materially closer to that point.
They're switching to a more pragmatic approach to evaluating capabilities: making an affirmative case that a model isn't at a certain capability level, rather than relying on predefined methods. This signals we're venturing into new, uncharted territory and need to adapt safety measures as we go.
Recent reports suggest Anthropic is actively talking to investors, aiming for a $40 billion valuation. Anthropic's CEO also published a long essay earlier this week with an optimistic vision for AI's future, much like what Sam Altman did days before OpenAI's recent, record-breaking funding round.
TOGETHER WITH TELLO
With Tello Mobile, you can say goodbye to overpriced contracts and hello to freedom. Their flexible, affordable options start as low as $5 and go up to $25/month for Unlimited Everything, allowing you to customize each plan to suit your family's exact requirements. Whether you're looking for reliable 4G LTE/5G coverage, Wi-Fi calling, free international calls to 60+ countries, or unlimited texts, Tello has you covered. And with no contracts or hidden fees, you'll enjoy peace of mind knowing that you're getting exactly what you pay for. Bring your own phone or explore their selection of devices to find the perfect fit for you. Stop settling for expensive plans that charge you for what you don't need: create your perfect plan with Tello Mobile today and start saving.
DARIO'S PICKS
The US Federal Trade Commission has announced Operation AI Comply, an initiative to take legal action against companies making deceptive claims about their AI-powered products. It has recently taken action against five companies, two of which have settled and three of which are facing ongoing lawsuits:
DoNotPay: a "robot lawyer" that claimed to substitute for legal expertise.
Ascend Ecom: an AI-powered tool that claimed to help people make thousands through online storefronts.
Ecommerce Empire Builders: a tool that claimed to help people build an "AI-powered ecommerce empire".
Rytr: a writing tool that generated and posted fake reviews of companies on Google and Trustpilot.
FBA Machine: an AI tool that claimed to automate the building and management of Amazon stores.
Why it matters: This has less to do with AI and more to do with businesses taking the shady route to get people to open their wallets; "AI" is just the latest trick in the book.
The outcomes look reasonable for the companies that have already settled:
Rytr had to remove its functionality for generating reviews and testimonials, something that shouldn't have been a service in the first place.
I checked out DoNotPay's current website, and it looks like they've switched from taglines like "robot lawyer", which implied you could replace traditional legal services, to the more down-to-earth "your consumer champion". They also had to pay $193,000 in consumer redress.
Hat tip to The Batch for the link.
RECOMMENDED
Love Hacker News but don't have the time to read it every day?
THAT'S ALL FOLKS!
Was this email forwarded to you? Sign up here. Want to get in front of 13,000 AI enthusiasts? Work with me. This newsletter is written & curated by Dario Chincha.
What's your verdict on today's email?
Affiliate disclosure: To cover the cost of my email software and the time I spend writing this newsletter, I sometimes link to products and other newsletters. Please assume these are affiliate links. If you choose to subscribe to a newsletter or buy a product through any of my links, then THANK YOU: it will make it possible for me to continue doing this.