OpenAI and Meta are racing to build the future – just don’t ask about safety

The race for superintelligence is on, but major AI players might be sprinting before they can walk safely.

A new report from the Future of Life Institute slams big names like OpenAI, Anthropic, xAI, and Meta, finding that their safety practices “fall far short of emerging global standards.”

Translation: these companies are innovating at breakneck speed but have no solid plan to control the very tech that could outthink us all.

The study comes amid heightened public concern about the societal impact of smarter-than-human reasoning systems, after several cases of suicide and self-harm were tied to AI chatbots. “Despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm, US AI companies remain less regulated than restaurants and continue lobbying against binding safety standards,” said Max Tegmark, MIT professor and Future of Life president.

While Google DeepMind promises to “innovate on safety at pace with capabilities,” OpenAI says it’s testing models “rigorously” and investing in frontier safety research. Meanwhile, others – including Anthropic and Meta – have kept quiet, leaving experts questioning whether hype is outpacing precaution.

As voices like Geoffrey Hinton and Yoshua Bengio call for a pause on developing superintelligent AI, one thing is clear: the tech is accelerating, but the guardrails are still a work in progress.

Humanity had better hope the brakes are fitted before the engines hit full throttle.

Check out the full report here.