AI Leaders Warn of Impending Doom in an Open Letter

The AI Citizen

May 31, 2023

Experts Drop Vague Hints at the Apocalypse, While Ignoring Current AI Ethical Quagmires

In a stunning display of concern for the future of humanity, the Center for AI Safety (CAIS) has released a statement signed by esteemed figures in AI, warning about the perilous risks of this world-changing technology. Because, you know, pandemics and nuclear war aren't enough to keep us awake at night.

Geoffrey Hinton, Yoshua Bengio, and other prominent researchers, alongside big shots from OpenAI and DeepMind like Sam Altman, Ilya Sutskever, and Demis Hassabis, have joined forces to spark discussions about the vague yet urgent risks associated with AI. And who doesn't love vague discussions?

This statement, while short and sweet, conveniently lacks any specific details or strategies to mitigate said risks. But fear not! CAIS wants you to know that they are super serious about establishing safeguards and institutions to manage these risks, even if they can't really define them. Bravo!

OpenAI CEO Sam Altman, our superhero in the making, has been on a crusade, engaging with global leaders and pleading for AI regulations. He's so passionate about it that he even appeared before the Senate, relentlessly asking lawmakers to bring down the hammer on the AI industry. Oh, the dangers that lurk within those lines of code!

While this open letter has generated a buzz, some AI ethics experts are rolling their eyes, because apparently, discussing hypothetical doomsday scenarios is now the cool thing to do. Dr. Sasha Luccioni, an AI researcher with a healthy supply of cynicism, thinks that mentioning the hypothetical risks of AI alongside real problems like pandemics and climate change lends credibility to the whole charade.

And let's not forget Daniel Jeffries, who believes that signing open letters about future threats has become a hip status game. It seems that everyone wants to hop on the bandwagon without actually doing anything substantial.

But hey, CAIS, the San Francisco-based nonprofit with a knack for safety, won't let these naysayers bring them down. They remain committed to reducing societal-scale risks through technical research and advocacy, because that's what nonprofits do, right?

In the end, while some researchers are losing sleep over the rise of superintelligent AI and its potential to annihilate humanity, there are those who insist on nagging us about AI ethics in the present. Surveillance, biased algorithms, and human rights violations are just not as exciting as pondering a future where robots rule the world. Can't we focus on the real problems already?

So, as researchers, policymakers, and industry leaders continue to balance the advancement of AI with responsible implementation and regulation, we can rest easy knowing that they have everything under control. Right?
