Exec tells first UN Security Council meeting on AI that big tech can't be trusted to guarantee safety
UNITED NATIONS (AP) — The handful of big tech companies leading the race to commercialize AI can’t be trusted to guarantee the safety of systems we don’t yet understand and that are prone to “chaotic or unpredictable behavior,” an artificial intelligence company executive told the first U.N. Security Council meeting on AI’s threats to global peace on Tuesday.
Jack Clark, co-founder of the AI company Anthropic, said that’s why the world must come together to prevent the technology’s misuse.
Clark, who says his company bends over backward to train its AI chatbot to emphasize safety and caution, said the most useful things that can be done now “are to work on developing ways to test for capabilities, misuses and potential safety flaws of these systems.” Clark left OpenAI, maker of the best-known chatbot, ChatGPT, to co-found Anthropic, whose competing AI product is called Claude.
He traced the growth of AI over the past decade, saying that by 2023 new AI systems could beat military pilots in air combat simulations, stabilize the plasma in nuclear fusion reactors, design components for next-generation semiconductors and inspect goods on production lines.