How do you know when AI is powerful enough to be dangerous? Regulators try to do the math
How do you know if an artificial intelligence system is so powerful that it poses a security danger and shouldn’t be unleashed without careful oversight?
For regulators trying to put guardrails on AI, it’s mostly about the arithmetic. Specifically, an AI model trained using 10 to the 26th power floating-point operations must now be reported to the U.S. government and could soon trigger even stricter requirements in California.
Say what? Well, if you’re counting the zeroes, that’s 100,000,000,000,000,000,000,000,000, or 100 septillion, calculations to train AI systems on huge troves of data.
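For readers who want to check the zeroes themselves, the arithmetic is easy to verify with a few lines of Python (an illustrative sketch, not anything regulators actually run):

```python
# Illustrative check of the 10^26 threshold arithmetic from the reporting rule.
threshold = 10 ** 26  # floating-point operations cited by regulators

# One septillion is 10^24, so the threshold equals 100 septillion.
septillion = 10 ** 24
assert threshold == 100 * septillion

# Prints 100,000,000,000,000,000,000,000,000 (a 1 followed by 26 zeroes).
print(f"{threshold:,}")
```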
What it signals to some lawmakers and AI safety advocates is a level of computing power that might enable rapidly advancing AI technology to create or proliferate weapons of mass destruction, or conduct catastrophic cyberattacks.