Joe Biden has signed an executive order that could affect the development of artificial intelligence.
With this move, the US President essentially wants to keep a Skynet scenario from becoming reality: left unmanaged, AI could genuinely get out of control. The order aims to ensure that AI systems are safe and reliable through standards, tools, and tests that have yet to be developed. The US government will also gain access to relevant data from artificial intelligence models.
Biden also drew attention to deepfakes, noting that a three-second recording of his voice is enough for fraudsters to make him appear to say things he never said. The order includes the most comprehensive measures yet to protect Americans from the potential risks posed by AI systems.
The executive order requires companies developing AI systems that pose a serious risk to national security, national economic security, or national public health and safety to share safety test results and other “critical” data with the U.S. government under the Defense Production Act. To ensure that AI systems are safe, secure, and trustworthy, a set of standards, tools, and tests is to be developed and applied by multiple agencies, including the National Institute of Standards and Technology, the Department of Homeland Security, and the Department of Energy. New standards for screening biological synthesis are also planned, to guard against AI being used to create dangerous new biological materials, along with standards and best practices for detecting AI-generated content and authenticating official content to protect against fraud and the spread of disinformation.
Algorithmic discrimination is also to be combated, though this passage (“develop best practices for the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessment, surveillance, crime prediction and predictive policing, and forensic analysis”) is a bit suspect. A national AI research resource is also planned, which will make relevant data available to researchers and students. Biden has also called for bipartisan congressional action to ensure that privacy is protected in the training and use of AI systems.
The executive order was released on the same day that the G7 countries announced they had agreed on guiding principles for the development of artificial intelligence, along with a voluntary code of conduct for developers, as part of the so-called Hiroshima Process, launched in May to promote global safeguards for advanced AI systems. “We believe that our joint efforts through the Hiroshima AI Process will foster an open and enabling environment in which safe, secure and trustworthy AI systems can be designed, developed, deployed and used to maximize the benefits of the technology while mitigating its risks, for the public good worldwide, including in developing and emerging economies, with a view to closing the digital divide and achieving digital inclusion,” the G7 leaders said in a joint statement. The G7 also called on developers of AI systems to commit to an international code of conduct, and said the first signatories to these guidelines will be announced shortly.