US President Joe Biden signs executive order 'to protect Americans from the potential risks of AI systems'

In an attempt to stave off the inevitable demise of humanity at the hands of roaming hunter-killer robots, US President Joe Biden has issued an executive order establishing a new set of standards that will guide the development of AI. The order includes requirements to "develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy," and to share relevant data about AI models with the US government.

Worries about the potential negative outcomes of artificial intelligence have grown in lockstep with the rapid rise of AI development itself. Skynet scenarios are fun to contemplate, but the more immediate problem is the use of AI to generate exceedingly realistic audio and video clips that can be put to nefarious use, as Biden himself noted in a recent address.

“With AI, fraudsters can take a three-second recording of your voice—I’ve watched one of me, I said, ‘When the hell did I say that?’,” Biden said. “All kidding aside, a three-second recording of your voice, and generate an impersonation good enough to fool—I was going to say fool your family, [but] fool you.”

The new executive order promises “the most sweeping actions ever taken to protect Americans from the potential risks of AI systems,” and they definitely cover a lot of ground. Highlights include:

  • Requiring that companies developing any AI system "that poses a serious risk to national security, national economic security, or national public health and safety" share safety test results and other "critical" data with the US government, in accordance with the Defense Production Act.
  • Developing a set of standards, tools, and tests that will be defined and applied by multiple agencies including the National Institute of Standards and Technology, the Department of Homeland Security, and the Department of Energy to ensure AI systems “are safe, secure, and trustworthy.”
  • Developing new standards for biological synthesis screening to protect against the risk of AI being used to create new and dangerous biological materials, which honestly isn’t something I’d considered until now, so thanks for that.
  • Establishing “standards and best practices for detecting AI-generated content and authenticating official content” in order to protect against fraud and the spread of disinformation.

The order also calls for action to address “algorithmic discrimination,” which is good, although the bit about “developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis” is perhaps a little too Minority Report for my liking. And of course there’s a plan to advance and accelerate AI research in the US through the establishment of the National AI Research Resource, which will make relevant information more readily available to researchers and students, and expanded grants for research in various AI-related fields.

Biden also called for bipartisan Congressional action to ensure privacy protections in the training and use of AI systems. “This executive order represents bold action, but we still need Congress to act,” Biden said in a speech (via CNBC).

The executive order comes the same day as the announcement of an agreement by G7 nations on guiding principles of AI development and a voluntary code of conduct for developers under what is known as the Hiroshima Process, which was established in May "to promote guardrails for advanced AI systems on a global level." The initiative is part of a wider range of international discussions.

"We believe that our joint efforts through the Hiroshima AI Process will foster an open and enabling environment where safe, secure, and trustworthy AI systems are designed, developed, deployed, and used to maximize the benefits of technology while mitigating its risks, for the common good worldwide, including in developing and emerging economies with a view to closing digital divides and achieving digital inclusion," the G7 leaders said in a joint statement.

The G7 nations also called on developers of AI systems to commit to an international code of conduct, and said that the first signatories to those guidelines will be announced “in the near future.”
