
AI Requires National Security Standards Before Public Release


On Monday, October 30, U.S. President Joe Biden signed an executive order, invoking the Defense Production Act, that seeks to reduce the risks AI (artificial intelligence) could pose to consumers, workers, minority groups, and national security before AI systems are released to the public.

Under the order, developers of AI systems that pose risks to U.S. national security, the economy, public health, or safety must meet certain standards and share the results of safety tests with the U.S. government.

“To realize the promise of AI and avoid the risk, we need to govern this technology,” Biden said. “In the wrong hands, AI can make it easier for hackers to exploit vulnerabilities in the software that makes our society run.” According to Reuters, the regulation will be limited to AI systems that pose risks in areas such as national security, the economy, and public health and safety.

G7 countries established a code of conduct aiming to govern the way major countries handle AI technology and to address concerns related to privacy, security risks, and potential misuse. According to the document, “One of the key elements [emphasized] in the code is the commitment and engagement of companies to take appropriate measures to identify, evaluate, and mitigate risks throughout the entire AI lifecycle. It also urges companies to address any incidents or patterns of misuse that may arise after AI products have been introduced to the market. The code also highlights the need for companies to publish public reports on the capabilities, limitations, and use of AI systems and to invest in robust security controls.”

NetChoice, a national trade association that includes major tech platforms, described the order as an “AI Red Tape Wishlist” that will end up “stifling new companies and competitors from entering the marketplace and significantly expanding the power of the federal government over American innovation.”

As part of the order, the Commerce Department will “develop guidance for content authentication and watermarking” for labeling items that are generated by AI, to make sure government communications are clear, the White House said in a release.

The Group of Seven industrial countries on Monday agreed on a code of conduct for companies developing advanced artificial intelligence systems, according to a G7 document.

“The truth is the United States is already far behind Europe,” said Max Tegmark, president of the tech policy think tank Future of Life Institute. “Policymakers, including those in Congress, need to look out for their citizens by enacting laws with teeth that tackle threats and safeguard progress,” he said in a statement.

A senior administration official, briefing reporters on Sunday, pushed back against criticism that Europe had been more aggressive at regulating AI, saying legislative action was also necessary. Biden on Monday called on Congress to act, in particular by better protecting personal data.

U.S. Senate Majority Leader Chuck Schumer said he hoped to have AI legislation ready in a matter of months.

U.S. officials have warned that AI can heighten the risk of bias and civil rights violations, and Biden’s executive order seeks to address that by calling for guidance to landlords, federal benefits programs, and federal contractors “to keep AI algorithms from being used to exacerbate discrimination,” the release said.

The order also calls for the development of “best practices” to address harms that AI may cause workers, including job displacement, and requires a report on labor market impacts.

CoreeILBO copyright (c) 2013-2023. All rights reserved.

This material may not be published, broadcast, rewritten, or redistributed in whole, or part without express written permission.
