The United States is in the process of developing rules and guidance for responsible AI use, following an Executive Order issued by President Joseph Biden (see below). The most up-to-date information about AI regulation in the US can be found in the National Institute of Standards and Technology (NIST) guide on AI.
Executive Order On The Safe, Secure, And Trustworthy Development And Use Of Artificial Intelligence
In October 2023, President Biden issued the Executive Order "On The Safe, Secure, And Trustworthy Development And Use Of Artificial Intelligence," which calls for standards and best practices for developing AI. The Order prioritizes safety, security, equality, and fairness; it also considers threats ranging from labor issues and worker compensation to biological weapon development. The Order includes concrete actions, such as mandating that large AI companies share evaluation and test data with the government, requiring special oversight of AI projects that work with biological data, and launching funding programs and resources for smaller AI companies. While the Order stipulates some action in these areas, its focus is on developing guidance and facilitating research into principles and best practices for safe and secure AI.
The Federal Trade Commission (FTC)
In October 2023, the FTC released comments to the Copyright Office raising consumer-protection and competition concerns about AI tools. According to the FTC, AI tools like ChatGPT can mislead consumers about the authenticity of content and can hamper fair competition. The FTC comments explain that “Conduct that may violate the copyright laws . . . may also constitute an unfair method of competition or an unfair or deceptive practice, especially when the copyright violation deceives consumers, exploits a creator’s reputation or diminishes the value of her existing or future works, reveals private information, or otherwise causes substantial injury to consumers.” Critics of the FTC allege that its position "meddles with AI creativity."
The EU AI Act
In March 2024, the European Union approved the "EU AI Act," which offers comprehensive regulation of AI tools, companies, and development across the Union. The Act categorizes AI according to three levels of risk: "Unacceptable risk" covers uses that are banned outright, such as facial recognition for social credit systems; "High-risk" covers uses that are legally regulated, such as algorithmic software for screening job applications; and "Unregulated" covers uses that pose no risk. An accompanying "Compliance Checker" tool lets individuals and companies determine the risk category of their AI.