Recent Key AI Public Policy Documents

Here are the AI-related public policy documents issued since January 2020 that we view as significant. Several of these documents follow Executive Order 13859, signed by President Trump on February 11, 2019, which set high-level AI policy for the federal government.

●      Executive Order 13859 required the Director of the Office of Management and Budget (OMB), in coordination with the Directors of the Office of Science and Technology Policy, the Domestic Policy Council, and the National Economic Council, to issue a memorandum guiding all Federal agencies in developing regulatory and non-regulatory approaches to technologies and industrial sectors that are empowered or enabled by AI, and in considering ways to reduce barriers to the development and adoption of AI technologies.

○      On January 13, 2020, OMB issued its draft memo entitled “Guidance for Regulation of Artificial Intelligence Applications” and asked for comments by March.[1]

○      The memorandum suggests principles for agencies to follow in adopting AI applications, including promoting public trust in AI, promoting public participation, maintaining scientific integrity and information quality, applying a risk assessment and management methodology, weighing benefits and costs, maintaining flexibility in regulatory approaches (preferring non-regulatory approaches over regulatory approaches), promoting fairness and non-discrimination, making adequate and transparent disclosures, prioritizing safety and security, and working together with other agencies.

●      In January 2020, the Office of the Privacy Commissioner of Canada published a proposal document suggesting various legal changes to facilitate AI governance.[2]  It contains 11 proposed changes to the law to protect human rights (including GDPR-style rights to explanation and to human intervention in automated decision-making), strengthen privacy protections, and require the use of accountability mechanisms.

●      On February 19, 2020, the European Commission (EC) issued a white paper[3] committing Europe to becoming a leader in AI innovation and promoting Europe’s technological sovereignty.

○      The white paper calls for the EU to leverage existing funding, seize future opportunities, improve coordination among Member States, create synergies in the research and development community, train workers, attract talented experts, promote collaboration between the public and private sectors, secure access to data and computing infrastructure, and coordinate with international organizations.

○      The white paper also discusses possible regulatory approaches and new rights-protective legislation. It distinguishes between certain “high risk” applications and those that are not. High-risk applications include those in the areas of healthcare, transportation, and energy.

○      The EC stated that high-risk applications may need conformity assessments (pre-market approvals, to use FDA terminology) before a company can offer them in the marketplace. The systems would need to meet certain objective standards.

○      Non-high-risk applications might involve only voluntary labeling standards.  Meeting those standards could lead to the issuance of a quality label (akin to a certification mark in the U.S., like the Good Housekeeping seal).

●      Also on February 19, 2020, the European Commission issued a report on safety and liability associated with AI and Internet of Things systems, as well as robotics.[4]  The report emphasized the need for safety and security by design to ensure that products are verifiably safe at every design step. Connectivity, autonomous behavior, human mental health risk from exposure to the product, dependency on reliable data, the black-box effect of AI systems that lack transparency, and supply chain management all pose challenges. Adjustments to the Product Liability Directive may be necessary to avoid unfair or inefficient results.

●      The Federal Trade Commission issued guidance entitled “Using Artificial Intelligence and Algorithms” in April 2020.[5]  The guidance urged companies to be transparent about AI practices, explain the results of algorithmic decision-making to consumers, ensure fair outcomes (such as by avoiding discrimination based on protected classes and providing consumers the ability to correct information used in making decisions about them), ensure that data and models are robust and empirically sound, and maintain accountability, including through independent testing.

●      In July 2020, the National Security Commission on AI (chaired by former Google CEO Eric Schmidt) published its second-quarter recommendations.[6]  An interim report for the third quarter is now available as well.[7]  The recommendations mix various aspects of industrial policy with measures to develop human resources.

●      On July 30, 2020, the United Kingdom’s Information Commissioner’s Office published a framework for auditing AI.[8]  The framework covers:

○      Accountability and governance, including calling for the use of data protection impact assessments of AI systems when processing personal data;

○      Ensuring lawfulness, fairness, and transparency in AI systems, including methods of mitigating bias and discrimination;

○      Assessing security and data minimization in AI systems, including looking for various forms of attack and considering a number of practical methods of protecting the privacy and security of training data; and

○      Ensuring individual rights, including by explaining the results of automated decision-making, allowing for individuals to assert their objections, and empowering workers to provide oversight concerning the results of automated decision-making.

[1] Russell T. Vought, Guidance for Regulation of Artificial Intelligence Applications (undated memorandum), https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf.

[2] Office of the Privacy Commissioner of Canada, Proposals for ensuring appropriate regulation of artificial intelligence (Jan. 28, 2020), https://www.priv.gc.ca/en/about-the-opc/what-we-do/consultations/consultation-ai/pos_ai_202001/.

[3] European Commission, White Paper on Artificial Intelligence -- A European approach to excellence and trust (Feb. 19, 2020), https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.

[4] European Commission, Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee, Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics (Feb. 19, 2020), https://ec.europa.eu/info/publications/commission-report-safety-and-liability-implications-ai-internet-things-and-robotics-0_en.

[5] Andrew Smith, Using Artificial Intelligence and Algorithms (Apr. 8, 2020, 9:58 AM), https://www.ftc.gov/news-events/blogs/business-blog/2020/04/using-artificial-intelligence-algorithms?utm_source=govdelivery.

[6] National Security Commission on Artificial Intelligence, Second Quarter Recommendations (July 2020), https://drive.google.com/file/d/1hgiA38FcyFcVQOJhsycz0Ami4Q6VLVEU/view.

[7] National Security Commission on Artificial Intelligence, Draft Recommendations for Commission Review at Public Meeting on Oct 8, 2020, National Security Commission on Artificial Intelligence 2020 Interim Report and Third Quarter Recommendations (Oct. 2020), https://drive.google.com/file/d/1R5XqQ-8Xg-b6CGWcaOPPUKoJm4GzjpMs/view.

[8] Information Commissioner’s Office, Guidance on AI and data protection (July 30, 2020), https://ico.org.uk/for-organisations/guide-to-data-protection/key-data-protection-themes/guidance-on-ai-and-data-protection/.
