Biden Administration Moves Forward on AI Initiatives

In October 2022, the Administration’s Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights.[1]  The Administration sought to guide the design, development, and deployment of AI, presumably in view of the growing importance of AI to the economy and society as a whole.  Recognizing critical risks to the individual rights of Americans, the Administration produced the document to provide guidance to businesses developing and offering AI technologies.  The Blueprint also provides direction to policymakers seeking to fill gaps in existing law and policy.  The document concludes with a “technical companion” that provides more detailed advice to AI developers and to those that procure and deploy AI systems, including specific steps businesses can take to implement the document’s broad principles.

The Blueprint focuses on five key principles:

·      AI systems should be safe and effective.  Developers should deploy AI technologies only after testing them and managing appropriate risks.[2]  They should proactively protect users from harm.  Finally, they should obtain independent evaluation to confirm safety and effectiveness.

·      AI systems should not, by operation of algorithms, discriminate against persons in protected classes.[3]  Developers and deployers should take proactive steps to protect individuals and communities against discrimination.  Again, independent evaluation should confirm the fairness of the system.

·      AI systems should respect individuals’ privacy.  The Blueprint speaks about providing meaningful (and non-manipulated) plain-language information to individuals about the operation of AI systems.  This transparency allows individuals to give informed consent and permissions.  The use of AI for processing sensitive personal information or where key legal rights are at risk requires additional protections.  Surveillance and monitoring should have limits.

·      Users should have effective notice and an explanation of how AI systems work.  Reflecting the interest in providing robust transparency, developers and deployers of AI systems should make sure users have plain-language information about how AI systems produce results.

·      AI developers and deployers should offer a mechanism by which an individual can opt out of AI decision making in favor of a human who can promptly consider and remedy any mistakes made by an AI system.  In other words, a human or other alternative should provide a second look where an AI system appears to have produced an error or unfair result.

I believe the Blueprint creates a useful new framework for AI governance.  Moreover, the technical companion provides more helpful guidance than past statements of high-level principles, which can be difficult to apply in real-world situations.[4]  In this sense, the Blueprint is an important step forward.  Nonetheless, the Blueprint is not binding in any way, and it does not even reflect official Administration policy.  I suspect that many businesses will proceed with their AI developments and deployments without using the Blueprint’s guidance.

Also in October 2022, the general counsel of the NLRB issued a memo to regional NLRB offices taking the position that algorithmic management tools used by employers may interfere with collective bargaining rights under Section 7 of the National Labor Relations Act.[5]  For instance, workplace surveillance may intimidate workers.  The memo states that an employer that initiates surveillance in response to Section 7 activities violates the Act because doing so may interfere with, restrain, or coerce employees exercising their organization and bargaining rights.  Screening employees using AI systems to ferret out those involved in previous protected activity may also constitute an unfair labor practice.  The general counsel stated that she will propose a framework of protections for the NLRB to safeguard employees’ rights.

Finally, President Biden signed Executive Order 14086 in October 2022 to implement what has been termed the “Privacy Shield 2.0” agreement.[6]  This agreement between the European Commission and the Biden Administration seeks to overcome the objections, voiced in the European Court of Justice’s Schrems II decision, to the importation of personal data from the EU under the original Privacy Shield program.  The Schrems II decision allowed the continued use of the Standard Contractual Clauses but required personal data importation arrangements to include supplemental measures to assure an adequate level of protection for personal data against possible U.S. government surveillance.

Section 3 of the Executive Order establishes a redress mechanism to review complaints concerning U.S. signals intelligence activities for violations of U.S. law and, if necessary, to order appropriate remediation. Complaints from an individual in a qualifying state to a privacy authority in the individual’s jurisdiction are routed to the Director of National Intelligence. The Civil Liberties Protection Officer (CLPO) of the Office of the Director of National Intelligence investigates, reviews, and, as necessary, orders appropriate remediation for qualifying complaints. The complainant may apply for review of the CLPO’s determination by the Data Protection Review Court. The Attorney General has authority to designate a country or regional economic integration organization as a “qualifying state.” These additional steps to create a strengthened regime for personal data importation may allow the EU to consider protections in the U.S. to be “adequate.”


[1] White House Off. of Sci. & Tech. Pol’y, Blueprint for an AI Bill of Rights (Oct. 2022), https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.

[2] The National Institute of Standards and Technology issued the second draft of its risk management framework, which may assist businesses attempting to implement this principle.  Nat’l Inst. of Standards & Tech., AI Risk Management Framework:  Second Draft (Aug. 19, 2022), https://www.nist.gov/system/files/documents/2022/08/18/AI_RMF_2nd_draft.pdf.

[3] The Blueprint and many commentators use the term “algorithms” when discussing AI systems.  From a machine learning technical perspective, “algorithms” run on data to produce a machine learning “model.”  It is the model that a business would use in practice to make predictions and guide operations, for example to judge creditworthiness of a loan applicant.  It is the activity of a model that may result in biased outcomes and perhaps rise to the level of discrimination.  However, to remain consistent with the terms used in the Blueprint, I use the term “algorithm” in this article.

[4] See, e.g., U.N. Educ., Sci. & Cultural Org., Recommendation on the Ethics of Artificial Intelligence 20-23 (Nov. 23, 2021), https://unesdoc.unesco.org/ark:/48223/pf0000381137.

Future of Life Inst., AI Principles (Aug. 11, 2017) (The Asilomar AI Principles), https://futureoflife.org/open-letter/ai-principles/.

[5] NLRB Off. of Gen. Counsel, Memorandum GC 23-02, Electronic Monitoring and Algorithmic Management of Employees Interfering with the Exercise of Section 7 Rights (Oct. 31, 2022), https://www.nlrb.gov/guidance/memos-research/general-counsel-memos.

[6] Executive Order No. 14,086, 87 Fed. Reg. 62,283 (Oct. 7, 2022).
