2020 AI Legislative Year in Review

2020 has been an active year for legislation regarding artificial intelligence. Section 1 of this post covers significant AI legislation enacted this year. Section 2 discusses significant proposed legislation. Section 3 focuses on other policy initiatives.


1. Enacted Legislation

The year 2020 has been a transitional time for legislation as well as public policy. The most significant new AI law this year is the Illinois Artificial Intelligence Video Interview Act, which took effect on January 1, 2020.[1] In addition, San Francisco and Oakland in California, as well as Somerville, Massachusetts, have prohibited municipal use of facial recognition technology.

The Illinois Artificial Intelligence Video Interview Act addresses AI systems intended to predict strong job candidates based on an analysis of their interview videos. Employers using such systems must provide notice and disclosures to job applicants and must obtain applicants’ consent; the Act bars the use of AI to evaluate a video interview without the applicant’s consent.[2] Moreover, employers cannot share recorded interview videos except with vendors providing expertise or technology, presumably infrastructure and applications, to support the storage and analysis of the videos.[3] The Act also contains a right of deletion requiring employers, upon an applicant’s request, to delete recorded video interviews, including copies in the hands of recipients, presumably vendors.[4]


2. Significant Proposed Legislation

The European Commission published a report in February 2020 describing threats and challenges posed by the kinds of AI discussed above.[5] On the same day, the Commission published a white paper proposing the consideration of new, more stringent legislation for AI used in high-risk sectors.[6] Examples of high-risk sectors include healthcare, transportation, energy, and certain government sectors. New laws would focus on high-risk applications within those sectors.[7] The white paper recommends considering legislative protections to ensure:

· Safety of AI usage, such as by ensuring that data sets are broad and cover all relevant scenarios.

· The prevention of discrimination, for instance by having sufficiently representative data sets.

· Privacy and security protection of data sets in accordance with the GDPR and the Law Enforcement Directive.[8]

Other recommendations include requirements for adequate notice and transparency to provide data subjects with clear information about the use of AI; notice when data subjects are interacting with bots; mechanisms to ensure the robustness and accuracy of outcomes; requirements for reasonable human oversight before or after AI decisions; and requirements to ensure that the use of remote biometric identification is duly justified, proportionate, and subject to adequate safeguards.[9] I believe we will likely see one or more EU regulations or directives on AI with a global effect similar to what we’ve seen with the GDPR.

In the U.S., Congress took up the Algorithmic Accountability Act of 2019, which was introduced in the House in April 2019.[10] The bill would direct the Federal Trade Commission (FTC) to create regulations requiring businesses under the FTC’s jurisdiction to conduct automated decision system impact assessments and data protection impact assessments.[11] Violations of the regulations would constitute an unfair or deceptive trade practice.[12] The bill would cover only large businesses: those with more than $50 million in annual revenue or with data on more than one million consumers or consumer devices.[13] Accordingly, it seems aimed at the big tech companies. I am skeptical that this bill will become law during this Congress, but promoting the use of AI impact assessments may help businesses flag and act on AI risks.

The Automatic Listening Exploitation Act,[14] introduced in the House in July 2019, attempts to redress an issue identified in the previous section: companies having humans record or transcribe consumers’ speech or other sounds without notice or consent. The bill applies only to smart speaker companies and would prohibit them from making or using recordings from smart speakers without notice and consent; users would also have the right to require deletion of a recording. However, a business would be permitted to use recordings or transcripts to “improve the speech recognition and natural language understanding of the voice-user interface.”[15] Companies that truly are just trying to train their AI algorithms may therefore be able to take advantage of this carve-out. Again, I am doubtful that this bill will become law, but if more tech companies’ usage of user communications comes to light in the next Congress, I can foresee a greater likelihood of passage of a similar bill.

Finally, the Senate took up the Filter Bubble Transparency Act at the end of October 2019.[16] The sponsors are aiming at large-scale social network, video-sharing, search engine, and content-aggregation companies that use secret (undisclosed) AI algorithms to determine what users see. Such companies would need to give users notice of the use of such algorithms, the right to opt out, and the ability to see unfiltered results. Like the Algorithmic Accountability Act, the Filter Bubble Transparency Act would apply to for-profit businesses with more than 500 employees and $50 million or more in annual revenue, or that process personal data of one million or more individuals.[17] I am also doubtful that this bill will pass, but it takes an interesting approach, allowing users to toggle between AI-filtered and unfiltered results.

New York’s City Council also took up proposed legislation concerning the use of “automated employment decision tools” in hiring, a definition that encompasses AI.[18] The Council bill would add provisions to the city’s administrative code prohibiting the sale of automated employment decision tools in New York City unless the tool is subject to an annual audit for possible bias.[19] In addition, employers purchasing such a tool would have to notify job candidates of the use of the tool and of the qualifications or characteristics the tool evaluates.[20] Potential penalties are $500 for the first violation, with later violations ranging between $500 and $1,500, and penalties would accrue on a daily basis.[21] This accrual provision suggests that each passing day of delay in obtaining an annual audit, in providing an audit result to an employer, or in an employer providing notice to a job applicant accrues an additional penalty. On this reading, for example, an employer that failed to provide the required notice for thirty days could face $500 for the first day plus up to $1,500 for each of the remaining twenty-nine days, a total of up to $44,000. The law would, if passed, take effect on January 1, 2022.[22]


3. Other Policy Initiatives

While the Trump Administration’s general philosophy is to avoid regulation, and the administration was slow to act on AI generally, it is beginning to take action to promote and provide guidance for AI. Following up on last year’s executive order on Maintaining American Leadership in Artificial Intelligence (Executive Order 13,859),[23] the National Institute of Standards and Technology (NIST) developed a plan for promoting federal participation in standards-making activities.[24] Voluntary, consensus-based standards have the potential to promote more ethical AI without the need for regulation. Skeptics may counter that voluntary standards may not adequately protect consumers, workers, and the public, especially if manufacturers and operators of AI systems disregard them. In general, standards play a critical role in advancing U.S. interests by embedding the country’s values in widely used technologies. Promoting sound standards will therefore play at least a partial role in helping to govern AI development and use.

Similarly, the National Science and Technology Council (NSTC) published a new research and development strategic plan in June 2019.[25] One of the strategies in the plan is to understand the ethical, legal, and societal implications (ELSI) of AI.[26] Another strategy involves making systems more robust and trustworthy, which includes protecting systems from attacks and promoting explainability and transparency to foster trust and avoid bias.[27]

Likewise, the Office of Management and Budget (OMB), again prompted by Executive Order 13,859, is seeking comments[28] on a draft memorandum providing guidance to federal agencies on regulating AI applications.[29] The draft memorandum proposes a number of principles to guide agencies, including the need to consider the impact of AI on fairness and discrimination, to promote disclosure and transparency in order to build public trust, and to advance the safety and security of systems.[30] As with NIST, the OMB seeks to facilitate a non-regulatory approach to AI governance. Examples of non-regulatory mechanisms for promoting key principles are sector-specific guidance or frameworks, pilot programs or experiments, and voluntary consensus standards.[31]

Also, the Department of Housing and Urban Development (HUD) has proposed new regulations interpreting the Fair Housing Act and its disparate impact standard. When a defendant in a housing case uses an algorithmic model to assess potential tenant risk or creditworthiness, and the plaintiff’s claim is based on a discriminatory disparate impact of the model, the defendant could rebut a prima facie case of discrimination by pointing to procedural controls on the AI: proof that the material factors used as model inputs do not rely in any material part on factors that are substitutes or close proxies for protected classes, that the model is produced by a third party that determines industry standards, or that a neutral third party has validated the model as not relying on such substitutes or close proxies.[32]

Other notable AI and data protection public policy developments in the past year include:

· A petition for rulemaking, filed with the FTC by the Electronic Privacy Information Center (EPIC) in February 2020, asking the Commission to promulgate a trade regulation rule specifying which AI-related activities constitute unfair or deceptive trade practices that harm consumers.[33]

· Work by an Artificial Intelligence Working Group at the National Institutes of Health (NIH) to, among other things, identify major ethical considerations of AI in biomedical research and healthcare.[34]

· A discussion paper from the Food and Drug Administration on regulating the use of AI and machine learning software as a medical device.[35]

· Proposals from the Office of the Privacy Commissioner of Canada (OPC) to Parliament for a number of AI principles to supplement the Personal Information Protection and Electronic Documents Act, including GDPR-style rights to an explanation and to object to automated decision-making.[36] The OPC views these AI proposals as part of a policy to promote human rights. Other proposals address applying privacy and human rights by design; considering bases for processing personal data other than consent; de-identifying personal data and preventing re-identification; ensuring that sources of data are traceable; and promoting accountability through means such as records-retention requirements and audits.[37]

· A potential ballot initiative in California, the California Privacy Rights and Enforcement Act of 2020, which would supplement the CCPA and contains a provision concerning automated processing that can predict sensitive aspects of data subjects’ lives. The initiative would provide consumers a right to an explanation of profiling, although not a right to human intervention.

Finally, a wide variety of public and private organizations around the world have recently published ethical and legal frameworks of key principles for AI governance, too many to summarize here. However, a team led by Jessica Fjeld of Harvard University published a paper in January 2020 cataloguing dozens of major ethical frameworks.[38] The team identified a common set of principles that can guide businesses and governments in future AI uses. Accordingly, the paper is a useful starting point for an overview of world perspectives on how to develop and operate AI ethically and legally.


[1] 820 ILCS § 42.

[2] Id. § 42/5.

[3] Id. § 42/10.

[4] Id. § 42/15.

[5] European Commission, Report on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics (Feb. 19, 2020), https://ec.europa.eu/info/sites/info/files/report-safety-liability-artificial-intelligence-feb2020_en_1.pdf.

[6] European Commission, White Paper On Artificial Intelligence – A European Approach to Excellence and Trust (Feb. 19, 2020), https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.

[7] Id. at 17.

[8] Id. at 19.

[9] Id. at 20–22.

[10] H.R. 2231, 116th Cong. (2019).

[11] Id. § 3.

[12] Id.

[13] Id. § 2.

[14] H.R. 4048, 116th Cong. (2019).

[15] Id. § 2.

[16] S. 2763, 116th Cong. (2019).

[17] Id. § 2.

[18] A Local Law to amend the administrative code of the city of New York, in relation to the sale of automated employment decision tools. N.Y.C. Council 1894-2020 (2020), https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4344524&GUID=B051915D-A9AC-451E-81F8-6596032FA3F9&Options=Advanced&Search.

[19] Id. § 1.

[20] Id.

[21] Id.

[22] Id. § 2.

[23] Exec. Order No. 13,859, 84 Fed. Reg. 3967 (2019).

[24] National Institute of Standards and Technology, U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools (Aug. 9, 2019), https://www.nist.gov/system/files/documents/2019/08/10/ai_standards_fedengagement_plan_9aug2019.pdf.

[25] Select Committee on Artificial Intelligence of the National Science & Technology Council, The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update (June 2019), https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf.

[26] Id. at 19–22.

[27] Id. at 23–26.

[28] Request for Comments on a Draft Memorandum to the Heads of Executive Departments and Agencies, “Guidance for Regulation of Artificial Intelligence Applications,” 85 Fed. Reg. 1825 (2020).

[29] Russell T. Vought, Guidance for Regulation of Artificial Intelligence Applications (2020).

[30] Id. at 5–6.

[31] Id. at 6–7.

[32] HUD’s Implementation of the Fair Housing Act’s Disparate Impact Standard, 84 Fed. Reg. 42,854, 42,862 (2019).

[33] Electronic Privacy Information Center, Petition for Rulemaking, In re: Petition for Rulemaking Concerning Use of Artificial Intelligence in Commerce (Fed. Trade Comm’n Feb. 2020), https://epic.org/privacy/ftc/ai/EPIC-FTC-AI-Petition.pdf.

[34] Artificial Intelligence Working Group in the National Institutes of Health, Artificial Intelligence Working Group Update 6 (June 13, 2019), https://acd.od.nih.gov/documents/presentations/06132019AI.pdf.

[35] U.S. Food & Drug Administration, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) (Apr. 2, 2019), https://www.fda.gov/media/122535/download.

[36] Office of the Privacy Commissioner of Canada, Consultation on the OPC’s Proposals for Ensuring Appropriate Regulation of Artificial Intelligence (Jan. 28, 2020), https://www.priv.gc.ca/en/about-the-opc/what-we-do/consultations/consultation-ai/pos_ai_202001/.

[37] See id.

[38] Jessica Fjeld et al., Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI (Jan. 15, 2020), https://poseidon01.ssrn.com/delivery.php?ID=853127084071024088002019029108064077046084047001025025024007113112122004028097084126052032042044047025048106003098001116065096000058035046014111069031124086080127061016062020017081093031001068008126116109080117029024105105007113112108064102089117083&EXT=pdf.
