Transparency in Artificial Intelligence Systems


One of the key compliance challenges for machine learning artificial intelligence (AI) systems is meeting legal requirements for transparency. Here, I am referring to machine learning AI, as opposed to symbolic or rule-based AI. Humans can examine the rules and logic used by symbolic AI, making those systems transparent to human review. As of 2020, however, machine learning has been the dominant form of AI, and in contrast to symbolic AI, machine learning systems are not inherently transparent.
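
To illustrate the contrast, consider a minimal sketch of a rule-based decision. The loan-approval rules and thresholds below are hypothetical, but the point is that every step of the logic is written out and can be audited line by line, which is not true of the learned parameters inside a machine learning model.

```python
def approve_loan(income, debt, credit_score):
    """Hypothetical rule-based decision: every rule is explicit and reviewable."""
    if credit_score < 600:
        return False, "credit score below 600"
    if income <= 0 or debt / income > 0.4:
        return False, "debt-to-income ratio above 40%"
    return True, "all rules satisfied"

# A reviewer can trace exactly why any given decision was reached.
print(approve_loan(income=50_000, debt=30_000, credit_score=700))
# (False, 'debt-to-income ratio above 40%')
```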

News stories talk extensively about the lack of transparency in the operation of AI systems. Even the AI professionals who create these systems may not know how their own systems operate. Their inability to explain how the AI arrives at its results means the system operates like a “black box.” Work and research on transparency are underway to help improve the explainability of AI systems.

The European Union’s General Data Protection Regulation (GDPR) takes one approach to addressing the problems of transparency in AI. GDPR speaks in terms of “automated data processing.” Automated data processing is not the same as AI, but it is broad enough to encompass AI. Article 15 of GDPR gives individuals a right of access to information about personal data collected about them. Paragraph 1(h) of Article 15 includes the right of the data subject to know about the existence of automated decision-making and to receive “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.” Recital 71 refers to the data subject’s right to an explanation of a decision reached by automated means. Thus, GDPR gives data subjects a “right of explanation,” requiring transparency from businesses using AI to make decisions about them.

In addition, under GDPR Article 22, a “data subject shall have the right not to be subject to a decision based solely on automated processing” that produces “legal effects concerning him or her or similarly significantly affects him or her.” In other words, a data subject can opt out of automated data processing, with the implication that a human must make the decision manually. When the lawful basis for processing such personal data is consent or the performance of a contract, the data controller must still provide safeguards for data subjects, including at least “the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.” Accordingly, GDPR provides data subjects a “right of human intervention”: a manual review of the results of AI systems processing their personal data.
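
In practice, honoring the Article 22 opt-out means an AI pipeline needs a path that routes a case to a human reviewer instead of the model. Below is a minimal sketch of that routing logic; the function names, the queue, and the decision fields are all hypothetical, and a real system would also log each decision for audit purposes.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str      # e.g. "approved", "denied", "pending"
    decided_by: str   # "automated" or "human"

human_review_queue = []  # hypothetical stand-in for a real review workflow

def decide(application, model_predict, subject_opted_out):
    # If the data subject exercised the Article 22 opt-out,
    # bypass the model entirely and queue the case for a person.
    if subject_opted_out:
        human_review_queue.append(application)
        return Decision(outcome="pending", decided_by="human")
    # Otherwise the automated model decides.
    return Decision(outcome=model_predict(application), decided_by="automated")
```

The same hook can serve data subjects who contest an automated decision after the fact, since Article 22’s safeguards include the right to contest.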

The combination of the right of explanation and the right of human intervention creates a transparency mechanism to shine a light on the results of automated data processing, allowing data subjects to ferret out mistakes, corrupted data, and bias. GDPR’s right of explanation and right of human intervention are the only prominent current examples of laws intended to address transparency. With machine learning and black box AI systems, businesses may struggle to explain AI results to data subjects. Accordingly, building transparency into systems within the scope of GDPR is critical.

Various technical tools and methods can shed light on machine learning systems, identifying how they work and which factors were key in producing their outputs. A full explanation of these tools is beyond the scope of this blog post. Nonetheless, we can conclude as a legal matter that AI companies will need a defensible position on transparency if claimants or government regulators question their systems. Moreover, it is far easier to build transparency mechanisms into an AI system when they are part of the design from the beginning; trying to figure out the workings of an AI system after it is already in production is too late.
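
As one example of such a tool, permutation feature importance measures how much a model’s accuracy drops when each input feature is shuffled, revealing which factors drive its outputs. Below is a minimal sketch using scikit-learn on a synthetic dataset (standing in for real personal data); the model choice and parameters are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real decision-making dataset.
X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A typical "black box" model: accurate, but not inherently interpretable.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

Output like this, recorded when the system is designed and periodically thereafter, is the kind of artifact that can support a defensible transparency position later.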

The solution to this issue, for any given company, will lie in creating an interdisciplinary team of business, technical, and legal members. This team can analyze which technical tools and methods provide transparency for the company’s AI systems and how to implement them. Recording and maintaining documentation explaining the design choices will help the company defend itself later, should it face challenges to its transparency practices.
