Tracing AI Decisions: AI Explainability and the GDPR


In machine learning, deep learning, and other subsets of artificial intelligence, algorithms often operate in ways that even their developers do not fully understand. For example, imagine that two people with similar financial backgrounds apply for bank loans. Even though they are alike in many respects, one person gets the loan and the other does not. What factors did the bank’s AI use to reject that customer? How were those factors weighed? The AI’s designers “trained” it on a vast amount of consumer data, but at the end of the day, it may be that no one knows exactly how the AI reached the decision it did.

Under the European Union’s General Data Protection Regulation (GDPR), a business using personal data for automated processing must be able to explain how the system makes decisions. See Article 15(1)(h) and Recital 71 of the GDPR. The individual data subject (i.e., the person who was rejected for the loan) has the right to ask the business why it made the decision it did, and the business must then explain how the system came to that decision. A company that cannot explain a decision in response to an individual’s request is not compliant with the GDPR.
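To make this concrete on the technical side, here is a minimal sketch of what answering such a request might require. The GDPR does not prescribe any implementation, and every name, weight, and threshold below is invented for illustration; the idea is simply that a system records its inputs and the contribution of each factor at decision time, so the reasoning behind a rejection can be reconstructed later:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical linear loan-scoring model: score = sum(weight * value).
# Real systems are far more complex, but the principle is the same:
# capture enough at decision time to reconstruct the decision later.
WEIGHTS = {"income": 0.4, "credit_history_years": 0.3, "debt_ratio": -0.5}
APPROVAL_THRESHOLD = 1.0

@dataclass
class DecisionRecord:
    applicant_id: str
    inputs: dict
    contributions: dict   # per-feature weight * value
    score: float
    approved: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def decide(applicant_id: str, inputs: dict) -> DecisionRecord:
    contributions = {f: WEIGHTS[f] * inputs[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return DecisionRecord(
        applicant_id=applicant_id,
        inputs=inputs,
        contributions=contributions,
        score=score,
        approved=score >= APPROVAL_THRESHOLD,
    )

# When a data subject asks "why was I rejected?", the stored record shows
# which factors were used and how each one was weighed.
record = decide(
    "A-1024", {"income": 1.2, "credit_history_years": 2.0, "debt_ratio": 0.8}
)
print(record.contributions, record.approved)
```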

In addition, Article 22 of the GDPR, in essence, grants an individual a “right of human intervention.” Under this right, an individual may ask for a human to review the AI’s decision to determine whether the system made a mistake. Together, the right of human intervention and the right to an explanation place a legal obligation on the business to understand what happened, and then make a reasoned judgment as to whether a mistake was made.
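One hedged way to picture this right in code (again, purely illustrative; the queue and function names are invented, and real review workflows would be far richer) is a pipeline in which a contested automated decision is routed to a human reviewer rather than being treated as final:

```python
# Illustrative sketch only: a contested automated decision is queued for a
# human reviewer, who can uphold or overturn it.
from queue import Queue

human_review_queue: Queue = Queue()

def request_human_review(decision_record: dict) -> None:
    """Called when a data subject contests an automated decision (Article 22)."""
    human_review_queue.put(decision_record)

def perform_human_review(reviewer: str, upheld: bool) -> dict:
    """A human inspects the stored inputs and reasoning, then rules on the outcome."""
    record = human_review_queue.get()
    return {"decision": record, "reviewed_by": reviewer, "upheld": upheld}

# Usage: the rejected applicant invokes their right of human intervention...
request_human_review({"applicant_id": "A-1024", "approved": False})
# ...and a loan officer reviews the record and either upholds or overturns it.
print(perform_human_review("loan-officer-7", upheld=False))
```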

Accommodating these rights, of course, requires an understanding of how the result came about; a business that cannot do that will not be able to comply with these requirements of the GDPR. Whether a business is using AI or less sophisticated software to perform automated processing of personal data, it should anticipate the need for explainability and build it into its development process. That way, when a customer demands an explanation of why the AI generated a particular result, the business will be prepared to answer the request and comply with the law.
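Even when a model is genuinely opaque, some explainability can be retrofitted by probing it: perturb one input at a time and observe how the output moves. Below is a minimal sketch of that idea (the `opaque` function stands in for any black-box model, and this crude local sensitivity check is a simplified stand-in for richer, widely used techniques such as SHAP or LIME):

```python
def sensitivity(predict, inputs: dict, delta: float = 0.01) -> dict:
    """Estimate how much the black-box output changes when each input
    is nudged by a small amount. Larger magnitudes suggest factors
    that mattered more to this particular decision."""
    base = predict(inputs)
    result = {}
    for feature, value in inputs.items():
        nudged = dict(inputs)
        nudged[feature] = value + delta
        result[feature] = (predict(nudged) - base) / delta
    return result

# Example with an opaque scoring function standing in for a trained model:
opaque = lambda x: 0.4 * x["income"] - 0.5 * x["debt_ratio"]
print(sensitivity(opaque, {"income": 1.2, "debt_ratio": 0.8}))
```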

Naturally, a business may face challenges when trying to make its systems more explainable. Further research and redevelopment may be needed to add explainability capabilities, and that will take money and effort. Nonetheless, I believe the GDPR is only the first of many laws around the world that will make explainability essential. Governments will likely demand more of vendors than “black box” solutions, and the use of black-box systems may even constitute an unfair or deceptive trade practice under general consumer protection laws.

At present, the U.S. doesn’t have a comparable law or set of policies requiring explainability or human intervention, though significant discussion has begun at the federal level in the wake of the GDPR. It’s likely this GDPR requirement will prompt other countries to adopt similar policies. In sum, regardless of the law, it seems wise to understand how AI systems do what they do.

Mitigating Risks

As AI and other forms of automated data processing become commonplace across industries and business transactions, customers will demand that their vendors and service providers be able to trace and explain computer decision-making in their products and services. People will also want to know what rights they have relative to the businesses that collect their personal information. Privacy groups and plaintiffs’ lawyers have begun comparing companies’ practices against these laws to find problems, often working with people who are upset about decisions a company has made and who want to seek legal action. Companies need to be prepared for inquiries from individual customers, privacy groups, and plaintiffs’ lawyers. Having processes in place to provide answers when questions are raised will help companies defend themselves against future issues and litigation.

SVLG has been looking closely at the GDPR, as well as the California Consumer Privacy Act (CCPA), which went into effect Jan. 1, 2020. We can help you and your business prepare for compliance with security and privacy laws, bolster your capability to explain the operation of AI systems, and create a process for human intervention in AI-generated decisions.
