In recent years, mainstream media coverage of artificial intelligence (AI) has exploded. Major AI breakthroughs, from mastering complex games such as chess and Go to enabling autonomous mobility and advances in many other fields, demonstrate the technology's rapid progress. AI is reaching ever more areas of human life and is making decisions that humans frequently find difficult to understand.
This article explores the increasingly important topic of «explainable AI» and addresses the questions of why we need to build systems that can explain their decisions and how we should build them. Specifically, the article introduces three additional dimensions for capturing transparency, underscoring the tremendous importance of explainability as a property inherent to machine learning algorithms. It argues that explainability is a further feature and dimension along which machine learning algorithms can be categorized. The article proposes to view explainability as an intrinsic property of an AI system rather than as an external function or subsequent auditing process. More generally, this article contributes to the legal informatics discourse surrounding the so-called «third wave of AI», which leverages the strengths of manually designed knowledge, statistical inference, and supervised machine learning techniques.