Explainable AI (XAI) is a neologism that has recently entered the parlance of artificial intelligence. Its purpose is to provide accountability when addressing technological innovations ascribed to dynamic and non-linearly programmed systems, e.g. artificial neural networks, deep learning, and genetic algorithms.
It is about asking how algorithms arrive at their decisions. AI-related algorithmic practices (supervised and unsupervised) work on a model of success that is oriented towards some form of correct state, with singular focus placed on an expected output; for example, an image recognition algorithm's level of success is based on its ability to recognize certain objects, and failure to do so indicates that the algorithm requires further tuning. As the tuning level is dynamic and closely correlated with function refinement and the training data set, the underlying operational vectors are rarely introspected at a granular level.
XAI aims to address this black-box approach by making introspection of these dynamic systems tractable, allowing humans to understand how computational machines develop their own models for solving tasks.
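The black-box contrast described above can be illustrated with a minimal, hypothetical sketch (not drawn from any cited source): a toy linear classifier whose opaque interface exposes only its final label, next to an "explainable" interface that additionally reports each feature's additive contribution to the decision. The weights and features here are invented for illustration only.

```python
# Hypothetical illustration of black-box vs. explainable prediction.
# The model is a toy linear classifier with hand-picked weights;
# real XAI methods are far more involved, but the contrast is the same.

def black_box_predict(weights, features):
    """Opaque view: only the final label is exposed to the user."""
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

def explained_predict(weights, features):
    """Explainable view: also expose each feature's contribution
    (weight * value) to the overall decision score."""
    contributions = [w * x for w, x in zip(weights, features)]
    label = 1 if sum(contributions) > 0 else 0
    return label, contributions

weights = [0.8, -0.5, 0.1]   # toy model parameters (illustrative only)
features = [1.0, 2.0, 3.0]   # one input example

label = black_box_predict(weights, features)            # → 1, no rationale
label, contribs = explained_predict(weights, features)  # → 1, [0.8, -1.0, 0.3]
```

The second interface lets a human see that the first feature drove the positive decision while the second argued against it; for non-linear systems such as deep networks, producing an analogous decomposition is precisely what XAI research seeks to make tractable.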
Definition
A universal definition of the term has yet to be established; however, the DARPA XAI program defines its aims as follows:
- Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
- Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.[1]
History
Since DARPA's introduction of its program in 2016, a number of initiatives have started to address the issue of algorithmic accountability and to provide transparency concerning how technologies within this domain function.
- 25 April 2017: Nvidia publishes its paper "Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car"[2]
- 13 July 2017: Accenture recommends "Responsible AI: Why we need Explainable AI"[3]
Accountability
A cross-section of industrial sectors will be affected by these requirements, as accountability is delegated to a greater or lesser extent from humans to machines.
Examples of these effects have already been seen in the following sectors:
- Neural Network Tank imaging[4]
- Antenna design (Evolved Antenna)[5]
- Algorithmic trading (High-frequency trading)[6]
- Medical diagnosis[7]
- Autonomous vehicles[8][9]
Recent developments
As regulators, official bodies and general users come to depend on AI-based dynamic systems, clearer accountability will be required for decision-making processes to ensure trust and transparency. Evidence of this requirement gaining momentum can be seen with the launch of the first global conference exclusively dedicated to this emerging discipline:
- International Joint Conference on Artificial Intelligence: Workshop on Explainable Artificial Intelligence (XAI)[10]
External links
- ‘Explainable Artificial Intelligence’: Cracking open the black box of AI
- Attentive Explanations: Justifying Decisions and Pointing to the Evidence
- End-to-End Deep Learning for Self-Driving Cars
- Explaining How End-to-End Deep Learning Steers a Self-Driving Car
- Accenture's Responsible AI Imperative
- PWC - Responsible AI
References
- ^ "Explainable Artificial Intelligence (XAI)". DARPA. Retrieved 17 July 2017.
- ^ "Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car" (PDF). arXiv. Retrieved 17 July 2017.
- ^ "Responsible AI: Why we need Explainable AI". YouTube. Accenture. Retrieved 17 July 2017.
- ^ "Neural Network Tank image". Neil Fraser. Retrieved 17 July 2017.
- ^ "NASA 'Evolutionary' software automatically designs antenna". NASA. Retrieved 17 July 2017.
- ^ "The Flash Crash: The Impact of High Frequency Trading on an Electronic Market" (PDF). CFTC. Retrieved 17 July 2017.
- ^ "Can machine-learning improve cardiovascular risk prediction using routine clinical data?". PLOS ONE. Retrieved 17 July 2017.
- ^ "Tesla says it has 'no way of knowing' if autopilot was used in fatal Chinese crash". The Guardian. Retrieved 17 July 2017.
- ^ "Joshua Brown, Who Died in Self-Driving Accident, Tested Limits of His Tesla". The New York Times. Retrieved 17 July 2017.
- ^ "IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI)". IJCAI. Retrieved 17 July 2017.
Category:Artificial Intelligence
Category:Autonomous Vehicles
Category:XAI
Category:Accountability