Revision as of 10:28, 9 November 2017
Explainable AI (XAI) is a neologism that has recently entered the parlance of artificial intelligence. Its purpose is to provide accountability for decisions made by dynamic, non-linearly programmed systems, e.g. artificial neural networks, deep learning systems, and genetic algorithms.
It is about asking the question of how algorithms arrive at their decisions. In a sense, it is a technical discipline providing operational tools that might be useful for explaining systems, such as in implementing a right to explanation.[1]
AI-related algorithmic practices, both supervised and unsupervised, work on a model of success oriented towards some form of correct state, with singular focus placed on an expected output. For example, an image recognition algorithm's success is measured by its ability to recognize certain objects, and failure to do so indicates that the algorithm requires further tuning. As the tuning is dynamic, closely tied to function refinement and the training data set, the underlying operations are rarely inspected in detail.
XAI aims to address this black-box approach and make introspection of these dynamic systems tractable, allowing humans to understand how computational machines develop their own models for solving tasks.
Definition
A universal definition of the term has yet to be established; however, the DARPA XAI program defines its aims as follows:
- Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy)
- Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners[2]
History
While the term "Explainable AI" is new, the problem of understanding the knowledge embedded in machine learning systems has a long history. Researchers have long been interested in whether it is possible to extract rules from trained neural networks,[3] and researchers in clinical expert systems creating neural-network-powered decision support for clinicians have sought to develop dynamic explanations that allow these technologies to be more trusted and trustworthy in practice.[1]
Layerwise Relevance Propagation (LRP), first described in 2015, is a technique for determining which features in a particular input vector contribute most strongly to a neural network’s output.[4][5]
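The redistribution step at the heart of LRP can be sketched for a toy fully connected ReLU network. In this minimal sketch the network sizes, random weights, and the lrp_epsilon helper are invented for illustration; it implements the ε-rule variant, which stabilises the division by the pre-activations:

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
    """Back-propagate output relevance to the input features using
    the epsilon-rule of Layer-wise Relevance Propagation (LRP)."""
    R = relevance_out
    for W, a in zip(reversed(weights), reversed(activations[:-1])):
        z = W @ a                                   # pre-activations of this layer
        z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabiliser, avoids division by zero
        R = a * (W.T @ (R / z))                     # redistribute relevance to the layer below
    return R                                        # one relevance score per input feature

# toy two-layer ReLU network; weights are random and purely illustrative
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
x = np.array([1.0, 0.5, -0.2])
h = np.maximum(W1 @ x, 0.0)                         # hidden ReLU activations
y = W2 @ h                                          # network output
R_input = lrp_epsilon([W1, W2], [x, h, y], y)
```

The conservation property, i.e. that the input relevances sum to approximately the total output relevance (up to the ε stabiliser), is what distinguishes LRP from plain gradient saliency.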
Newer, however, is the focus on explaining machine learning and AI to those whom the decisions concern, rather than to the designers or direct users of decision systems. Since DARPA introduced its program in 2016, a number of new initiatives have sought to address algorithmic accountability and provide transparency about how technologies in this domain function.
- 25 April 2017: Nvidia published the paper "Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car"[6]
- 13 July 2017: Accenture recommended "Responsible AI: Why we need Explainable AI"[7]
Accountability
A cross-section of industrial sectors will be affected by these requirements, as accountability is delegated to a greater or lesser extent from humans to machines.
Examples of these effects have already been seen in the following sectors:
- Neural Network Tank imaging[8]
- Antenna design (evolved antenna)[9]
- Algorithmic trading (high-frequency trading)[10]
- Medical diagnoses[11]
- Autonomous vehicles[12][13]
Recent developments
As regulators, official bodies, and general users come to depend on AI-based dynamic systems, clearer accountability will be required for decision-making processes to ensure trust and transparency. Evidence of this requirement gaining momentum can be seen with the launch of the first global conference exclusively dedicated to this emerging discipline, the International Joint Conference on Artificial Intelligence: Workshop on Explainable Artificial Intelligence (XAI).[14]
References
- ^ a b Edwards, Lilian; Veale, Michael (2017). "Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For". Duke Law and Technology Review.
- ^ "Explainable Artificial Intelligence (XAI)". DARPA. DARPA. Retrieved 17 July 2017.
- ^ Tickle, A. B.; Andrews, R.; Golea, M.; Diederich, J. (November 1998). "The truth will come to light: directions and challenges in extracting the knowledge embedded within trained artificial neural networks". IEEE Transactions on Neural Networks. 9 (6): 1057–1068. doi:10.1109/72.728352. ISSN 1045-9227.
- ^ Shiebler, Dan (2017-04-16). "Understanding Neural Networks with Layerwise Relevance Propagation and Deep Taylor Series". Dan Shiebler. Retrieved 2017-11-03.
- ^ Bach, Sebastian; Binder, Alexander; Montavon, Grégoire; Klauschen, Frederick; Müller, Klaus-Robert; Samek, Wojciech (2015-07-10). Suarez, Oscar Deniz (ed.). "On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation". PLOS ONE. 10 (7). Public Library of Science (PLoS): e0130140. doi:10.1371/journal.pone.0130140. ISSN 1932-6203.
- ^ "Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car" (PDF). arXiv. arXiv. Retrieved 17 July 2017.
- ^ "Responsible AI: Why we need Explainable AI". YouTube. Accenture. Retrieved 17 July 2017.
- ^ "Neural Network Tank image". Neil Fraser. Neil Fraser. Retrieved 17 July 2017.
- ^ "NASA 'Evolutionary' software automatically designs antenna". NASA. NASA. Retrieved 17 July 2017.
- ^ "The Flash Crash: The Impact of High Frequency Trading on an Electronic Market" (PDF). CFTC. CFTC. Retrieved 17 July 2017.
- ^ "Can machine-learning improve cardiovascular risk prediction using routine clinical data?". PLOS One. PLOS One. Retrieved 17 July 2017.
- ^ "Tesla says it has 'no way of knowing' if autopilot was used in fatal Chinese crash". Guardian. Guardian. Retrieved 17 July 2017.
- ^ "Joshua Brown, Who Died in Self-Driving Accident, Tested Limits of His Tesla". New York Times. New York Times. Retrieved 17 July 2017.
- ^ "IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI)". Earthlink. IJCAI. Retrieved 17 July 2017.
External links
- "'Explainable Artificial Intelligence': Cracking open the black box of AI". Computerworld. 2017-11-02. Retrieved 2017-11-02.
- Park, Dong Huk; Hendricks, Lisa Anne; Akata, Zeynep; Schiele, Bernt; Darrell, Trevor; Rohrbach, Marcus (2016-12-14). "Attentive Explanations: Justifying Decisions and Pointing to the Evidence". arXiv:1612.04757.
- "Explainable AI: Making machines understandable for humans". Retrieved 2017-11-02.
- "End-to-End Deep Learning for Self-Driving Cars". Parallel Forall. 2016-08-17. Retrieved 2017-11-02.
- "Explaining How End-to-End Deep Learning Steers a Self-Driving Car". Parallel Forall. 2017-05-23. Retrieved 2017-11-02.
- "New isn't on its way. We're applying it right now". Accenture. 2016-10-25. Retrieved 2017-11-02.
- Knight, Will (2017-03-14). "DARPA is funding projects that will try to open up AI's black boxes". MIT Technology Review. Retrieved 2017-11-02.
- Alvarez-Melis, David; Jaakkola, Tommi S. (2017-07-06). "A causal framework for explaining the predictions of black-box sequence-to-sequence models" (PDF). arXiv:1707.01943. Retrieved 2017-11-09.
- Bojarski, Mariusz; Yeres, Philip; Choromanska, Anna; Choromanski, Krzysztof; Firner, Bernhard; Jackel, Lawrence; Muller, Urs (2017-04-25). "Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car". arXiv:1704.07911.