Explanation Ontology
https://purl.org/heals/eo/1.0.0

Contributors: Amar K. Das, Daniel M. Gruen, Deborah L. McGuinness, Mohamed Ghalwash, Morgan Foreman, Oshani Seneviratne, Pablo Meyers, Prithwish Chakraborty, Shruthi Chari

The Explanation Ontology (EO) encodes the system and user attributes of explanations that allow explanations to be generated computationally. The resource is aimed mainly at system designers, to guide them in thinking about the AI methods and reasoning tasks available within their system that could create explanations suited to end users' situations and requirements. EO includes attributes that appear directly in explanations as well as attributes that support their generation, and it models different explanation types, the range of questions each addresses, and their generational needs.

Copyright 2022 IBM Research and Rensselaer Polytechnic Institute. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

definition (textual definition): the official definition, explaining the meaning of a class or property. Shall be Aristotelian, formalized and normalized. Can be augmented with colloquial definitions.
imported from

Properties: is based on; has characteristic; has parameter; has primary parameter; has secondary parameter; is available in; is characteristic of; is opposed by; is supported by; opposes; recommended by; recommends; supports; has setting.
addresses: a addresses b if a provides an answer to b.
asks: a property between a creator and the question that the creator poses.
is consumer of: a property that associates a person entity with the object they consume.
has availability
has presentation: a property that captures what form of output an entity is presented in.
implements: a property that associates a process with an object/informational content entity that it addresses.
is used by: a property that associates an entity with the object that uses it.
possess: maps a property with its associated object.

Chari, S., Gruen, D. M., Seneviratne, O., & McGuinness, D. L. (2020). Foundations of Explainable Knowledge-Enabled Systems. arXiv preprint arXiv:2003.07520.
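The class and property annotations documented here can also be inspected programmatically. The following is a minimal, illustrative sketch, assuming the ontology PURL above dereferences to an OWL/RDF serialization that rdflib can parse; it simply lists the declared object properties and their labels.

```python
# Minimal, illustrative sketch (not part of the EO distribution): load the
# Explanation Ontology with rdflib and list its object properties.
# Assumption: the PURL below dereferences to an OWL/RDF document; if it
# serves an HTML page instead, point rdflib at the raw .owl/.ttl file.
from rdflib import Graph, OWL, RDF, RDFS

EO_IRI = "https://purl.org/heals/eo/1.0.0"

g = Graph()
g.parse(EO_IRI)  # rdflib infers the serialization from the response

for prop in g.subjects(RDF.type, OWL.ObjectProperty):
    label = g.value(prop, RDFS.label)
    print(prop, "-", label)
```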
Explanation: an account of the system, its workings, the implicit and explicit knowledge used in its reasoning processes, and the specific decision, that is sensitive to the end user's understanding, context, and current needs.

Referenced concepts: Social Security Retirement Benefit; Pension; Income Level; Socioeconomic Factors; Average Household Income; Conceptual Entity; Social Circumstances; Income; Socioeconomic Indicator; Median Income; Household Income; Gross Income; Computer Heuristics; Machine Learning; Supervised Machine Learning; Unsupervised Machine Learning; Expert Systems; Natural Language Processing; Robotics; Neural Networks (Computer); Fuzzy Logic; Knowledge Bases; Support Vector Machine; Gene Ontology; Biological Ontologies.

Characteristic: a distinguishing trait, quality, or property of an object record. System. Situation.

AI Task: an activity that uses a reasoning type to accomplish, within a defined period of time, a goal within a larger problem. Tu, S. W., Eriksson, H., Gennari, J. H., Shahar, Y., & Musen, M. A. (1995). Ontology-based configuration of problem-solving methods and generation of knowledge-acquisition tools: application of PROTEGE-II to protocol-based decision support. Artificial Intelligence in Medicine, 7(3), 257-289. https://en.wikipedia.org/wiki/Task_(project_management)

Deductive task.

Abductive Task: abduction is a task which tries to identify those initial conditions from which deduction starts, given general laws and some singular statements called final states. Stefanelli, M., & Ramoni, M. (1992). Epistemological constraints on medical knowledge-based systems. In Advanced Models of Cognition for Medical Training and Practice (pp. 3-20). Springer, Berlin, Heidelberg.

AI Method: a subroutine that implements a single family of AI models, takes certain inputs, and outputs a value.

Case Based Explanation: provides solutions that "are based on actual prior cases that can be presented to the user to provide compelling support for the system's conclusions" (Cunningham et al., 2003), and may involve analogical reasoning, relying on similarities between features of the case and of the current situation. Borrowing from Leake (1988) and Ahmed et al. (2015), we opine that an AI system generating case-based explanations needs to remember and adapt explanations of similar prior cases (Leake, 1988), or "has to reason from experiences (old cases) in an effort to solve problems, critique solutions and explain anomalous situation" (Ahmed et al., 2015). Addresses questions of the form: "To what other situations has this recommendation been applied?" and "What instances from the training data are considered indicative for this recommendation?" Ahmed, I. M., Alfonse, M., Aref, M., & Salem, A. B. M. (2015). Reasoning Techniques for Diabetics Expert Systems. Procedia Computer Science, 65, 813–820. Cunningham, P., Doyle, D., & Loughrey, J. (2003). An evaluation of the usefulness of case-based explanation. In International Conference on Case-Based Reasoning (pp. 122–130). Springer. Leake, D. B. (1988). Evaluating Explanations. In AAAI (pp. 251–255).

Clinical Pearls: small bits of free-standing, clinically relevant information based on experience or observation. Addresses a question of the form: "What is a fact a system should consider when prescribing this medication?" Lorin, M. I., Palazzi, D. L., Turner, T. L., & Ward, M. A. (2008). What is a clinical pearl and what is its role in medical education? Medical Teacher, 30(9-10), 870–874.

Contextual Explanation: refers to information about items other than the explicit inputs and output, such as information about the user, situation, and broader environment that affected the computation. Providing such information requires that a system be "context-aware," and can include information about a "user's tasks, significant user attributes, organizational environment, and technical and physical environments" (International Organization for Standardization, 1999). Addresses a question of the form: "What broader information about the current situation prompted the suggestion of this recommendation?" International Organization for Standardization (1999). Human-centred design processes for interactive systems.

Contextual Knowledge: context is any information that can be used to characterize the situation of an entity, where an entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves (Dey, 2001). Contextual knowledge in turn is the knowledge that is derived from such relevant context, where relevance depends on the AI task. Contextual knowledge is related to, but distinct from, information that is used by an AI task to produce an output recommendation. Dey, A. K. (2001). Understanding and using context. Personal and Ubiquitous Computing, 5(1), 4-7.

Contrastive Explanation: answers the question "Why this output instead of that output?", making a contrast between the given output and the facts that led to it (inputs and other considerations), and an alternate output of interest and the foil (facts that would have led to it). As described by van der Waa et al. (2018) and Miller (2019), contrastive explanations define an output of interest and present contrasts between the fact (the event that did occur, i.e., the given output) and the foil (the event that did not occur, i.e., the output of interest). Addresses questions of the form: "Why administer this new drug over the one I would typically prescribe?", "Why this and not that?", and "Why this feature and not that feature?" van der Waa, J., Robeer, M., van Diggelen, J., Brinkhuis, M., & Neerincx, M. (2018). Contrastive explanations with local foil trees. arXiv preprint arXiv:1806.07470. Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38.

Counterfactual Explanation: addresses the question of what solutions would have been obtained with a different set of inputs than those used. Addresses a question of the form: "What if the patient had a high risk for cardiovascular disease? Would you still recommend the same treatment plan?" Contributors: Amar Das, Deborah L. McGuinness, Morgan Foreman, Oshani Seneviratne, Daniel M. Gruen, Shruthi Chari.
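Each explanation type carries a definition and the prototypical questions it addresses, which makes the catalogue easy to query. The sketch below is illustrative only: it assumes a local copy of the ontology (the filename eo.owl is a placeholder) and uses only standard vocabularies (rdfs:label, rdfs:comment, skos:definition), not any EO-specific property IRIs.

```python
# Illustrative sketch: list every class labeled "... Explanation" together
# with any skos:definition or rdfs:comment attached to it.
# Assumption: the ontology has been saved locally as eo.owl (placeholder name).
from rdflib import Graph, OWL, RDF, RDFS
from rdflib.namespace import SKOS

g = Graph()
g.parse("eo.owl")  # replace with the actual path or URL of the ontology file

for cls in g.subjects(RDF.type, OWL.Class):
    label = g.value(cls, RDFS.label)
    if label and str(label).endswith("Explanation"):
        definition = g.value(cls, SKOS.definition) or g.value(cls, RDFS.comment)
        print(f"{label}: {definition}")
```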
Data Explanation: focuses on what the data is and how it has been used in a particular decision, as well as what data, and how, have been used to train and test the ML model. This type of explanation can help users understand the influence of data on decisions. Addresses questions of the form: "What is the data?", "How has the data been used in a particular decision?", and "How has the data been used to train the ML model?" Explaining Decisions Made with AI: Draft Guidance for Consultation—Part 1: The Basics of Explaining AI. ICO & The Alan Turing Institute: Wilmslow/Cheshire, UK, 2019; p. 19. Webb, M. E., Fluck, A., Magenheim, J., Malyn-Smith, J., Waters, J., Deschênes, M., & Zagami, J. (2020). Machine Learning for Human Learners: Opportunities, Issues, Tensions and Threats. Educational Technology Research and Development.

Decision Tree: a decision support tool that uses a tree-like model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm that only contains conditional control statements. https://en.wikipedia.org/wiki/Decision_tree

Deductive Task: deduction allows us to derive from each candidate hypothesis what one expects to be true (the consequence) if that hypothesis is true (Stefanelli & Ramoni, 1992).

Environmental Context: relevant information pertaining to the physical elements that describe the experimental situation. This information may extend to physical location, physical characteristics and properties of objects, movement, and time. Bauer, C., & Novotny, A. (2017). A consolidated view of context for intelligent systems. Journal of Ambient Intelligence and Smart Environments, 9, 377-393.

Everyday Explanation: uses accounts of the real world that appeal to the user, based on their general understanding and knowledge (McNeill & Krajcik, 2008) of how the world works, and that help them understand why particular facts (events, properties, decisions, etc.) occurred (Miller, 2019). Addresses questions of the form: "Why are gloves recommended when dealing with high-risk patients?" and "Why does option A make sense?" McNeill, K. L., & Krajcik, J. (2008). Inquiry and scientific explanations: Helping students use evidence and reasoning. Science as Inquiry in the Secondary Setting, 121–134.

Evidence Based Explanation: an account that articulates how or why an event occurred, supported by evidence and scientific ideas. Addresses questions of the form: "What studies support this recommendation?" https://www.sciencepracticesleadership.com/definitions.html Contributor: Shruthi Chari.

Exemplar Method: explains the predictions of test instances using similar or influential training instances; considered a static, local, post-hoc explanation with samples. Arya, V., Bellamy, R. K., Chen, P. Y., Dhurandhar, A., Hind, M., Hoffman, S. C., ... & Zhang, Y. (2019). One explanation does not fit all: A toolkit and taxonomy of AI explainability techniques. arXiv preprint arXiv:1909.03012.

Experiential Knowledge: information and wisdom gained from lived experience. https://methods.sagepub.com/reference/sage-encyc-qualitative-research-methods/n162.xml (Marsha A. Schubert, Thomasina J. Borkman).

Explanation Goal: the impact that an explanation has on a user or the purpose it was designed to achieve.

Explanation Method: educational mechanisms that can provide explanations that are understandable to humans (interpretability) and accurately describe model behaviour in the entire feature space (fidelity). Zhou, J., Gandomi, A. H., Chen, F., & Holzinger, A. (2021). Evaluating the quality of machine learning explanations: A survey on methods and metrics. Electronics, 10(5), 593.

Explanation Modality: a particular form or medium in which an explanation exists or is expressed. Contributor: Shruthi Chari.

Explanation Type wrt Model and Interactivity.

Fact: inputs and other considerations that led to a system recommendation and that did occur. Contributor: Daniel Gruen.

Fairness Explanation: provides steps taken across the design and implementation of an ML system to ensure that the decisions it assists are generally unbiased, and whether or not an individual has been treated equitably. This type of explanation is key to increasing individuals' confidence in an Artificial Intelligence (AI) system. It can foster meaningful trust by explaining to an individual how bias and discrimination in decisions are avoided. Addresses questions of the form: "Is there a bias consequence of this system recommendation?" and "What data was used to arrive at this decision?" (Cunningham et al., 2003).

Feature Relevance Method: methods such as partial dependence plots (PDP) and sensitivity analysis are used to study the (global) effects of different input features on the output values, and are typically consumed through appropriate visualizations (e.g., PDP plots, scatter plots, control charts); see the sketch below (Arya et al., 2019).
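As an illustration of the feature relevance methods mentioned in the entry above, the following sketch computes partial dependence values with scikit-learn; the synthetic dataset, model, and feature choice are placeholders for illustration, not anything prescribed by EO.

```python
# Illustrative only: a feature relevance method (partial dependence) computed
# with scikit-learn on synthetic data. The data and model are placeholders.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Global effect of feature 0 on the model's predictions, averaged over a grid.
pd_result = partial_dependence(model, X, features=[0])
print(pd_result["average"])
```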
Foil: facts that would have led to a system recommendation that could have occurred. Contributor: Daniel Gruen.

Global Explanation Type - Global Explanation.

High level feature learning method: methods that learn high-level features in an unsupervised manner through variational autoencoder or generative adversarial network frameworks would naturally fall under the data-followed-by-features category; even supervised methods of learning high-level interpretable features would lie in this category (Arya et al., 2019).

Impact Explanation: concerns the impact that the use of an ML system and its decisions has or may have on an individual and on wider society. This type of explanation gives individuals some power and control over their involvement in ML-assisted decisions. By understanding the possible consequences of the decision, an individual can better assess their participation in the process and how the outcomes of the decision may affect them. Therefore, this type of explanation is often suited to delivery before an ML-assisted decision is made. Addresses a question of the form: "What is the impact of a system recommendation?" (Cunningham et al., 2003).

Inductive Task: induction is able to match single statement to single statement and, therefore, to match a single statement derived as a prediction from a hypothesis with a single statement describing a portion of the real state of affairs in the patient (Stefanelli & Ramoni, 1992).

Interactive Explanation Type - Interactive Explanation.

Interpretation and Explanation Task: a task that produces interpretations or explanations, which are an account of the system, its workings, the implicit and explicit knowledge it uses to arrive at conclusions in general and the specific decision at hand, that is sensitive to the end user's understanding, context, and current needs. Chari, S., Gruen, D., Seneviratne, O. W., & McGuinness, D. L. (2020). Directions for Explainable Knowledge-Enabled Systems. In Knowledge Graphs for eXplainable Artificial Intelligence.

Knowledge based Systems.

Knowledge Distillation Method: knowledge distillation-type approaches, which learn a simpler model based on a complex model's predictions, would be considered global interpretations that are learned using a post-hoc surrogate model (Arya et al., 2019); a surrogate-tree sketch follows below.

Knowledge Extraction Task (Information Extraction): a task of extracting pre-specified types of facts from written texts or speech transcripts, and converting them into structured representations (e.g., databases). Ji, H. (2009). Information Extraction. In Liu, L., & Özsu, M. T. (Eds.), Encyclopedia of Database Systems. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-39940-9_204

Local Explanation Type - Local Explanation.

Lookup Task: looks through a row or column for a key and returns the value of the cell in a result range located in the same position as the search row or column. https://support.google.com/docs/answer/3256570?hl=en

Mechanistic Explanation: involves presenting a model of the mechanism (the explanans) responsible for some phenomenon of interest (the explanandum). A mechanism is an entity or structure, such as an organ (though a mechanism need not be delineated by such clear physical boundaries), that performs a function by virtue of its component parts, operations, and, importantly, their spatial and temporal organisation (Bechtel & Abrahamsen 2005: 421–23). That is, a mechanism does not just consist in its physical parts, but in the 'organised interplay' of the activities of those parts (Soom 2012: 656). Examples of phenomena (i.e., functions or behaviours performed by mechanisms) include such things as the heart's pumping blood, metabolism, and memory. Addresses a question of the form: "What is a biological basis for this recommendation?" Dessaix, D. Mechanistic Explanation: Some Limits and their Significance. The ANU Undergraduate Research Journal, 51.

Model Explanation. Output.

Neural Network Visualization Method: methods that visualize intermediate representations/layers of a neural network would fall under the global post-hoc category, as people typically use these visualizations to gain confidence in the model and inspect the type of high-level features being extracted (Arya et al., 2019).

Numerical Evidence: a statistical value that backs and quantifies a recommendation.

Object Characteristic.

Rationale Explanation: is about the "why" of an ML decision; it provides the reasons that led to a decision, delivered in an accessible and understandable way, especially for lay users. If the ML decision was not what users expected, this type of explanation allows users to assess whether they believe the reasoning of the decision is flawed and, if so, supports them in formulating reasonable arguments for why they think this is the case. Addresses questions of the form: "Why was this ML decision made, and what reasons led to it?" (ICO & The Alan Turing Institute, 2019; Webb et al., 2020; Zhou et al., 2021).

Providing rationale method: work in the natural language processing and computer vision domains that generates rationales/explanations derived from input text would be considered local self-explanations; here, however, new words or phrases could be generated, so the feature space can be richer than the original input space (Arya et al., 2019).
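To make the post-hoc surrogate idea in the Knowledge Distillation Method entry concrete, the sketch below distills a random forest into a shallow decision tree trained on the forest's predictions and prints the resulting rules. The dataset and model choices are illustrative placeholders, not prescribed by EO or by Arya et al. (2019).

```python
# Illustrative only: a global post-hoc surrogate ("knowledge distillation"
# style) explanation. A shallow decision tree is fit to the predictions of a
# more complex model, and its printed rules approximate the model's behaviour.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

complex_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Fit the surrogate on the complex model's outputs, not the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, complex_model.predict(X))

print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))
```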
Reasoning Mode: a process being executed by the system that achieves a particular goal; e.g., inference models such as treatment planning involved in medical diagnosis can be considered reasoning modes where the system is executing a set of tasks. Ramoni, M., Stefanelli, M., Magnani, L., & Barosi, G. (1992). An epistemological framework for medical knowledge-based systems. IEEE Transactions on Systems, Man, and Cybernetics, 22(6), 1361-1375.

Example (treatment planning): The treatment planning process proceeds in four steps. First, the play therapist must develop accurate, comprehensive treatment goals that are consistent with the case formulation and the theoretical model being used. Because these goals are formulated from a theoretical perspective, they may not fully align with the goals of the child and his or her caregivers. Therefore, in the second step, the play therapist must work with both the child client and his or her caregivers to develop a treatment contract that takes the goals created in the first step and addresses each party's needs. This will ensure they remain actively engaged in the treatment process over time. Third, the play therapist must organize the treatment goals and the treatment contracts into a realistic and sequential treatment plan. Finally, the play therapist must devise both experiential and cognitive/verbal interventions to move the client toward the treatment goals. Source: O'Connor, K. J., & Ammen, S. (2012). Play therapy treatment planning and interventions: The ecosystemic model and workbook. Academic Press.

Responsibility Explanation: concerns "who" is involved in the development, management, and implementation of an ML system, and "who" to contact for a human review of a decision. This type of explanation helps by directing the individual to the person or team responsible for a decision. It also makes accountability traceable. Addresses questions of the form: "Who is involved in the development, management, and implementation of an ML system?" and "Who to contact for a human review of a decision?" (ICO & The Alan Turing Institute, 2019; Webb et al., 2020; Zhou et al., 2021).

Restricted Neural Network Architecture: methods that propose certain restrictions on the neural network architecture to make it interpretable, yet maintain richness of the hypothesis space to model complicated decision boundaries, would fall under the global directly interpretable category (Arya et al., 2019).

Safety and Performance Explanation: deals with steps taken across the design and implementation of an ML system to maximise the accuracy, reliability, security, and robustness of its decisions and behaviours. This type of explanation helps to assure individuals that an ML system is safe and reliable by explaining how the accuracy, reliability, security, and robustness of the ML model are tested and monitored. Addresses questions of the form: "What steps were taken to ensure robustness and reliability of the system?", "How has the data been used to train the ML model?", "What steps were taken to ensure robustness and reliability of the AI method?", and "What were the plans for the system development?" (Cunningham et al., 2003).

Saliency Method: saliency-based methods, which highlight different portions of an image whose classification we want to understand, can be categorized as local explanation methods that are static and provide feature-based explanations in terms of highlighted pixels/superpixels. In fact, popular methods such as LIME and SHAP also fall under this umbrella. Counterfactual explanations, which are similar to contrastive explanations where one tries to find a minimal change that would alter the decision of the classifier, are another type of explanation in this category (Arya et al., 2019).

Scientific Knowledge: knowledge accumulated by systematic study and organized by general principles. https://www.thefreedictionary.com/scientific+knowledge

Scientific Method: involves a question and suggested explanation (hypothesis) based on observation, followed by the careful design and execution of controlled experiments, and finally validation, refinement, or rejection of this hypothesis. https://www.nature.com/articles/nmeth0409-237

Simulation Based Explanation: uses an imagined or implemented imitation of a system or process and the results that emerge from similar inputs. As simulations can often be run numerous times (e.g., Monte Carlo simulations), and the mechanisms in the simulation can often be observed and traced directly, simulation-based explanations can have elements of statistical and trace-based explanations. Heal (1996) suggests that these explanations contain facts that humans would use to determine an outcome in a specified case, and these explanations are intended to "replace and amplify real experiences with guided ones, often 'immersive' in nature, that evoke or replicate substantial aspects of the real world in a fully interactive fashion" (Lateef, 2010). Addresses a question of the form: "What would happen if this recommendation is followed?" Lateef, F. (2010). Simulation-based learning: Just like the real thing. Journal of Emergencies, Trauma and Shock, 3(4), 348. Heal, J. (1996). Simulation, theory, and content. Theories of Theories of Mind, 75–89.

statement of bias: a statement concerning the action of supporting or opposing a particular person or thing in an unfair way, because of allowing personal opinions to influence your judgment. https://dictionary.cambridge.org/us/dictionary/english/bias

Static Explanation Type - Static Explanation.

Statistical Explanation: presents an account of the outcome based on data about the occurrence of events under specified (e.g., experimental) conditions. Statistical explanations refer to numerical evidence on the likelihood of factors or processes influencing the result. Hempel (1962) adds that a particularly high probability allows the outcome to be expected with practical certainty in any one case where the specified conditions occur. Addresses questions of the form: "What percentage of similar patients who received this treatment recovered?" Hempel, C. G. (1962). Deductive-nomological vs. statistical explanation. University of Minnesota Press, Minneapolis.

System Characteristic.
System Designer: a person who utilizes and analyzes gathered user requirements and provides expertise on building different technical capabilities into systems. These people generally interact with different roles in technical teams, including software architects and system developers. Chari, S., Seneviratne, O., Gruen, D. M., Foreman, M. A., Das, A. K., & McGuinness, D. L. (2020). Explanation Ontology: A Model of Explanations for User-Centered AI. In Pan, J. Z., et al. (Eds.), The Semantic Web – ISWC 2020. Lecture Notes in Computer Science, vol. 12507. Springer, Cham. https://doi.org/10.1007/978-3-030-62466-8_15

System Developer: a person who creates computer software. The term computer programmer can refer to a specialist in one area of computers or to a generalist who writes computer programs. https://en.wikipedia.org/wiki/Programmer

System Recommendation: an output of an AI system that stitches together the system's outputs from tasks and/or includes external knowledge used by the task(s). Contributor: Shruthi Chari.

System Trace: the sequence of underlying system steps and rules that led to a system recommendation. Swartout, W. R., & Moore, J. D. (1993). Explanation in second generation expert systems. In Second Generation Expert Systems (pp. 543-585). Springer, Berlin, Heidelberg.

Technological Context: any relevant information that characterizes the devices that are used to interact with or operate the AI system (Bauer & Novotny, 2017).

Trace Based Explanation: provides the underlying sequence of steps used by the system to arrive at a specific result, revealing "the line of reasoning per case" and addressing "the question of why and how the application did something" (Lim et al., 2009). Addresses a question of the form: "What steps were taken by the system to generate this recommendation?" Lim, B. Y., Dey, A. K., & Avrahami, D. (2009). Why and why not explanations improve the intelligibility of context-aware intelligent systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2119–2128). ACM.

User Characteristic.

User Context: any relevant information that can be used to characterize the user which was not used as input to some AI task.

corporate employee role: the role of an individual who is employed by some organization and does some tasks with them.

Knowledge.

Object Record: a collection of fields that has been gathered for a particular object, a selection of which is fed to a computer program. Contributor: Shruthi Chari.

Scientific Explanation: references the results of rigorous scientific methods, such as observations and measurements, to explain something we see in the natural world (Moore, 2000). Adapting from Miller (2019), we add that scientific explanations usually contain different components of interacting knowledge, including theories or mechanisms such as physiological ones, which are sets of principles that form building blocks for models; models, which represent the relationships between entities and their attributes informed by taxonomies and other classification schemes; and data (e.g., measurements, observations). Addresses a question of the form: "What is the biological basis for this recommendation?" Moore, J. (2000). Varieties of scientific explanation. The Behavior Analyst, 23(2), 173–190.

User: the person that receives and ultimately uses the good, service, or technology. https://www.law.cornell.edu/uscode/text/22/8541

Example annotations (a contrastive explanation scenario involving Patient 100):
Deductive task that generated a recommendation used as a basis for a contrastive explanation.
Patient has hyperglycemia.
Guidelines recommend Drug B for this patient.
"Why Drug A over Drug B?"
Addresses "What other factors about the patient does the system know of?"
Addresses "What if the major problem was a fasting plasma glucose?"
Drug B is a preferred drug.
Patient 100.
What other factors about the patient does the system know of?
What if the major problem was a fasting plasma glucose?
Drug A is not sufficient for this patient.
Guidelines recommend Drug B.

Question.
Parameter: a characteristic of a question.
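The example annotations above sketch a contrastive scenario: the fact that Patient 100 has hyperglycemia, a foil in which guidelines recommend Drug B, and the question "Why Drug A over Drug B?". The snippet below shows one way such an instance could be assembled as RDF; every IRI, class name, and property name in it is a hypothetical placeholder chosen for illustration, not the ontology's actual vocabulary.

```python
# Hypothetical sketch of the Patient 100 contrastive scenario as RDF.
# The namespace, class names, and property names below are placeholders;
# consult the EO documentation for the real IRIs before reusing this pattern.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/eo-demo#")  # placeholder namespace

g = Graph()
g.bind("ex", EX)

g.add((EX.patient100Question, RDF.type, EX.Question))
g.add((EX.patient100Question, RDFS.label, Literal("Why Drug A over Drug B?")))

# The fact that did occur and the foil that did not, per the contrastive pattern.
g.add((EX.hyperglycemiaFact, RDF.type, EX.Fact))
g.add((EX.hyperglycemiaFact, RDFS.label, Literal("Patient has hyperglycemia")))
g.add((EX.drugBFoil, RDF.type, EX.Foil))
g.add((EX.drugBFoil, RDFS.label, Literal("Guidelines recommend Drug B for this patient")))

g.add((EX.explanation1, RDF.type, EX.ContrastiveExplanation))
g.add((EX.explanation1, EX.addresses, EX.patient100Question))  # placeholder property
g.add((EX.explanation1, EX.isBasedOn, EX.hyperglycemiaFact))
g.add((EX.explanation1, EX.isBasedOn, EX.drugBFoil))

print(g.serialize(format="turtle"))
```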