Open Access Journal Article

Explainable AI Models for Interpretable Machine Learning

by Emma White *
* Author to whom correspondence should be addressed.
TASC 2021, 3(2), 24; https://doi.org/10.69610/j.tasc.20211121
Received: 17 September 2021 / Accepted: 27 October 2021 / Published Online: 21 November 2021

Abstract

The field of artificial intelligence has witnessed significant advancements with the development of complex machine learning models. However, these models often operate as "black boxes," making their decision-making processes opaque to human understanding. This lack of transparency poses challenges in various domains, particularly where human trust and accountability are crucial. This paper explores the concept of Explainable AI (XAI) and its application in creating interpretable machine learning models. We delve into the definitions and methodologies of XAI, emphasizing their importance in improving the reliability and acceptance of AI systems. Through a comprehensive analysis of current research, we identify key techniques and frameworks that facilitate the interpretability of machine learning models. We further discuss the limitations of XAI and propose potential solutions to enhance its effectiveness. The paper concludes by outlining future directions for research in this field, aiming to bridge the gap between AI and human comprehension.
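To make the idea of post-hoc, model-agnostic interpretability concrete, the following minimal sketch trains an opaque model and then fits a shallow decision tree as a global surrogate that approximates its predictions. The dataset, model choice, surrogate depth, and fidelity metric are illustrative assumptions for this sketch, not the specific method studied in the paper.

# Minimal sketch of a post-hoc, model-agnostic explanation via a global
# surrogate: an interpretable decision tree is trained to mimic the
# predictions of a black-box model. Dataset, model, and tree depth are
# illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black-box" model whose decisions we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Shallow, human-readable tree trained on the black box's *predictions*,
# so its rules approximate the black box's decision logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on held-out data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate fidelity to black-box model: {fidelity:.2%}")

# The surrogate's rules serve as a global, approximate explanation.
print(export_text(surrogate, feature_names=list(X.columns)))

The surrogate's fidelity score indicates how faithfully the interpretable model reproduces the black box's behavior; local techniques such as LIME or SHAP follow the same spirit but explain individual predictions rather than the model as a whole.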


Copyright: © 2021 by White. This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Share and Cite

ACS Style
White, E. Explainable AI Models for Interpretable Machine Learning. Transactions on Applied Soft Computing 2021, 3 (2), 24. https://doi.org/10.69610/j.tasc.20211121
AMA Style
White E. Explainable AI Models for Interpretable Machine Learning. Transactions on Applied Soft Computing. 2021;3(2):24. https://doi.org/10.69610/j.tasc.20211121
Chicago/Turabian Style
White, Emma. 2021. "Explainable AI Models for Interpretable Machine Learning." Transactions on Applied Soft Computing 3, no. 2: 24. https://doi.org/10.69610/j.tasc.20211121
APA Style
White, E. (2021). Explainable AI Models for Interpretable Machine Learning. Transactions on Applied Soft Computing, 3(2), 24. https://doi.org/10.69610/j.tasc.20211121
