Recent advances in artificial intelligence have been driven by increasingly complex machine learning models. These models, however, often operate as "black boxes," leaving their decision-making processes opaque to human understanding. This lack of transparency poses challenges across many domains, particularly those where human trust and accountability are crucial. This paper explores the concept of Explainable AI (XAI) and its application to building interpretable machine learning models. We examine the definitions and methodologies of XAI, emphasizing their importance for improving the reliability and acceptance of AI systems. Through a comprehensive analysis of current research, we identify key techniques and frameworks that facilitate the interpretability of machine learning models. We further discuss the limitations of XAI and propose potential solutions to enhance its effectiveness. The paper concludes by outlining future research directions aimed at bridging the gap between AI and human comprehension.
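As a concrete illustration of the model-agnostic interpretability techniques the paper surveys, the following minimal Python sketch applies permutation feature importance to a black-box classifier: each feature is shuffled in turn and the resulting drop in held-out accuracy indicates how much the model relies on it. The dataset, model, and scikit-learn usage here are illustrative assumptions, not the paper's own experimental setup.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Illustrative data and model only; the paper does not specify a dataset.
    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    # Train an opaque ("black box") ensemble model.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Permute each feature in turn and measure the drop in held-out accuracy;
    # large drops mark features the model depends on, giving a global,
    # model-agnostic view of its behavior.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)

    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{data.feature_names[i]}: "
              f"{result.importances_mean[i]:.3f} "
              f"+/- {result.importances_std[i]:.3f}")

Local explanation methods such as LIME and SHAP, discussed in the paper, complement this global view by attributing individual predictions to features.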
White, E. Explainable AI Models for Interpretable Machine Learning. Transactions on Applied Soft Computing 2021, 3, 24. https://doi.org/10.69610/j.tasc.20211121
AMA Style
White E. Explainable AI Models for Interpretable Machine Learning. Transactions on Applied Soft Computing. 2021;3(2):24. https://doi.org/10.69610/j.tasc.20211121
Chicago/Turabian Style
White, Emma. 2021. "Explainable AI Models for Interpretable Machine Learning." Transactions on Applied Soft Computing 3, no. 2: 24. https://doi.org/10.69610/j.tasc.20211121
APA Style
White, E. (2021). Explainable AI Models for Interpretable Machine Learning. Transactions on Applied Soft Computing, 3(2), 24. https://doi.org/10.69610/j.tasc.20211121