The rapid advancement of artificial intelligence (AI) has introduced new tools and methodologies for fraud detection in financial transactions. However, the opacity of many AI models raises concerns about their trustworthiness and about the ability to explain their decisions. This paper explores and summarizes explainable AI (XAI) techniques developed to enhance fraud detection in the financial sector. We discuss the challenges financial institutions face in combating fraud and how XAI can provide insight into the decision-making process of AI algorithms. The paper reviews XAI methods, including inherently interpretable models and local and global interpretability approaches, and examines their effectiveness in improving fraud detection accuracy while maintaining transparency. We further discuss the potential impact of XAI on regulatory compliance and on customer trust in financial systems. The study highlights the importance of selecting XAI techniques appropriate to specific fraud detection requirements and to the dynamic nature of fraudulent activity.
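To make the local interpretability approaches mentioned above concrete, here is a minimal sketch, assuming scikit-learn and the shap library, of applying SHAP (one widely used local explanation method) to a toy fraud classifier. The synthetic transaction data and feature names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: local explanations for a fraud classifier with SHAP.
# Assumes scikit-learn and the shap package; the transaction data and
# feature names below are synthetic and purely illustrative.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic transactions: amount, hour of day, distance from home, txns in last 24h.
X = rng.normal(size=(2000, 4))
# Call a transaction fraudulent when amount and distance are jointly large.
y = ((X[:, 0] + X[:, 2]) > 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# TreeExplainer attributes each prediction to per-feature contributions
# (SHAP values): a local explanation of why a transaction was flagged.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_test[:5])

# Depending on the shap version, sv is a list with one array per class
# or a single (samples, features, classes) array; pick the fraud class.
fraud_contrib = sv[1] if isinstance(sv, list) else sv[:, :, 1]

feature_names = ["amount", "hour", "distance", "txn_count_24h"]
for i, contrib in enumerate(fraud_contrib):
    print(f"transaction {i}:",
          {name: round(float(v), 3) for name, v in zip(feature_names, contrib)})
```

Each printed dictionary attributes one transaction's fraud score to individual features, which is the kind of per-decision rationale the abstract argues supports analyst review and regulatory compliance.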
White, E. (2021). Explainable AI Techniques for Fraud Detection in Financial Transactions. Transactions on Applied Soft Computing, 3(2), 25. https://doi.org/10.69610/j.tasc.20211221