Explainable AI in Credit Card Fraud: Why We Need AI to Speak Human
Author: Puneet Sharma
DOI: https://doi.org/10.5281/zenodo.14607720
Short DOI: https://doi.org/g8xxjp
Country: USA
Abstract: Credit card fraud is an ongoing challenge in the financial industry, with criminals constantly developing sophisticated methods to bypass security systems. As financial institutions embrace artificial intelligence (AI) to combat fraud, the need for explainable AI (XAI) becomes increasingly important. Unlike traditional black-box AI models, explainable AI provides transparency and clarity into decision-making processes, enabling organizations to trust and understand how AI models arrive at their conclusions. This white paper discusses the significance of XAI in credit card fraud detection systems and explores its role in improving trust, accountability, and effectiveness. The paper highlights why traditional AI systems, despite their accuracy, fall short on fairness, interpretability, and accountability, especially in high-stakes domains such as credit card fraud. We explore how XAI can improve not only model performance but also regulatory compliance and customer experience by making AI decisions understandable to non-technical stakeholders. Additionally, the paper examines the challenges of implementing XAI in fraud detection and offers insights on best practices for developing interpretable, accountable, and fair AI solutions.
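As a minimal illustration of the kind of transparency the abstract advocates, the sketch below scores a hypothetical transaction with a simple linear model and reports each feature's additive contribution to the fraud score. The features, weights, and threshold are invented for illustration and are not taken from the paper; real systems would use richer models with attribution methods layered on top.

```python
import math

# Hypothetical feature weights for a simple linear fraud model
# (illustrative values only, not from the paper).
WEIGHTS = {
    "amount_zscore": 1.2,      # how unusual the amount is for this customer
    "foreign_merchant": 0.8,   # 1 if merchant country differs from home country
    "night_time": 0.5,         # 1 if transaction occurs between 00:00 and 05:00
}
BIAS = -2.0  # baseline log-odds when all features are zero

def explain_fraud_score(transaction):
    """Return the fraud probability together with each feature's
    additive contribution to the log-odds -- the 'explanation'."""
    contributions = {f: WEIGHTS[f] * transaction[f] for f in WEIGHTS}
    log_odds = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-log_odds))
    return probability, contributions

# Score one hypothetical transaction and show which features drove it.
tx = {"amount_zscore": 2.5, "foreign_merchant": 1, "night_time": 1}
prob, contribs = explain_fraud_score(tx)
print(f"fraud probability: {prob:.2f}")
for feature, value in sorted(contribs.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {value:+.2f}")
```

Because the score is a sum of per-feature terms, an analyst or a customer can be told exactly which signals raised the alarm, which is the "speaking human" the title refers to; black-box models need post-hoc attribution techniques to produce a comparable breakdown.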
Keywords: Explainable AI, Credit Card Fraud Detection, Artificial Intelligence, Transparency, Interpretability, Trustworthy AI, Financial Fraud, Fairness, Machine Learning
Paper Id: 231965
Published On: 2020-09-03
Published In: Volume 8, Issue 5, September-October 2020
Cite This: Explainable AI in Credit Card Fraud: Why We Need AI to Speak Human - Puneet Sharma - IJIRMPS Volume 8, Issue 5, September-October 2020. DOI 10.5281/zenodo.14607720