Explainability vs Performance Trade-offs in High-Stakes AI Systems
DOI: https://doi.org/10.5281/ZENODO.18087930

Keywords: Explainable AI, high-stakes decision systems, interpretability, trust, ethical AI, performance trade-offs

Abstract
High-stakes artificial intelligence systems increasingly influence decisions with significant ethical, financial, and societal consequences. While complex models often deliver superior predictive performance, their opacity raises concerns about trust, accountability, and responsible use. This study examines the trade-offs between explainability and performance in high-stakes AI systems through empirical evaluation and architectural analysis. We investigate how different model classes, explanation mechanisms, and governance practices affect decision quality and operational reliability. The findings demonstrate that explainability does not uniformly reduce performance and, in many contexts, improves decision effectiveness by supporting calibrated human oversight. The results provide practical guidance for designing AI systems that balance predictive strength with interpretability and accountability.
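To illustrate the kind of explainability-versus-performance comparison the abstract describes, the sketch below contrasts an intrinsically interpretable model (logistic regression) with a less transparent ensemble (gradient boosting) on a synthetic tabular task. The dataset, model choices, and AUC metric are illustrative assumptions, not the study's actual experimental setup.

```python
# Illustrative sketch only: the paper's models, datasets, and metrics are not
# specified here. A synthetic task stands in for a high-stakes tabular decision
# problem (e.g., credit scoring).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic_regression (interpretable)": LogisticRegression(max_iter=1000),
    "gradient_boosting (opaque)": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")

# For the interpretable model, coefficients double as a global explanation;
# the ensemble would require a post-hoc method (e.g., permutation importance).
coefs = models["logistic_regression (interpretable)"].coef_[0]
top = sorted(enumerate(coefs), key=lambda t: -abs(t[1]))[:3]
print("Top logistic-regression coefficients (feature index, weight):", top)
```

In this kind of setup, the performance gap between the two model classes can be weighed directly against the cost of relying on post-hoc explanations rather than inherently transparent coefficients.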
License
Copyright (c) 2022 The Artificial Intelligence Journal

This work is licensed under a Creative Commons Attribution 4.0 International License.