A Formal Framework for Explainable Artificial Intelligence in High-Reliability Decision Models

Authors

  • Elias Korhonen, School of Technology and Innovations, University of Vaasa, Finland
  • Sofia Rantala, School of Technology and Innovations, University of Vaasa, Finland
  • Markus Lehtinen, School of Technology and Innovations, University of Vaasa, Finland

DOI:

https://doi.org/10.5281/zenodo.17783076

Keywords:

Explainable AI, semantic modeling, interpretability, high-reliability systems, distributed intelligence, computational transparency

Abstract

High-reliability decision systems require artificial intelligence models that operate with clarity, traceability, and consistency under uncertainty. As machine learning systems increasingly influence operational decisions in domains such as safety engineering, distributed monitoring, and autonomy management, the ability to explain how decisions are produced becomes essential. This paper develops a formal framework for explainable artificial intelligence (XAI) that integrates semantic grounding, structural justification, and computational transparency. The framework is designed to operate across the distributed architectures characteristic of early-2020 deployments, in which cloud and edge components jointly participate in high-stakes decision processes. Under simulated stress conditions involving conflicting evidence, incomplete inputs, and model perturbations, the framework is evaluated for fidelity, stability, and reasoning completeness. The results demonstrate that systematically engineered explainability improves model oversight while maintaining operational reliability in dynamic environments.
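To make the perturbation-based evaluation concrete, the following is a minimal sketch, not the authors' implementation: it assumes a hypothetical linear decision model, a simple feature-attribution rule (`explain`), and Gaussian input noise, and scores explanation stability by cosine similarity between the baseline attribution and attributions recomputed under perturbed inputs.

```python
# Illustrative sketch of perturbation-based explanation-stability scoring.
# The model, attribution rule, and noise scale are assumptions for
# demonstration only; they are not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear decision model: score(x) = w . x
w = rng.normal(size=8)

def explain(x: np.ndarray) -> np.ndarray:
    """Per-feature attribution for the linear model: contribution w_i * x_i."""
    return w * x

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two attribution vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

x = rng.normal(size=8)
baseline = explain(x)

# Stability: how similar explanations remain under small Gaussian input noise.
similarities = [
    cosine(baseline, explain(x + rng.normal(scale=0.05, size=8)))
    for _ in range(100)
]
print(f"mean explanation stability: {np.mean(similarities):.3f}")
```

A score near 1.0 indicates that the explanation is robust to the injected perturbation; in the full framework this kind of check would be repeated for conflicting-evidence and incomplete-input conditions as well.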

Published

2020-05-12