Designing Compliant and Explainable AI for Cloud-Native Public Safety Frameworks
Keywords:
explainable AI, public safety systems, cloud-native architecture, decision support systems, compliance engineering, AI governance
Abstract
Public safety organizations increasingly rely on artificial intelligence to support emergency response, risk assessment, and operational decision making. While cloud-native platforms offer scalability and resilience, integrating AI into public safety systems raises significant challenges related to compliance, transparency, and trust. This paper presents a design framework for compliant and explainable AI within cloud-native public safety architectures. The proposed approach combines explainability mechanisms, governance controls, and distributed system patterns to support accountable AI-driven decision support under operational stress. Empirical evaluation demonstrates that explainable and compliant AI services can be deployed at scale without degrading system performance or response time.
License
Copyright (c) 2021 The Artificial Intelligence Journal

This work is licensed under a Creative Commons Attribution 4.0 International License.