Multi-Cloud Governance Using Reinforcement Learning
Abstract
Enterprises increasingly rely on multi-cloud architectures to achieve resilience, regulatory flexibility, and vendor risk mitigation. However, governance mechanisms have failed to evolve at the same pace, remaining largely static, rule-based, and reactive. These approaches are ill-suited to environments characterized by continuous infrastructure change, cross-cloud dependencies, and competing optimization objectives spanning cost, security, reliability, and compliance. This paper introduces a Reinforcement Learning-Driven Multi-Cloud Governance Framework (RL-MCGF) that reframes governance as a continuous, policy-bounded control problem rather than a static compliance exercise. The proposed framework embeds reinforcement learning within an enterprise governance control plane, enabling adaptive decision-making under uncertainty while preserving regulatory constraints, auditability, and human oversight. Unlike prior work that applies machine learning to isolated operational optimizations, this framework integrates governance intent, risk signals, and human-in-the-loop mechanisms directly into the learning loop. We present a layered architecture, a lifecycle design, and an evaluation grounded in operational governance metrics, demonstrating reductions in configuration drift, governance resolution time, and manual toil. This work advances the state of multi-cloud governance by providing a systemic, deployable approach to adaptive control that aligns with real-world enterprise requirements.