In aviation, where safety and reliability are non-negotiable, building trust in AI systems requires more than accuracy: it demands explainability. This session explores how explainability frameworks can help learning systems gain acceptance and regulatory approval in aviation by making AI decisions transparent, interpretable, and actionable. From techniques such as LIME and SHAP that provide post-hoc interpretability to structured frameworks like Explainability Fact Sheets and IBM’s AI Explainability 360, the focus will be on ensuring that pilots, operators, and regulators can understand and trust the reasoning behind AI outputs. Real-world applications such as runway friction prediction, airspace conflict resolution, and pilot candidate assessment will highlight the role of explainability in enhancing safety and human-AI collaboration. The discussion will also address challenges such as balancing model accuracy with interpretability and ensuring consistency across explanation methods, while laying out a roadmap for embedding explainability from design to deployment. By establishing trust through transparency, explainability frameworks can pave the way for AI-powered learning systems to truly take off in aviation.
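To make the post-hoc interpretability idea concrete, the sketch below shows how SHAP values might be used to attribute a single prediction to its input features for a runway-friction-style regression task. This is a minimal illustration, not material from the session: the feature names, the synthetic data, and the scikit-learn random-forest model are all assumptions chosen only to demonstrate the SHAP workflow.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical runway-friction dataset (illustrative only, not real aviation data).
rng = np.random.default_rng(0)
feature_names = ["temperature_c", "precip_mm", "rubber_deposit_idx", "surface_age_days"]
X = rng.normal(size=(500, 4))
# Synthetic friction coefficient driven mainly by precipitation and rubber deposits.
y = 0.8 - 0.05 * X[:, 1] - 0.03 * X[:, 2] + 0.01 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# Per-feature contributions for one prediction: positive values push the predicted
# friction up, negative values push it down, relative to the average prediction.
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: {value:+.4f}")
print("base value (expected prediction):", explainer.expected_value)
```

An explanation in this form, showing which inputs pushed a prediction up or down, is the kind of artifact that a pilot, operator, or regulator could review alongside the model's output, rather than being asked to trust an opaque number.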