Description
As financial institutions and fintechs adopt artificial intelligence (AI) and machine learning (ML) tools, the need for strong model governance has never been greater. This course introduces participants to the fundamentals of LOM (Lifecycle and Operational Model) Risk Management—an essential framework for managing model risk across a model's design, development, implementation, and ongoing-monitoring stages. You'll learn how to identify model risks, document key controls, and apply validation practices that meet regulatory expectations. The session also covers emerging AI governance principles, transparency requirements, and ethical considerations that institutions must address as AI becomes more embedded in financial decision-making.
By the end, participants will have a clear understanding of how to build a sustainable, risk-based approach to AI and model oversight that supports innovation without compromising compliance.
Who This Is Designed For: Risk managers, compliance officers, data scientists, auditors, and fintech professionals who are new to model governance and want a structured introduction to model risk management frameworks and validation processes.
Agenda
• Understanding Model Risk: What it is, why it matters, and where it appears in AI/ML systems.
• Lifecycle and Operational Model (LOM) Management: Explore the key stages: design, development, validation, deployment, and monitoring.
• Governance Frameworks: Overview of regulatory expectations, including the Federal Reserve's SR 11-7, the OCC's parallel guidance (Bulletin 2011-12), and emerging AI risk standards.
• Validation and Testing: Learn how to evaluate accuracy, stability, and fairness in AI and statistical models.
• Ethics and Transparency: Managing bias, explainability, and accountability in model-driven decisions.
• Ongoing Monitoring: Establishing feedback loops, documentation, and escalation procedures.
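To make the validation and monitoring bullets above concrete, here is a minimal sketch of one widely used stability check, the Population Stability Index (PSI), which compares a model input's or score's current distribution against its development-time baseline. The thresholds shown (0.10 and 0.25) are common industry rules of thumb, not regulatory requirements, and the simulated data is purely illustrative.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a development-time baseline sample and a current sample.

    Bins are derived from baseline quantiles; values outside the baseline
    range are clipped into the outer bins.
    """
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    base_counts, _ = np.histogram(np.clip(baseline, edges[0], edges[-1]), bins=edges)
    curr_counts, _ = np.histogram(np.clip(current, edges[0], edges[-1]), bins=edges)
    base_pct = np.clip(base_counts / len(baseline), 1e-6, None)  # floor avoids log(0)
    curr_pct = np.clip(curr_counts / len(current), 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores at model development
stable = rng.normal(0.0, 1.0, 10_000)    # production sample, no drift
shifted = rng.normal(0.5, 1.0, 10_000)   # production sample, drifted mean

psi_stable = population_stability_index(baseline, stable)    # small: no action
psi_shifted = population_stability_index(baseline, shifted)  # large: escalate
print(psi_stable, psi_shifted)
```

In an ongoing-monitoring program, a check like this would run on a schedule, with results documented and breaches of the agreed threshold routed through the escalation procedure mentioned above.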
By the end of this course, you will know how to:
• Define and identify model risk within AI and data-driven systems.
• Apply lifecycle management principles (LOM) to ensure effective oversight.
• Implement practical validation and documentation controls.
• Recognize emerging AI governance expectations from regulators.
• Build a foundation for a scalable model risk management framework.
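One practical documentation control from the outcomes above is a structured model inventory. As an illustrative sketch only (the fields and the annual revalidation cadence are assumptions, not a regulatory template), an inventory entry might look like:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    """Illustrative model-inventory entry; fields are examples only."""
    model_id: str
    owner: str
    purpose: str
    risk_tier: str                 # e.g. "high", "medium", "low"
    last_validated: date
    validation_findings: list[str] = field(default_factory=list)

    def revalidation_due(self, today: date, cadence_days: int = 365) -> bool:
        # Assumes an annual revalidation cadence by default.
        return (today - self.last_validated).days >= cadence_days

record = ModelRecord(
    model_id="CR-SCORE-01",
    owner="Credit Risk Analytics",
    purpose="Retail credit scoring",
    risk_tier="high",
    last_validated=date(2023, 6, 1),
)
print(record.revalidation_due(date(2024, 9, 1)))  # overdue for revalidation
```

Even a simple record like this supports the oversight goals above: it names an accountable owner, ties each model to a risk tier, and makes overdue validations easy to surface.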

