AI Model Risk Management 101: Applying SR 11-7, NIST, and Governance Frameworks to AI Systems

$129.00

📅 Date: 4/16/2026  ⏱️ Duration: 60 minutes

📊 Expertise Level: Intermediate

Description

As financial institutions and fintechs rapidly adopt artificial intelligence (AI) and automated decisioning tools, regulators are increasingly focused on how these models are governed, validated, and controlled. This course provides a practical introduction to AI Model Risk Management, grounded in regulatory expectations under SR 11-7, emerging AI governance standards, and NIST risk management frameworks.

Participants will learn how traditional model risk management concepts apply to AI and machine learning systems, including model development, validation, documentation, and ongoing monitoring. The session also explores how AI-specific risks—such as opacity, bias, drift, and third-party model dependencies—create new challenges for risk, compliance, and audit teams.

By the end of this course, attendees will understand how to apply model risk principles to AI use cases in a way that aligns with regulatory expectations, supports responsible innovation, and strengthens enterprise risk governance.

Who This Is Designed For

Risk managers, compliance officers, model risk teams, internal auditors, data and analytics professionals, fintech leaders, and governance professionals seeking a practical introduction to AI model risk, SR 11-7 expectations, and AI oversight frameworks.

Agenda

• Introduction to AI Model Risk
Understanding how AI and machine learning introduce new forms of model risk across financial institutions.

• SR 11-7 and Model Risk Management Foundations
Applying traditional model risk management principles—governance, validation, and controls—to AI systems.

• AI Model Lifecycle & Governance
Reviewing key lifecycle stages: design, development, implementation, monitoring, and retirement.

• NIST AI Risk Management Framework
Overview of NIST’s AI RMF and how it supports safe, explainable, and accountable AI use.

• Model Validation & Controls
Evaluating accuracy, bias, explainability, performance drift, and change management.

• Ongoing Monitoring & Governance
Establishing documentation, testing cadence, issue escalation, and regulatory readiness.

By the End of This Course, You Will Know How To:

• Identify and define AI model risk within financial and fintech environments
• Apply SR 11-7 principles to AI and machine learning use cases
• Understand the role of NIST in AI governance and risk management
• Implement practical validation and documentation controls
• Recognize regulatory expectations related to AI oversight
• Build a scalable foundation for AI and model risk governance

Note: This program meets the eligibility criteria for continuing education under ACFCS and qualifies for 1 credit.

Instructor: Joseph Cuanan