AI Ethics, Risk Analysis, and Governance equips participants with the knowledge to identify, assess, and manage the ethical and operational risks associated with AI systems. The course covers foundational AI concepts, regulatory considerations, and how AI is used to support or automate decisions within organizations. Participants explore core ethical principles, analyze real-world cases, and apply structured approaches to evaluating risk across areas such as compliance, security, and reliability. The course also introduces governance practices, controls, and action planning to support responsible AI implementation. By the end, participants will be able to assess AI risks and apply practical strategies for ethical, compliant use.
Learning Objectives
- Understand key AI concepts, lifecycle stages, and relevant regulations.
- Apply ethical principles to evaluate AI use cases and decisions.
- Categorize AI risk across legal, operational, and reputational areas.
- Use structured approaches to analyze and manage AI risk.
- Apply governance practices and controls for responsible AI.
Course Agenda
AI Basics, Org Context, and Regulation
- AI Vocabulary
- Typical Lifecycle
- Monitoring
- Common Mental Models
- Clarity on Right Decisions
- Automated Human Decisions
- Key Laws, Regulations, and Standards
- High-Risk vs. Low-Risk Systems
Principles, Dilemmas, Case Work
- Core Principles
- Real Case Examples
- Ethical Dilemma Discussions
Risk Categories and Analysis Models
- Risk Categories
- Repeatable Risk Analysis Method
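To make the idea of a repeatable risk analysis method concrete, the sketch below scores a use case on likelihood and impact for each risk category from the agenda (legal, operational, reputational) and maps the product to a review tier. The 1–5 scales, thresholds, and tier names are illustrative assumptions, not the course's actual method.

```python
# Hypothetical risk-scoring step: rate each AI use case on likelihood
# and impact per risk category; the product determines a review tier.
# Scales, thresholds, and tier names are illustrative, not prescriptive.

RISK_CATEGORIES = ["legal", "operational", "reputational"]

def risk_score(likelihood: int, impact: int) -> int:
    """Score on a 1-5 x 1-5 matrix; both inputs must be in 1..5."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    return likelihood * impact

def review_tier(score: int) -> str:
    """Map a score to a hypothetical review tier."""
    if score >= 15:
        return "high"    # e.g. mandatory governance-board approval
    if score >= 8:
        return "medium"  # e.g. documented mitigation plan required
    return "low"         # e.g. standard intake review

# Example: ratings for a single use case, (likelihood, impact) per category.
ratings = {"legal": (4, 5), "operational": (2, 3), "reputational": (3, 2)}
tiers = {cat: review_tier(risk_score(l, i)) for cat, (l, i) in ratings.items()}
print(tiers)  # {'legal': 'high', 'operational': 'low', 'reputational': 'low'}
```

Because the scales and thresholds are fixed up front, two reviewers scoring the same use case get the same tier, which is what makes the method repeatable.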
Governance, Controls, and Action Planning
- AI Inventory, Intake Forms, Approval
- Roles and Responsibilities
Key Controls
- Data Impact
- Model Documentation
- Transparency Artifacts
- Human-in-the-Loop Review
- Escalation Paths, Monitoring