Course Outline

Understanding AI-Specific Risk in Government Settings

  • How AI risk differs from traditional IT and data risk
  • Categories of AI risk: technical, operational, reputational, and ethical
  • Public accountability and risk perception in government

AI Risk Management Frameworks

  • NIST AI Risk Management Framework (AI RMF)
  • ISO/IEC 42001:2023 — AI Management System Standard
  • Other sector-specific and international guidance (e.g., OECD, UNESCO)

Security Threats to AI Systems

  • Adversarial inputs, data poisoning, and model inversion
  • Exposure of sensitive training data
  • Supply chain and third-party model risks

Governance, Auditing, and Controls

  • Human-in-the-loop and accountability mechanisms
  • Auditable AI: documentation, versioning, and interpretability
  • Internal controls, oversight roles, and compliance checkpoints

Risk Assessment and Mitigation Planning

  • Building risk registers for AI use cases
  • Collaborating with procurement, legal, and service design teams
  • Conducting pre-deployment and post-deployment evaluations

Incident Response and Public-Sector Resilience

  • Responding to AI-related incidents and breaches
  • Communicating with stakeholders and the public
  • Embedding AI risk practices in cybersecurity playbooks

Summary and Next Steps

Requirements

  • Experience in IT operations, risk management, cybersecurity, or compliance within government institutions
  • Familiarity with organizational security practices and digital service delivery
  • No prior technical expertise in AI systems required

Audience

  • Government IT teams involved in digital services and systems integration
  • Cybersecurity and risk professionals in public institutions
  • Public sector audit, compliance, and governance personnel

Duration

  • 7 Hours
