Free Live Kickoff

    Map the AI Risk You're Already Carrying — Live

    Join Nahid Farady, PhD (Principal Tech Lead, AI Security & Privacy · Microsoft) for a free live session.

    📅 April 19, 2026
    ⏰ 5:30 PM PDT
    ⏱ 60 minutes
    🆓 Free to Join
    Nahid Farady, PhD

    Principal Tech Lead, AI Security & Privacy · Microsoft

    ⭐ 4.9 / 5

    Your AI systems have undocumented risk. Here's how to find it.

    Most teams shipping LLM apps, RAG pipelines, and AI agents have no structured risk assessment behind them — until legal, a regulator, or an incident forces the conversation. This sprint gives security architects, GRC professionals, and engineering managers a repeatable process to identify, score, and document AI risk against NIST AI RMF and EU AI Act requirements. Every deliverable is audit-ready and reusable on the next system you ship.

    4 Weeks · Live instruction
    3 Projects · Real deliverables
    30 Seats · Per cohort, capped

    What You'll Learn

    📋

    GenAI Risk Register

    A scored, prioritized register for LLM apps, RAG pipelines, and agentic systems — mapping failure modes (hallucination, prompt injection, data leakage, model drift, tool call risk, supply chain exposure) to likelihood, impact, and ownership, including cascading risk chains traditional registers miss.

    📊

    NIST AI RMF Assessment Report

    Complete Map, Measure, and Manage assessment — risks scored across six impact categories with evidence, plus a risk treatment plan and remediation roadmap your engineering team can execute and your CISO can present. Audit-ready format.

    ⚖️

    EU AI Act + ISO 42001 Compliance Checklist

    Risk tier classification with documented rationale — including edge cases at the high-risk boundary — plus deployer obligation mapping, ISO 42001 gap analysis, and a 90-day closure plan for legal sign-off.

    📁

    Full Risk and Compliance Package

    Risk register, NIST assessment, and compliance checklist in a single executive-readable package — with risk acceptance and escalation recommendations that hold up under CISO, legal, or auditor scrutiny.

    Who Is This For?

    This sprint is designed for:

    🏗️

    Security Architects Adding AI to Their Risk Surface

    Who are responsible for securing AI systems in production but lack a structured methodology to assess AI-specific risks beyond traditional threat modeling.

    📎

    GRC Professionals Extending Frameworks to Cover AI

    Who own enterprise risk and compliance programs and are being asked to extend existing controls to cover LLMs, RAG pipelines, and AI agents — without a clear starting point.

    🚀

    Engineering Managers Shipping AI Without Risk Documentation

    Who are moving fast on AI features and know the risk layer is undocumented — before a compliance audit, a vendor review, or an incident makes it urgent.

    Sprint Outline

    4 weeks · 3 sessions per week

    Projects You'll Ship

    Leave with real work to show, not just a certificate.

    01

    GenAI Risk Register

    A structured risk register for a real or provided AI system — LLM app, RAG pipeline, or agent — mapping GenAI-specific failure modes to calibrated likelihood, impact, cascading dependencies, and ownership. Scored and prioritized. Reusable as a living document and intake template for every AI system your organization ships.
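The shape of a register like this can be sketched as a simple scored data structure. This is an illustrative assumption only: the field names and the 1–5 likelihood × impact scoring below are a generic convention, not the sprint's actual template.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row of a GenAI risk register (illustrative fields, not the sprint's template)."""
    failure_mode: str                 # e.g. "prompt injection", "data leakage"
    likelihood: int                   # 1 (rare) .. 5 (near-certain) -- assumed scale
    impact: int                       # 1 (negligible) .. 5 (severe) -- assumed scale
    owner: str                        # accountable team or person
    cascades_to: list = field(default_factory=list)  # downstream risks this one can trigger

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring; real registers often calibrate further.
        return self.likelihood * self.impact

register = [
    RiskEntry("prompt injection via retrieved documents", 4, 4, "platform-sec",
              cascades_to=["tool call risk", "data leakage"]),
    RiskEntry("hallucinated citations in customer answers", 3, 3, "product-eng"),
    RiskEntry("model drift after base-model upgrade", 2, 4, "ml-ops"),
]

# Prioritize: highest score first, so remediation effort follows exposure.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.score:>2}  {entry.failure_mode}  ->  {entry.owner}")
```

Sorting by score is what turns a flat list of worries into a prioritized register; the `cascades_to` field is one way to capture the cascading risk chains traditional registers miss.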

    02

    NIST AI RMF Assessment Report

    A complete Map, Measure, and Manage function assessment — risk categories scored across NIST's six impact dimensions, gaps documented with evidence, and a remediation roadmap your engineering team can execute and your CISO can present. Audit-ready format you can hand directly to an external assessor.

    03

    Full AI Risk and Compliance Package

    A unified risk documentation package integrating the risk register, NIST assessment, and EU AI Act + ISO 42001 compliance checklist — with an executive summary, risk acceptance and escalation recommendations, and a 90-day remediation roadmap. Structured for legal review, vendor audits, and regulatory response.

    Your Instructor

    Nahid Farady, PhD

    Principal Tech Lead, AI Security & Privacy · Microsoft

    ⭐ 4.9 / 5

    Nahid leads AI security, privacy, and responsible AI engineering at Microsoft Copilot, with prior roles at Google Cloud and Capital One CyberML. She holds a PhD from Virginia Tech and brings 10+ years of applied experience in cybersecurity, threat modeling, and ML deployment at scale. She also teaches AI and security as adjunct faculty at UC Berkeley.

    What Students Say

    ⭐⭐⭐⭐⭐

    "The risk register from Week 1 is now our standard intake form for every new AI feature. We catch failure modes before engineering starts, not after."

    Alex Johnson

    Security Architect · Lattice

    ⭐⭐⭐⭐⭐

    "Running the NIST AI RMF assessment live in Week 2 surfaced three undocumented risks in our RAG pipeline that had been in production for six months. Worth the entire sprint."

    Emma Lee

    GRC Lead · Linear

    ⭐⭐⭐⭐⭐

    "I brought our AI risk package to a vendor audit two weeks after the sprint ended. First time we had documentation that actually answered their questions."

    Sam Patel

    Engineering Manager · Cloudflare

    Sprint Schedule

    All sessions are instructor-led and live. Recordings available within 24 hours.

    SUNDAY

    9:00 AM PDT

    Live Class

    Risk taxonomy, NIST AI RMF, EU AI Act compliance — applied to real AI systems.

    WEDNESDAY

    6:00 PM PDT

    Lab Session

    Hands-on assessment and documentation work on your own or provided systems.

    THURSDAY

    6:00 PM PDT

    Build & Ship

    Finalize weekly deliverables with peer review and instructor scoring calibration.

    LIVE KICKOFF

    Map the AI Risk You're Already Carrying — Live

    with Nahid Farady, PhD · Principal Tech Lead, AI Security & Privacy, Microsoft

    📅 April 19, 2026
    ⏰ 5:30 PM PDT
    ⏱ 60 minutes
    💻 Live on Zoom

    What you'll walk away with:

    Map the top three GenAI failure modes in one AI system you own using a live risk taxonomy exercise
    Score one risk against the NIST AI RMF impact categories using a provided scoring template
    Classify your AI system under EU AI Act risk tiers and identify whether high-risk obligations apply
    Get a detailed preview of the 4-week sprint

    🎁 Bonus for attendees:

    Get "The AI Risk Assessment Starter Pack"

    Includes a GenAI risk register template, NIST AI RMF scoring worksheet, and EU AI Act risk tier classification guide — ready to run on your systems this week.

    Claim your free seat

    Skills you can deploy on Monday morning.