Join Nahid Farady, PhD (Principal Tech Lead, AI Security & Privacy · Microsoft) for a free live session.

Nahid Farady, PhD
Principal Tech Lead, AI Security & Privacy · Microsoft
⭐ 4.9 / 5
Most teams shipping LLM apps, RAG pipelines, and AI agents have no structured risk assessment behind them — until legal, a regulator, or an incident forces the conversation. This sprint gives security architects, GRC professionals, and engineering managers a repeatable process to identify, score, and document AI risk against NIST AI RMF and EU AI Act requirements. Every deliverable is audit-ready and reusable on the next system you ship.
Week 1: A scored, prioritized risk register for LLM apps, RAG pipelines, and agentic systems — mapping failure modes (hallucination, prompt injection, data leakage, model drift, tool call risk, supply chain exposure) to likelihood, impact, and ownership, including the cascading risk chains traditional registers miss.
Week 2: A complete Map, Measure, and Manage assessment — risks scored across six impact categories with evidence, plus a risk treatment plan and remediation roadmap your engineering team can execute and your CISO can present. Audit-ready format.
Week 3: Risk tier classification with documented rationale — including edge cases at the high-risk boundary — plus deployer obligation mapping, an ISO 42001 gap analysis, and a 90-day closure plan for legal sign-off.
Week 4: Risk register, NIST assessment, and compliance checklist in a single executive-readable package — with risk acceptance and escalation recommendations that hold up under CISO, legal, or auditor scrutiny.
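The Week 3 tier classification works as an ordered triage over a small set of questions. As a minimal sketch (the predicate names, ordering, and function name are illustrative assumptions, not legal guidance or the sprint's actual template):

```python
# Minimal sketch of EU AI Act risk-tier triage. The predicates and their
# ordering are simplified illustrative assumptions, not legal guidance.
def classify_tier(prohibited_practice: bool,
                  annex_iii_use_case: bool,
                  interacts_with_humans: bool) -> str:
    if prohibited_practice:       # e.g. social scoring (Article 5)
        return "unacceptable"
    if annex_iii_use_case:        # e.g. employment screening, credit scoring
        return "high"
    if interacts_with_humans:     # transparency obligations apply
        return "limited"
    return "minimal"
```

A real assessment documents the rationale behind each answer — which is exactly the edge-case analysis at the high-risk boundary that the Week 3 deliverable covers.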
This sprint is designed for:
Security architects who are responsible for securing AI systems in production but lack a structured methodology to assess AI-specific risks beyond traditional threat modeling.
GRC professionals who own enterprise risk and compliance programs and are being asked to extend existing controls to cover LLMs, RAG pipelines, and AI agents — without a clear starting point.
Engineering managers who are moving fast on AI features and know the risk layer is undocumented, and who want that fixed before a compliance audit, a vendor review, or an incident makes it urgent.
4 weeks · 3 sessions per week
Leave with real work to show, not just a certificate.
A structured risk register for a real or provided AI system — LLM app, RAG pipeline, or agent — mapping GenAI-specific failure modes to calibrated likelihood, impact, cascading dependencies, and ownership. Scored and prioritized. Reusable as a living document and intake template for every AI system your organization ships.
A complete Map, Measure, and Manage function assessment — risk categories scored across NIST's six impact dimensions, gaps documented with evidence, and a remediation roadmap your engineering team can execute and your CISO can present. Audit-ready format, directly handable to an external assessor.
A unified risk documentation package integrating the risk register, NIST assessment, and EU AI Act + ISO 42001 compliance checklist — with an executive summary, risk acceptance and escalation recommendations, and a 90-day remediation roadmap. Structured for legal review, vendor audits, and regulatory response.
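As a rough illustration of the register's shape — a scored, prioritized mapping of failure modes to likelihood, impact, cascades, and ownership — here is a minimal sketch. The field names, 1–5 scales, and `prioritize` helper are assumptions for illustration, not the sprint's actual template:

```python
from dataclasses import dataclass, field

# Failure modes named in the sprint's risk register deliverable.
FAILURE_MODES = {
    "hallucination", "prompt_injection", "data_leakage",
    "model_drift", "tool_call_risk", "supply_chain_exposure",
}

@dataclass
class RiskEntry:
    system: str            # e.g. "customer-support RAG pipeline"
    failure_mode: str      # one of FAILURE_MODES
    likelihood: int        # 1 (rare) .. 5 (near-certain) -- assumed scale
    impact: int            # 1 (negligible) .. 5 (severe) -- assumed scale
    owner: str             # accountable team or individual
    cascades_to: list[str] = field(default_factory=list)  # downstream risks

    def __post_init__(self):
        if self.failure_mode not in FAILURE_MODES:
            raise ValueError(f"unknown failure mode: {self.failure_mode}")

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring; real registers often weight
        # impact categories separately.
        return self.likelihood * self.impact

def prioritize(register: list[RiskEntry]) -> list[RiskEntry]:
    """Return entries highest-scoring first."""
    return sorted(register, key=lambda r: r.score, reverse=True)
```

Keeping the register as structured data rather than a spreadsheet tab is what makes it reusable as a living intake template: each new AI system just appends entries.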

Nahid leads AI security, privacy, and responsible AI engineering at Microsoft Copilot, with prior roles at Google Cloud and Capital One CyberML. She holds a PhD from Virginia Tech and brings 10+ years of applied experience in cybersecurity, threat modeling, and ML deployment at scale. She also teaches AI and security as adjunct faculty at UC Berkeley.
⭐⭐⭐⭐⭐
"The risk register from Week 1 is now our standard intake form for every new AI feature. We catch failure modes before engineering starts, not after."
Alex Johnson
Security Architect · Lattice
⭐⭐⭐⭐⭐
"Running the NIST AI RMF assessment live in Week 2 surfaced three undocumented risks in our RAG pipeline that had been in production for six months. Worth the entire sprint."
Emma Lee
GRC Lead · Linear
⭐⭐⭐⭐⭐
"I brought our AI risk package to a vendor audit two weeks after the sprint ended. First time we had documentation that actually answered their questions."
Sam Patel
Engineering Manager · Cloudflare
All sessions are instructor-led and live. Recordings available within 24 hours.
SUNDAY
9:00 AM PDT
Live Class
Risk taxonomy, NIST AI RMF, EU AI Act compliance — applied to real AI systems.
WEDNESDAY
6:00 PM PDT
Lab Session
Hands-on assessment and documentation work on your own or provided systems.
THURSDAY
6:00 PM PDT
Build & Ship
Finalize weekly deliverables with peer review and instructor scoring calibration.
with Nahid Farady, PhD · Principal Tech Lead, AI Security & Privacy, Microsoft
What you'll walk away with:
🎁 Bonus for attendees:
Get "The AI Risk Assessment Starter Pack"
Includes a GenAI risk register template, NIST AI RMF scoring worksheet, and EU AI Act risk tier classification guide — ready to run on your systems this week.
Claim your free seat
Skills you can deploy on Monday morning.