Join Nahid Farady, PhD (Principal Tech Lead, AI Security & Privacy · Microsoft) for a free live session.

Most engineering managers and GRC leads inherit AI systems in production with no governance program behind them. This sprint gives you the frameworks, decision tools, and documentation to build one — from policy design to incident response — in four weeks. Every deliverable is structured for legal review, board presentation, and immediate use in your organization.
A complete inventory of every AI system your org runs — provenance, data sources, dependencies, intended use, affected populations — classified by EU AI Act risk tier and NIST AI RMF impact category. The foundation every other governance activity builds on.
A documented go/no-go framework for AI deployments — risk thresholds set, sign-offs mapped, and a decision record template your legal team can stand behind for every release.
Vendor due diligence framework separating what your AI vendors own versus what deployer obligations stay with you — with contractual requirements and monitoring obligations your procurement team can operationalize.
AI incident response runbook covering hallucinations, bias events, data leakage, and model drift — escalation paths, communication templates, post-mortem format — plus a board-ready program summary with implementation roadmap.
This sprint is designed for:
Engineering managers who are being asked by legal, compliance, or their board to show that a governance program exists — and who don't have one yet.
GRC and compliance leads who own enterprise risk and compliance programs and need to extend their existing frameworks to cover AI-specific risks, regulations, and operational controls.
Product and engineering leaders who are moving fast on AI initiatives, know the governance layer is missing, and want to build it before a compliance audit or incident forces the conversation.
4 weeks · 3 sessions per week
Leave with real work to show, not just a certificate.
A complete map of your org's AI systems — model provenance, data sources, dependencies, intended use, and affected populations — classified by EU AI Act risk tier and NIST AI RMF impact category. Reusable every time a new AI system enters your environment.
Acceptable use policy, model oversight framework, and human review requirements aligned with EU AI Act deployer obligations and NIST AI RMF. Includes a ship/hold decision template with documented risk thresholds and sign-off requirements — structured for legal review and board presentation.
Board-ready governance program combining an AI incident response runbook, third-party vendor risk register, and complete program summary with implementation roadmap and review cadence. Aligned to NIST AI RMF, EU AI Act, and ISO 42001 — immediately deployable in your organization.

Nahid Farady, PhD
Principal Tech Lead, AI Security & Privacy · Microsoft
⭐ 4.9 / 5
Nahid leads AI security, privacy, and responsible AI engineering for Microsoft Copilot, with prior roles at Google Cloud and Capital One CyberML. She holds a PhD from Virginia Tech and brings 10+ years of applied experience in cybersecurity, threat modeling, and ML deployment at scale. She also teaches AI and security as adjunct faculty at UC Berkeley.
⭐⭐⭐⭐⭐
"The ship/hold decision framework from Week 2 is now mandatory before any AI feature goes to production. It's saved us from two launches we would have regretted."
Jordan Smith
Technical PM · Notion
⭐⭐⭐⭐⭐
"Week 4's incident simulation was the most valuable thing I did all year. We ran a real bias event scenario and found three gaps in our escalation process before it happened for real."
Aisha Patel
Security Architect · Cloudflare
⭐⭐⭐⭐⭐
"I came in with zero governance documentation. I left with a complete program package I presented to our board six weeks later."
Marcus Lee
Engineering Manager · Rippling
All sessions are instructor-led and live. Recordings available within 24 hours.
SUNDAY
9:00 AM PDT
Live Class · Frameworks, policy design, and incident response — applied to real AI systems.
WEDNESDAY
6:00 PM PDT
Lab Session · Hands-on drafting and simulation exercises on your own systems.
THURSDAY
6:00 PM PDT
Build & Ship · Finalize weekly deliverables with peer review and instructor feedback.
with Nahid Farady, PhD · Principal Tech Lead, AI Security & Privacy, Microsoft
What you'll walk away with:
🎁 Bonus for attendees:
Get "The AI Governance Quick-Start Pack"
Includes a governance gap scoring template, ship/hold decision checklist, and an EU AI Act deployer obligations summary — ready to use on your systems this week.
Claim your free seat
Skills you can deploy on Monday morning.