Join Nahid Farady, PhD (Principal Tech Lead, AI Security & Privacy · Microsoft) for a free live session.

Nahid Farady, PhD
Principal Tech Lead, AI Security & Privacy · Microsoft
⭐ 4.9 / 5
ML engineers and data scientists ship AI systems that make real decisions about real people — and most have never run a formal fairness or explainability audit on what they've built. This sprint gives you the tools, frameworks, and documentation to run a responsible AI audit on a live or provided system — bias detection with AI Fairness 360, explainability with SHAP and LIME, and an accountability report your legal and compliance team can act on.
Scored fairness audit using AI Fairness 360 — metric selected against IEEE P7003 guidance (disparate impact, equalized odds, statistical parity, or individual fairness), not just whichever one runs first. Findings scored across demographic groups with remediation prioritized by impact.
SHAP for global feature importance and individual prediction explanations, LIME for model-agnostic local approximations — with documentation of where each tool's assumptions hold and where they break down. Extended to LLM outputs with tiered documentation for engineering, legal, and end users.
A model card built to EU AI Act transparency requirements — documenting intended use, out-of-scope uses, known limitations, bias findings, human oversight requirements, and update/deprecation policy. Structured for technical and legal audiences.
Bias audit, remediation report, explainability documentation, and model card in a single audit package — with an executive summary that maps findings to decisions without requiring stakeholders to read the underlying technical docs.
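The fairness metrics named above come down to simple group-rate arithmetic. A minimal numpy sketch of disparate impact and statistical parity difference, the quantities AI Fairness 360's `BinaryLabelDatasetMetric` reports (the toy data is invented for illustration, not sprint material):

```python
import numpy as np

# Toy predictions, invented for illustration: group 0 = unprivileged,
# group 1 = privileged; y_pred is the model's binary decision.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1])
y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 1, 0])

# Favorable-outcome rate per demographic group.
rate_unpriv = y_pred[group == 0].mean()   # 0.5
rate_priv = y_pred[group == 1].mean()     # 0.8

# Disparate impact: ratio of rates; the "four-fifths rule" flags values < 0.8.
disparate_impact = rate_unpriv / rate_priv            # 0.625
# Statistical parity difference: gap in rates; 0 means parity.
statistical_parity_diff = rate_unpriv - rate_priv     # -0.3

print(f"DI = {disparate_impact:.3f}  (flag: {disparate_impact < 0.8})")
```

In the sprint these numbers come from AIF360 rather than hand-rolled code, but seeing the arithmetic makes it clear why the choice of metric (ratio vs. difference, group vs. individual) changes what a "pass" means.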
This sprint is designed for:
ML engineers and data scientists who build and deploy models that make consequential decisions and know fairness and explainability audits are overdue, but have never had a structured process for running one.
Practitioners who are being asked by product, legal, or leadership to justify model outputs and need tooling and documentation frameworks to do it credibly.
Product leaders who own AI products that touch users directly and need to implement responsible AI practices before a regulatory requirement, audit, or public incident forces the conversation.
4 weeks · 3 sessions per week
Leave with real work to show, not just a certificate.
Fairness audit using AI Fairness 360 — metrics selected against IEEE P7003 guidance, findings scored across demographic groups, and remediation applied with explicit tradeoff documentation and residual risk rationale. Reusable as a repeatable template for every model release.
SHAP and LIME on a real or provided model with tiered documentation for engineering, legal, and end-user audiences. Includes LLM and generative AI explainability coverage and EU AI Act transparency alignment notes.
Bias audit, remediation decisions, explainability documentation, and model card in a single package — with an executive summary and EU AI Act transparency compliance notes. Structured for compliance review, public disclosure, and repeatable use.
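The model-agnostic local approximation LIME provides reduces to fitting a weighted linear surrogate around a single instance. A minimal numpy sketch of that idea (the black-box model, kernel width, and sample count are invented for illustration; the `lime` library packages this with proper sampling and feature handling):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: nonlinear in feature 0, linear in feature 1.
def model(X):
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

x0 = np.array([2.0, 1.0])  # the instance whose prediction we explain

# Sample perturbations near x0 and query the black box.
X = x0 + rng.normal(scale=0.1, size=(500, 2))
y = model(X)

# Proximity kernel: perturbations closer to x0 get more weight.
w = np.exp(-np.sum((X - x0) ** 2, axis=1) / 0.02)

# Weighted least squares for a local linear surrogate (with intercept).
A = np.hstack([X, np.ones((len(X), 1))])
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

# Near x0 = (2, 1): the local slope of x**2 is ~4, and of 3*x is exactly 3.
print(coef[:2])  # local feature attributions, roughly [4.0, 3.0]
```

This also shows where the tool's assumptions break down, as the deliverable requires documenting: the surrogate is only faithful within the kernel's neighborhood, so the same instance can get different attributions under a different kernel width.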

Nahid leads AI security, privacy, and responsible AI engineering at Microsoft Copilot, with prior roles at Google Cloud and Capital One CyberML. She holds a PhD from Virginia Tech and brings 10+ years of applied experience in cybersecurity, threat modeling, and ML deployment at scale. She also teaches AI and security as adjunct faculty at UC Berkeley.
⭐⭐⭐⭐⭐
"Running AI Fairness 360 on our fraud model in Week 1 surfaced a demographic disparity we'd had in production for eight months. We fixed it before it became a regulatory issue."
Jordan Park
ML Engineer · Brex
⭐⭐⭐⭐⭐
"The SHAP documentation package I built in Week 2 is now our standard for every model deployment. Our legal team stopped asking for ad-hoc explanations overnight."
Taylor Nguyen
Data Scientist · Lattice
⭐⭐⭐⭐⭐
"I shipped our model card and full audit report six weeks after the sprint. First time we had responsible AI documentation that passed legal review on the first submission."
Casey Kim
AI Product Manager · Rippling
All sessions are instructor-led and live. Recordings available within 24 hours.
SUNDAY
9:00 AM PDT
Live Class: Bias detection, explainability, and accountability — applied to real models in production.
WEDNESDAY
6:00 PM PDT
Lab Session: Hands-on tooling work with AI Fairness 360, SHAP, and LIME on real or provided models.
THURSDAY
6:00 PM PDT
Build & Ship: Finalize weekly deliverables with peer review and instructor calibration.
with Nahid Farady, PhD · Principal Tech Lead, AI Security & Privacy, Microsoft
What you'll walk away with:
🎁 Bonus for attendees:
Get "The Responsible AI Audit Starter Pack"
Includes an AI Fairness 360 quickstart notebook, a SHAP documentation template, and a model card starter — ready to run on your models this week.
Claim your free seat
Skills you can deploy on Monday morning.