Free Live Kickoff

    Find Your Model's Blind Spot in 60 Minutes

    Join Nahid Farady, PhD (Principal Tech Lead, AI Security & Privacy · Microsoft) for a free live session.

    📅 April 19, 2026 · ⏰ 5:30 PM PDT · ⏱ 60 minutes · 🆓 Free to Join
    Nahid Farady, PhD

    Principal Tech Lead, AI Security & Privacy · Microsoft

    ⭐ 4.9 / 5

    You shipped the model. Did you audit it?

    ML engineers and data scientists ship AI systems that make real decisions about real people, and most have never run a formal fairness or explainability audit on what they've built. This sprint gives you the tools, frameworks, and documentation templates to run a responsible AI audit on a live or provided system: bias detection with AI Fairness 360, explainability with SHAP and LIME, and an accountability report your legal and compliance team can act on.

    4 Weeks · Live instruction
    3 Projects · Real deliverables
    30 Seats · Per cohort, capped

    What You'll Learn

    📊

    Bias Audit Report

    Scored fairness audit using AI Fairness 360 — metric selected against IEEE P7003 guidance (disparate impact, equalized odds, statistical parity, or individual fairness), not just whichever one runs first. Findings scored across demographic groups with remediation prioritized by impact.
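    To make the bias-detection step concrete, here is a minimal sketch of a disparate impact check with AI Fairness 360. The file name, column names, and group encodings are illustrative placeholders, not sprint materials.

```python
# Minimal sketch: disparate impact check with AI Fairness 360.
# "decisions.csv", the "approved" label, and the "sex" encoding are
# hypothetical; substitute your own audit dataset.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.read_csv("decisions.csv")

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],            # 1 = favorable outcome
    protected_attribute_names=["sex"],   # 1 = privileged, 0 = unprivileged
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact = P(favorable | unprivileged) / P(favorable | privileged);
# the common "four-fifths rule" flags values below 0.8.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```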

    🔬

    Explainability Integration

    SHAP for global feature importance and individual prediction explanations, LIME for model-agnostic local approximations — with documentation of where each tool's assumptions hold and where they break down. Extended to LLM outputs with tiered documentation for engineering, legal, and end users.
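    As a rough illustration of how the two tools divide the work, the sketch below uses SHAP for a global feature-importance view and LIME for a local explanation of a single prediction. The fitted model and DataFrames are assumed to exist; all names are placeholders.

```python
# Sketch: SHAP for global importance, LIME for one local explanation.
# Assumes a fitted tree-based classifier `model` and pandas DataFrames
# `X_train` / `X_test` -- all placeholders for your own system.
import shap
from lime.lime_tabular import LimeTabularExplainer

# Global view: SHAP values across the test set, summarized per feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)

# Local view: LIME fits a sparse linear surrogate around one row, which
# is useful where a SHAP explainer's assumptions are harder to defend.
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X_train.columns),
    mode="classification",
)
explanation = lime_explainer.explain_instance(
    X_test.iloc[0].values, model.predict_proba, num_features=5
)
print(explanation.as_list())  # top weighted features for this prediction
```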

    📋

    Accountability and Model Card

    A model card built to EU AI Act transparency requirements — documenting intended use, out-of-scope uses, known limitations, bias findings, human oversight requirements, and update/deprecation policy. Structured for technical and legal audiences.
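    For a feel of the deliverable's shape, here is a minimal model-card skeleton covering the fields listed above. The field names and values are an illustrative sketch, not an official EU AI Act schema.

```python
# Illustrative model-card skeleton; field names and values are a sketch,
# not an official EU AI Act schema.
model_card = {
    "model": "credit-risk-v3",  # hypothetical model identifier
    "intended_use": "Pre-screening of consumer credit applications.",
    "out_of_scope_uses": ["Employment decisions", "Insurance pricing"],
    "known_limitations": [
        "Trained on 2020-2024 US applications; unvalidated elsewhere.",
    ],
    "bias_findings": {
        "disparate_impact": 0.84,  # example score from the bias audit
        "metric_rationale": "Selected per IEEE P7003 review.",
    },
    "human_oversight": "Adverse decisions reviewed by a credit officer.",
    "update_policy": "Re-audit on every retrain; deprecate after 12 months.",
}
```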

    📁

    Full Responsible AI Audit Report

    Bias audit, remediation report, explainability documentation, and model card in a single audit package — with an executive summary that maps findings to decisions without requiring stakeholders to read the underlying technical docs.

    Who Is This For?

    This sprint is designed for:

    🤖

    ML Engineers Who've Never Audited What They've Shipped

    Who build and deploy models that make consequential decisions and know fairness and explainability audits are overdue — but have never had a structured process for running one.

    📈

    Data Scientists Asked to Explain Model Decisions to Non-Technical Stakeholders

    Who are being asked by product, legal, or leadership to justify model outputs and need tooling and documentation frameworks to do it credibly.

    📋

    AI Product Managers Responsible for Model Accountability

    Who own AI products that touch users directly and need to implement responsible AI practices before a regulatory requirement, audit, or public incident forces the conversation.

    Sprint Outline

    4 weeks · 3 sessions per week

    Projects You'll Ship

    Leave with real work to show, not just a certificate.

    01

    Bias Audit + Remediation Report

    Fairness audit using AI Fairness 360 — metrics selected against IEEE P7003 guidance, findings scored across demographic groups, and remediation applied with explicit tradeoff documentation and residual risk rationale. Reusable as a repeatable template for every model release.
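    One remediation path such a report might document, sketched with AIF360's pre-processing Reweighing. It reuses the dataset and group definitions from the earlier disparate impact sketch; a different mitigator would follow the same fit-then-re-score pattern.

```python
# Sketch: pre-processing remediation with AIF360 Reweighing, reusing the
# `dataset` and group definitions from the disparate impact sketch above.
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

rw = Reweighing(
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)
dataset_rw = rw.fit_transform(dataset)  # adjusts instance weights, not labels

# Re-score the same metric so the report can show before/after and the
# residual gap that the tradeoff documentation must justify.
after = BinaryLabelDatasetMetric(
    dataset_rw,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)
print("Disparate impact after reweighing:", after.disparate_impact())
```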

    02

    Explainability Documentation Package

    SHAP and LIME on a real or provided model with tiered documentation for engineering, legal, and end-user audiences. Includes LLM and generative AI explainability coverage and EU AI Act transparency alignment notes.

    03

    Full Responsible AI Audit Report

    Bias audit, remediation decisions, explainability documentation, and model card in a single package — with an executive summary and EU AI Act transparency compliance notes. Structured for compliance review, public disclosure, and repeatable use.

    Your Instructor

    Nahid Farady, PhD

    Principal Tech Lead, AI Security & Privacy · Microsoft

    ⭐ 4.9 / 5

    Nahid leads AI security, privacy, and responsible AI engineering for Microsoft Copilot, with prior roles at Google Cloud and Capital One CyberML. She holds a PhD from Virginia Tech and brings 10+ years of applied experience in cybersecurity, threat modeling, and ML deployment at scale. She also teaches AI and security as adjunct faculty at UC Berkeley.

    What Students Say

    ⭐⭐⭐⭐⭐

    "Running AI Fairness 360 on our fraud model in Week 1 surfaced a demographic disparity we'd had in production for eight months. We fixed it before it became a regulatory issue."

    Jordan Park

    ML Engineer · Brex

    ⭐⭐⭐⭐⭐

    "The SHAP documentation package I built in Week 2 is now our standard for every model deployment. Our legal team stopped asking for ad-hoc explanations overnight."

    Taylor Nguyen

    Data Scientist · Lattice

    ⭐⭐⭐⭐⭐

    "I shipped our model card and full audit report six weeks after the sprint. First time we had responsible AI documentation that passed legal review on the first submission."

    Casey Kim

    AI Product Manager · Rippling

    Sprint Schedule

    All sessions are instructor-led and live. Recordings available within 24 hours.

    SUNDAY

    9:00 AM PDT

    Live Class

    Bias detection, explainability, and accountability — applied to real models in production.

    WEDNESDAY

    6:00 PM PDT

    Lab Session

    Hands-on tooling work with AI Fairness 360, SHAP, and LIME on real or provided models.

    THURSDAY

    6:00 PM PDT

    Build & Ship

    Finalize weekly deliverables with peer review and instructor calibration.

    LIVE KICKOFF

    Find Your Model's Blind Spot in 60 Minutes

    with Nahid Farady, PhD · Principal Tech Lead, AI Security & Privacy, Microsoft

    📅 April 19, 2026
    ⏰ 5:30 PM PDT
    ⏱ 60 minutes
    💻 Live on Zoom

    What you'll walk away with:

    A disparate impact score from AI Fairness 360 on a provided model, and how to interpret it
    A SHAP summary plot with the top three features driving predictions identified
    Your model scored against a responsible AI readiness checklist, with your first gap flagged
    A detailed preview of the 4-week sprint

    🎁 Bonus for attendees:

    Get "The Responsible AI Audit Starter Pack"

    Includes an AI Fairness 360 quickstart notebook, a SHAP documentation template, and a model card starter — ready to run on your models this week.

    Claim your free seat

    Skills you can deploy on Monday morning.