Free Live Kickoff

    Attack Your AI Systems Before Attackers Do

    Join Nahid Farady, PhD (Principal Tech Lead, AI Security & Privacy · Microsoft) for a free live session.

    📅 April 19, 2026⏰ 5:30 PM PDT⏱ 60 minutes🆓 Free to Join
    Nahid Farady, PhD

    Nahid Farady, PhD

    Principal Tech Lead, AI Security & Privacy · Microsoft

    ⭐ 4.9 / 5

    You can't defend AI systems you haven't attacked.

    Security engineering job descriptions now ask for AI security skills — prompt injection exploitation, agentic attack surfaces, model supply chain risk — and most security engineers have never touched them. In 6 weeks you'll build a mapped, tested attack surface for any LLM system, exploit the full OWASP LLM Top 10 and the emerging Agentic AI Top 10 before attackers do, and produce the audit reports and ship/hold frameworks your organization will act on. The skills that define the AI Security Engineer role, built through hands-on offensive and defensive work.

    6 WeeksLive instruction
    3 ProjectsReal deliverables
    30 SeatsPer cohort, capped

    What You'll Learn

    🗺️

    LLM Threat Model

    OWASP LLM Top 10-prioritized attack surface map — input vectors, agent trust boundaries, RAG pipeline exposure, output risks — for any architecture you secure. The reusable starting document for every security review.

    💉

    Prompt Injection Playbook

    10+ variants (direct, indirect, stored, multi-turn, jailbreak) run with Garak and PyRIT — documented by success condition and architectural cause. CI/CD-ready test suite you drop into your pipeline after the course.

    🕸️

    RAG and Agentic Attack Surfaces

    Retrieval poisoning, context window stuffing, and embedding manipulation against live vector stores — plus tool call injection, memory poisoning, and cross-agent propagation from the OWASP Agentic AI Top 10 (2026).

    ⚖️

    AI Security Review Framework

    SecAI+ Domain 2-aligned audit — attack surface coverage, defense depth, guardrail evaluation, residual risk, and CI/CD security gates — producing a ship/hold decision your security lead, CISO, and legal team can act on.

    Who Is This For?

    This sprint is designed for:

    🔐

    Security Engineers Moving into AI

    Who know network security, cloud platforms, and AppSec cold — but have never faced an LLM attack surface and need a structured offensive and defensive path to own AI security in their practice.

    🛡️

    Senior Security Engineers Expanding into AI

    Who lead security reviews for AI deployments at their organization but have no structured framework for LLM-specific attack surfaces, agentic systems, or model supply chain risk — and are building that capability now.

    📋

    AppSec Engineers and Security Leads

    Who are responsible for pre-release security sign-off on AI features and have no repeatable audit process for LLM systems — and need a ship/hold framework their stakeholders will actually act on.

    Sprint Outline

    6 weeks · 3 sessions per week

    Projects You'll Ship

    Leave with real work to show, not just a certificate.

    01

    AI Threat Model

    A complete, prioritized attack surface map for a real AI system — input vectors, agent trust boundaries, output leakage, and model supply chain exposure — structured against the OWASP LLM Top 10. Reusable as the standard starting document for every security review you run.

    02

    Full Attack Suite (Injection + RAG + Agentic)

    A combined, reproducible attack library covering prompt injection (10+ variants), RAG retrieval exploitation (retrieval poisoning, embedding manipulation), and OWASP Agentic AI Top 10 attack classes — built with Garak and PyRIT, documented by success condition and architectural cause. CI/CD-ready: integrates directly into any deployment pipeline to catch regressions automatically.

    03

    AI Security Audit Report + Ship/Hold Decision

    A SecAI+ Domain 2-aligned audit report — attack surface coverage, defense depth, guardrail selection rationale, CI/CD gate implementation, and residual risk documentation — with a justified ship/hold recommendation your security lead, CISO, and legal team can act on. The format becomes your standard pre-release process for every AI deployment.

    Your Instructors

    Nahid Farady, PhD

    Nahid Farady, PhD

    Principal Tech Lead, AI Security & Privacy · Microsoft

    ⭐ 4.9 / 5

    Nahid leads AI security, privacy, and responsible AI engineering at Microsoft Copilot, with prior roles at Google Cloud and Capital One CyberML. She holds a PhD from Virginia Tech and brings 10+ years of applied experience in cybersecurity, threat modeling, and ML deployment at scale. She also teaches AI and security as adjunct faculty at UC Berkeley.

    What Students Say

    ⭐⭐⭐⭐⭐

    "The agentic attack lab in Week 4 covered attack classes our entire security team had never seen. We went back and audited our production agent system the next week and found two live vulnerabilities."

    Jordan Park

    Jordan Park

    Senior Security Engineer · Okta

    ⭐⭐⭐⭐⭐

    "The prompt injection test suite from Week 2 is now part of our CI/CD pipeline. We've caught two real issues before they reached production since the cohort ended."

    Maya Torres

    Maya Torres

    Security Engineer · Databricks

    ⭐⭐⭐⭐⭐

    "The ship/hold framework in Week 6 changed how our security reviews work. For the first time, engineering and legal are aligned on what 'secure enough to ship' actually means."

    Ryan Okafor

    Ryan Okafor

    Tech Lead · Stripe

    Sprint Schedule

    All sessions are instructor-led and live. Recordings available within 24 hours.

    SUNDAY

    9:00 AM PDT

    Live Class

    Deep dive with live attack labs, adversarial exercises, and tool-level breakdowns. Offensive and defensive every week.

    WEDNESDAY

    6:00 PM PDT

    Lab Session

    Structured attack or defense lab with instructor guidance. Bring your system, your findings, your blockers.

    THURSDAY

    6:00 PM PDT

    Build & Ship

    Build and red-team your weekly deliverable. Peer review before submission.

    Frequently Asked Questions

    LIVE KICKOFF

    Attack Your AI Systems Before Attackers Do

    with Nahid Farady, PhD · Principal Tech Lead, AI Security & Privacy, Microsoft

    📅 April 19, 2026
    5:30 PM PDT
    60 minutes
    💻 Live on Zoom

    What you'll walk away with:

    Live threat model a provided AI system — identify one real attack vector using the OWASP LLM Top 10
    Execute your first prompt injection with Garak and see exactly why it bypasses defenses at the architectural level
    Score your findings on the CompTIA SecAI+ Domain 2 security scorecard — used throughout the sprint
    Detailed preview of the 6-week sprint

    🎁 Bonus for attendees:

    Get "The AI Security Audit Starter Kit"

    Threat model template + OWASP LLM Top 10 mapped to your system type + 10 prompt injection test cases ready to run with Garak

    Claim your free seat

    Skills you can deploy on Monday morning.