Join Nahid Farady, PhD (Principal Tech Lead, AI Security & Privacy · Microsoft) for a free live session.

Nahid Farady, PhD
Principal Tech Lead, AI Security & Privacy · Microsoft
⭐ 4.9 / 5
Security engineering job descriptions now ask for AI security skills — prompt injection exploitation, agentic attack surfaces, model supply chain risk — and most security engineers have never touched them. In 6 weeks you'll build a mapped, tested attack surface for any LLM system, exploit every attack class in the OWASP LLM Top 10 and the emerging Agentic AI Top 10 before attackers do, and produce the audit reports and ship/hold frameworks your organization will act on. These are the skills that define the AI Security Engineer role, built through hands-on offensive and defensive work.
OWASP LLM Top 10-prioritized attack surface map — input vectors, agent trust boundaries, RAG pipeline exposure, output risks — for any architecture you secure. The reusable starting document for every security review.
10+ variants (direct, indirect, stored, multi-turn, jailbreak) run with Garak and PyRIT — documented by success condition and architectural cause. CI/CD-ready test suite you drop into your pipeline after the course.
Retrieval poisoning, context window stuffing, and embedding manipulation against live vector stores — plus tool call injection, memory poisoning, and cross-agent propagation from the OWASP Agentic AI Top 10 (2026).
SecAI+ Domain 2-aligned audit — attack surface coverage, defense depth, guardrail evaluation, residual risk, and CI/CD security gates — producing a ship/hold decision your security lead, CISO, and legal team can act on.
This sprint is designed for:
Security engineers who know network security, cloud platforms, and AppSec cold — but have never faced an LLM attack surface and need a structured offensive and defensive path to own AI security in their practice.
Security leads who run security reviews for AI deployments at their organization but have no structured framework for LLM-specific attack surfaces, agentic systems, or model supply chain risk — and are building that capability now.
Engineers who are responsible for pre-release security sign-off on AI features and have no repeatable audit process for LLM systems — and need a ship/hold framework their stakeholders will actually act on.
6 weeks · 3 sessions per week
Leave with real work to show, not just a certificate.
A complete, prioritized attack surface map for a real AI system — input vectors, agent trust boundaries, output leakage, and model supply chain exposure — structured against the OWASP LLM Top 10. Reusable as the standard starting document for every security review you run.
A combined, reproducible attack library covering prompt injection (10+ variants), RAG retrieval exploitation (retrieval poisoning, embedding manipulation), and OWASP Agentic AI Top 10 attack classes — built with Garak and PyRIT, documented by success condition and architectural cause. CI/CD-ready: integrates directly into any deployment pipeline to catch regressions automatically.
A SecAI+ Domain 2-aligned audit report — attack surface coverage, defense depth, guardrail selection rationale, CI/CD gate implementation, and residual risk documentation — with a justified ship/hold recommendation your security lead, CISO, and legal team can act on. The format becomes your standard pre-release process for every AI deployment.

Nahid Farady, PhD
Principal Tech Lead, AI Security & Privacy · Microsoft
⭐ 4.9 / 5
Nahid leads AI security, privacy, and responsible AI engineering at Microsoft Copilot, with prior roles at Google Cloud and Capital One CyberML. She holds a PhD from Virginia Tech and brings 10+ years of applied experience in cybersecurity, threat modeling, and ML deployment at scale. She also teaches AI and security as adjunct faculty at UC Berkeley.
⭐⭐⭐⭐⭐
"The agentic attack lab in Week 4 covered attack classes our entire security team had never seen. We went back and audited our production agent system the next week and found two live vulnerabilities."
Jordan Park
Senior Security Engineer · Okta
⭐⭐⭐⭐⭐
"The prompt injection test suite from Week 2 is now part of our CI/CD pipeline. We've caught two real issues before they reached production since the cohort ended."
Maya Torres
Security Engineer · Databricks
⭐⭐⭐⭐⭐
"The ship/hold framework in Week 6 changed how our security reviews work. For the first time, engineering and legal are aligned on what 'secure enough to ship' actually means."
Ryan Okafor
Tech Lead · Stripe
All sessions are instructor-led and live. Recordings available within 24 hours.
SUNDAY
9:00 AM PDT
Live Class
Deep dive with live attack labs, adversarial exercises, and tool-level breakdowns. Offensive and defensive every week.
WEDNESDAY
6:00 PM PDT
Lab Session
Structured attack or defense lab with instructor guidance. Bring your system, your findings, your blockers.
THURSDAY
6:00 PM PDT
Build & Ship
Build and red-team your weekly deliverable. Peer review before submission.
with Nahid Farady, PhD · Principal Tech Lead, AI Security & Privacy, Microsoft
What you'll walk away with:
🎁 Bonus for attendees:
Get "The AI Security Audit Starter Kit"
Threat model template + OWASP LLM Top 10 mapped to your system type + 10 prompt injection test cases ready to run with Garak
Claim your free seat
Skills you can deploy on Monday morning.