IT & Software

Pentesting GenAI LLM models: Securing Large Language Models

Course Overview

  • Course Title: Pentesting GenAI LLM models: Securing Large Language Models
  • Instructor: Start-Tech Trainings
  • Target Audience:
    • Security professionals
    • AI developers
    • Ethical hackers
  • Prerequisites:
    • Basic understanding of IT or cybersecurity
    • Curiosity about AI systems and their real-world impact
    • No prior knowledge of penetration testing or LLMs required

Curriculum Highlights

  • Key Topics Covered:
    • Unique vulnerabilities of LLMs
    • Penetration testing concepts for generative AI systems
    • Red teaming process for LLMs
    • Traditional benchmarks vs. better evaluation methods
    • Core vulnerabilities (prompt injection, hallucinations, biased responses)
    • MITRE ATT&CK framework for LLMs
    • Model-specific threats (excessive agency, model theft, insecure output handling)
    • Exploitation findings and reporting
  • Key Skills Learned:
    • Identifying vulnerabilities in LLMs
    • Red teaming strategies for LLM-based applications
    • Practical experience through real-world attack simulations
    • Applying MITRE ATT&CK concepts for LLM-specific threats
    • Building resilient generative AI applications

Course Format

  • Duration: 3.5 hours on-demand video
  • Format: Self-paced online course
  • Resources:
    • 2 articles
    • 2 downloadable resources
    • Access on mobile and TV
    • Certificate of completion
Get Coupon on Udemy