
AI Red Teaming Expert Training: Agentic AI & LLM Security Threats

As AI systems become agentic and expand beyond text into image, audio, and multimodal capabilities, security threats are becoming significantly more complex. This lecture analyzes the latest security threats and attack techniques faced by AI Agents and Multimodal LLMs, as well as real-world cases, from an AI Red Teaming perspective.

2 learners are taking this course

Level Intermediate

Course period Unlimited

security training
AI Agent

What you will gain after the course

  • Understanding the AI Red Teaming Mindset

  • Identifying Agentic AI Threat Models

  • Understanding Multimodal LLM Security Issues

  • Gaining the perspective needed to establish an AI security strategy

What is AI Red Teaming expert training?

AI technologies such as ChatGPT, Google Gemini, and DeepSeek are advancing rapidly. However, compared to the speed of AI development, responses to AI security lag significantly behind and have yet to receive much attention. NSHC established an AI Security Research Institute in 2020 and has been researching this field to design a security education curriculum ranging from basic to advanced levels. Now that AI is more prevalent than ever, protect your organization's valuable assets through AI Red Teaming expert training!



The 1st AI Security Specialist Training is held in special collaboration with AIM Intelligence.
AIM Intelligence is a small but powerful company whose track record includes the Meta LLAMA Competition Asia-Pacific Award and the Ministry of Science and ICT Generative AI Red Team Challenge Award, with its AI security research accepted at the international conferences ICML 2025 and ACL 2025.

What you will learn

Core of LLM & AI Agent Red Teaming

  • Understand the Transformer architecture and LLM structure.

  • Learn attack methodologies based on LLM characteristics.

  • Analyze jailbreaking, next-token attacks, and the limitations of safety training.

  • Understand the OWASP LLM Top 10 and Red Teaming evaluation metrics.

Agentic AI & Multimodal LLM Threat Analysis

  • Understand AI Agent Architecture and IAM structures.

  • Analyze multi-agent orchestration vulnerabilities.

  • Learn the concepts of Vision and Audio-based Adversarial Attacks.

  • Understand attack scenarios through actual AI Red Teaming cases.

Notes before taking the course


  • Unauthorized distribution or sharing of lecture materials and all videos/documents is strictly prohibited.

Who is this course right for?

  • Those who design AI security strategies

  • Security practitioners interested in AI Red Teaming

  • Those who want advanced content beyond the basic stages of LLM security

  • Those who are curious about Agentic AI and Multimodal AI security

  • Those who want to understand attack scenarios from an AI Safety perspective

What you need to know before starting

  • Understanding Basic LLM Concepts

  • AI Security or Information Security Fundamentals

  • Experience using generative AI services

Hello, this is SecurityGround.

79 Learners · 4 Reviews · 5.0 Rating · 10 Courses

SecurityGround is NSHC's security education brand.




Limited time deal ends in 7 days: $117.70 (30% off)