
Understanding Core LLM Theory and Architecture: How ChatGPT, RAG, and Agents Work All at Once

You use ChatGPT, but have you found it difficult to explain why you get certain answers? "RAG, agents, fine-tuning... I know the terms but find it hard to explain them precisely." "I'm at a loss for words when I hear LLM-related terminology." "My explanations of concepts in AI meetings always come out vague." This course was created specifically for people like you. It is a theoretical lecture designed to understand LLMs as a 'structure' rather than a 'tool': not how to use ChatGPT or Gemini, but how to build a framework that lets you explain why they work the way they do.

(4.8) 4 reviews

74 learners

Level: Beginner

Course period: Unlimited

  • HappyAI

ChatGPT
prompt engineering
LLM
RAG
AI Agent

What you will gain after the course

  • Structural thinking to understand the process of how LLMs generate answers

  • A clear framework to avoid confusion between core concepts like Prompts, RAG, and Agents

  • The comprehension ability to accurately follow AI-related discussions

  • A realistic sense of judgment considering the limitations and parameters of LLMs


Master the principles of how LLMs work with this comprehensive theory course

Gain a deep understanding of LLMs, the core technology leading the AI era.


Have you been using ChatGPT or Gemini and wondered how they actually work? This course explains LLMs step by step from basic concepts, helping you understand the core technologies of Prompts, RAG, and Agents in an accessible way.

How LLMs work: a theory course to make it your own

You use ChatGPT and Gemini, but haven't you been curious about why you get the answers you do?

This course covers everything from the basic structure of LLMs to their core concepts, explained with a focus on understanding and without complex formulas.

You will naturally connect how Transformer and Self-Attention, as well as Prompt, RAG, and Agent, work inside an LLM.

Rather than teaching how to use tools, this course establishes criteria for evaluating AI.


This course helps you understand the thinking structure of LLMs and lays the foundation for utilizing the latest technologies such as Prompt, RAG, and Agent.

What makes this course different?

This course does not cover simple tool usage or tricks.

It explains the core theoretical concepts step by step, without formulas and with a focus on structure:

  • how LLMs understand context

  • why hallucinations occur

  • why techniques like prompting, RAG, fine-tuning, and Agents emerged

We've structured core concepts like Transformer, Self-Attention, tokens, and embeddings
to be understood through an intuitive flow rather than a list of papers.
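
To make the flow concrete, here is a minimal numpy sketch of single-head self-attention, the mechanism at the heart of the Transformer. The dimensions, random weights, and function name are illustrative assumptions, not material from the course.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over a toy token sequence.

    X  : (seq_len, d_model) token embeddings
    Wq, Wk, Wv : projection matrices (d_model, d_k)
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # project tokens into query/key/value spaces
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # how strongly each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ V                           # context-aware representation of each token

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # 4 tokens, 8-dim embeddings (toy sizes)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)       # (4, 8): each token mixed with its context
```

This is the sense in which "understanding context" happens inside the model: every token's representation is rebuilt as a weighted mix of all the other tokens it attends to.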


We especially recommend this for

  • Those who use ChatGPT but are always confused about LLM concepts

  • Planners and PMs who can't follow the discussion in meetings when RAG or Agent topics come up

  • Professionals considering AI adoption or utilization strategies

  • Those who are not developers but want to properly understand LLMs

  • Learners looking for an "Introduction to LLM Theory" course

This course is NOT any of the following:

  • ❌ ChatGPT feature explanation lecture

  • ❌ Lectures focused on how to use specific AI tools

  • ❌ Hands-on course focused on automation and work efficiency

This course is a theory-focused course on understanding the structure and operating principles of LLMs.

What you'll learn from this course

From someone who just uses AI
to someone who understands and can design AI


Structure-focused learning to understand the core principles of LLMs


Section 1 - Understanding the Basics of Generative AI and LLM

We explore the fundamental principles of Generative AI and LLM. We understand how LLMs statistically learn the meaning and context of language through vast amounts of text data to generate natural sentences.
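
As a rough picture of what "statistically learning language from text" means, the toy bigram model below predicts the next word from counted co-occurrences. The corpus and probabilities are invented for illustration; real LLMs learn far richer, neural next-token distributions, but the generation principle is the same.

```python
from collections import Counter, defaultdict

# Toy corpus: an LLM learns, at enormous scale, which token tends to follow which context.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count next-word frequencies for each word (a tiny stand-in for next-token prediction).
next_word = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word[prev][nxt] += 1

def predict_next(word):
    """Return a probability distribution over the likely next words."""
    counts = next_word[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Generation = repeatedly picking a plausible next token given the context so far.
print(predict_next("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```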

Section 2 - LLM Trends and Industry Analysis

We will examine the latest development trends in LLM technology and analyze the future strategic direction of LLMs within the global AI competitive landscape. This will help provide insight into the present and future of LLM technology.

Section 3 - LLM Operating Principles Basics

Learn the fundamental operating principles of LLMs. Understand core concepts such as tokens, embeddings, vector spaces, and context windows, and explore key output control parameters.
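
As a hedged sketch of what the output control parameters do, the snippet below shows how temperature and top-p (nucleus) filtering reshape a toy next-token distribution; the logits are made-up numbers, not values from any particular model.

```python
import numpy as np

def apply_temperature(logits, temperature=1.0):
    """Lower temperature sharpens the distribution; higher temperature flattens it."""
    scaled = np.array(logits) / temperature
    exp = np.exp(scaled - scaled.max())
    return exp / exp.sum()

def top_p_filter(probs, p=0.9):
    """Keep only the smallest set of tokens whose cumulative probability reaches p."""
    order = np.argsort(probs)[::-1]                  # tokens sorted by probability, descending
    cum = np.cumsum(probs[order])
    keep = order[: int(np.searchsorted(cum, p)) + 1]
    mask = np.zeros_like(probs)
    mask[keep] = probs[keep]
    return mask / mask.sum()                          # renormalize over the surviving tokens

logits = [2.0, 1.0, 0.5, 0.1]                         # toy scores for 4 candidate tokens
print(apply_temperature(logits, 0.5))                 # sharper: the top token dominates
print(top_p_filter(apply_temperature(logits), 0.9))   # low-probability tail removed before sampling
```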

Section 4 - Prompt Engineering Techniques

This section covers the fundamental concepts and advanced techniques of prompt engineering to maximize LLM performance. Learn various patterns such as Zero-shot, Few-shot, and Chain-of-Thought (CoT) to develop effective prompt writing skills.
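
The three patterns named above can be illustrated with plain prompt strings. The task and examples below are invented, and any chat model or API could consume prompts like these.

```python
# Zero-shot: ask directly, with no examples.
zero_shot = "Classify the sentiment of this review as positive or negative:\n'The battery dies in an hour.'"

# Few-shot: show a couple of solved examples before the real question.
few_shot = (
    "Review: 'Great screen, fast delivery.' -> positive\n"
    "Review: 'Broke after two days.' -> negative\n"
    "Review: 'The battery dies in an hour.' ->"
)

# Chain-of-Thought: explicitly ask the model to reason step by step before answering.
cot = (
    "Q: A store sells pens at 3 for $2. How much do 12 pens cost?\n"
    "Let's think step by step, then give the final answer."
)

for name, prompt in [("zero-shot", zero_shot), ("few-shot", few_shot), ("CoT", cot)]:
    print(f"--- {name} ---\n{prompt}\n")
```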

Section 5 - Overcoming LLM Limitations with RAG

Learn the RAG (Retrieval-Augmented Generation) architecture to overcome limitations of LLMs such as hallucination and lack of up-to-date information. Understand the core components of RAG including embeddings and vector databases, and grasp the overall operational flow.
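
A minimal, dependency-light sketch of the retrieve-then-generate flow described above; the character-frequency `embed` function is a toy stand-in for a real embedding model, the in-memory array stands in for a vector database, and the final LLM call is only hinted at in a comment.

```python
import numpy as np

documents = [
    "Our refund policy allows returns within 30 days.",
    "Support is available on weekdays from 9am to 6pm.",
    "Premium plans include priority email support.",
]

def embed(text):
    """Toy embedding: normalized character-frequency vector (a real system uses an embedding model)."""
    vec = np.zeros(128)
    for ch in text.lower():
        vec[ord(ch) % 128] += 1
    return vec / (np.linalg.norm(vec) + 1e-9)

doc_vectors = np.stack([embed(d) for d in documents])   # plays the role of the vector database

def retrieve(query, k=2):
    scores = doc_vectors @ embed(query)                  # cosine similarity (vectors are normalized)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query):
    context = "\n".join(retrieve(query))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

print(build_prompt("When can I get a refund?"))
# The resulting prompt is then sent to the LLM to generate an answer grounded in the retrieved context.
```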

Section 6 - RAG Performance Improvement Strategies

Learn about metrics and methodologies for evaluating the accuracy of RAG systems, and explore practical performance improvement techniques. Through this, we seek ways to enhance the efficiency and reliability of RAG systems.
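
One simple retrieval metric often used in this setting is the hit rate (recall@k): for each question with a known relevant document, check whether the retriever returns it in the top k. The sketch below assumes the toy `retrieve` function from the previous example and a hypothetical evaluation set.

```python
# Hypothetical evaluation set: each question paired with the document that should be retrieved.
eval_set = [
    ("When can I get a refund?", "Our refund policy allows returns within 30 days."),
    ("What hours is support open?", "Support is available on weekdays from 9am to 6pm."),
]

def hit_rate_at_k(retrieve_fn, eval_set, k=2):
    """Fraction of questions whose relevant document appears in the top-k retrieved results."""
    hits = sum(1 for question, relevant in eval_set if relevant in retrieve_fn(question, k=k))
    return hits / len(eval_set)

# With the toy retriever above: print(hit_rate_at_k(retrieve, eval_set, k=2))
```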

Section 7 - Fine-tuning and Lightweight Tuning Strategies

Learn the basic concepts and application strategies of fine-tuning to adapt LLMs to specific tasks or domains. Additionally, acquire efficient model tuning methods through lightweight tuning techniques such as PEFT (Parameter-Efficient Fine-Tuning).
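
The core idea behind LoRA, one of the PEFT techniques mentioned above, is to freeze the pretrained weight matrix and train only a low-rank update, which drastically reduces the number of trainable parameters. A rough numpy sketch of that parameter-count intuition (the layer size and rank are illustrative):

```python
import numpy as np

d_model, rank = 4096, 8            # illustrative sizes: a large layer vs. a small LoRA rank

W = np.zeros((d_model, d_model))   # frozen pretrained weight (not updated during fine-tuning)
A = np.zeros((d_model, rank))      # trainable low-rank factors: the only new parameters
B = np.zeros((rank, d_model))

full_params = W.size               # what full fine-tuning would update for this layer
lora_params = A.size + B.size      # what LoRA updates instead
print(full_params, lora_params, f"{lora_params / full_params:.2%}")  # ~0.39% of the layer

# At inference, the effective weight is W + A @ B, so the adapted layer behaves like a fine-tuned one.
```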

Section 8 - Understanding and Utilizing LLM Agents

You will understand the concept and structure of Agents and explore various types of LLM Agents. You will learn through specific examples how Agents can be utilized in actual work and services.
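
A minimal sketch of the observe-decide-act loop that separates an Agent from a single prompt call; `ask_llm` is a placeholder for a real model call and the tool set is a trivial stand-in, not any specific agent framework.

```python
def ask_llm(history):
    """Placeholder: a real implementation would send `history` to an LLM and parse its decision."""
    return {"action": "calculator", "input": "12 * 7"} if len(history) == 1 else {"action": "finish", "input": "84"}

TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def run_agent(task, max_steps=5):
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        decision = ask_llm(history)                              # the model chooses the next action
        if decision["action"] == "finish":
            return decision["input"]                             # final answer
        result = TOOLS[decision["action"]](decision["input"])    # execute the chosen tool
        history.append(f"{decision['action']}({decision['input']}) -> {result}")  # feed the observation back
    return "stopped: step limit reached"

print(run_agent("What is 12 * 7?"))   # -> 84
```

The loop, not the single call, is what the course treats as the defining structure of an Agent: the model repeatedly observes tool results and decides what to do next.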

Section 9 - Latest Theories on MCP and A2A

We will compare and analyze the concepts, operating principles, structures, and ways of using MCP (Model Context Protocol) and A2A (Agent-to-Agent), the latest theories for multi-agent systems. Through this, you will understand advanced agent system design.
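
For intuition only, the sketch below shows one agent delegating a subtask to another through a structured message; this is not the actual MCP or A2A message format, just a toy illustration of agent-to-agent delegation.

```python
from dataclasses import dataclass, field

# Conceptual illustration only: NOT the real MCP or A2A wire format.
@dataclass
class AgentMessage:
    sender: str
    recipient: str
    task: str
    context: dict = field(default_factory=dict)

class ResearchAgent:
    name = "research-agent"

    def handle(self, msg: AgentMessage) -> AgentMessage:
        # In a real system this agent would call its own LLM and tools to produce the result.
        summary = f"3 key findings about '{msg.task}'"
        return AgentMessage(sender=self.name, recipient=msg.sender, task=msg.task,
                            context={"result": summary})

planner = "planner-agent"
worker = ResearchAgent()
request = AgentMessage(sender=planner, recipient=worker.name, task="LLM context window trends")
reply = worker.handle(request)
print(reply.context["result"])   # the planner consumes the structured result and decides the next step
```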

From theory to practice


Point 1. Understanding LLM Core Principles Without Math

Do you use ChatGPT but wonder how it works? This course clearly explains how LLMs operate through a structure-focused approach without complex formulas. You'll understand the fundamental principles behind hallucinations, and why prompts, RAG, fine-tuning, and agents are necessary.


Point 2. Understanding the Principles of LLMs with Ease

It focuses on understanding how LLMs operate in terms of their thinking structure, rather than just how to use the tools. Through this, you'll gain a theoretical framework to understand and explain the concepts that appear in AI-related meetings or planning documents without confusion. Additionally, by understanding the structural limitations of LLMs, such as hallucinations, recency issues, and context constraints, you can determine what to expect from them and what not to.

Point 3. Theoretical Framework for Understanding RAG, Fine-tuning, and Agents

We theoretically understand the core components and overall operational flow of the RAG architecture that emerged to address the structural limitations of LLMs. We also structurally explain what problems fine-tuning and lightweight tuning techniques (PEFT, LoRA) were designed to solve, and when they become appropriate choices. Agents are also covered not from the perspective of "how to build them," but focusing on why Agents are needed, their internal structure, and how they differ from simple automation.


Point 4. Building a Perspective to Understand AI

The goal is to move beyond 'just using AI' and develop a perspective for understanding AI structurally. From core LLM concepts like tokens, embeddings, and context windows to the latest multi-agent theories such as MCP and A2A, the content is organized around why these structures emerged.


I use ChatGPT, but I don't understand why it works this way.
This course was created for exactly these people.


✔️ Beginners who want to understand LLM from the basic principles

  • Those who want to structurally understand why LLMs experience hallucinations

  • Those who want to know why prompts, RAG, fine-tuning, and agents are necessary and how they work

  • Those who want to fundamentally understand how AI works beyond just using tools

✔️ Planners/practitioners who want to accurately explain LLM concepts in AI-related meetings or planning documents

  • Those who want to clearly explain the core principles of LLMs (tokens, embeddings, context window)

  • Those who want to understand the latest LLM technology trends such as RAG, fine-tuning, and agents

  • Those who want to explore realistic application strategies while considering LLM limitations when planning and establishing AI service strategies

✔️ Working developers/data analysts who want to effectively apply LLM to their work

  • Those who want to structurally understand LLM's reasoning and answer generation process to apply it in development

  • Those who want to establish criteria for determining which approach (prompt, RAG, fine-tuning, agent) is suitable for problem-solving

  • Those who want to design strategies to overcome LLM limitations (hallucinations, recency, context) and apply them to actual services


Stop using AI like a 'black box.'
Become an expert who understands how LLMs work.

Important Notes Before Enrollment

  • This course is a theory-focused lecture that concentrates on understanding the structure and operating principles of LLMs (Large Language Models).

    • Hands-on exercises are supplementary tools to aid in understanding concepts.

    • It does not aim to teach how to use specific AI tools or practical automation.

Prerequisites and Important Notes

  • It is helpful to have a basic understanding of core concepts such as LLM, Transformer, and Self-Attention.

  • Familiarity with related terms such as tokens, context windows, and embeddings will be helpful for learning.

  • Curiosity about how LLMs work and a willingness to learn are important.



What you will learn in this course

  • The Core Principles of LLM, Transformer, and Self-Attention

  • Essential concepts explained: tokens, context windows, embeddings, etc.

  • Core Prompt Engineering Techniques (Zero-shot, Few-shot, CoT)

  • RAG Architecture and Methods for Improving Accuracy

  • Differences Between Fine-tuning vs RAG and Selection Criteria

  • The Structure of AI Agents and Real-World Application Scenarios

  • Latest multi-agent theory trends including MCP, A2A, etc.

Recommended for these people

  • People who use ChatGPT but are always confused about the concept of LLM

  • Planners and PMs who can't follow meetings when RAG and Agents are discussed

  • Field practitioners considering AI adoption or implementation strategies

  • People who are not developers but want to properly understand LLMs

  • Learners looking for an introductory course on LLM fundamentals

Hello, this is HappyAI.

4,481 learners ∙ 225 reviews ∙ 51 answers ∙ 4.6 rating ∙ 11 courses

Lee JinKyu

AI·LLM·Big Data Analysis Expert / CEO of Happy AI

👉You can check the detailed profile at the link below.
https://bit.ly/jinkyu-profile

Hello.
I am Lee JinKyu (Ph.D. in Engineering, Artificial Intelligence), CEO of Happy AI, who has consistently worked with AI and big data analysis across R&D, education, and project sites.

I have analyzed various unstructured data, such as
surveys, documents, reviews, media, policies, and academic data,
based on Natural Language Processing (NLP) and text mining.
Recently, I have been delivering practical AI application methods tailored to organizations and work environments
using Generative AI and Large Language Models (LLMs).

I have collaborated with numerous public institutions, corporations, and educational organizations, including Samsung Electronics, Seoul National University, Offices of Education, Gyeonggi Research Institute, Korea Forest Service,
and Korea National Park Service, and have conducted a total of over 200 research and analysis projects across various domains such as
healthcare, commerce, ecology, law, economics, and culture.


🎒 Inquiries for Lectures and Outsourcing

Kmong Prime Expert (Top 2%)


📘 Bio (Summary)

  • 2024.07 ~ Present
    CEO of Happy AI, a company specializing in Generative AI and Big Data analysis

  • Ph.D. in Engineering (Artificial Intelligence)
    Dongguk University Graduate School of AI

    Major: Large Language Models (LLM) (2022.03 ~ 2026.02)

  • 2023 ~ 2025
    Public News AI Columnist
    (Generative AI Bias, RAG, LLM Utilization Issues)

  • 2021 ~ 2023
    Developer at Stellavision, an AI and Big Data company

  • 2018 ~ 2021
    Government-funded Research Institute NLP & Big Data Analysis Researcher


🔹 Areas of Expertise (Lecture & Project Focused)

  • Generative AI and LLM Utilization

    • Private LLM, RAG, Agent

    • Basics of LoRA and QLoRA Fine-tuning

  • AI-based Big Data Analysis

    • Survey, review, media, policy, and academic data

  • Natural Language Processing (NLP) & Text Mining

    • Topic Analysis, Sentiment Analysis, Keyword Networks

  • Public/Corporate AI Task Automation

    • Document Summarization, Classification, and Analysis



🎒 Courses & Activities (Selected)

2025

  • LLM/sLLM Application Development
    (Fine-tuning, RAG, and Agent-based) – KT

2024

  • LangChain·RAG-based LLM Programming – Samsung SDS

  • LLM Theory and RAG Chatbot Development Practice – Seoul Digital Foundation

  • Introduction to Big Data Analysis based on ChatGPT – LetUin Edu

  • AI Fundamentals & Prompt Engineering Techniques – Korea Vocational Development Institute

  • LDA & Sentiment Analysis with ChatGPT – Inflearn

  • Python-based Text Analysis – Seoul National University of Science and Technology

  • Building LLM Chatbots with LangChain – Inflearn

2023

  • Python Basics using ChatGPT – Kyonggi University

  • Big Data Expert Course Special Lecture – Dankook University

  • Fundamentals of Big Data Analysis – LetUin Edu


💻 Projects (Summary)

  • Building a Private LLM-based RAG Chatbot (Korea Electric Power Corporation)

  • LLM-based Big Data Analysis for Forest Restoration (National Institute of Forest Science)

  • Private LLM Text Mining Solution for Internal Networks (Government Agency)

  • Instruction Tuning and RLHF-based LLM Model Development

  • Healthcare, Law, Policy, and Education Data Analysis

  • AI Analysis of Survey, Review, and Media Data

Over 200 research and analysis projects completed for public institutions, corporations, and research institutes


📖 Publications (Selected)

  • Improving Commonsense Bias Classification by Mitigating the Influence of Demographic Terms (2024)

  • Improving Generation of Sentiment Commonsense by Bias Mitigation
    – International Conference on Big Data and Smart Computing (2023)

  • Analysis of Perceptions of LLM Technology Based on News Big Data (2024)

  • Numerous NLP-based text mining studies
    (Forestry, Environment, Society, and Healthcare sectors)


🔹 Others

  • Python-based data analysis and visualization

  • Data Analysis Using LLM

  • Improving work productivity using ChatGPT, LangChain, and Agents

Curriculum

27 lectures ∙ (1hr 31min)

Reviews

4.8 ∙ 4 reviews

  • lgm00120636618 (5/5)

  • kimcs4215 (4/5)
    The content was good, but it was disappointing that the lecture materials were not provided.

    • leejinkyu0612 (Instructor)
      Hello, I apologize for the late response. Could you please send me an email at leejinkyu0612@naver.com? I will send you the lecture materials.

  • sungjoon (5/5)

  • sinkei94564416 (5/5)
