Understanding Core LLM Theories through Structure: The Working Principles of ChatGPT, RAG, and Agents All at Once

You use ChatGPT, but haven't you found it difficult to explain why it gives certain answers? "I know terms like RAG, Agents, and Fine-tuning... but it's hard to explain them accurately." "I get tongue-tied whenever I hear LLM-related terminology." "Conceptual explanations are always vague during AI meetings." This course was created specifically for people like you. This is a theoretical course designed to help you understand LLMs not just as a 'tool,' but as a 'structure.' It doesn't teach you how to use ChatGPT or Gemini; instead, it establishes the foundation so you can explain exactly why they work the way they do.

(4.3) 12 reviews

143 learners

Level Beginner

Course period Unlimited

ChatGPT
prompt engineering
LLM
RAG
AI Agent

Reviews from Early Learners

4.3

5.0 · 배추가될거야 · 100% enrolled
It was a great introductory session for listening to the latest key AI concepts.

5.0 · 해삐 · 100% enrolled
I had zero knowledge about LLMs, but now I have a rough idea of what they are. Haha, thank you!

5.0 · 김문수 · 33% enrolled
Thank you for the great lecture.

What you will gain after the course

  • Structural thinking for understanding the process of how LLMs generate answers

  • Criteria for not confusing core concepts such as Prompts, RAG, and Agents

  • The ability to comprehend and accurately follow AI-related discussions

  • A realistic sense of judgment considering the limitations and parameters of LLMs


Understanding how LLMs work
A definitive theory course to make it your own

Gain a deep understanding of LLMs, the core technology leading the AI era.


Do you use ChatGPT or Gemini but wonder why they answer the way they do?
This course explains the basic concepts of LLMs step by step,
helping you easily understand the core technologies of Prompts, RAG, and Agents.

It covers everything from the basic structure of LLMs to their core concepts,
focusing on understanding rather than complex formulas.
You will naturally connect Transformer and Self-Attention
with how Prompts, RAG, and Agents work inside an LLM.

This is not a course on how to use the tools;
it is a course that establishes your criteria for evaluating AI,
building the foundation to understand the thinking structure of LLMs
and to utilize the latest technologies such as Prompts, RAG, and Agents.

What makes this lecture different?

This course does not cover simple tool usage or tricks. Instead, focusing on structure rather than mathematical formulas, it explains the core theories step by step:

  • how an LLM (Large Language Model) understands context,

  • why hallucinations occur, and

  • why prompts, RAG, fine-tuning, and agents emerged.

We have structured the content so that core concepts such as Transformer, Self-Attention, tokens, and embeddings can be understood through an intuitive flow, rather than as a mere listing of research papers.

Especially recommended for these people

  • Those who use ChatGPT but are always confused by LLM concepts

  • Planners and PMs who don't understand what's being said in meetings when RAG or Agents are mentioned

  • Working professionals who are considering AI implementation or utilization strategies

  • Those who are not developers but want to properly understand LLM

  • Learners looking for an "Introductory LLM Theory Course"

This course is not:

  • ❌ A lecture explaining ChatGPT features

  • ❌ A lecture on how to use specific AI tools

  • ❌ A hands-on course focused on automation and work efficiency

This course is a theory-oriented lecture focused on understanding the structure and operating principles of LLMs.

After taking this course

From someone who just uses AI
to someone who can understand and design AI


Structure-centered learning to
understand the core principles of LLM


Section 1 - Basic Understanding of Generative AI and LLMs

Explore the fundamental principles of Generative AI and LLMs. Understand how LLMs statistically learn the meaning and context of language through vast amounts of text data to generate natural sentences.

Section 2 - LLM Trends and Industry Analysis

We will examine the latest development trends in LLM technology and analyze the future strategic directions of LLMs within the global AI competition landscape. This will help provide a perspective on the present and future of LLM technology.

Section 3 - Fundamentals of LLM Operating Principles

Learn the fundamental operating principles of LLMs. Understand core concepts such as tokens, embeddings, vector space, and context windows, and learn about key output control parameters.
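To make "output control parameters" concrete, here is an illustrative sketch (not the course's own code, and the logits are made-up numbers) of how the temperature parameter reshapes a model's next-token probabilities: values below 1 sharpen the distribution toward the top token, while values above 1 flatten it.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to probabilities; temperature < 1 sharpens, > 1 flattens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                           # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                      # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.5)  # low temperature: more deterministic
hot = softmax_with_temperature(logits, 2.0)   # high temperature: more diverse
print(max(cold), max(hot))                    # the top token dominates more when cold
```

The same idea underlies parameters like top-p and top-k: they all decide how much of this probability mass the model is allowed to sample from.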

Section 4 - Prompt Engineering Techniques

This covers the basic concepts and advanced techniques of prompt engineering to maximize LLM performance. You will cultivate effective prompt writing skills by learning various patterns such as Zero-shot, Few-shot, and Chain-of-Thought (CoT).
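As a taste of the few-shot pattern covered here, the sketch below (a hypothetical sentiment-labeling task, not material from the course) shows the mechanical part of few-shot prompting: labeled examples are placed before the new input so the model can infer the task format.

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: labeled examples first, then the new input."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")  # left open for the model to complete
    return "\n\n".join(lines)

examples = [
    ("The plot was gripping.", "positive"),
    ("I fell asleep halfway through.", "negative"),
]
prompt = build_few_shot_prompt(examples, "A masterpiece of pacing.")
print(prompt)
```

Zero-shot would send only the final query; Chain-of-Thought would additionally show worked reasoning steps inside each example.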

Section 5 - Supplementing LLM Limitations through RAG

Learn the RAG (Retrieval-Augmented Generation) architecture to overcome LLM limitations such as hallucinations and the lack of up-to-date information. Understand the core elements of RAG, including embeddings and vector databases, and grasp the overall operational workflow.
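The retrieve-then-augment workflow described here can be sketched in miniature. This is a toy illustration under strong assumptions: the "vector database" is a dict of hand-written 3-dimensional vectors standing in for real embeddings, and the final LLM call is omitted.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "vector database": pre-computed embeddings for document chunks.
docs = {
    "Refund policy: returns accepted within 30 days.": [0.9, 0.1, 0.0],
    "Shipping takes 3-5 business days.":               [0.1, 0.9, 0.0],
}
query_vec = [0.8, 0.2, 0.0]  # pretend embedding of "How do I get a refund?"

# Retrieve: rank chunks by similarity to the query and keep the best one.
best = max(docs, key=lambda d: cosine(docs[d], query_vec))

# Augment: ground the generation step in the retrieved context.
prompt = f"Answer using only this context:\n{best}\n\nQuestion: How do I get a refund?"
print(best)
```

Because the answer is grounded in retrieved text rather than the model's parameters alone, RAG reduces hallucinations and lets the knowledge base be updated without retraining.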

Section 6 - RAG Performance Improvement Strategies

Learn the metrics and methodologies for evaluating the accuracy of RAG systems and explore practical performance improvement techniques. Through this, seek ways to enhance the efficiency and reliability of RAG systems.

Section 7 - Fine-tuning and Parameter-Efficient Fine-Tuning Strategies

Learn the basic concepts and application strategies of fine-tuning to adapt LLMs to specific tasks or domains. Additionally, acquire efficient model tuning methods through parameter-efficient fine-tuning (PEFT) techniques.
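The core idea behind LoRA, the best-known PEFT technique, can be shown with toy numbers (this is an illustrative sketch of the math, not the course's code): the pretrained weight W stays frozen, and only a low-rank update B·A is trained.

```python
# LoRA: W_eff = W + (alpha / r) * B @ A, where B is d x r and A is r x d, with r << d.

def matmul(X, Y):
    """Plain-Python matrix multiply for the toy example."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d, r, alpha = 4, 1, 2.0
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen weight
B = [[0.5] for _ in range(d)]   # d x r trainable matrix
A = [[0.1, 0.2, 0.3, 0.4]]      # r x d trainable matrix

delta = matmul(B, A)            # rank-1 update, only 2*d*r trainable numbers
W_eff = [[W[i][j] + (alpha / r) * delta[i][j] for j in range(d)] for i in range(d)]

# Trainable parameters: 2*d*r = 8, versus d*d = 16 for full fine-tuning;
# at real model scale (d in the thousands) the savings are dramatic.
print(W_eff[0][0])
```

This is why PEFT makes domain adaptation feasible on modest hardware: gradients are computed only for A and B, not for the full weight matrix.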

Section 8 - Understanding and Utilizing LLM Agents

Understand the concept and structure of agents and explore various types of LLM agents. Learn through specific cases how agents can be applied to actual business tasks and services.
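The difference between an agent and simple automation is the loop: the model chooses an action, observes the result, and decides again. Below is a minimal sketch of that loop, with a hard-coded stand-in function (`toy_llm`) playing the role of the model; everything here is hypothetical scaffolding, not a real agent framework.

```python
def toy_llm(history):
    """Stand-in for the LLM 'policy': returns a tool call or a final answer."""
    if "weather:" not in history:
        return ("call", "weather", "Seoul")
    return ("answer", "It is sunny in Seoul.")

tools = {"weather": lambda city: f"weather: sunny in {city}"}

history = "User: What's the weather in Seoul?"
for _ in range(5):                        # cap iterations to avoid infinite loops
    step = toy_llm(history)
    if step[0] == "answer":
        final = step[1]
        break
    _, tool, arg = step
    history += "\n" + tools[tool](arg)    # observation is appended to the context
print(final)
```

A fixed script would always call the same tools in the same order; the agent's defining trait is that the model itself decides, at each step, whether to act again or answer.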

Section 9 - Latest Theories on MCP and A2A

We compare and analyze the concepts, operating principles, structures, and application methods of MCP (Model Context Protocol) and A2A (Agent-to-Agent), the latest protocols for multi-agent systems. Through this, you will gain an understanding of advanced agent system design.

From theory to practice


Point 1. Understand Core LLM Principles Without Formulas

Do you use ChatGPT but wonder how it works? This course provides a clear, structure-oriented explanation of how LLMs operate without using complex formulas. You will grasp the fundamental principles behind why hallucinations occur and why prompts, RAG, fine-tuning, and agents are necessary.


Point 2. Easily understand the principles of LLMs

This course focuses on understanding the cognitive structure of how LLMs operate, rather than simple tool usage.

Through this, you will gain a theoretical foundation to understand and explain concepts appearing in AI-related meetings or planning documents without confusion. Furthermore, by understanding the structural limitations of LLMs, such as hallucinations, recency issues, and context limits, you will be able to judge what to expect from them and what not to expect.

Point 3. Theoretical framework for understanding RAG, Fine-tuning, and Agents

Theoretically understand the core components and overall workflow of the RAG architecture, which emerged to compensate for the structural limitations of LLMs. The course also provides a structural explanation of which problems fine-tuning and parameter-efficient fine-tuning techniques (PEFT, LoRA) were designed to solve and when they become the appropriate choice. Regarding Agents, the focus is not on "how to build them," but on why they are necessary, their internal structure, and how they differ from simple automation.


Point 4. Developing a perspective to understand AI

The goal is to move beyond the stage of 'simply using AI' and develop a perspective for understanding AI structurally. From core LLM concepts like tokens, embeddings, and context windows, to the latest multi-agent theories such as MCP and A2A, we summarize the content with a focus on why these structures emerged.


"I use ChatGPT, but I don't know why it works the way it does."
This lecture was created specifically for people like you.


✔️ Beginners who want to understand the basic principles of LLMs

  • Those who want to structurally understand why LLMs cause hallucinations.

  • Those who want to know why Prompts, RAG, Fine-tuning, and Agents are necessary and how they work

  • Those who want to fundamentally understand how AI works, going beyond just learning how to use the tools.

✔️ Planners/practitioners who want to accurately explain LLM concepts in AI-related meetings or planning documents

  • Those who want to clearly explain the core principles of LLM (tokens, embeddings, context windows)

  • Those who want to understand the latest LLM technology trends, such as RAG, fine-tuning, and agents

  • Those who want to explore realistic utilization plans by considering the limitations of LLMs when planning AI services and establishing strategies.

✔️ Working developers/data analysts who want to effectively apply LLM to their work

  • Those who want to structurally understand the LLM's reasoning and answer generation process to apply it to development.

  • Those who want to establish criteria for determining which approach (Prompting, RAG, Fine-tuning, Agents) is most suitable for solving a problem.

  • Those who want to design strategies to overcome the limitations of LLMs (hallucinations, recency, context) and apply them to real-world services.


Stop using AI like a 'black box.'
Become an expert who understands the inner workings of LLMs.

Notes before taking the course

  • This lecture is a theory-oriented course focused on understanding the structure and operating principles of LLMs (Large Language Models).

    • Hands-on practice is a supplementary means to help understand the concepts.

    • It does not aim to teach how to use specific AI tools or practical automation.

Prerequisites and Important Notes

  • It is helpful to have a basic understanding of core concepts such as LLM, Transformer, and Self-Attention.

  • Familiarity with related terms such as tokens, context windows, and embeddings will be helpful for learning.

  • Curiosity and a willingness to learn about how AI LLMs work are important.



Recommended for these people

What you will learn

  • Core principles of LLM, Transformer, and Self-Attention

  • Summary of essential concepts including tokens, context windows, and embeddings

  • Core Prompt Engineering Techniques (Zero-shot, Few-shot, CoT)

  • The overall structure of RAG and methods for improving accuracy

  • Differences between Fine-tuning and RAG, and selection criteria

  • AI Agent Architecture and Real-World Use Cases

  • Latest trends in multi-agent theory, such as MCP and A2A

Who is this course right for?

  • Those who use ChatGPT but are always confused by the concept of LLMs

  • Planners and PMs who don't understand what's being said in meetings when RAG and Agents are mentioned

  • Working-level professionals considering AI adoption or utilization strategies

  • Those who are not developers but want to properly understand AI LLMs.

  • Learners looking for an "Introductory Lecture on Basic LLM Theory"

Hello
This is HappyAI

4,692

Learners

249

Reviews

51

Answers

4.6

Rating

11

Courses

Lee JinKyu

AI·LLM·Big Data Analysis Expert / CEO of Happy AI

👉You can check the detailed profile at the link below.
https://bit.ly/jinkyu-profile

Hello.
I am Lee JinKyu (Ph.D. in Engineering, Artificial Intelligence), CEO of Happy AI. I have worked consistently on AI and big data analysis across R&D, education, and field projects.

I have analyzed various types of unstructured data, such as
surveys, documents, reviews, media, policies, and academic data,
based on Natural Language Processing (NLP) and text mining.
Recently, I have been delivering practical AI application methods tailored to organizations and work environments
using Generative AI and Large Language Models (LLM).

I have collaborated with numerous public institutions, corporations, and educational organizations, such as Samsung Electronics, Seoul National University, the Office of Education, Gyeonggi Research Institute, the Korea Forest Service, the Korea National Park Service, and the Seoul Metropolitan Government, and have conducted more than 200 research and analysis projects across domains including healthcare, commerce, ecology, law, economics, and culture.

 


🎒 Inquiries for Lectures and Outsourcing

Kmong Prime Expert (Top 2%)


📘 Bio (Summary)

  • 2024.07 ~ Present
    CEO of Happy AI, a company specializing in Generative AI and Big Data analysis

  • Ph.D. in Engineering (Artificial Intelligence)
    Dongguk University Graduate School of AI
    Detailed Major: Large Language Models (LLM)
    (2022.03 ~ 2026.02)

  • 2023 ~ 2025
    Public News AI Columnist
    (Generative AI Bias, RAG, LLM Application Issues)

  • 2021 ~ 2023
    AI & Big Data specialized company Stellavision Developer

  • 2018 ~ 2021
    Government-funded Research Institute Natural Language Processing & Big Data Analysis Researcher


🔹 Areas of Expertise (Lecture & Project Focused)

  • Generative AI and LLM Utilization

    • Private LLM, RAG, Agent

    • Basics of LoRA and QLoRA Fine-tuning

  • AI-based Big Data Analysis

    • Survey, review, media, policy, and academic data

  • Natural Language Processing (NLP) · Text Mining

    • Topic analysis, sentiment analysis, keyword network

  • Public and Corporate AI Task Automation

    • Document summarization, classification, and analysis

       


🎒 Courses & Activities (Selected)

2025

  • LLM/sLLM Application Development
    (Fine-tuning, RAG, Agent-based) – KT

2024

  • LangChain·RAG-based LLM Programming – Samsung SDS

  • LLM Theory and RAG Chatbot Development Practice – Seoul Digital Foundation

  • Introduction to ChatGPT-based Big Data Analysis – LetUin Edu

  • AI Fundamentals & Prompt Engineering Techniques – Korea Vocational Development Institute

  • LDA & Sentiment Analysis with ChatGPT – Inflearn

  • Python-based Text Analysis – Seoul National University of Science and Technology

  • Building LLM Chatbots Using LangChain – Inflearn

2023

  • Python Basics using ChatGPT – Kyonggi University

  • Big Data Expert Course Special Lecture – Dankook University

  • Fundamentals of Big Data Analysis – LetUin Edu


💻 Projects (Summary)

  • Building a Private LLM-based RAG Chatbot (Korea Electric Power Corporation)

  • LLM-based Forest Restoration Big Data Analysis (National Institute of Forest Science)

  • Internal Network Private LLM Text Mining Solution (Government Agency)

  • LLM Model Development based on Instruction Tuning and RLHF

  • Healthcare, Law, Policy, and Education Data Analysis

  • AI Analysis of Survey, Review, and Media Data

Performed over 200 cases, including public institutions, corporations, and research institutes


📖 Publication (Selected)

  • Improving Commonsense Bias Classification by Mitigating the Influence of Demographic Terms (2024)

  • Improving Generation of Sentiment Commonsense by Bias Mitigation
    – International Conference on Big Data and Smart Computing (2023)

  • Analysis of Perceptions of LLM Technology Based on News Article Big Data (2024)

  • Numerous NLP-based text mining studies
    (Forestry, Environment, Society, and Healthcare sectors)


🔹 Others

  • Python-based data analysis and visualization

  • Data analysis using LLM

  • Improving work productivity using ChatGPT, LangChain, and Agents


Curriculum

27 lectures ∙ (1hr 31min)

Reviews

4.3 · 12 reviews

  • 20007013611 · 5 · 100% enrolled
    It was a great introductory session for listening to the latest key AI concepts.

  • smilecat0128194 · 4 · 33% enrolled

  • jhjun809 · 5 · 100% enrolled
    I had zero knowledge about LLMs, but now I have a rough idea of what they are. Haha, thank you!

  • 155809234 · 5 · 33% enrolled

  • bhsbhs2351964 · 5 · 33% enrolled

Price: $18.70