Getting Started with Custom LLM Creation – An Introduction to LoRA & QLoRA Fine-tuning

"Your first step to creating a customized LLM with LoRA-based lightweight fine-tuning!" This is an introductory hands-on course designed so that even those new to LLMs can easily follow along. We minimize complex theory and guide you step-by-step through the entire process: loading the model → applying data → training → comparing results. In a short time, you'll directly experience the workflow of cutting-edge lightweight fine-tuning techniques like LoRA and QLoRA, and gain an intuitive understanding of "how LLM fine-tuning works." Even without extensive resources, experience the satisfaction of creating an LLM specialized for your domain!

(4.7) 35 reviews

294 learners

Level: Beginner

Course period: Unlimited

  • HappyAI

Deep Learning(DL)
NLP
AI
LLM
Fine-Tuning

Reviews from Early Learners

4.7

5.0

dlstjd6401

100% enrolled

"A taste of fine-tuning for beginners"
- Recommended for those who want to see what fine-tuning is all about
- Not recommended for working developers looking to apply fine-tuning directly

As a course for beginners, the content is easy and not too deep. The practice code is also very simple and concise.

5.0

ogu0312

32% enrolled

It's fun

5.0

bigdata01

32% enrolled

As someone who has taken many lectures in the past, I can say you explain paper reviews and other topics really well. This is very helpful!! Thank you.

What you will gain after the course

  • You can easily understand what fine-tuning is and why LoRA and QLoRA are necessary.

  • You will experience the process of running prepared code, directly loading a small language model (sLLM), and training it.

  • You will learn how to create a customized LLM tailored to your field without extensive resources or complex theory.

Large Language Models (LLM) Fine-tuning: Where Should I Start?

This course is designed for beginners to quickly learn the concept and overall flow of fine-tuning and practice it hands-on.

We've boldly trimmed complex math and advanced theory, so you can work step by step through loading the model → training → comparing results and get a real feel for how fine-tuning works.

Best of all, the course consists of just 22 lectures totaling about one hour, so even those encountering fine-tuning for the first time can follow along without feeling burdened.

👉 For reference, dataset construction methods and advanced usage of Hugging Face/Unsloth go beyond the scope of this course; they will be covered in future advanced courses. This course therefore focuses on helping beginners grasp the overall picture while gaining a sense of achievement.

Features of This Course

📌Complete Mastery of the Latest Lightweight Fine-tuning Techniques

We explain the latest techniques such as LoRA, QLoRA, and PEFT step by step from the basics.
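The core mechanism behind these techniques can be shown without any framework at all. Below is a minimal plain-Python sketch of the LoRA idea (illustrative toy matrices, not actual course code): the pre-trained weight matrix W stays frozen, and only two small matrices A and B are trained, with their low-rank product added to the output.

```python
# LoRA sketch: instead of updating a frozen weight matrix W (d x k),
# train two small matrices B (d x r) and A (r x k), r << d, k, and add
# their product as a scaled low-rank update:
#   h = W @ x + (alpha / r) * B @ (A @ x)

def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(row[i] * v[i] for i in range(len(v))) for row in m]

def lora_forward(W, A, B, x, alpha, r):
    base = matvec(W, x)                 # frozen pre-trained path
    low_rank = matvec(B, matvec(A, x))  # trainable LoRA path
    scale = alpha / r
    return [b + scale * l for b, l in zip(base, low_rank)]

# Toy example: d = k = 2, rank r = 1.
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen identity weight
A = [[1.0, 1.0]]               # r x k
B = [[0.5], [0.5]]             # d x r
x = [2.0, 4.0]

print(lora_forward(W, A, B, x, alpha=1.0, r=1))  # → [5.0, 7.0]
```

In practice a library such as PEFT handles this wiring for you; the sketch only shows why so few parameters need gradients.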

📌Hands-on Practice with Various Models

We'll apply techniques hands-on to everything from classic models like GPT-2 and BERT to the latest models including OPT-350M and Llama 3.1.
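To see why lightweight methods matter even for models of this size, here is some back-of-the-envelope memory arithmetic (illustrative figures assuming an fp16 baseline and a rough 350M parameter count; not exact numbers from the course). QLoRA keeps the frozen base weights in 4-bit precision, shrinking them about 4x before any adapters are added.

```python
# Approximate memory needed just to hold a model's weights,
# comparing fp16 (16 bits/param) with 4-bit quantization as used by QLoRA.

def weight_memory_mb(n_params, bits_per_param):
    return n_params * bits_per_param / 8 / 1024**2

n_params = 350_000_000          # OPT-350M, approximate parameter count
fp16_mb = weight_memory_mb(n_params, 16)
int4_mb = weight_memory_mb(n_params, 4)

print(f"fp16: {fp16_mb:.0f} MB, 4-bit: {int4_mb:.0f} MB")
# → fp16: 668 MB, 4-bit: 167 MB
```

This is the kind of saving that lets the practice sessions run on a free Colab GPU.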

📌Includes performance comparison analysis

You'll directly compare the performance of full fine-tuning and the LoRA method to understand the differences.
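As a rough illustration of what such a comparison involves, the trainable-parameter counts can be worked out directly (hypothetical layer sizes, not results from the course): full fine-tuning updates every weight in a d × k layer, while LoRA trains only A (r × k) and B (d × r).

```python
# Trainable-parameter count: full fine-tuning vs. a rank-r LoRA adapter
# on a single d x k weight matrix.

def full_params(d, k):
    return d * k

def lora_params(d, k, r):
    return r * (d + k)

d = k = 768   # hidden size of a GPT-2-like layer (illustrative)
r = 8         # a commonly used LoRA rank
full = full_params(d, k)
lora = lora_params(d, k, r)

print(f"full: {full:,}  lora: {lora:,}  ratio: {lora / full:.2%}")
# → full: 589,824  lora: 12,288  ratio: 2.08%
```

Training roughly 2% of the weights per layer is what makes LoRA feasible on modest hardware.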

📌Beginner-friendly structure

We provide a step-by-step learning flow so that even beginners can easily follow along.

📌Hands-on Practice-Focused Course

Develop a sense for LLM fine-tuning through hands-on practice using Hugging Face and PyTorch.

📌LLM Fine-tuning at a Glance

Without unnecessary extras, containing only the essentials, you can grasp the entire flow in about 1 hour.

This course is recommended for

Junior developers who have used ChatGPT but want to tune it directly with their own data

AI beginner developers who want to understand LLMs through hands-on practice

Those who want to grasp the big picture of LLM fine-tuning in a short time

Those who are curious about cutting-edge techniques like LoRA·QLoRA but find difficult theory overwhelming

Those who want an easy first step before moving on to advanced courses

LLM beginners who have heard of LoRA and QLoRA but don't know what they are
You can naturally learn the concept of lightweight fine-tuning through hands-on practice without complex theory.

After completing this course

  • You can understand the structure and concept of LoRA and apply it directly to various models.

  • You will gain the complete workflow and hands-on experience to build an LLM specialized for your field.

  • You'll be able to directly compare fine-tuning results and develop the insight to independently select appropriate strategies.

  • You will be able to get a sense of developing an LLM based on your own domain knowledge.


Here's what you'll learn.

Understanding the Basic Concepts of Fine-tuning

  • You'll learn the principles of fine-tuning to optimize pre-trained LLMs for your domain.


Understanding LoRA & QLoRA Architecture

  • Understand the core principles of lightweight fine-tuning easily, without complex theory.


Full vs. LoRA Performance Comparison

  • Experience the performance differences between fine-tuning methods through hands-on experiments.


How to Use Hugging Face + PyTorch

  • Learn the essential frameworks needed for actual LLM fine-tuning.

Who created this course

Hello, I'm Lee JinKyu, CEO of Happy AI,
passionate about generative AI and hands-on LLM fine-tuning.

I majored in Natural Language Processing and LLM at an AI graduate school, and since then
have carried out over 200 AI·RAG projects with Samsung Electronics, Seoul National University, Korea Electric Power Corporation, and others,
accumulating practical experience in Private LLM construction, fine-tuning, multimodal RAG, and more.

Recently, I have been conducting numerous
hands-on lectures on LangChain, RAG, and Agent LLM for various companies and public institutions.

Drawing on that practical experience, this course is designed
❝ so that even beginners can follow along with LoRA-based fine-tuning without complex theory ❞
and is structured as hands-on learning by working directly with models.


📌 Key Career Summary

  • 2024~ CEO of HappyAI (Operating a Generative AI & RAG specialized company)

  • Completed PhD coursework in AI Graduate School (Major in LLM & Natural Language Processing)

  • Public News AI Columnist (serializing on LLM, bias issues, etc.)

  • Over 200 LLM·RAG projects of practical experience


📚 Lecture and Activity Examples

  • KT – LLM-based Agent LLM Development Lecture

  • Samsung SDS – LangChain & RAG Hands-on Training

  • Seoul Digital Foundation – LLM Theory and RAG Chatbot Development

In addition, I have conducted LLM and big data lectures at numerous companies.



Notes Before Taking the Course

Practice Environment

  • All practice code is provided based on Google Colab


  • Reference documents and organized notes will be provided through links.

Learning Materials

  • Materials will be provided through a Notion link!

Prerequisites and Important Notes

  • Basic Python syntax


  • Basic AI and LLM knowledge (it would be good if you know the fundamentals of LLM theory)

  • You can take the course with just a Chrome browser and a Google account

Who is this course right for?

  • LLM beginners who have heard of LLMs like ChatGPT but have never done fine-tuning themselves

  • Beginner developers and researchers who want to learn the basic workflow by directly running the latest techniques such as LoRA and QLoRA

  • Those who want hands-on experience running and lightly fine-tuning sLLMs (small language models) to understand the workflow

What do you need to know before starting?

  • Python basic syntax (variables, functions, conditional statements, etc.)

  • Basic Deep Learning Concepts (fundamental understanding of models, training, loss functions, etc.)

  • Experience with PyTorch or Colab would be helpful
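The deep-learning prerequisites above boil down to being comfortable with the idea of a training loop: a model, a loss, and repeated gradient updates. A toy plain-Python sketch of gradient descent on a one-parameter model (the course itself uses PyTorch; this only shows the concept):

```python
# Toy training loop: fit y = w * x by minimizing mean squared error
# with plain gradient descent. No framework needed for the idea itself.

def train(xs, ys, lr=0.1, epochs=50):
    w = 0.0                                  # the "model": y_hat = w * x
    for _ in range(epochs):
        # gradient of mean((w*x - y)^2) with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad                       # gradient descent step
    return w

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]                          # true relation: y = 2x
w = train(xs, ys)
print(round(w, 3))  # → 2.0
```

If this loop makes sense to you, the PyTorch version used in the practice sessions will feel familiar.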

4,489 learners ∙ 225 reviews ∙ 51 answers ∙ 4.6 rating ∙ 11 courses

Lee JinKyu

AI·LLM·Big Data Analysis Expert / CEO of Happy AI

👉You can check the detailed profile at the link below.
https://bit.ly/jinkyu-profile

Hello.
I am Lee JinKyu (Ph.D. in Engineering, Artificial Intelligence), CEO of Happy AI, who has consistently worked with AI and big data analysis across R&D, education, and project sites.

Based on Natural Language Processing (NLP) and text mining, I have analyzed a wide range of unstructured data, including surveys, documents, reviews, media, policy, and academic data. Recently, I have been delivering practical AI application methods tailored to organizations and work environments using Generative AI and Large Language Models (LLMs).

I have collaborated with numerous public institutions, corporations, and educational organizations, including Samsung Electronics, Seoul National University, Offices of Education, Gyeonggi Research Institute, Korea Forest Service, and Korea National Park Service, and have conducted over 200 research and analysis projects across domains such as healthcare, commerce, ecology, law, economics, and culture.


🎒 Inquiries for Lectures and Outsourcing

Kmong Prime Expert (Top 2%)


📘 Bio (Summary)

  • 2024.07 ~ Present
    CEO of Happy AI, a company specializing in Generative AI and Big Data analysis

  • Ph.D. in Engineering (Artificial Intelligence)
    Dongguk University Graduate School of AI

    Major: Large Language Models (LLM) (2022.03 ~ 2026.02)

  • 2023 ~ 2025
    Public News AI Columnist
    (Generative AI Bias, RAG, LLM Utilization Issues)

  • 2021 ~ 2023
    Developer at Stellavision, an AI and Big Data company

  • 2018 ~ 2021
    Government-funded Research Institute NLP & Big Data Analysis Researcher


🔹 Areas of Expertise (Lecture & Project Focused)

  • Generative AI and LLM Utilization

    • Private LLM, RAG, Agent

    • Basics of LoRA and QLoRA Fine-tuning

  • AI-based Big Data Analysis

    • Survey, review, media, policy, and academic data

  • Natural Language Processing (NLP) & Text Mining

    • Topic Analysis, Sentiment Analysis, Keyword Networks

  • Public/Corporate AI Task Automation

    • Document Summarization, Classification, and Analysis



🎒 Courses & Activities (Selected)

2025

  • LLM/sLLM Application Development
    (Fine-tuning, RAG, and Agent-based) – KT

2024

  • LangChain·RAG-based LLM Programming – Samsung SDS

  • LLM Theory and RAG Chatbot Development Practice – Seoul Digital Foundation

  • Introduction to Big Data Analysis based on ChatGPT – LetUin Edu

  • AI Fundamentals & Prompt Engineering Techniques – Korea Vocational Development Institute

  • LDA & Sentiment Analysis with ChatGPT – Inflearn

  • Python-based Text Analysis – Seoul National University of Science and Technology

  • Building LLM Chatbots with LangChain – Inflearn

2023

  • Python Basics using ChatGPT – Kyonggi University

  • Big Data Expert Course Special Lecture – Dankook University

  • Fundamentals of Big Data Analysis – LetUin Edu


💻 Projects (Summary)

  • Building a Private LLM-based RAG Chatbot (Korea Electric Power Corporation)

  • LLM-based Big Data Analysis for Forest Restoration (National Institute of Forest Science)

  • Private LLM Text Mining Solution for Internal Networks (Government Agency)

  • Instruction Tuning and RLHF-based LLM Model Development

  • Healthcare, Law, Policy, and Education Data Analysis

  • AI Analysis of Survey, Review, and Media Data

Over 200 projects completed, including public institutions, corporations, and research institutes


📖 Publication (Selected)

  • Improving Commonsense Bias Classification by Mitigating the Influence of Demographic Terms (2024)

  • Improving Generation of Sentiment Commonsense by Bias Mitigation
    – International Conference on Big Data and Smart Computing (2023)

  • Analysis of Perceptions of LLM Technology Based on News Big Data (2024)

  • Numerous NLP-based text mining studies
    (Forestry, Environment, Society, and Healthcare sectors)


🔹 Others

  • Python-based data analysis and visualization

  • Data Analysis Using LLM

  • Improving work productivity using ChatGPT, LangChain, and Agents

Curriculum

22 lectures ∙ (1hr 9min)

Course Materials: Lecture resources

Reviews

4.7 ∙ 35 reviews

  • dlstjd64012541

    Reviews 1 ∙ Average Rating 5.0 ∙ Edited

    5 ∙ 100% enrolled

    "A taste of fine-tuning for beginners"
    - Recommended for those who want to see what fine-tuning is all about
    - Not recommended for working developers looking to apply fine-tuning directly

    As a course for beginners, the content is easy and not too deep. The practice code is also very simple and concise.

  • motovlim

    Reviews 3 ∙ Average Rating 5.0

    5 ∙ 64% enrolled

  • jameshhjung8294

    Reviews 2 ∙ Average Rating 5.0

    5 ∙ 32% enrolled

  • mycode200

    Reviews 9 ∙ Average Rating 5.0

    5 ∙ 100% enrolled

  • eomhs231928

    Reviews 2 ∙ Average Rating 5.0

    5 ∙ 32% enrolled

            $17.60
