Getting Started with Custom LLM Creation – An Introduction to LoRA & QLoRA Fine-tuning

"Your first step to creating a customized LLM with LoRA-based lightweight fine-tuning!" This is an introductory hands-on course designed so that even those new to LLMs can easily follow along. We minimize complex theory and guide you step-by-step through the entire process: loading the model → applying data → training → comparing results. In a short time, you'll directly experience the workflow of cutting-edge lightweight fine-tuning techniques like LoRA and QLoRA, and gain an intuitive understanding of "how LLM fine-tuning works." Even without extensive resources, experience the satisfaction of creating an LLM specialized for your domain!

(4.7) 33 reviews

288 learners

Level Beginner

Course period Unlimited

  • HappyAI
Tags: Fine-tuning · Custom LLM · Domain-specific · LLM Performance Evaluation & Tuning · LLM · Deep Learning (DL) · NLP · AI


What you will gain after the course

  • You can easily understand what fine-tuning is and why LoRA and QLoRA are necessary.

  • You will experience the process of running prepared code, directly loading a small language model (sLLM), and training it.

  • Learn the process of creating a customized LLM tailored to your field without requiring extensive resources or complex theory.

"LLM Fine-tuning, Where Should I Start?"

This course is designed for beginners to quickly learn the concept and overall flow of fine-tuning and practice it hands-on.

We've boldly trimmed the complex math and advanced theory, so you can work step by step from loading the model → training → comparing results and get a hands-on feel for how fine-tuning actually works.

Best of all, the course is short and concise, 22 lectures totaling about one hour, so even first-timers can follow along without feeling overwhelmed.

👉 For reference, dataset construction methods and advanced Hugging Face/Unsloth usage go beyond the scope of this course and will be covered in future advanced courses. This course therefore focuses on giving beginners the overall picture along with a sense of accomplishment.

Features of This Course

📌Complete Mastery of the Latest Lightweight Fine-tuning Techniques

We explain the latest techniques such as LoRA, QLoRA, and PEFT step by step from the basics.
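To give a feel for the core idea before the lectures: in LoRA the pretrained weight stays frozen and only a small low-rank pair of matrices is trained. The sketch below is our own minimal NumPy illustration (not the course's practice code, and the dimensions are just typical assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 768, 768, 8            # typical hidden size, small LoRA rank

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight (never updated)
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, initialized to zero

def lora_forward(x, alpha=16):
    # base output plus the scaled low-rank update (B @ A)
    return x @ W.T + (x @ A.T @ B.T) * (alpha / r)

x = rng.normal(size=(2, d_in))
y = lora_forward(x)

# because B starts at zero, training begins exactly at the pretrained model
assert np.allclose(y, x @ W.T)

trainable = A.size + B.size             # 12,288 trainable values
frozen = W.size                         # 589,824 frozen values
print(f"trainable fraction: {trainable / frozen:.4%}")
```

QLoRA keeps the same adapter structure but additionally stores the frozen base weights in 4-bit precision to cut memory further.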

📌Hands-on Practice with Various Models

We'll apply the techniques hands-on to a range of models, from classics like GPT-2 and BERT to more recent ones such as OPT-350M and Llama 3.1.

📌Includes performance comparison analysis

You'll directly compare the performance of Full Fine-tuning and LoRA methods to understand the differences.
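For intuition before the experiments, here is some back-of-envelope arithmetic on why LoRA trains so much less than full fine-tuning. The dimensions are illustrative figures we've assumed (roughly OPT-350M scale), not measured course results:

```python
# Rough, assumed dimensions for an OPT-350M-scale transformer (illustration only)
hidden, layers, r = 1024, 24, 8

# full fine-tuning updates all four attention projections (q, k, v, o) in every layer
full_attn = layers * 4 * hidden * hidden

# LoRA instead adds A (r x hidden) and B (hidden x r) per projection and trains only those
lora_attn = layers * 4 * (2 * r * hidden)

ratio = lora_attn / full_attn           # simplifies to 2r / hidden
print(f"LoRA trains {ratio:.2%} of the attention weights")  # 1.56% here
```

The ratio depends only on the rank and hidden size, which is why even large models stay trainable on a single GPU.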

📌Beginner-friendly structure

We provide a step-by-step learning flow so that even beginners can easily follow along.

📌Hands-on Practice-Focused Course

Develop a sense for LLM fine-tuning through hands-on practice using Hugging Face and PyTorch.

📌LLM Fine-tuning at a Glance

No filler, just the essentials: you can grasp the entire flow in about an hour.

This course is recommended for

  • Junior developers who have used ChatGPT but want to tune a model with their own data

  • Beginner AI developers who want to understand LLMs through hands-on practice

  • Anyone who wants to grasp the big picture of LLM fine-tuning in a short time: curious about techniques like LoRA and QLoRA but put off by heavy theory, and looking for an easy first step before moving on to advanced courses

  • LLM beginners who have heard of LoRA and QLoRA but aren't sure what they are: you can pick up the concept of lightweight fine-tuning naturally through hands-on practice, without complex theory

After completing this course

  • You can understand the structure and concept of LoRA and apply it directly to various models.

  • You will gain the complete workflow and hands-on experience to build an LLM specialized for your field.

  • You'll be able to directly compare fine-tuning results and develop the insight to independently select appropriate strategies.

  • You'll get a feel for developing an LLM built on your own domain knowledge.


Here's what you'll learn.

Understanding the Basic Concepts of Fine-tuning

  • You'll learn the principles of fine-tuning to optimize pre-trained LLMs for your domain.
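
To make "optimize a pretrained model for your domain" concrete, here is a toy NumPy sketch (our own illustration, not the course's code): gradient descent updates only the low-rank adapter factors while the base weight stays frozen, and the loss on domain-specific targets drops.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 16, 2, 8
W = rng.normal(size=(d, d))              # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.1        # trainable adapter factors
B = np.zeros((d, r))

# pretend the "domain" shifts the true weight by a small rank-1 update
delta = np.outer(rng.normal(size=d), rng.normal(size=d)) * 0.1
x = rng.normal(size=(n, d))
y = x @ (W + delta).T                    # domain-specific targets

def loss():
    err = x @ (W + B @ A).T - y
    return (err ** 2).mean()

before = loss()
lr = 0.01
for _ in range(500):
    err = x @ (W + B @ A).T - y          # full-batch residual
    gB = err.T @ (x @ A.T) / n           # gradient w.r.t. B only
    gA = B.T @ (err.T @ x) / n           # gradient w.r.t. A only
    B -= lr * gB
    A -= lr * gA                         # W itself is never touched

after = loss()
print(before, after)                     # adapter-only training reduces the loss
```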


Understanding LoRA & QLoRA Architecture

  • Understand the core principles of lightweight fine-tuning easily, without complex theory.
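
A hedged sketch of the extra idea QLoRA adds: the frozen base weights are stored in 4-bit form while the LoRA factors keep training in full precision on top of the dequantized base. Real QLoRA uses the NF4 data type with double quantization; the NumPy sketch below uses a cruder absmax scheme purely for intuition.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 64)).astype(np.float32)  # frozen base weight

# crude absmax 4-bit quantization: 16 integer levels in [-8, 7]
# (values fit in 4 bits; stored in int8 here only for simplicity)
scale = np.abs(W).max() / 7.0
q = np.clip(np.round(W / scale), -8, 7).astype(np.int8)

# at train/inference time the weight is dequantized back to float,
# and the LoRA update B @ A is applied on top in full precision
W_deq = q.astype(np.float32) * scale

err = np.abs(W - W_deq).max()
print(f"max dequantization error: {err:.4f} (step size {scale:.4f})")
assert err <= scale / 2 + 1e-6           # rounding error bounded by half a step
```

The payoff is memory: 4 bits per frozen weight instead of 16 or 32, which is what lets QLoRA fine-tune multi-billion-parameter models on a single consumer GPU.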


Full vs. LoRA Performance Comparison

  • Experience the performance differences between fine-tuning methods through hands-on experiments.


How to Use Huggingface + PyTorch

  • Learn the essential frameworks needed for actual LLM fine-tuning.

Who created this course

Hello, I'm Lee Jin-gyu, CEO of Happy AI,
passionate about hands-on generative AI and LLM fine-tuning.

I majored in Natural Language Processing and LLM at an AI graduate school, and since then
have carried out over 200 AI·RAG projects with Samsung Electronics, Seoul National University, Korea Electric Power Corporation, and others,
accumulating practical experience in Private LLM construction, fine-tuning, multimodal RAG, and more.

Recently, I have been conducting numerous
hands-on lectures on LangChain, RAG, and Agent LLM for various companies and public institutions.

This course draws on that practical experience and is designed
❝ so that even beginners can follow along with LoRA-based fine-tuning without complex theory ❞,
structured as hands-on learning where you work with the models directly.


📌 Key Career Summary

  • 2024~ CEO of HappyAI (Operating a Generative AI & RAG specialized company)

  • Completed PhD coursework in AI Graduate School (Major in LLM & Natural Language Processing)

  • Public News AI Columnist (serializing on LLM, bias issues, etc.)

  • Over 200 LLM·RAG projects of practical experience


📚 Lecture and Activity Examples

  • KT – LLM-based Agent LLM Development Lecture

  • Samsung SDS – LangChain & RAG Hands-on Training

  • Seoul Digital Foundation – LLM Theory and RAG Chatbot Development

In addition, I have conducted LLM and big data lectures at numerous other companies.



Notes Before Taking the Course

Practice Environment

  • All practice code is provided based on Google Colab


  • Reference documents and organized notes will be provided through links.

Learning Materials

  • Learning materials are provided through a Notion link!

Prerequisites and Important Notes

  • Basic Python syntax


  • Basic AI and LLM knowledge (it would be good if you know the fundamentals of LLM theory)

  • You can take the course with just a Chrome browser and a Google account

Who is this course right for?

  • LLM beginners who have heard of LLMs like ChatGPT but have never done fine-tuning themselves

  • Beginner developers and researchers who want to learn the basic workflow by directly running the latest techniques such as LoRA and QLoRA

  • Those who want hands-on experience running and lightly fine-tuning sLLMs (small language models) to understand the workflow

What you need to know before starting

  • Python basic syntax (variables, functions, conditional statements, etc.)

  • Basic Deep Learning Concepts (fundamental understanding of models, training, loss functions, etc.)

  • Experience with PyTorch or Colab would be helpful

Instructor profile: 4,335 learners · 218 reviews · 51 answers · 4.6 average rating · 10 courses

Hello, I'm Lee Jin-gyu of Happy AI, dedicated to AI and big data analysis.

[Instructor Profile]

Lee Jin-gyu (Lee JinKyu)

CEO of Happy AI

I deliver in-depth coverage of the latest trends, insights, and practical techniques in generative AI and big data analysis.

 

🎒 Lecture and Contract Inquiries

[email] leejinkyu0612@naver.com

[Blog] https://blog.naver.com/leejinkyu0612

[YouTube] 📺 https://www.youtube.com/@HappyAI_0612

[github] https://github.com/leejin-kyu/

[Homepage] https://happyaidata.kr

[H.P] 010-9973-2113

[kakao] jinkyu0612

 

📘 Kmong Prime Expert (top 2%) https://kmong.com/gig/345782

Projects for numerous government and educational institutions, including Samsung Electronics, Seoul National University, the Office of Education, Gyeonggi Research Institute, the Korea Forest Service, the Korea National Park Service, and the Seoul Metropolitan Government

Research experience across diverse domains including healthcare, commerce, ecology, law, economics, and the arts (over 200 research projects in total)

 

📘 Bio

- 2024.07~ CEO of Happy AI, a company specializing in generative AI and big data analysis

- 2023~ Public News AI columnist (specializing in AI bias and RAG chatbots)

- 2022 Completed PhD coursework at an AI graduate school (major in NLP and LLM)

- 2021~2023 Developer at Stella Vision, an AI/big data company

- 2018~2021 NLP/big data analysis researcher at a government-funded research institute (humanities and social science data)

 

🎒Courses & Activities

 

2025

LLM/sLLM application development: fine-tuning, RAG, and Agent-based. KT (2025)

 

2024

LLM programming including LangChain and RAG. Samsung SDS (2024)

Introduction to big data analysis with ChatGPT. LetUin Edu (2024)

Fundamentals of AI and data analysis. Korea Vocational Development Institute (2024)

LLM theory and LangChain-based RAG chatbot development for LLM practitioners. Seoul Digital Foundation (2024)

Easy-to-follow LDA & sentiment analysis big data methods with ChatGPT. Inflearn (2024)

Text analysis with Python. Seoul National University of Science and Technology (2024)

Building an LLM chatbot with LangChain (feat. ChatGPT). Inflearn (2024)

 

2023

Python basics with ChatGPT. Kyonggi University (2023)

Big data expert program special lecture. Dankook University (2023)

Big data analysis fundamentals. LetUin Edu (2023)

 

 

💻 Projects

LLM-based big data analysis of forest restoration (National Institute of Forest Science)

Private LLM-based RAG chatbot model construction (Korea Electric Power Corporation)

Survey data analysis applying AI-based big data techniques (government agency A)

Text mining solution development using an intranet-only Private LLM (government agency D)

Korean beef market trend analysis through big data (Ewha Brio)

LLM model development via Instruction Tuning and reinforcement learning (RLHF) (Seoul Digital Foundation)

User review text analysis for an AI language model-based healthcare service (Samsung Electronics)

Research trend analysis using NLP-based text mining (Korean Society for Atmospheric Environment)

Patent paper QA model development based on kopatBERT (Korea Technology Market)

Big data analysis of legal surveys using deep learning-based topic modeling (Seoul National University)

Survey item big data analysis applying Word2Vec and sentiment analysis (Gyeonggi Research Institute)

RNN-based review insight extraction and analysis program development (Circle Platform)

Keyword analysis of 2022 national park visits using big data (Korea National Park Service)

In addition, over 200 projects in total for public institutions, companies, and private clients

 

📖 Publications

[Selected Papers]

Improving Commonsense Bias Classification by Mitigating the Influence of Demographic Terms. 2024.

Improving Generation of Sentiment Commonsense by Bias Mitigation. International Conference on Big Data and Smart Computing. 2023.

Analysis of perceptions of large language models through big data analysis of news articles: before and after the emergence of ChatGPT. 2024.

Analysis of domestic and international research trends on pine trees (2001–2020) using NLP-based text mining. Journal of Agriculture & Life Science. 2022.

Ten years of media perception of forest trails: a text mining analysis. Journal of Forest Economics. 2021.

Additional academic papers, presentations, and research reports in other fields

Others

Data analysis and visualization with Python

Data analysis with LLMs

Boosting work productivity with ChatGPT, LangChain, and Agents

Curriculum


22 lectures ∙ (1hr 9min)


Reviews

4.7 · 33 reviews

  • jameshhjung8294 (1 review, avg rating 5.0) · rated 5 · 32% enrolled

  • mycode200 (9 reviews, avg rating 5.0) · rated 5 · 100% enrolled

  • eomhs231928 (2 reviews, avg rating 5.0) · rated 5 · 32% enrolled

  • rsj10014119 (1 review, avg rating 5.0) · rated 5 · 32% enrolled
    "It's fun"

  • bigdata013872 (1 review, avg rating 5.0) · rated 5 · 32% enrolled
    "As someone who has also given many lectures myself, you explain paper reviews and other topics really well. This is very helpful!! Thank you."

            $26.40
