<Building and Learning LLM from Scratch> Code Commentary

This is a code commentary course for <Building and Learning LLM from Scratch> (Gilbut, 2025). The code can be found on GitHub (https://github.com/rickiepark/llm-from-scratch/). <Building and Learning LLM from Scratch> is the Korean translation of Sebastian Raschka's bestseller <Build a Large Language Model (from Scratch)> (Manning, 2024). The book teaches the operating principles of large language models by walking you through building a complete model from scratch, modeled on OpenAI's GPT-2.

5 learners are taking this course

  • haesunpark
Hands-on
LLM
Pre-training
Fine-tuning
Large Language Model
PyTorch
gpt-2
transformer

What you will learn!

  • Implement a complete LLM from scratch, directly in code.

  • Learn the core components that make up LLMs, including transformers and attention mechanisms (a short sketch follows this list).

  • Learn how to pre-train LLMs similar to GPT.

  • Learn how to fine-tune LLMs for classification.

  • Learn how to fine-tune LLMs to follow human instructions.
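To give a feel for the level of code the course walks through, here is a minimal sketch of the causal self-attention computation at the heart of GPT-style models. It is an illustrative example written for this page, not the book's or the course's exact code; the class name and dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class CausalSelfAttention(nn.Module):
    """Single-head causal self-attention (an illustrative sketch, not the book's code)."""

    def __init__(self, d_in, d_out, context_length):
        super().__init__()
        self.W_q = nn.Linear(d_in, d_out, bias=False)
        self.W_k = nn.Linear(d_in, d_out, bias=False)
        self.W_v = nn.Linear(d_in, d_out, bias=False)
        # Upper-triangular mask: position i may not attend to positions j > i.
        mask = torch.triu(torch.ones(context_length, context_length), diagonal=1)
        self.register_buffer("mask", mask.bool())

    def forward(self, x):  # x: (batch, seq_len, d_in)
        q, k, v = self.W_q(x), self.W_k(x), self.W_v(x)
        scores = q @ k.transpose(1, 2) / k.shape[-1] ** 0.5   # scaled dot products
        n = x.shape[1]
        scores = scores.masked_fill(self.mask[:n, :n], float("-inf"))
        weights = torch.softmax(scores, dim=-1)                # attention weights
        return weights @ v                                     # (batch, seq_len, d_out)


torch.manual_seed(0)
x = torch.randn(2, 6, 16)                                      # 2 toy sequences of 6 tokens
attn = CausalSelfAttention(d_in=16, d_out=16, context_length=6)
print(attn(x).shape)                                           # torch.Size([2, 6, 16])
```

In the book, this mechanism is built up step by step and extended into multi-head attention.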

This course explains the example code that accompanies the book. The GitHub repository (https://github.com/rickiepark/llm-from-scratch/) contains not only the book's example code but also various bonus materials, and the course covers those additional materials as well.

You can take this course without purchasing the book, but it is most effective when taken together with the book: some code explanations may be hard to follow without it. The only prerequisite is Python programming. Experience with deep learning and PyTorch will help; if you are encountering them for the first time, please read Appendix A of the book first.
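If you want to confirm your environment is ready before starting, a quick sanity check along these lines is enough (an illustrative snippet, not part of the course materials; any recent PyTorch release should do):

```python
import torch

print(torch.__version__)            # any recent PyTorch release should work
print(torch.cuda.is_available())    # a GPU is optional; the book's code also runs on CPU

# A couple of tensor operations at the level Appendix A assumes you can follow.
a = torch.tensor([[1., 2.], [3., 4.]])
print(a @ a.T)                      # matrix multiplication
print(a.mean(dim=0))                # column-wise mean
```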

The lectures covering the content of <Building and Learning LLM from Scratch> can be viewed for free on YouTube. Please see the translator's blog for errata.

Book Introduction

Follow the code line by line, and you'll have your own GPT!
A practical guide to implementing GPT from scratch and mastering LLM principles hands-on

Difficult concepts are explained with illustrations, and you learn LLMs by building them yourself. This book is a hands-on introduction that teaches the structure and operating principles of large language models by implementing them from start to finish. Rather than simply explaining concepts, it begins with text preprocessing, tokenization, and embeddings, then builds up self-attention, multi-head attention, and transformer blocks step by step. It then integrates these components into a working GPT model, directly handling core elements of modern architecture design along the way: model parameter counts, training stabilization techniques, activation functions, and normalization methods.

The book also covers pre-training and fine-tuning in depth. You pre-train on unlabeled data, tune the model for downstream tasks such as text classification, and practice the now-popular instruction-tuning techniques. Cutting-edge topics such as LoRA-based parameter-efficient fine-tuning (PEFT) are included as well, broadly showing how to connect LLMs to real services and research.

All concepts are implemented in PyTorch, and the code is optimized to run even on an ordinary laptop. By following the implementations in this book, you will naturally understand what happens inside an LLM and gain hands-on experience of how large language models actually work.
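As a rough illustration of the "integrate the components" step described above, the sketch below shows the shape of a pre-norm transformer block of the kind a GPT model stacks. It uses PyTorch's built-in nn.MultiheadAttention for brevity, whereas the book implements multi-head attention itself, so treat this only as a structural outline; the 768-dimension, 12-head configuration mirrors GPT-2 small.

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """Pre-norm GPT-style block: attention and a feed-forward net, each wrapped
    in a residual connection. A structural sketch, not the book's implementation."""

    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(                  # position-wise feed-forward
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):                          # x: (batch, seq_len, d_model)
        n = x.shape[1]
        # Additive causal mask: -inf above the diagonal blocks attention to future tokens.
        causal = torch.triu(torch.full((n, n), float("-inf"), device=x.device), diagonal=1)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=causal)
        x = x + attn_out                           # residual connection
        x = x + self.ff(self.norm2(x))             # residual connection
        return x


block = TransformerBlock()                         # 768 dims / 12 heads, as in GPT-2 small
tokens = torch.randn(1, 10, 768)
print(block(tokens).shape)                         # torch.Size([1, 10, 768])
```

A full GPT model is then little more than token and positional embeddings, a stack of such blocks, a final normalization layer, and an output projection back to the vocabulary.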

Recommended for these people

Who is this course right for?

  • Anyone who wants to understand in detail how Large Language Models (LLMs) work

  • Anyone who wants to pre-train and fine-tune LLMs using PyTorch and the transformers package

  • Anyone who wants to know the structure of OpenAI's GPT-2 model

  • Anyone who can't rest until they've built everything themselves!

What do you need to know before starting?

  • Basic knowledge of Python programming

Hello, this is haesunpark.

20,166 Learners ∙ 190 Reviews ∙ 63 Answers ∙ 4.8 Rating ∙ 6 Courses

I majored in mechanical engineering, but since graduating I have spent my time reading and writing code. I am a Google AI/Cloud GDE and a Microsoft AI MVP. I run the Tensorflow Blog (tensorflow.blog), and through writing and translating books on machine learning and deep learning I enjoy exploring the boundary between software and science.

I wrote 『혼자 만들면서 공부하는 딥러닝』 (Hanbit Media, 2025), 『혼자 공부하는 머신러닝+딥러닝(개정판)』 (Hanbit Media, 2025), 『혼자 공부하는 데이터 분석 with 파이썬』 (Hanbit Media, 2023), 『챗GPT로 대화하는 기술』 (Hanbit Media, 2023), and 『Do it! 딥러닝 입문』 (Easys Publishing, 2019).

I have translated dozens of books into Korean, including 『밑바닥부터 만들면서 배우는 LLM』 (Gilbut, 2025), 『핸즈온 LLM』 (Hanbit Media, 2025), 『머신 러닝 Q & AI』 (Gilbut, 2025), 『개발자를 위한 수학』 (Hanbit Media, 2024), 『실무로 통하는 ML 문제 해결 with 파이썬』 (Hanbit Media, 2024), 『머신러닝 교과서: 파이토치 편』 (Gilbut, 2023), 『스티븐 울프럼의 챗GPT 강의』 (Hanbit Media, 2023), 『핸즈온 머신러닝 3판』 (Hanbit Media, 2023), 『만들면서 배우는 생성 딥러닝 2판』 (Hanbit Media, 2023), 『코딩 뇌를 깨우는 파이썬』 (Hanbit Media, 2023), 『트랜스포머를 활용한 자연어 처리』 (Hanbit Media, 2022), 『케라스 창시자에게 배우는 딥러닝 2판』 (Gilbut, 2022), 『개발자를 위한 머신러닝&딥러닝』 (Hanbit Media, 2022), 『XGBoost와 사이킷런을 활용한 그레이디언트 부스팅』 (Hanbit Media, 2022), 『구글 브레인 팀에게 배우는 딥러닝 with TensorFlow.js』 (Gilbut, 2022), and 『(개정2판)파이썬 라이브러리를 활용한 머신러닝』 (Hanbit Media, 2022).

Curriculum


44 lectures ∙ 1hr 54min


Reviews

Not enough reviews.
Please write a valuable review that helps everyone!

Limited time deal: 30% off ∙ ₩46,200 ∙ $51.70
