
<From Scratch: Building and Learning LLMs> Commentary Lecture

This course covers the GitHub notebooks and bonus content for <Build a Large Language Model from Scratch> (Gilbut, 2025). GitHub: https://github.com/rickiepark/llm-from-scratch/

<Build a Large Language Model from Scratch> is the Korean translation of Sebastian Raschka's bestseller <Build a Large Language Model (from Scratch)> (Manning, 2024). The book teaches the operating principles of large language models, and how to put them to use, by walking you through building a complete model from scratch based on OpenAI's GPT-2.

(4.8) 21 reviews

527 learners

Level Basic

Course period Unlimited

PyTorch
gpt-2
transformer
LLM
Fine-Tuning

Reviews from Early Learners

4.8

5.0

Park Ju Yeong

100% enrolled

I studied in the following order: challenge lecture → book reading → source-analysis lecture. I will continue with <Large Language Models: Fast Track to the Core!>.

5.0

김진철

14% enrolled

Working hard.....😊

5.0

든든한꼬마

65% enrolled

Thank you for the great lecture.

What you will gain after the course

  • Implement a complete LLM yourself in code, starting from scratch.

  • Learn the core components that make up LLMs, including transformers and attention mechanisms.

  • Learn how to pre-train LLMs similar to GPT.

  • Learn how to fine-tune LLMs for classification.

  • Learn how to fine-tune LLMs to respond following human instructions.

This course explains the example code provided with <Build a Large Language Model from Scratch>. The GitHub repository (https://github.com/rickiepark/llm-from-scratch/) contains not only the example code from the book but also various supplementary materials, and these additional contents are covered in the lectures as well.

This course can be taken without purchasing the book, but it is most effective when taken together with it; some code explanations may be hard to follow without the book at hand. The only prerequisite is Python programming. Experience with deep learning and PyTorch is helpful; if both are new to you, please read Appendix A of the book first.

You can watch lectures covering the content of <Build a Large Language Model (From Scratch)> for free on YouTube. Please refer to the translator's blog for errata.

Book Introduction

Follow the code line by line, and your own GPT will be complete!
A practical guide to implementing GPT from scratch and mastering LLM principles through hands-on experience

Difficult concepts are explained with illustrations, and you learn LLMs by building them yourself. This book is a practical introductory guide that teaches the structure and operating principles of large language models by implementing them from start to finish.

Rather than simply explaining concepts, it starts with text preprocessing, tokenization, and embeddings, then builds up self-attention, multi-head attention, and transformer blocks step by step. It then integrates these components into a complete, working GPT model, directly addressing core elements of modern architecture design such as model parameter counts, training stabilization techniques, activation functions, and normalization methods.

It also covers pre-training and fine-tuning in depth. You pre-train on unlabeled data, tune the model for downstream tasks such as text classification, and practice the recently spotlighted instruction fine-tuning technique. Cutting-edge content such as LoRA-based parameter-efficient fine-tuning (PEFT) is included as well, broadly presenting ways to connect LLMs to real services and research.

All concepts are implemented in PyTorch code and kept small enough to run on an ordinary laptop. By following the implementation in this book, you will naturally understand what happens inside an LLM and gain hands-on experience of how large language model mechanisms work.
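To give a feel for the kind of component the book builds up, here is a minimal sketch of single-head causal self-attention in PyTorch. This is illustrative only, not the book's actual code; the class and parameter names (CausalSelfAttention, d_in, d_out) are my own.

```python
import torch
import torch.nn as nn

class CausalSelfAttention(nn.Module):
    """Single-head scaled dot-product attention with a causal mask."""

    def __init__(self, d_in, d_out):
        super().__init__()
        self.W_q = nn.Linear(d_in, d_out, bias=False)  # query projection
        self.W_k = nn.Linear(d_in, d_out, bias=False)  # key projection
        self.W_v = nn.Linear(d_in, d_out, bias=False)  # value projection

    def forward(self, x):
        # x: (batch, seq_len, d_in)
        q, k, v = self.W_q(x), self.W_k(x), self.W_v(x)
        # attention scores, scaled by sqrt of the key dimension
        scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
        # causal mask: each token may attend only to itself and earlier tokens
        seq_len = x.shape[1]
        mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(mask, float("-inf"))
        weights = torch.softmax(scores, dim=-1)
        return weights @ v  # (batch, seq_len, d_out)

torch.manual_seed(0)
attn = CausalSelfAttention(d_in=16, d_out=8)
out = attn(torch.randn(2, 4, 16))
print(out.shape)  # torch.Size([2, 4, 8])
```

The book extends this idea to multi-head attention and stacks it into transformer blocks; the causal mask is what makes the model autoregressive, since a change to a later token cannot affect the outputs at earlier positions.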


Who is this course right for?

  • Those who want to understand in detail how large language models (LLMs) work

  • Those who want to pre-train and fine-tune LLMs using PyTorch and the transformers package

  • Those who want to know the structure of OpenAI's GPT-2 model

  • Those who can't rest until they've built everything themselves!

What do you need to know before starting?

  • Basic knowledge of Python programming

Hello,
this is haesunpark.

22,765

Learners

393

Reviews

131

Answers

4.9

Rating

10

Courses

I majored in mechanical engineering, but since graduation I have been reading and writing code continuously. I am a Google AI/Cloud GDE and a Microsoft AI MVP. I run the blog tensorflow.blog and, by writing and translating books on machine learning and deep learning, I enjoy exploring the boundary between software and science.

 


I have authored "Deep Learning from Scratch" (Hanbit Media, 2025), "Machine Learning + Deep Learning Alone (Revised Edition)" (Hanbit Media, 2025), "Data Analysis with Python Alone" (Hanbit Media, 2023), "The Art of Conversing with ChatGPT" (Hanbit Media, 2023), and "Do it! Introduction to Deep Learning" (Easys Publishing, 2019).

 

I have translated dozens of books into Korean, including "Large Language Models: Fast Track to the Core!" (Insight, 2025), "Machine Learning: Fast Track to the Core!" (Insight, 2025), "Learning LLMs by Building from Scratch" (Gilbut, 2025), "Hands-On LLMs" (Hanbit Media, 2025), "Machine Learning Q & AI" (Gilbut, 2025), "Mathematics for Developers" (Hanbit Media, 2024), "Machine Learning Pocket Reference with Python" (Hanbit Media, 2024), "Machine Learning with PyTorch and Scikit-Learn" (Gilbut, 2023), "What Is ChatGPT Doing ... and Why Does It Work?" (Hanbit Media, 2023), "Hands-On Machine Learning, 3rd Edition" (Hanbit Media, 2023), "Generative Deep Learning, 2nd Edition" (Hanbit Media, 2023), "Python for the Coding Brain" (Hanbit Media, 2023), "Natural Language Processing with Transformers" (Hanbit Media, 2022), "Deep Learning with Python, 2nd Edition" (Gilbut, 2022), "AI and Machine Learning for Coders" (Hanbit Media, 2022), "Hands-On Gradient Boosting with XGBoost and Scikit-Learn" (Hanbit Media, 2022), "Deep Learning with TensorFlow.js" (Gilbut, 2022), and "Introduction to Machine Learning with Python, 2nd Revised Edition" (Hanbit Media, 2022).


Curriculum


79 lectures ∙ (15hr 24min)


Reviews

4.8 (21 reviews)

  • 85artihoya
    Reviews 1 ∙ Average Rating 5.0
    5 ∙ 30% enrolled

  • jchkim55
    Reviews 2 ∙ Average Rating 5.0
    5 ∙ 14% enrolled

    Working hard.....😊

    • haesunpark (Instructor)
      Fighting! :)

  • calculator
    Reviews 140 ∙ Average Rating 4.8
    5 ∙ 65% enrolled

    Thank you for the great lecture.

    • haesunpark (Instructor)
      Thank you!

  • redinblue6136
    Reviews 4 ∙ Average Rating 5.0
    5 ∙ 100% enrolled

    I studied in the following order: challenge lecture → book reading → source-analysis lecture. I will continue with <Large Language Models: Fast Track to the Core!>.

    • haesunpark (Instructor)
      Thank you. Fighting!

  • 7000cj
    Reviews 137 ∙ Average Rating 5.0
    5 ∙ 8% enrolled

$77.00