inflearn logo

<From Scratch: Building and Learning LLMs> Commentary Lecture

This is a course covering the GitHub notebooks and bonus content from <Build a Large Language Model from Scratch> (Gilbut, 2025). GitHub: https://github.com/rickiepark/llm-from-scratch/ <Build a Large Language Model from Scratch> is the Korean translation of the bestseller <Build a Large Language Model (from Scratch)> (Manning, 2024) by Sebastian Raschka. The book teaches the operating principles of large language models by building a complete GPT-style model from scratch, based on OpenAI's GPT-2.

(4.8) 21 reviews

523 learners

Level Basic

Course period Unlimited

PyTorch
gpt-2
transformer
LLM
Fine-Tuning

Reviews from Early Learners

4.8

5.0

Park Ju Yeong

100% enrolled

I studied in the following order: challenge lecture, then the book, then this source-analysis lecture. I will continue with the <Large Language Models Core Concepts Quickly> course.

5.0

김진철

14% enrolled

Working hard.....😊

5.0

든든한꼬마

65% enrolled

Thank you for the great lecture.

What you will gain after the course

  • Implement a complete LLM from scratch, directly in code.

  • Learn the core components that make up LLMs, including transformers and attention mechanisms.

  • Learn how to pre-train LLMs similar to GPT.

  • Learn how to fine-tune LLMs for classification.

  • Learn how to fine-tune LLMs to respond following human instructions.
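The last two outcomes boil down to swapping the model's output head. As a hedged sketch (the tiny backbone, names, and sizes below are illustrative stand-ins, not the book's code), classification fine-tuning replaces the next-token head with a small classification head and reads the prediction from the last token's hidden state:

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained GPT-style backbone (illustrative only; the book
# uses the GPT-2 model built in earlier chapters, loaded with real weights).
emb_dim, vocab_size, num_classes = 32, 100, 2
backbone = nn.Sequential(
    nn.Embedding(vocab_size, emb_dim),
    nn.TransformerEncoderLayer(d_model=emb_dim, nhead=4, batch_first=True),
)
lm_head = nn.Linear(emb_dim, vocab_size)        # original next-token head

# Fine-tuning for classification: swap in a small classification head
clf_head = nn.Linear(emb_dim, num_classes)

tokens = torch.randint(0, vocab_size, (8, 10))  # batch of 8 sequences, 10 tokens
hidden = backbone(tokens)                       # (8, 10, emb_dim)
logits = clf_head(hidden[:, -1, :])             # classify from the last token's state
print(logits.shape)                             # torch.Size([8, 2])
```

Note that `TransformerEncoderLayer` here is bidirectional and only a placeholder for the causal GPT backbone; in the book's workflow, typically only the last layers and the new head are trained.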

This course explains the example code provided with the book. The GitHub repository (https://github.com/rickiepark/llm-from-scratch/) contains not only the example code from the book but also various supplementary materials, and explanations of these additional contents are provided as well.

This course can be taken without purchasing the book, but it is most effective alongside it; some code explanations may be hard to follow without the book at hand. The only prerequisite is Python programming. Experience with deep learning and PyTorch is helpful; if you are encountering them for the first time, please read Appendix A of the book first.

You can watch lectures covering the content of <Build a Large Language Model (From Scratch)> for free on YouTube. Please refer to the translator's blog for errata.

Book Introduction

Follow the code line by line, and your own GPT will be complete!
A practical guide to implementing GPT from scratch and mastering LLM principles through hands-on experience

Difficult concepts are explained through illustrations, and you learn LLMs by building them yourself. This book is a hands-on introduction that walks you through implementing the structure and operating principles of large language models from start to finish. Rather than simply explaining concepts, it starts with text preprocessing, tokenization, and embeddings, then builds up self-attention, multi-head attention, and transformer blocks step by step. It then integrates these components into a working GPT model, directly addressing core elements of modern architecture design such as model parameter counts, training stabilization techniques, activation functions, and normalization methods.

It also covers the pre-training and fine-tuning processes in depth. You will pre-train on unlabeled data, tune the model for downstream tasks such as text classification, and practice the recently spotlighted instruction-tuning techniques. Cutting-edge topics like LoRA-based parameter-efficient fine-tuning (PEFT) are included as well, broadly presenting ways to connect LLMs to real services and research.

All concepts are implemented in PyTorch, and the code is optimized to run even on an ordinary laptop. By following the implementation process in this book, you will naturally understand what happens inside an LLM and gain hands-on experience of how large language models actually work.
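To give a flavor of the attention chapters described above, here is a minimal single-head causal self-attention module in PyTorch. This is an illustrative sketch, not the book's exact listing; the class name, weight names (`W_q`, `W_k`, `W_v`), and toy dimensions are my own:

```python
import torch
import torch.nn as nn

class CausalSelfAttention(nn.Module):
    """Single-head causal self-attention (illustrative sketch)."""
    def __init__(self, d_in, d_out, context_length, dropout=0.0):
        super().__init__()
        self.W_q = nn.Linear(d_in, d_out, bias=False)
        self.W_k = nn.Linear(d_in, d_out, bias=False)
        self.W_v = nn.Linear(d_in, d_out, bias=False)
        self.dropout = nn.Dropout(dropout)
        # Upper-triangular mask hides future tokens from each position
        self.register_buffer(
            "mask", torch.triu(torch.ones(context_length, context_length), diagonal=1)
        )

    def forward(self, x):                        # x: (batch, seq_len, d_in)
        b, t, _ = x.shape
        q, k, v = self.W_q(x), self.W_k(x), self.W_v(x)
        scores = q @ k.transpose(1, 2)           # (batch, t, t)
        scores = scores.masked_fill(self.mask[:t, :t].bool(), -torch.inf)
        weights = torch.softmax(scores / k.shape[-1] ** 0.5, dim=-1)
        return self.dropout(weights) @ v         # (batch, t, d_out)

x = torch.randn(2, 6, 16)                        # dummy batch of 6-token embeddings
attn = CausalSelfAttention(d_in=16, d_out=16, context_length=6)
print(attn(x).shape)                             # torch.Size([2, 6, 16])
```

In the book, a module like this is later extended to multi-head attention by running several heads in parallel and concatenating their outputs.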

Recommended for these people

Who is this course right for?

  • Someone who wants to understand in detail how Large Language Models (LLMs) work

  • Those who want to pre-train and fine-tune LLMs using PyTorch and the transformers package

  • Someone who wants to know the structure of OpenAI's GPT-2 model

  • Someone who can't rest until they've made everything themselves!

Need to know before starting?

  • Basic knowledge of Python programming is required.

Hello,
this is haesunpark.

22,656 learners ∙ 386 reviews ∙ 131 answers ∙ 4.9 rating ∙ 10 courses

I majored in mechanical engineering, but since graduation, I have been reading and writing code. I am a Google AI/Cloud GDE and a Microsoft AI MVP. I run the TensorFlow blog (tensorflow.blog), and I enjoy exploring the boundary between software and science while writing and translating books on machine learning and deep learning.

 


 

I have authored "Deep Learning by Building Alone" (Hanbit Media, 2025), "Machine Learning + Deep Learning Alone (Revised Edition)" (Hanbit Media, 2025), "Data Analysis with Python Alone" (Hanbit Media, 2023), "The Art of Conversing with ChatGPT" (Hanbit Media, 2023), and "Do it! Introduction to Deep Learning" (EasysPublishing, 2019).

 

I have translated dozens of books into Korean, including "Large Language Models, Just the Essentials!" (Insight, 2025), "Machine Learning, Just the Essentials!" (Insight, 2025), "Build a Large Language Model (From Scratch)" (Gilbut, 2025), "Hands-On Large Language Models" (Hanbit Media, 2025), "Machine Learning Q & AI" (Gilbut, 2025), "Math for Developers" (Hanbit Media, 2024), "Machine Learning Solutions with Python for Real-World Applications" (Hanbit Media, 2024), "Machine Learning with PyTorch and Scikit-Learn" (Gilbut, 2023), "What Is ChatGPT Doing... and Why Does It Work?" by Stephen Wolfram (Hanbit Media, 2023), "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 3rd Edition" (Hanbit Media, 2023), "Generative Deep Learning, 2nd Edition" (Hanbit Media, 2023), "Python for Awakening Your Coding Brain" (Hanbit Media, 2023), "Natural Language Processing with Transformers" (Hanbit Media, 2022), "Deep Learning with Python, 2nd Edition" (Gilbut, 2022), "Machine Learning & Deep Learning for Developers" (Hanbit Media, 2022), "Gradient Boosting with XGBoost and Scikit-Learn" (Hanbit Media, 2022), "Deep Learning with TensorFlow.js from Google Brain Team" (Gilbut, 2022), and "Introduction to Machine Learning with Python, 2nd Edition" (Hanbit Media, 2022).


Curriculum

79 lectures ∙ 15hr 24min


Reviews

4.8 ∙ 21 reviews

  • 7000cj
    Reviews 136 ∙ Average Rating 5.0
    Rating 5 ∙ 8% enrolled

  • redinblue6136
    Reviews 4 ∙ Average Rating 5.0
    Rating 5 ∙ 100% enrolled

    I studied in the following order: challenge lecture, then the book, then this source-analysis lecture. I will continue with the <Large Language Models Core Concepts Quickly> course.

    • haesunpark (Instructor)
      Thank you. Fighting!

  • calculator
    Reviews 133 ∙ Average Rating 4.9
    Rating 5 ∙ 65% enrolled

    Thank you for the great lecture.

    • haesunpark (Instructor)
      Thank you!

  • jchkim55
    Reviews 2 ∙ Average Rating 5.0
    Rating 5 ∙ 14% enrolled

    Working hard.....😊

    • haesunpark (Instructor)
      Fighting! :)

  • 85artihoya
    Reviews 1 ∙ Average Rating 5.0
    Rating 5 ∙ 30% enrolled

haesunpark's other courses

Check out other courses by the instructor!

Similar courses

Explore other courses in the same field!

$77.00