Challenge
Ended

<Build Your Own LLM from Scratch> Completion Challenge

This challenge is a 7-week program to read through the book <Build a Large Language Model (From Scratch)> (Gilbut, 2025) together with the translator. If you have any questions about the challenge or the book, please feel free to contact us anytime via Inflearn, the translator's blog (https://tensorflow.blog/llm-from-scratch/), or the KakaoTalk open chat room (http://bit.ly/tensor-chat, participation code: flow).

PyTorch
gpt-2
self-attention
LLM
book-challenge

Study 53 lessons

Complete 8 missions

Review without limits and make it your own.

Ask questions and get answers right away.

Earn a certificate of completion as proof of your achievement.

Learn together with haesunpark!

22,945 Learners ∙ 403 Reviews ∙ 131 Answers ∙ 4.9 Rating ∙ 11 Courses

I majored in mechanical engineering, but since graduation, I have been consistently reading and writing code. I am a Google AI/Cloud GDE and a Microsoft AI MVP. I run the TensorFlow blog (tensorflow.blog) and enjoy exploring the boundary between software and science by writing and translating books on machine learning and deep learning.


I have authored "Deep Learning by Building Alone" (Hanbit Media, 2025), "Machine Learning + Deep Learning Alone (Revised Edition)" (Hanbit Media, 2025), "Data Analysis with Python Alone" (Hanbit Media, 2023), "The Art of Conversing with ChatGPT" (Hanbit Media, 2023), and "Do it! Introduction to Deep Learning" (EasysPublishing, 2019).

I have translated dozens of books into Korean, including "LLM Fine-Tuning: Quick Core Concepts!" (Insight, 2026), "Learning LLM & AI with PyTorch" (Hanbit Media, 2026), "Large Language Models: Quick Core Concepts!" (Insight, 2025), "Machine Learning: Quick Core Concepts!" (Insight, 2025), "Learning LLM by Building from Scratch" (Gilbut, 2025), "Hands-On LLM" (Hanbit Media, 2025), "Machine Learning Q & AI" (Gilbut, 2025), "Mathematics for Developers" (Hanbit Media, 2024), "Practical ML Problem Solving with Python" (Hanbit Media, 2024), "Machine Learning Textbook: PyTorch Edition" (Gilbut, 2023), "Stephen Wolfram's ChatGPT Lecture" (Hanbit Media, 2023), "Hands-On Machine Learning, 3rd Edition" (Hanbit Media, 2023), "Generative Deep Learning, 2nd Edition" (Hanbit Media, 2023), "Python for Awakening the Coding Brain" (Hanbit Media, 2023), "Natural Language Processing with Transformers" (Hanbit Media, 2022), "Deep Learning with Python, 2nd Edition" (Gilbut, 2022), "Machine Learning & Deep Learning for Developers" (Hanbit Media, 2022), "Gradient Boosting with XGBoost and Scikit-Learn" (Hanbit Media, 2022), "Deep Learning with TensorFlow.js" (Gilbut, 2022), and "Introduction to Machine Learning with Python, 2nd Edition" (Hanbit Media, 2022).


Challenge Information

The recruitment period for this challenge is from September 23rd to October 5th.

The challenge will run for 8 weeks, from October 6th to November 30th.

(The actual curriculum is 7 weeks, but the period is set to 8 weeks to account for the Chuseok holiday.)


Challenge participants receive the following information and benefits:

  1. The translator provides video lectures explaining the text.

  2. Weekly reading verification and progress tracking

  3. Free coupons for a paid lecture (valued at 60,000 KRW) explaining the book's code will be provided to all challenge participants.

    1. <Building LLMs from Scratch> Code Explanation Lecture (Free coupons will be issued in bulk on October 6th)

  4. Upon completion of the challenge, 20,000 points that can be used like cash on the Gilbut website will be awarded (Special thanks to Gilbut Publishing for sponsoring the points)


Book Introduction

Follow the code line by line, and your very own GPT will be complete!
A practical guide to mastering the principles of LLMs by implementing GPT from scratch.

Complex concepts are explained through illustrations, and LLMs are learned by building them yourself. This book is a practical introductory guide that teaches the structure and operating principles of large language models by having you implement them from scratch. Rather than just explaining concepts, it starts with text preprocessing, tokenization, and embedding, then builds self-attention, multi-head attention, and Transformer blocks step by step. It then integrates these components into a working GPT model, directly handling key elements of modern architecture design such as parameter counts, training stabilization techniques, activation functions, and normalization methods.

The book also provides in-depth guidance on pre-training and fine-tuning. You can practice pre-training on unlabeled data, tuning the model for downstream tasks like text classification, and the instruction-tuning techniques currently in the spotlight. It covers recent topics as well, such as LoRA-based parameter-efficient fine-tuning (PEFT), offering a broad range of methods for connecting LLMs to real-world services and research.

All concepts are implemented in PyTorch code and optimized for practice even on a standard laptop. By following the implementation process in this book, you will naturally understand what happens inside an LLM and gain a hands-on grasp of how large language models work.
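To give a taste of the kind of code the book walks through, here is a minimal sketch of single-head scaled dot-product self-attention in PyTorch. The class and variable names are illustrative, not taken from the book, and a real Transformer block would add multiple heads, a causal mask, dropout, and residual connections:

```python
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    """Minimal single-head self-attention (scaled dot-product)."""

    def __init__(self, d_in, d_out):
        super().__init__()
        self.W_q = nn.Linear(d_in, d_out, bias=False)  # query projection
        self.W_k = nn.Linear(d_in, d_out, bias=False)  # key projection
        self.W_v = nn.Linear(d_in, d_out, bias=False)  # value projection

    def forward(self, x):  # x: (batch, seq_len, d_in)
        q, k, v = self.W_q(x), self.W_k(x), self.W_v(x)
        scores = q @ k.transpose(-2, -1)          # (batch, seq_len, seq_len)
        weights = torch.softmax(scores / k.shape[-1] ** 0.5, dim=-1)
        return weights @ v                        # (batch, seq_len, d_out)


x = torch.randn(2, 6, 16)            # 2 sequences of 6 tokens, 16-dim embeddings
attn = SelfAttention(d_in=16, d_out=8)
print(attn(x).shape)                 # torch.Size([2, 6, 8])
```

Each output vector is a weighted mix of all value vectors in the sequence, with weights derived from query–key similarity; the book builds this up from scratch before extending it to multi-head and masked variants.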

Challenge Start Date: October 5, 2025, 3:00 PM

Challenge End Date: November 30, 2025, 2:59 PM

Challenge Curriculum


61 lectures ∙ 4hr 38min


What You'll Learn in This Challenge

  • Implement a complete LLM from scratch directly in code.

  • Learn the core components that make up LLMs, including Transformers and attention.

  • Learn how to pre-train an LLM similar to GPT.

  • Learn how to fine-tune an LLM for classification.

  • Learn how to fine-tune an LLM to follow human instructions and respond.

Who is this course right for?

  • Those who want to understand the operating principles of Large Language Models (LLMs) in detail

  • Those who want to pre-train and fine-tune LLMs using PyTorch and the transformers package

  • Those who want to know the architecture of OpenAI's GPT-2 model

  • Those who aren't satisfied unless they build everything themselves!

What do you need to know before starting?

  • Basic knowledge of Python programming is required.

  • If you are not familiar with PyTorch, please refer to Appendix A at the back of the book.

Reviews

90 reviews ∙ average rating 5.0

  • devpaw

    Reviews 3

    Average Rating 5.0

    5

    74% enrolled

    I've never had this much fun studying LLM - both the book and the lectures were great

    • haesunpark
      Instructor

      Thank you!

  • myhkjung5761

    Reviews 5

    Average Rating 5.0

    5

    100% enrolled

    Thank you for translating and introducing such great materials, and for preparing the lectures and challenges!

    • haesunpark
      Instructor

      I hope this helps! 😊

  • calculator

    Reviews 144

    Average Rating 4.8

    5

    100% enrolled

    My first book completion challenge... It helped me understand how LLMs work. And I'm happy that I succeeded in the challenge too 😊😊 I'm glad I participated.

    • haesunpark
      Instructor

      Congratulations on completing it! :)

  • amgwon4343

    Reviews 4

    Average Rating 5.0

    5

    33% enrolled

    It's great for motivation to be able to study together through the challenge, and it feels like I'm studying with focus for the first time in a while. I'll learn well with this good lecture.

    • haesunpark
      Instructor

Keep at it until the end. Thank you!

  • elliraum

    Reviews 4

    Average Rating 5.0

    5

    100% enrolled

    Thanks to you, I was able to study. Thank you for providing such a great book and opening the challenge! I'll work even harder on my reviews!!

    • haesunpark
      Instructor

Thank you. Keep it up!

FAQs

Cancellation and Refund Policy
If the minimum class enrollment set by the instructor is not met, the challenge will be canceled, participants will be notified, and payments will be automatically refunded.
