This course covers the entire process of designing and implementing generative AI services built on LangChain 1.0 and LangGraph, through step-by-step hands-on practice.
Going beyond simple LLM calls, you'll implement production-ready AI system architectures that cover agent design, state management, memory, streaming, middleware, and Human-in-the-Loop (HITL).
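As a taste of that agent loop, here is a minimal sketch of a tool-calling agent with per-thread memory and step streaming. It assumes the `langgraph` and `langchain-openai` packages and an `OPENAI_API_KEY`; the `get_weather` tool and the model name are illustrative, not part of the course material.

```python
# Minimal sketch: tool-calling agent + checkpointer memory + step streaming.
# Assumes langgraph, langchain-openai, and OPENAI_API_KEY are available.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent


@tool
def get_weather(city: str) -> str:
    """Return a (stubbed) weather report for a city."""
    return f"It is sunny in {city}."


# The checkpointer gives the agent memory that persists across turns
# within the same thread_id.
agent = create_react_agent(
    model=ChatOpenAI(model="gpt-4o-mini"),
    tools=[get_weather],
    checkpointer=MemorySaver(),
)

config = {"configurable": {"thread_id": "demo-thread"}}

# stream_mode="values" yields the full state after each step, so you can
# watch the tool call and the final model reply as they happen.
for step in agent.stream(
    {"messages": [("user", "What's the weather in Seoul?")]},
    config,
    stream_mode="values",
):
    step["messages"][-1].pretty_print()
```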
Through hands-on practice with RAG systems over document, PDF, and web data, a SQL Agent (Chinook DB), tool-calling Agents, Supervisor-pattern multi-agents, and state-machine workflows built with the LangGraph Graph API, you'll assemble reusable agent pipelines that can be applied immediately in real-world scenarios.
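The state-machine style mentioned above looks roughly like the following sketch of the LangGraph Graph API. Only the `langgraph` package is assumed; the node names and the routing rule are illustrative stubs standing in for the LLM-backed SQL and RAG nodes built in the course.

```python
# Minimal sketch of a LangGraph Graph API workflow with conditional routing.
from typing import TypedDict

from langgraph.graph import END, START, StateGraph


class State(TypedDict):
    question: str
    answer: str


def classify(state: State) -> State:
    # In the course this step would call an LLM; here it is a pass-through stub.
    return state


def route(state: State) -> str:
    # Illustrative rule: send SQL-flavored questions to the SQL node, else to RAG.
    return "sql_agent" if "sql" in state["question"].lower() else "rag"


def sql_agent(state: State) -> State:
    return {**state, "answer": "ran a SQL query against Chinook"}


def rag(state: State) -> State:
    return {**state, "answer": "retrieved and summarized documents"}


builder = StateGraph(State)
builder.add_node("classify", classify)
builder.add_node("sql_agent", sql_agent)
builder.add_node("rag", rag)
builder.add_edge(START, "classify")
builder.add_conditional_edges("classify", route, ["sql_agent", "rag"])
builder.add_edge("sql_agent", END)
builder.add_edge("rag", END)

graph = builder.compile()
print(graph.invoke({"question": "Which artist has the most albums? (SQL)", "answer": ""}))
```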
In addition, using Pydantic-based structured output, agent middleware (Summarization, HITL, Retry, PII protection), and token- and step-level streaming, you'll finish with generative AI applications that have the stability, scalability, and controllability real services require.
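For instance, structured output and token streaming can be sketched as below, assuming `langchain-openai` and an `OPENAI_API_KEY`; the `Ticket` schema and the prompts are hypothetical examples, not taken from the course.

```python
# Minimal sketch: Pydantic-based structured output and token-level streaming.
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI


class Ticket(BaseModel):
    """A support ticket extracted from free-form user text."""
    title: str = Field(description="Short summary of the issue")
    priority: str = Field(description="low, medium, or high")


llm = ChatOpenAI(model="gpt-4o-mini")

# with_structured_output returns a validated Ticket instance instead of raw
# text, which is what gives a service predictable, controllable responses.
structured_llm = llm.with_structured_output(Ticket)
ticket = structured_llm.invoke("The login page crashes on every submit, please fix ASAP")
print(ticket.title, ticket.priority)

# Token-level streaming from the base model, for responsive UIs.
for chunk in llm.stream("Explain in one sentence what agent middleware does."):
    print(chunk.content, end="", flush=True)
```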
👉 For those who want to accurately understand the internal structure and execution flow of LangChain/LangGraph
👉 For those who want to implement RAG and Agents as a real service architecture, not just a "demo"
👉 For those who need a realistic, practical roadmap covering state-based agents, SQL and document automation, and multi-agent orchestration, this is the course for you.