Made in RAG (Local LLM QA System) With Docker + ollama + R2R
kojeomstudio
In just 2 hours of hands-on practice, we build a local, RAG-based LLM QA system using Docker, ollama, and the R2R framework. (Applicable to internal/in-house QA systems, personal portfolios, AI-powered work skills, and even commercial services.)
Beginner
Docker, AI, WSL
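To illustrate the retrieval step at the heart of any RAG pipeline (in the course, R2R and an embedding model handle this), here is a minimal, self-contained sketch using naive bag-of-words similarity instead of real embeddings. The document texts and function names are illustrative assumptions, not part of the R2R API.

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    # Naive bag-of-words term frequencies; a real RAG stack would
    # use an embedding model (e.g. one served by ollama) instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, documents: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the question and return the top-k;
    # the retrieved text would then be passed to the LLM as context.
    q = vectorize(question)
    ranked = sorted(documents, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

docs = [
    "Docker packages applications into portable containers.",
    "ollama runs large language models locally.",
    "R2R is a framework for building RAG pipelines.",
]
print(retrieve("How do I run language models locally?", docs))
# → ['ollama runs large language models locally.']
```

In the actual course stack, this lookup is replaced by vector search over embedded document chunks, but the retrieve-then-answer flow is the same.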