CUDA Programming (3) - C/C++/GPU Parallel Computing - Memory Structure
✅ Part (3) of the full (1) to (6) series: CUDA Memory Hierarchy Optimization ✅ Explains NVIDIA GPU + CUDA programming step by step from the basics. ✅ Uses C++/C parallel computing to dramatically accelerate arrays, matrices, image processing, statistical processing, sorting, and more.
"The detailed explanation of how to approach the problem when code runs slower than expected, even though it computes in parallel with CUDA, helped me think about how to handle CUDA in depth. ...So please do something about the enrollment period limit... ㅠ"
- 하지, 5.0

"I like how detailed the explanation is!"
- 몽크in도시, 5.0

"Above all, the formula explanations are clear and easy to understand. It is really good that even things like determinants are worked through in the calculation process, converted into parallel processing, and shown with the program's actual execution results."
What you will gain after the course
Full Series - Massively Parallel Computing with CUDA using GPUs
This lecture is - Part (3) - CUDA Memory Hierarchy Optimization
Updated August 2023: "Remastered" 🍀 (some audio/video)
Bundle discount coupon ✅ provided for the "CUDA Programming" roadmap ✳️
Speed is the lifeblood of a program! Make it fast with massively parallel processing techniques 🚀
I heard massively parallel computing is important 🧐
GPU/graphics card-based massively parallel computing is being very actively used in fields such as AI, deep learning, big data processing, and image/video/audio processing. Currently, the most widely applied technology in GPU parallel computing is NVIDIA's CUDA architecture.
While technologies like massively parallel computing and CUDA are considered important within the field of parallel computing, it is often difficult to even start learning because it's hard to find courses that teach this subject systematically. Through this course, you can learn CUDA programming step-by-step. CUDA and parallel computing require a theoretical background and can be challenging. However, if you follow along from the basics with this course's abundant examples and background explanations, you can certainly master it! This course is planned as a series, ensuring sufficient instructional time is provided.
In this course, we aim to explain how C++/C programmers can combine CUDA libraries and C++/C functions to accelerate problems in various fields using large-scale parallel processing techniques. Through this method, you can accelerate existing C++/C programs or develop new algorithms and programs entirely with parallel computing to achieve breakthrough speeds.
📢 Please check before taking the course!
Please secure a hardware environment where NVIDIA CUDA works in advance for the practice sessions. A PC/laptop equipped with an NVIDIA GeForce graphics card is absolutely necessary.
While NVIDIA GeForce graphics cards can be used in some cloud environments, cloud settings change frequently and often involve costs. If you are using a cloud environment, you must personally ensure you know how to access and use the graphics card.
You can find detailed information about the lecture practice environment in the <00. Preparation Before the Lecture> video within the curriculum.
Course Features ✨
#1. Abundant examples and explanations
CUDA and massively parallel computing require abundant examples and explanations. This lecture series provides a total of over 24 hours of actual instruction time.
#2. Practice is essential!
Since this is a computer programming course, we emphasize extensive hands-on practice and provide actual working source code so that you can follow along step-by-step.
#3. Focusing on the important parts!
During the lecture, redundant explanations for previously covered source code are minimized as much as possible, allowing you to focus your learning on the modified parts or the key points that need emphasis.
Recommended for these people 🙋♀️
University students who want to add a portfolio of new technologies before getting a job
Programmers who want to drastically improve existing programs
Major researchers who want to know how various applications have been accelerated
Those who want to learn the theory and practice of parallel processing for AI, deep learning, and matrix computation
A sneak peek at course reviews 🏃
*The reviews below are for an external lecture conducted by the instructor on the same topic.
"I didn't know anything about parallel algorithms or parallel computing, but after taking the course, I gained confidence in parallel computing."
"There were many algorithms that I couldn't solve with existing C++ programs, but through this lecture, I was able to improve them to enable real-time processing!"
"When I mentioned I had experience in parallel computing during an interview after taking this course, the interviewers were very surprised. They said it's not easy to find CUDA or parallel computing courses at the undergraduate level."
CUDA Programming Mastery Roadmap 🛩️
The CUDA programming course was designed as a 7-part series, over 24 hours of content in total, so that each part can stay focused on its own subject.
The roadmap course "CUDA Programming" is also available. Be sure to check it out. ✅
Each lecture consists of 6 or more sections, and each section covers an independent topic. (Part 0 consists of only 2 sections and serves as the introduction.)
The slides used in the lecture are provided as PDF files, and the program source code used in the sections where hands-on examples are explained is also provided.
Part 0 (1-hour free lecture)
Introduction to MPC and CUDA - An introductory section giving an overall overview of massively parallel computing (MPC) and CUDA.
Part 1 (3 hours 40 minutes)
CUDA kernel concepts - Learn the concept of the CUDA kernel, the starting point of CUDA programming, and see parallel computing in action.
Part 2 (4 hours 15 minutes)
vector addition - Presents operations between vectors in the form of 1D arrays through various examples, and actually implements the AXPY routine using CUDA.
Part 3 (4 hours 5 minutes) ✅ Current Lecture
memory hierarchy - Learn the memory structure, which is the core of CUDA programming. Implement examples such as matrix addition and adjacent difference.
Part 4 (3 hours 45 minutes)
matrix transpose & multiply - Presents operations between 2D array-type matrices through various examples and implements the GEMM routine using CUDA.
Part 5 (3 hours 55 minutes)
atomic operation & reduction - Along with an understanding of CUDA control flow, learn everything from problem definitions to solutions for atomic operations and reduction. Also, implement the GEMV routine using CUDA.
Part 6 (3 hours 45 minutes)
search & sort - Learn examples of effectively implementing search-all problems, even-odd sort, bitonic sort, and counting merge sort using the CUDA architecture.
CUDA Programming and Massive Parallel Computing Mastery Complete!
Q&A 💬
Q. What are the reviews for the paid courses like?
Since the paid lectures are being opened sequentially from (1) to (6), the reviews are scattered and currently set to private. The paid lectures have received the following reviews so far.
It was very helpful because you explained in detail the process of maximizing performance by applying various techniques to a single example.
It was much easier to understand because you explained the memory structures and logic using visualizations.
"I had been studying AI with only a vague sense of the hardware, so it's great to be able to add in-depth knowledge about the device side."
The software installation was well-explained and the source code was provided, making it easy to practice.
Q. Is this a lecture that non-majors can also take?
C++ programming experience is required to some extent; at the very least, you should have C programming experience. Although all examples are written as simply as possible, they are provided as C++/C code, and standard functions such as malloc and memcpy are not explained separately.
However, if you have an understanding of computer architecture (registers, cache memory, etc.), operating systems (time-sharing, etc.), and compilers (code generation, code optimization), you will be able to understand the course content more deeply.
This course was originally designed as an advanced study for senior computer science majors at four-year universities.
Q. Is there anything I need to prepare before taking the course? Are there any reference materials regarding the course (required environment, other precautions, etc.)?
You must secure a hardware environment where NVIDIA CUDA works before the practice sessions. A PC/laptop equipped with an NVIDIA GeForce graphics card is absolutely necessary.
While NVIDIA GeForce graphics cards can be used in some cloud environments, cloud configurations change frequently and often involve costs; therefore, if you are using a cloud environment, you must handle the graphics card setup on your own.
Q. What level does the course content reach?
Starting from Part 0 and progressing from Part 1 through Part 6, the course demands progressively deeper theory and a higher level of understanding.
We strongly recommend that you watch the courses in order from Part 0 to Part 6.
The counting merge sort covered at the end of Part 6 is a problem difficult enough that even professional researchers may find it hard to follow immediately. However, many offline students who followed the curriculum step-by-step reported that they were able to understand it without much difficulty, building upon their learning from the previous sections.
Q. Is there a reason for setting a course enrollment period?
The reason for setting an enrollment period is that, given how quickly the computer field moves, the content of this lecture is likely to be outdated after that amount of time.
By then, I hope to see you again in a new course. 😄
Q. Are there subtitles in the videos?
Yes. Currently, all videos include subtitles.
However, some videos added in the future may not have subtitles.
Information regarding fonts used in lecture materials ✔️
Only free fonts from Google / Adobe were used in the videos and PDF files.
The Korean font used is "Noto Sans KR", and the English fonts used are Source Sans Pro and Source Serif Pro.
All of them can be downloaded for free from the following links. After downloading and extracting the files, you can install them on your PC/laptop by right-clicking the font files and choosing Install.
Those who want to accelerate array/matrix/image processing, statistical processing, sorting, etc., using C++ based parallel computing/parallel processing.
Those who want to accelerate their own developed programs using parallel computing/CUDA.
Those who wish to study NVIDIA CUDA programming/CUDA computing from the basics.
Those who wish to study both the theory and practice of GPU parallel processing/parallel computing in a balanced manner.
What you need to know before starting
C++ or C programming experience
It is even better if you have knowledge of computer architecture, registers, caches, time sharing, etc.
"The detailed explanation of how to approach the problem when code runs slower than expected, even though it computes in parallel with CUDA, helped me think about how to handle CUDA in depth. ...So please do something about the enrollment period limit... ㅠ"
Instructor reply: Hello. Thank you for the kind review. Parallel processing is a core foundational technology for AI, and deep learning in particular, and I expect interest in this field to keep growing. Have a nice day. 🍀
"Above all, the formula explanations are clear and easy to understand. It is really good that even things like determinants are worked through in the calculation process, converted into parallel processing, and shown with the program's actual execution results."
Instructor reply: Hello. 🌞 There are few CUDA lectures, in Korea or abroad, that cover the basics step by step. In this lecture I tried to cover all the details. Thank you for the kind review. 🍀