About Course

Course Outline

This course provides an introduction to GPU, multi-core, and multi-node programming. It gives an overview of the importance of large-scale parallelism, threading concepts, and multithreading methodology, and covers programming with the major parallel models (MPI, CUDA, OpenMP, and Pthreads). The course will help students design and implement GPU and parallel applications.

Expected Outcomes

After completing this course, a student should be able to:

  •   Write CUDA programs for NVIDIA GPU architectures.
  •   Develop well-optimized threaded applications and improve HPC application performance on parallel computers (both SMP and distributed-memory machines).
  •   Demonstrate an understanding of multi-node computing using MPI.

Topics to Be Covered

  1. Basic concepts in parallel and GPU programming.
  2. Amdahl's Law. 
  3. Distributed Parallel Programming and Shared Memory Parallel Programming. 
  4. Instruction level parallelism, vectorization, and SSE instructions.  
  5. Processes, threads and Message Passing Interface (MPI).  
  6. Data parallel and task parallel programming.  
  7. Synchronization and mutual exclusion issues.  
  8. Synchronization primitives - mutex, critical sections, semaphores. 
  9. Hazards, data races, deadlocks and subtle bugs in parallel programs.  
  10. Parallel overheads, load balancing and performance tuning.  
  11. Parallelization of a serial application: partition, communicate, agglomerate.
  12. Application scalability.  
  13. Basics of the CUDA architecture.
  14. Programming with CUDA C.  
  15. CUDA Optimization I, II, III.  
  16. CUDA tools - Visual Profiler, Parallel Nsight.
  17. CUDA Libraries.  
  18. Directive based Programming.  
  19. Recent topics - features of the CUDA 4.0 and 4.1 Toolkits.
  20. Multi-GPU programming and Unified Virtual Addressing (UVA).
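
As a small taste of topic 2, Amdahl's Law can be sketched in a few lines of Python (the function name and the example numbers below are illustrative, not part of the course material):

```python
def amdahl_speedup(serial_fraction: float, n_processors: int) -> float:
    """Amdahl's Law: speedup S = 1 / (s + (1 - s) / N), where s is the
    fraction of the program that must run serially and N is the number
    of processors."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

if __name__ == "__main__":
    # Even with 95% of the work parallelized, the speedup can never
    # exceed 1 / 0.05 = 20, no matter how many processors are added.
    for n in (1, 4, 16, 256, 65536):
        print(f"N={n:6d}  speedup={amdahl_speedup(0.05, n):6.2f}")
```

This saturation effect is why the course pairs parallel programming techniques with topics such as parallel overheads, load balancing, and application scalability.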