FE529 GPU Computing in Finance
Course Catalog Description
Introduction
Campus | Fall | Spring | Summer
---|---|---|---
On Campus | X | |
Web Campus | | |
Instructors
Professor | Office |
---|---|---
More Information
Course Description
Parallel programming on GPUs is a relatively new area of multithreaded programming, and it requires additional knowledge even from accomplished programmers. The objective of this course is to provide that knowledge, so that students are well prepared for future programming and software development work. The course is intended as the first in a sequence of advanced programming courses that does not currently exist in any financial program at a US institution; if realized, this sequence would, within five years, position Stevens as a leading US institution for financial programming. Students completing the course will therefore gain distinctive skills that set them apart from other graduates.
Course Outcome
After completing the course, students will be able to:
- Gain a basic knowledge of parallel programming;
- Understand memory management and host-device data transfer in CUDA (see the sketch after this list);
- Program simple financial models on the CUDA platform.
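As a minimal illustration of the second outcome, the sketch below allocates device memory, copies data from host to device, launches a kernel over a grid of thread blocks, and copies the result back. It is an illustrative example only, not part of the official course material; the kernel name, variable names, and launch configuration are chosen here for the sketch.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: y[i] = a * x[i] + y[i] (SAXPY), one element per thread.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers
    float* hx = (float*)malloc(bytes);
    float* hy = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device buffers and host-to-device copies
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy);

    // Copy the result back; expected value is 3*1 + 2 = 5
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```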
Course Resources
Textbook
Sanders, Jason, and Edward Kandrot. CUDA by example: an introduction to general-purpose GPU programming. Addison-Wesley Professional, 2010. https://developer.nvidia.com/cuda-example
Additional References
Grading
Grading Policies
- 40% Homework
- 20% Classwork
- 40% Projects
Lecture Outline
Week | Topic | Subtopics
---|---|---
Week 1 | Introduction to massively parallel programming and CUDA | CUDA environment configuration; GPU computing overview; why parallel computing is important; what CUDA is and why it matters; a sample CUDA example
Week 2 | Basics of CUDA | Threads, blocks, grids, and kernels; thread synchronization, communication, and error handling; the CUDA programming model
Week 3 | CUDA memories | Global, shared, and constant memory; host and device memory; copying GPU memory
Week 4 | CUDA API | The CUDA API libraries; random number generation
Week 5 | Simple matrix multiplication in CUDA | Choosing thread, block, and grid dimensions; a sample parallel computation
Week 6 | CUDA memory model | Using GPU memory more efficiently; thread management
Week 7 | Performance considerations | How to parallelize a computation; optimization using shared memory
Week 8 | Useful information on CUDA tools | The CUDA runtime library; the CUDA core library
Week 9 | Parallel thread execution | CUDA architecture; execution methodology
Week 10 | ArrayFire | ArrayFire, a fast software library for GPU computing with an easy-to-use API
Week 11 | CUDA demo I | Random number generation; comparison with the CPU
Week 12 | CUDA demo II | Monte Carlo simulation using CUDA; comparison with the CPU (see the sketch after this outline)
Week 13 | CUDA demo III | Optimal dynamic Monte Carlo method in option pricing
Week 14 | Final project |
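As a taste of the Monte Carlo demos in Weeks 12 and 13, the sketch below prices a European call under geometric Brownian motion using the cuRAND device API, with each thread simulating its own paths and the per-thread estimates averaged on the host. It is an illustrative sketch only, not the course's demo code; the kernel name mc_call, the parameter values, and the thread/block configuration are assumptions made for this example.

```cuda
#include <cstdio>
#include <cuda_runtime.h>
#include <curand_kernel.h>

// Each thread simulates `paths_per_thread` terminal stock prices under
// geometric Brownian motion and accumulates discounted call payoffs.
__global__ void mc_call(float S0, float K, float r, float sigma, float T,
                        int paths_per_thread, unsigned long long seed,
                        float* partial_sums) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState state;
    curand_init(seed, tid, 0, &state);

    float drift = (r - 0.5f * sigma * sigma) * T;
    float vol   = sigma * sqrtf(T);
    float sum   = 0.0f;
    for (int i = 0; i < paths_per_thread; ++i) {
        float z  = curand_normal(&state);       // standard normal draw
        float ST = S0 * expf(drift + vol * z);  // terminal stock price
        sum += fmaxf(ST - K, 0.0f);             // call payoff
    }
    partial_sums[tid] = expf(-r * T) * sum / paths_per_thread;
}

int main() {
    const int threads = 256, blocks = 64, paths_per_thread = 1000;
    const int n = threads * blocks;

    float* d_sums;
    cudaMalloc(&d_sums, n * sizeof(float));

    // Example parameters: S0 = 100, K = 100, r = 5%, sigma = 20%, T = 1 year
    mc_call<<<blocks, threads>>>(100.0f, 100.0f, 0.05f, 0.2f, 1.0f,
                                 paths_per_thread, 1234ULL, d_sums);

    float* h_sums = (float*)malloc(n * sizeof(float));
    cudaMemcpy(h_sums, d_sums, n * sizeof(float), cudaMemcpyDeviceToHost);

    // Average the per-thread estimates on the host
    double price = 0.0;
    for (int i = 0; i < n; ++i) price += h_sums[i];
    price /= n;
    printf("Monte Carlo call price: %f\n", price);

    cudaFree(d_sums);
    free(h_sums);
    return 0;
}
```

A CPU version of the same simulation provides the comparison point discussed in the demos: timing both versions shows how the per-thread path generation maps onto the GPU's parallel hardware.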