NVIDIA and other graphics card makers have put comparatively inexpensive massively parallel computing within reach of common desktop (and laptop) computers in the form of the Graphics Processing Unit (GPU). GPUs can reach amazing computational throughput in massively parallel applications, with figures of around 3 teraflops being bandied about for current hardware. But the key is parallelization, which means that perhaps tens of thousands of threads may be executing exactly the same instruction at any given time.
This brings with it a completely different computational view. Just like Alice's Looking-Glass world didn't make sense to her at first but was still internally consistent, the world of massively parallel heterogeneous programming most likely won't make sense to programmers whose entire experience has been in sequential programming - but it is still internally consistent.
This presentation will give a brief overview of the basic computational model of many GPUs. Additionally, sample programs will be shown to emphasize the difference in thinking required to make use of this new gift of massive parallelism.
Mike is a programmer with the Boeing Company in Long Beach, CA where he creates instrument simulations, web pages and various tools used by the C-17 program.
POST-MEETING ACTIVITIES: Join us at the Pilsner Room, within walking distance.
Full location details at http://www.uuasc.org/oc.html
Please sign up for the uuasc.org email list.
If you get locked out, you can call Todd Jackson at (949)[masked] and hopefully we can get you into the building. Please try to arrive before 7:00 PM to avoid problems. When you arrive on the 8th floor, you will find that the doors to the Oracle office are locked; we cannot leave them open because that would set off the alarms.