
Details

High-level programming languages and GPU accelerators are powerful enablers for a wide range of applications. Achieving scalable vertical (within a compute node), horizontal (across compute nodes), and temporal (over different generations of hardware) performance while retaining productivity requires effective abstractions. Distributed arrays are one such abstraction that enables high-level programs to achieve highly scalable performance. Distributed arrays achieve this performance by deriving parallelism from data locality, which naturally leads to high memory bandwidth efficiency. This talk explores distributed array performance on a variety of hardware. Scalable performance is demonstrated within and across CPU cores, CPU nodes, and GPU nodes. The interactive AI supercomputing hardware used spans decades, allowing a direct comparison of hardware improvements over that period.
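As a rough illustration of the idea in the abstract (this is a minimal sketch, not code from the talk), the Python/NumPy example below shows a one-dimensional block distribution: each of P hypothetical processes owns a contiguous slice of a global array and computes only on its own block, which is how distributed arrays tie parallelism to data locality. The names local_slice, P, and N are illustrative and not drawn from any particular library.

    import numpy as np

    P = 4          # hypothetical number of processes
    N = 1_000_000  # global array length

    def local_slice(rank, n=N, p=P):
        """Contiguous block of global indices owned by process `rank`."""
        base, extra = divmod(n, p)
        start = rank * base + min(rank, extra)
        stop = start + base + (1 if rank < extra else 0)
        return slice(start, stop)

    # Owner-computes rule: each process reads and writes only its own
    # block, so memory traffic stays local to that block's owner.
    x = np.arange(N, dtype=np.float64)  # stands in for the global array
    partial = [x[local_slice(r)].sum() for r in range(P)]
    total = sum(partial)  # in a real system, one collective reduction

    assert np.isclose(total, x.sum())

In an actual distributed-array library the blocks live in separate address spaces (one per core, node, or GPU), and combining the partial sums is a communication step; the per-block work needs no communication at all, which is the source of the memory bandwidth efficiency the abstract mentions.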

Please register in advance at https://acm-org.zoom.us/webinar/register/2117607393625/WN_lYs4lxKfSlGkMVq71ibN-g even if you plan to attend in person.

Indicate on the registration form if you plan to attend in person. This will help us determine whether the room is close to reaching capacity. We plan to serve light refreshments from about 6:30 pm.

After registering, you will receive a confirmation email containing information about joining the webinar.

After the seminar, we may make auxiliary material, such as slides and access to the recording, available to people who have registered.

This is a joint meeting of the GBC/ACM (http://www.gbcacm.org) and the Boston Chapter of the IEEE-CS.

Topics

Artificial Intelligence
Artificial Intelligence Applications
High Performance Computing
High Scalability Computing
Supercomputing

Sponsors

ACM
15% off first-year membership with ACM
