
Floating Point Precision in Parallel Programming on CPUs, GPUs and clusters

Details

All,

This talk is on floating point computation and precision:

  • floating point formats and programming language types
  • precision of arithmetic operations
  • standards: what they cover and don't cover
  • precision of mathematical functions
  • limitations that formats and math functions impose on computations
  • floating point in CPU and GPU programming
  • impact on parallel computation in clusters
  • good programming practices to improve intra-node and inter-node computation

This should help you understand why results may look unstable or wrong, or differ across hardware platforms, and how to keep them as precise as possible in parallel computing while remaining as fast as possible!
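As a small taste of the kind of issue the talk covers: floating point addition is not associative, so the order in which a parallel reduction combines partial sums can change the rounded result. A minimal sketch in Python (the specific values are illustrative, not from the talk):

```python
import math

# Floating point addition is not associative: the grouping of a sum
# changes the rounded result, so a parallel reduction can legitimately
# differ from a sequential loop on the same data.
values = [1e16, 1.0, -1e16, 1.0]

# Sequential left-to-right sum: 1e16 + 1.0 rounds back to 1e16
# (the spacing between doubles near 1e16 is 2.0), so one 1.0 is lost.
total_seq = 0.0
for v in values:
    total_seq += v

# A different grouping, as a two-way parallel reduction might compute it.
total_pair = (1e16 + -1e16) + (1.0 + 1.0)

print(total_seq)   # 1.0
print(total_pair)  # 2.0

# math.fsum tracks the lost low-order bits and returns the correctly
# rounded sum regardless of ordering — one of the "good practices".
print(math.fsum(values))  # 2.0
```

Both answers are valid IEEE 754 results for the same input; only the evaluation order differs, which is exactly why identical programs can disagree between CPUs, GPUs, and cluster nodes.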

We will give away a graphics card to one of the participants.

Materials in English; the talk will be given in French.

Jerome.

Intelligence Artificielle | Machine Learning | Deep Learning