
Thursday, August 06, 2020

Research: A Survey of Numerical Methods Utilizing Mixed Precision Arithmetic | Academia & Research - HPCwire

Editor’s Note: 
To take advantage of proliferating hardware advances intended to deliver powerful mixed-precision computing, the DoE has started an effort to develop new algorithms that make the most of these new capabilities. This new effort is a meaningful and timely pivot from traditional software optimization, say Jack Dongarra and Hartwig Anzt. As a first step, Dongarra, Anzt, and colleagues have surveyed the numerical linear algebra community and pulled their findings into a rich report. Dongarra and Anzt’s brief commentary, presented here, provides a glimpse into the report’s contents and, they hope, enticement to dig deeper into the full report. Both are familiar figures in HPC. Brief bios are included at the end.

In recent years, hardware vendors have started designing low-precision special function units in response to the machine learning community’s demand for high compute power in low-precision formats, argue Hartwig Anzt and Jack Dongarra.
 
Server-line products are also increasingly featuring low-precision special function units, such as the Nvidia tensor cores in the Oak Ridge National Laboratory’s Summit supercomputer, which provide more than an order of magnitude higher performance than is available in IEEE double precision.
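To make the accuracy side of that trade-off concrete, the short Python snippet below (our illustration; NumPy is not mentioned in the article) prints the unit roundoff, range, and storage cost of the IEEE half, single, and double formats that such units target.

    import numpy as np

    # Compare the IEEE floating-point formats offered by low-precision
    # hardware units against standard double precision.
    for dtype in (np.float16, np.float32, np.float64):
        info = np.finfo(dtype)
        # eps is the unit roundoff, max the largest finite value,
        # bits the storage cost per number.
        print(f"{np.dtype(dtype).name:>8}: eps = {info.eps:.2e}, "
              f"max = {info.max:.2e}, bits = {info.bits}")

Half precision carries only about three decimal digits, which is why exploiting it in scientific computing needs the algorithmic safeguards the report discusses.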

At the same time, the gap between compute power on the one hand and memory bandwidth on the other keeps widening, making data access and communication prohibitively expensive compared to arithmetic operations. Faced with the choice of either ignoring these hardware trends and continuing down the traditional path, or adjusting the software stack to the changing hardware designs, the Department of Energy’s Exascale Computing Project opted for the aggressive step of building a multiprecision focus effort: designing and engineering novel algorithms that exploit the compute power available in low precision and adjust the communication format to application-specific needs.
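One way to read “adjusting the communication format” is to decouple the precision in which data is stored and moved from the precision in which arithmetic is performed. The following sketch (an illustration under our own assumptions, not an algorithm from the report) stores the operands of a memory-bound dot product in single precision, halving the bytes that cross the memory bus, while accumulating in double precision.

    import numpy as np

    def compressed_dot(x64, y64):
        # Store (and, in a distributed setting, communicate) the vectors
        # in single precision: half the memory traffic of double.
        x32, y32 = x64.astype(np.float32), y64.astype(np.float32)
        # Do the arithmetic in double precision so rounding error does
        # not accumulate across the length-n reduction.
        return np.dot(x32.astype(np.float64), y32.astype(np.float64))

    rng = np.random.default_rng(42)
    x, y = rng.standard_normal(10**6), rng.standard_normal(10**6)
    print(compressed_dot(x, y), np.dot(x, y))  # agree to roughly single precision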

To start the multiprecision focus effort, we have surveyed the numerical linear algebra community and summarized all existing multiprecision knowledge, expertise, and software capabilities in a landscape analysis report...

In general, we would compute a starting point and f(x) in single precision arithmetic, and the refinement process would be carried out in double precision arithmetic. If the refinement process is cheaper than the initial computation of the solution, then double precision accuracy can be achieved at nearly the same speed as single precision accuracy...
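As a concrete instance, here is a minimal sketch of classic mixed-precision iterative refinement for a dense system Ax = b, written with NumPy and SciPy (the library choice, iteration cap, and tolerance are our assumptions, not prescriptions from the report): the expensive O(n^3) factorization runs in single precision, while the cheap O(n^2) residual and correction steps run in double precision.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def refine_solve(A, b, max_iters=20, tol=1e-12):
        # Expensive O(n^3) LU factorization in single precision.
        lu, piv = lu_factor(A.astype(np.float32))
        # Initial solution in single precision, promoted to double.
        x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
        for _ in range(max_iters):
            # Residual in double precision measures the current error.
            r = b - A @ x
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            # Correction solve reuses the single-precision factors.
            x += lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        return x

For a reasonably conditioned A, this typically reaches double precision accuracy in a handful of O(n^2) refinement sweeps, which is the “nearly the same speed” effect described above.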

The survey report presents much more detail on the methods and approaches using these techniques; see https://www.icl.utk.edu/files/publications/2020/icl-utk-1392-2020.pdf.

Author Bio – Hartwig Anzt
Hartwig Anzt is a Helmholtz Young Investigator Group leader at the Steinbuch Centre for Computing at the Karlsruhe Institute of Technology (KIT). He obtained his PhD in Mathematics at the Karlsruhe Institute of Technology and joined Jack Dongarra’s Innovative Computing Lab at the University of Tennessee in 2013. Since 2015 he has also held a Senior Research Scientist position at the University of Tennessee. Hartwig Anzt has a strong background in numerical mathematics and specializes in iterative methods and preconditioning techniques for next-generation hardware architectures. His Helmholtz group on Fixed-point methods for numerics at Exascale (“FiNE”) is funded until 2022. Hartwig Anzt has a long track record of high-quality software development. He is the author of the MAGMA-sparse open-source software package, the managing lead and developer of the Ginkgo numerical linear algebra library, and part of the US Exascale Computing Project delivering production-ready numerical linear algebra libraries.

Author Bio – Jack Dongarra
Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his PhD in Applied Mathematics from the University of New Mexico in 1980. He worked at Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Computer Science Department at the University of Tennessee, the position of Distinguished Research Staff Member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), a Turing Fellowship in the Schools of Computer Science and Mathematics at the University of Manchester, and an Adjunct Professorship in the Computer Science Department at Rice University.
Read more... 

Source: HPCwire