TY - JOUR
T1 - Error-free transformations of matrix multiplication by using fast routines of matrix multiplication and its applications
AU - Ozaki, Katsuhisa
AU - Ogita, Takeshi
AU - Oishi, Shin'ichi
AU - Rump, Siegfried M.
PY - 2012/1
Y1 - 2012/1
N2 - This paper is concerned with accurate matrix multiplication in floating-point arithmetic. Recently, an accurate summation algorithm was developed by Rump et al. (SIAM J Sci Comput 31(1):189-224, 2008). The key technique of their method is a fast error-free splitting of floating-point numbers. Using this technique, we first develop an error-free transformation of a product of two floating-point matrices into a sum of floating-point matrices. Next, we partially apply this error-free transformation and develop an algorithm which aims to output an accurate approximation of the matrix product. In addition, an a priori error estimate is given. A characteristic of the proposed method is that, in terms of both computation and memory consumption, the dominant part of our algorithm consists of ordinary floating-point matrix multiplications. The routine for matrix multiplication is highly optimized using BLAS, so our algorithms show good computational performance. Although our algorithms require a significant amount of working memory, they are significantly faster than 'gemmx' in XBLAS when all matrix dimensions are large enough to realize nearly peak performance of 'gemm'. Numerical examples illustrate the efficiency of the proposed method.
KW - Accurate computations
KW - Error-free transformation
KW - Floating-point arithmetic
KW - Matrix multiplication
UR - http://www.scopus.com/inward/record.url?scp=82255175133&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=82255175133&partnerID=8YFLogxK
U2 - 10.1007/s11075-011-9478-1
DO - 10.1007/s11075-011-9478-1
M3 - Article
AN - SCOPUS:82255175133
SN - 1017-1398
VL - 59
SP - 95
EP - 118
JO - Numerical Algorithms
JF - Numerical Algorithms
IS - 1
ER -
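
Note: The abstract describes an error-free transformation in which each factor matrix is split into a high-order and a low-order part, chosen so that the product of the two high-order parts is computed exactly by an ordinary floating-point matrix multiplication. The NumPy sketch below illustrates one level of that splitting idea; the function name, the per-row/per-column choice of the splitting constant, and the exponent formula beta = ceil((53 + log2 n)/2) are illustrative assumptions in the spirit of Rump et al.'s error-free splitting, not the authors' reference implementation.

import numpy as np
from fractions import Fraction

def split(A, n, axis):
    # Split A = A_hi + A_lo so that A_hi keeps only the leading bits of each entry.
    # beta is chosen so that length-n inner products between two "high" parts
    # accumulate without rounding error in ordinary binary64 matrix multiplication
    # (assumed constant, following the error-free splitting cited in the abstract).
    t = 53                                      # mantissa bits of binary64
    beta = int(np.ceil((t + np.log2(n)) / 2))   # assumed splitting exponent
    mu = np.max(np.abs(A), axis=axis, keepdims=True)
    mu[mu == 0] = 1.0                           # avoid log2(0) on zero rows/columns
    sigma = 2.0 ** (np.ceil(np.log2(mu)) + beta)
    A_hi = (A + sigma) - sigma                  # rounds away the low-order bits
    A_lo = A - A_hi                             # exact in floating point
    return A_hi, A_lo

rng = np.random.default_rng(0)
m = n = p = 50
A = rng.standard_normal((m, n)) * 10.0 ** rng.uniform(0, 8, (m, n))
B = rng.standard_normal((n, p))

A1, A2 = split(A, n, axis=1)    # row-wise split of A
B1, B2 = split(B, n, axis=0)    # column-wise split of B

# A @ B = A1 @ B1 + (A1 @ B2 + A2 @ B1 + A2 @ B2); the term A1 @ B1 is error-free.
C = A1 @ B1 + (A1 @ B2 + A2 @ B1 + A2 @ B2)

# Check exactness of the leading term against a rational-arithmetic reference.
exact = np.array([[float(sum(Fraction(a) * Fraction(b) for a, b in zip(row, col)))
                   for col in B1.T] for row in A1])
assert np.array_equal(A1 @ B1, exact)
print(np.max(np.abs(C - A @ B)))

In the paper itself, the splitting is applied repeatedly (or only partially, for an accurate rather than exact result), so that the matrix product is represented as an unevaluated sum of floating-point matrices, with each term computed by an optimized 'gemm'; the single-level, four-term sketch above only demonstrates the exactness of the leading term.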