next up previous
Next: Test problems Up: Comparing the performance of Previous: The Test Platforms


The Sparse Matrix Benchmark

In our first study, the benchmark we chose is a simple yet important operation: the multiplication of two sparse matrices. This operation appears when forming the normal equations in interior-point methods for large-scale numerical optimization. It also appears, either explicitly or implicitly, in very large scale unstructured calculations where a multilevel/multigrid scheme is used. For example, in an algebraic multilevel algorithm for large sparse linear systems, the matrix on a coarse grid, $A_c$, is derived from the matrix on the fine grid, $A_f$, using the following Galerkin product:

\begin{displaymath}
A_c\ =\ P^T A_f P,
\end{displaymath}
(1)

where $P$ is the prolongation operator. This is usually one of the most time-consuming parts of an algebraic multigrid algorithm. Although in practice the triple product in (1) is computed in a single pass, rather than as two successive products of two matrices, in this study we look only at the product of two matrices.
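To make the kernel concrete, a sparse matrix-matrix product over compressed sparse row (CSR) storage can be sketched as below. This is an illustrative implementation of the classical row-by-row (Gustavson-style) algorithm with a sparse accumulator, not the JASPA code itself; all class and method names here are ours.

```java
// Sketch of a CSR sparse matrix product; names are illustrative, not JASPA's.
import java.util.ArrayList;
import java.util.Arrays;

public class SparseMultiply {

    /** Compressed Sparse Row storage: rowPtr has nRows+1 entries. */
    public static final class CSR {
        public final int nRows, nCols;
        public final int[] rowPtr, colInd;
        public final double[] val;
        public CSR(int nRows, int nCols, int[] rowPtr, int[] colInd, double[] val) {
            this.nRows = nRows; this.nCols = nCols;
            this.rowPtr = rowPtr; this.colInd = colInd; this.val = val;
        }
    }

    /** C = A * B, building each row of C with a sparse accumulator.
     *  Column indices within a row of C are not sorted. */
    public static CSR multiply(CSR a, CSR b) {
        int[] rowPtr = new int[a.nRows + 1];
        ArrayList<Integer> colInd = new ArrayList<>();
        ArrayList<Double> val = new ArrayList<>();
        int[] marker = new int[b.nCols];   // marker[c]: where column c sits in
        Arrays.fill(marker, -1);           // the current row, or a stale value
        for (int i = 0; i < a.nRows; i++) {
            int rowStart = colInd.size();
            for (int k = a.rowPtr[i]; k < a.rowPtr[i + 1]; k++) {
                int j = a.colInd[k];
                double aij = a.val[k];
                // Row i of C accumulates aij * (row j of B).
                for (int l = b.rowPtr[j]; l < b.rowPtr[j + 1]; l++) {
                    int c = b.colInd[l];
                    double v = aij * b.val[l];
                    if (marker[c] < rowStart) {   // first hit on column c
                        marker[c] = colInd.size();
                        colInd.add(c);
                        val.add(v);
                    } else {                      // column already present
                        val.set(marker[c], val.get(marker[c]) + v);
                    }
                }
            }
            rowPtr[i + 1] = colInd.size();
        }
        int nnz = colInd.size();
        int[] ci = new int[nnz];
        double[] v = new double[nnz];
        for (int k = 0; k < nnz; k++) { ci[k] = colInd.get(k); v[k] = val.get(k); }
        return new CSR(a.nRows, b.nCols, rowPtr, ci, v);
    }

    /** Transpose by counting column occurrences, then scattering. */
    public static CSR transpose(CSR a) {
        int nnz = a.rowPtr[a.nRows];
        int[] rowPtr = new int[a.nCols + 1];
        for (int k = 0; k < nnz; k++) rowPtr[a.colInd[k] + 1]++;
        for (int c = 0; c < a.nCols; c++) rowPtr[c + 1] += rowPtr[c];
        int[] colInd = new int[nnz];
        double[] val = new double[nnz];
        int[] next = Arrays.copyOf(rowPtr, a.nCols);
        for (int i = 0; i < a.nRows; i++)
            for (int k = a.rowPtr[i]; k < a.rowPtr[i + 1]; k++) {
                int c = a.colInd[k];
                colInd[next[c]] = i;
                val[next[c]] = a.val[k];
                next[c]++;
            }
        return new CSR(a.nCols, a.nRows, rowPtr, colInd, val);
    }
}
```

With such a kernel, the Galerkin product (1) reduces to two applications of the same two-matrix routine, e.g. `multiply(transpose(P), multiply(Af, P))`, which is why the product of two sparse matrices is the operation worth timing.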

Our benchmark, known as JASPA (JAva SPArse benchmark) and available at http://www.dl.ac.uk/TCSC/Staff/Hu_Y_F/JASPA, therefore has two simple steps:




2000-08-16