SpMV on GPUs
From HPCRL Wiki
This research has focused on optimizing sparse matrix representations (i.e., storage formats) for data-parallel accelerators such as GPUs. A key finding is that no single sparse matrix representation is consistently superior: the best choice depends on the sparsity pattern of the matrix. Building on this observation, the research applies machine learning techniques to automatically select the best sparse representation for a given matrix.
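To illustrate why the storage format matters, the sketch below (not taken from the publications; the example matrix and function names are made up for illustration) implements a scalar sparse matrix-vector product over two standard formats, CSR and ELLPACK. CSR stores exactly the nonzeros per row, while ELLPACK pads every row to a fixed width, which wastes space on irregular matrices but maps well to GPU thread-per-row execution when row lengths are uniform:

```python
# Illustrative sketch of two standard sparse formats and SpMV over each.
# CSR: row_ptr delimits each row's slice of (col_idx, vals).
# ELLPACK: every row padded to a fixed width; -1 marks a padding slot.

def csr_spmv(row_ptr, col_idx, vals, x):
    """y = A @ x with A stored in CSR."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[k] * x[col_idx[k]]
    return y

def ell_spmv(cols, vals, width, x):
    """y = A @ x with A stored in ELLPACK with the given row width."""
    n = len(cols)
    y = [0.0] * n
    for i in range(n):
        for k in range(width):
            c = cols[i][k]
            if c >= 0:  # skip padding entries
                y[i] += vals[i][k] * x[c]
    return y

# Example matrix A = [[4, 0, 1],
#                     [0, 2, 0],
#                     [3, 0, 5]]
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
vals    = [4.0, 1.0, 2.0, 3.0, 5.0]

ell_cols = [[0, 2], [1, -1], [0, 2]]
ell_vals = [[4.0, 1.0], [2.0, 0.0], [3.0, 5.0]]

x = [1.0, 1.0, 1.0]
print(csr_spmv(row_ptr, col_idx, vals, x))  # [5.0, 2.0, 8.0]
print(ell_spmv(ell_cols, ell_vals, 2, x))   # [5.0, 2.0, 8.0]
```

Both formats produce the same result, but their GPU performance diverges with the sparsity pattern: ELLPACK's padding overhead grows with row-length variance, while CSR's irregular row lengths cause load imbalance across threads. This dataset dependence is what motivates automatic format selection.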
Publications
- Automatic Selection of Sparse Matrix Representation on GPUs, in Proc. ICS 2015 [1].
- Characterizing Dataset Dependence for Sparse Matrix-Vector Multiplication on GPUs, in Proc. Workshop on Parallel Programming for Analytics Applications (held with PPoPP), 2015 [2].
- A Model-Driven Blocking Strategy for Load Balanced Sparse Matrix-Vector Multiplication on GPUs, Journal of Parallel and Distributed Computing (JPDC) [3].
- Fast Sparse Matrix-Vector Multiplication on GPUs for Graph Applications, in Proc. SC 2014 [4].
- An Efficient Two-Dimensional Blocking Mechanism for Sparse Matrix-Vector Multiplication on GPUs, in Proc. ICS 2014 [5].
Project Members
- Arash Ashari
- Naser Sedaghati
- John Eisenlohr
- Louis-Noel Pouchet
- P. Sadayappan