SpMV on GPUs

This research has focused on optimizing sparse matrix representations (i.e., storage formats) for data-parallel accelerators such as GPUs. It shows that no single sparse matrix representation is consistently superior: the best choice depends on the sparsity pattern of the matrix. Machine learning techniques are then used to automatically select the best representation for a given matrix.
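
For concreteness, below is a minimal CUDA sketch of the SpMV kernel y = Ax over the widely used CSR (compressed sparse row) format. It is purely illustrative (one thread per row, with no blocking, tuning, or format selection) and is not the implementation from the publications listed below; all names in it are hypothetical.

<pre>
// Illustrative CSR SpMV: y = A*x, one thread per matrix row.
__global__ void spmv_csr(int num_rows,
                         const int    *row_ptr,  // num_rows+1 entries: row i's nonzeros
                                                 // occupy [row_ptr[i], row_ptr[i+1])
                         const int    *col_idx,  // column index of each nonzero
                         const double *vals,     // value of each nonzero
                         const double *x,        // dense input vector
                         double       *y)        // dense output vector
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < num_rows) {
        double dot = 0.0;
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            dot += vals[j] * x[col_idx[j]];
        y[row] = dot;
    }
}

// Launch sketch: one thread per row, 256 threads per block.
// spmv_csr<<<(num_rows + 255) / 256, 256>>>(num_rows, row_ptr, col_idx, vals, x, y);
</pre>

Even this simple mapping shows why performance is dataset-dependent: rows with widely varying nonzero counts leave threads in the same warp with unequal work and irregular accesses to x, which is what alternative formats (e.g., ELL, HYB, blocked layouts) and the automatic format selection studied here aim to address.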
 
== Publications ==
* ''Automatic Selection of Sparse Matrix Representation on GPUs'', in Proc. ICS 2015 [http://dl.acm.org/citation.cfm?id=2751244].
* ''Characterizing Dataset Dependence for Sparse Matrix-Vector Multiplication on GPUs'', in Proc. Workshop on Parallel Programming for Analytics Applications (held with PPoPP), 2015 [http://dl.acm.org/citation.cfm?id=2726941].
* ''A Model-Driven Blocking Strategy for Load Balanced Sparse Matrix-Vector Multiplication on GPUs'', Journal of Parallel and Distributed Computing (JPDC) [http://www.sciencedirect.com/science/article/pii/S0743731514002081].
* ''Fast Sparse Matrix-Vector Multiplication on GPUs for Graph Applications'', in Proc. SC 2014 [http://dl.acm.org/citation.cfm?id=2683679].
* ''An Efficient Two-Dimensional Blocking Strategy for Sparse Matrix-Vector Multiplication on GPUs'', in Proc. ICS 2014 [http://dl.acm.org/citation.cfm?id=2597678].
== Project Members ==
* Arash Ashari
* Naser Sedaghati
* John Eisenlohr
* Louis-Noel Pouchet
* P. Sadayappan
