SpMV on GPUs
From HPCRL Wiki
Revision as of 21:01, 26 September 2016
This research focuses on optimizing sparse matrix representations (i.e., storage formats) for data-parallel accelerators such as GPUs. It shows that no single sparse matrix representation is consistently superior: the best representation depends on the matrix's sparsity pattern. The research therefore applies machine learning techniques to automatically select the best sparse representation for a given matrix.
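To make "sparse matrix representation" concrete, the following is a minimal sketch of SpMV (sparse matrix-vector multiply, y = Ax) using CSR (compressed sparse row), one of the common storage formats compared in this kind of study. The matrix and function name here are illustrative examples, not taken from the project's publications; other formats (COO, ELL, HYB) trade off storage and access patterns differently, which is why the best choice varies with the sparsity pattern.

```python
def csr_spmv(values, col_idx, row_ptr, x):
    """Compute y = A @ x where A is stored in CSR format.

    values:  nonzero entries of A, row by row
    col_idx: column index of each entry in `values`
    row_ptr: row_ptr[i]..row_ptr[i+1] delimits row i in `values`
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# Hypothetical 3x3 matrix [[1,0,2],[0,3,0],[4,0,5]] in CSR form:
values  = [1.0, 2.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
x = [1.0, 1.0, 1.0]
print(csr_spmv(values, col_idx, row_ptr, x))  # [3.0, 3.0, 9.0]
```

On a GPU, the irregular per-row work in this loop is what makes format choice matter: rows with very different nonzero counts cause load imbalance under CSR, which other formats mitigate at the cost of padding or extra indexing.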
Publications
Project Members
- Arash Ashari
- Naser Sedaghati
- John Eisenlohr
- Louis-Noel Pouchet
- P. Sadayappan