====== Which parts of CP2K are CUDA-accelerated? ======

  * Anything that uses ''dbcsr_multiply'', i.e. sparse matrix multiplication, when compiled with ''%%-D__DBCSR_CUDA%%''. This benefits in particular the [[doi>10.1021/ct200897x|linear scaling DFT]] code. See also [[http://dbcsr.cp2k.org|the DBCSR project]].
  * ScaLAPACK/BLAS calls (in particular ''pdgemm''/''pdsyrk''/''dgemm''), when CP2K is linked against an accelerated library that executes these routines on the GPU. The impact is most visible for MP2 and RPA calculations; on the hybrid Cray XC30, linking against ''libsci_acc'' achieves this.
  * FFTs, when compiled with ''%%-D__PW_CUDA%%'' (a sample arch-file sketch combining these flags follows this list).
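
To make the above concrete, here is a minimal sketch of what one would add to a CP2K arch file for a CUDA-enabled build. The ''%%-D__DBCSR_CUDA%%'' and ''%%-D__PW_CUDA%%'' defines are the ones listed above; everything else (variable names such as ''NVFLAGS'', the GPU architecture, and the exact CUDA libraries to link) is an assumption that depends on your machine and CP2K version, so compare with the arch files shipped with the source.

<code make>
# Sketch of arch-file additions for a CUDA-enabled build.
# The defines match the list above; library names, paths and the
# GPU architecture (sm_35) are placeholders for your machine.
DFLAGS  += -D__DBCSR_CUDA -D__PW_CUDA
NVFLAGS  = $(DFLAGS) -O3 -arch sm_35
LIBS    += -lcudart -lcufft -lcublas
</code>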