@cleardusk
2016-03-06T14:48:21.000000Z
One K40 and two K20s:
```
Device 0: "Tesla K40c"
  CUDA Driver Version / Runtime Version          7.5 / 7.5
  CUDA Capability Major/Minor version number:    3.5
  Total amount of global memory:                 11520 MBytes (12079136768 bytes)
  (15) Multiprocessors, (192) CUDA Cores/MP:     2880 CUDA Cores
  GPU Max Clock rate:                            745 MHz (0.75 GHz)
  Memory Clock rate:                             3004 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 1572864 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size (x,y,z):     (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 132 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 1: "Tesla K20m"
  CUDA Driver Version / Runtime Version          7.5 / 7.5
  CUDA Capability Major/Minor version number:    3.5
  Total amount of global memory:                 4800 MBytes (5032706048 bytes)
  (13) Multiprocessors, (192) CUDA Cores/MP:     2496 CUDA Cores
  GPU Max Clock rate:                            706 MHz (0.71 GHz)
  Memory Clock rate:                             2600 Mhz
  Memory Bus Width:                              320-bit
  L2 Cache Size:                                 1310720 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size (x,y,z):     (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 3 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
```
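As a sanity check, the theoretical peak memory bandwidth can be derived from the clock and bus-width figures reported above, assuming the effective GDDR5 data rate is twice the reported memory clock:

```shell
# Peak bandwidth = memory clock * 2 (double data rate) * bus width in bytes
awk 'BEGIN { printf "K40: %.0f GB/s\n", 3004e6 * 2 * 384 / 8 / 1e9 }'   # K40: 288 GB/s
awk 'BEGIN { printf "K20: %.0f GB/s\n", 2600e6 * 2 * 320 / 8 / 1e9 }'   # K20: 208 GB/s
```

Both figures match NVIDIA's published specs for these cards (288 GB/s and 208 GB/s).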
```
sudo gdebi cuda-repo-ubuntu1404_7.5-18_amd64.deb
sudo apt-get update
sudo apt-get install cuda
```
Configure the environment in bash.bashrc:
```
export CUDA_HOME=/usr/local/cuda-7.5
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64
export PATH=${CUDA_HOME}/bin:${PATH}
```
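A slightly safer variant appends to any existing `LD_LIBRARY_PATH` instead of overwriting it, and checks that the new bin directory actually landed on `PATH`:

```shell
export CUDA_HOME=/usr/local/cuda-7.5
# Preserve whatever was already on the library path
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}
export PATH=${CUDA_HOME}/bin:${PATH}
# Quick sanity check
echo "$PATH" | grep -q "cuda-7.5/bin" && echo "CUDA on PATH"
```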
Test:
```
$ cuda-install-samples-7.5.sh ~
$ cd ~/NVIDIA_CUDA-7.5_Samples
$ cd 1_Utilities/deviceQuery
$ make
$ ./deviceQuery
```
It looks like the previous Caffe build was compiled against CUDA 7.0:
```
➜  caffe ./examples/mnist/train_lenet.sh
./build/tools/caffe: error while loading shared libraries: libcudart.so.7.0: cannot open shared object file: No such file or directory
```
OpenBLAS installation guide: https://github.com/xianyi/OpenBLAS/wiki/Installation-Guide
Built with CUDA and ATLAS, no cuDNN.
Check GPU utilization:
```
nvidia-smi
nvidia-smi --loop=1
```
Train with multiple GPUs:
```
caffe -gpu all
```
CUDA only, all GPUs (K40 + K20 + K20): 5'42''
```
Sat Mar 5 13:52:29 CST 2016
Sat Mar 5 13:58:11 CST 2016
```
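The timings here are just the difference between two `date` stamps; with GNU date the delta can be computed directly (stamps taken from the run above):

```shell
start=$(date -d "Sat Mar 5 13:52:29 CST 2016" +%s)
end=$(date -d "Sat Mar 5 13:58:11 CST 2016" +%s)
echo "$((end - start)) s"   # 342 s, i.e. 5'42''
```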
CUDA only, K40: 4'42''
```
Sat Mar 5 14:01:47 CST 2016
Sat Mar 5 14:06:29 CST 2016
```
CUDA + OpenBLAS, K40: 4'49''
```
Sat Mar 5 14:18:02 CST 2016
Sat Mar 5 14:22:51 CST 2016
```
CUDA + cuDNN 3.0, K40: 41'', 37'', averaging roughly 37''
```
#1
Sat Mar 5 15:16:35 CST 2016
Sat Mar 5 15:17:16 CST 2016
#2
Sat Mar 5 15:19:19 CST 2016
Sat Mar 5 15:19:56 CST 2016
```
CUDA + cuDNN 4.0: roughly the same as above.
CUDA + cuDNN 4.0 + OpenBLAS: still roughly the same.
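Taken together, the headline result of these runs is the cuDNN speedup on the K40: 4'42'' (282 s) without cuDNN versus roughly 37 s with it. A quick division gives the factor:

```shell
awk 'BEGIN { printf "%.1fx speedup\n", 282 / 37 }'   # 7.6x speedup
```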