@cleardusk
2016-03-06T22:48:21.000000Z
One K40 and two K20s.
Device 0: "Tesla K40c"
CUDA Driver Version / Runtime Version 7.5 / 7.5
CUDA Capability Major/Minor version number: 3.5
Total amount of global memory: 11520 MBytes (12079136768 bytes)
(15) Multiprocessors, (192) CUDA Cores/MP: 2880 CUDA Cores
GPU Max Clock rate: 745 MHz (0.75 GHz)
Memory Clock rate: 3004 Mhz
Memory Bus Width: 384-bit
L2 Cache Size: 1572864 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Enabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 132 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
Device 1: "Tesla K20m"
CUDA Driver Version / Runtime Version 7.5 / 7.5
CUDA Capability Major/Minor version number: 3.5
Total amount of global memory: 4800 MBytes (5032706048 bytes)
(13) Multiprocessors, (192) CUDA Cores/MP: 2496 CUDA Cores
GPU Max Clock rate: 706 MHz (0.71 GHz)
Memory Clock rate: 2600 Mhz
Memory Bus Width: 320-bit
L2 Cache Size: 1310720 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Enabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 3 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
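As a sanity check on the figures deviceQuery reports, the CUDA core counts and peak memory bandwidths can be derived by hand (assuming the usual GDDR5 double-data-rate factor of 2 on the reported memory clock):

```shell
# CUDA cores = multiprocessors x cores per MP
echo $((15 * 192))                    # Tesla K40c: 2880 cores
echo $((13 * 192))                    # Tesla K20m: 2496 cores

# Peak bandwidth (GB/s) = mem clock (MHz) x 2 (DDR) x bus width (bits) / 8 / 1000
echo $((3004 * 2 * 384 / 8 / 1000))   # K40c: ~288 GB/s
echo $((2600 * 2 * 320 / 8 / 1000))   # K20m: 208 GB/s
```

Both results match NVIDIA's published peak bandwidth figures for the K40 (288 GB/s) and K20 (208 GB/s).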
sudo gdebi cuda-repo-ubuntu1404_7.5-18_amd64.deb
sudo apt-get update
sudo apt-get install cuda
Configure the following in bash.bashrc:
export CUDA_HOME=/usr/local/cuda-7.5
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}
export PATH=${CUDA_HOME}/bin:${PATH}
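Since the `PATH` line prepends the CUDA bin directory, the CUDA 7.5 toolchain is found before any older toolkit already on the path. A minimal sketch of the effect, using the values from the exports above:

```shell
# Simulate the config in a fresh shell
CUDA_HOME=/usr/local/cuda-7.5
PATH=${CUDA_HOME}/bin:${PATH}
echo "${PATH%%:*}"   # first directory searched: /usr/local/cuda-7.5/bin
```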
Test by building and running the samples:
$ cuda-install-samples-7.5.sh ~
$ cd ~/NVIDIA_CUDA-7.5_Samples
$ cd 1_Utilities/deviceQuery
$ make
$ ./deviceQuery
It turns out the previous Caffe binary was built against CUDA 7.0:
➜ caffe ./examples/mnist/train_lenet.sh
./build/tools/caffe: error while loading shared libraries: libcudart.so.7.0: cannot open shared object file: No such file or directory
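The error above means the dynamic loader cannot locate `libcudart.so.7.0` at run time. Two possible fixes, assuming the standard Caffe Makefile layout: rebuild Caffe against CUDA 7.5, or register the CUDA library directory with the loader. A quick diagnostic first:

```shell
# Check whether the loader cache knows about any libcudart at all
ldconfig -p | grep cudart || echo "no libcudart registered"

# Fix 1: rebuild Caffe against the new toolkit (from the caffe source dir):
#   make clean && make all -j8
# Fix 2: register the CUDA 7.5 lib dir with the loader cache:
#   echo /usr/local/cuda-7.5/lib64 | sudo tee /etc/ld.so.conf.d/cuda.conf
#   sudo ldconfig
```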
https://github.com/xianyi/OpenBLAS/wiki/Installation-Guide
Build configuration: CUDA + ATLAS, no cuDNN.
Check GPU utilization:
nvidia-smi
nvidia-smi --loop=1   # refresh every second
Run with multiple GPUs:
caffe -gpu all
CUDA only, all three GPUs (K40 + K20 + K20): 5'52''
Sat Mar 5 13:52:29 CST 2016
Sat Mar 5 13:58:11 CST 2016
CUDA only, K40 alone: 4'42''
Sat Mar 5 14:01:47 CST 2016
Sat Mar 5 14:06:29 CST 2016
CUDA + OpenBLAS, K40: 4'49''
Sat Mar 5 14:18:02 CST 2016
Sat Mar 5 14:22:51 CST 2016
CUDA + cuDNN 3.0, K40: 41'', 37'', averaging about 37''
#1
Sat Mar 5 15:16:35 CST 2016
Sat Mar 5 15:17:16 CST 2016
#2
Sat Mar 5 15:19:19 CST 2016
Sat Mar 5 15:19:56 CST 2016
CUDA + cuDNN 4.0: roughly the same as above.
CUDA + cuDNN 4.0 + OpenBLAS: still roughly the same.
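From the timings above (CUDA-only on the K40: 4'42'' = 282 s; CUDA + cuDNN 3.0 on the K40: ~37 s), cuDNN alone accounts for a sizable speedup, while the CPU BLAS choice barely matters since the work is GPU-bound. A rough calculation:

```shell
# Speedup of cuDNN 3.0 over plain CUDA on the K40 (times in seconds)
cuda_only=282   # 4'42''
cudnn=37        # average of the two cuDNN runs above
echo $((cuda_only / cudnn))        # ~7x (integer division)
echo $((cuda_only * 10 / cudnn))   # 76 -> ~7.6x to one decimal place
```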