Paper Note: EvaluationDLonHPC

Title: Evaluation of Deep Learning Frameworks over Different HPC Architectures

Topic: training-time performance of three DL frameworks on GPU and CPU platforms, with and without the corresponding hardware optimizations (NVLink on the GPU side, KNL on the CPU side)

Idea: train different deep neural networks under controlled combinations of hardware (GPU/CPU, with and without NVLink/KNL), DL framework, training batch size, and scaling strategy (scale-up vs. scale-out), varying one factor at a time to evaluate its effect on training time; a minimal timing sketch is given below
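
The sketch below is only illustrative, not from the paper: it times one training epoch of a tiny Keras model at several batch sizes to show how a single controlled factor (here, batch size) can be swept while everything else is held fixed. The model, data shapes, and batch sizes are placeholder assumptions.

```python
import time
import numpy as np
import tensorflow as tf

def build_model():
    # Tiny CNN stand-in for the DNNs benchmarked in the paper.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(32, 32, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

# Synthetic data so the sketch is self-contained.
x = np.random.rand(2048, 32, 32, 3).astype("float32")
y = np.random.randint(0, 10, size=(2048,))

for batch_size in (32, 64, 128, 256):  # the controlled factor being swept
    model = build_model()
    model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")
    start = time.perf_counter()
    model.fit(x, y, batch_size=batch_size, epochs=1, verbose=0)
    elapsed = time.perf_counter() - start
    print(f"batch={batch_size}: {elapsed:.2f}s per epoch")
```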

Contribution: a set of training-time performance benchmarks across different framework-architecture settings

Intellectual merit: novel in evaluating NVLink and KNL, and in comparing Caffe, TensorFlow, and SINGA side by side

Strengths: sufficient experiments; for example, TensorFlow is run in both scaling situations and serves as a common intermediate factor, which allows an indirect comparison between Caffe and SINGA (a small worked example follows)
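
A hypothetical arithmetic example (the numbers are made up, not from the paper) of how TensorFlow measurements taken in both settings can bridge a Caffe-vs-SINGA comparison:

```python
# Hypothetical ratios only: Caffe is measured in scale-up, SINGA in scale-out,
# and TensorFlow is measured in both, so it serves as the common reference.
caffe_over_tf_scaleup = 0.8    # Caffe training time / TF training time (scale-up)
singa_over_tf_scaleout = 1.2   # SINGA training time / TF training time (scale-out)

# Assuming the TF-normalized ratios transfer across settings, the implied
# Caffe-vs-SINGA ratio is the quotient of the two TF-normalized ratios.
caffe_over_singa = caffe_over_tf_scaleup / singa_over_tf_scaleout
print(f"implied Caffe/SINGA training-time ratio: {caffe_over_singa:.2f}")
```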

Weakness: hard to reproduce, given the difficulty of compiling the frameworks and reimplementing the DNNs on each platform
