Abstract: Deep neural networks (DNNs) offer great opportunities for solving many problems associated with processing large-scale data. Building and using deep neural networks requires substantial computational resources, which naturally raises the question of whether HPC systems can be used to implement DNNs. To better understand the performance implications of DNNs on high-performance clusters, we analyze the performance of several DNN models on two HPC systems: the Lomonosov-2 supercomputer (the partition whose nodes are equipped with P100 GPUs) and the Polus high-performance cluster based on IBM Power8 processors with P100 GPUs. Comparing these systems is interesting because they are built on different processor types (Intel for Lomonosov-2 and IBM for Polus). Apart from the different processor architectures, these systems also use different internode interconnects, which may affect the performance of parallel and distributed implementations of neural network algorithms. The study was carried out using the PyTorch framework.
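For illustration, the listing below is a minimal sketch of how a distributed data-parallel training loop is typically set up in PyTorch; the model, batch shapes, and the NCCL backend are assumptions made for the example and are not taken from the paper. The gradient all-reduce performed during the backward pass is the step whose cost depends on the internode interconnect being compared here.

    # Minimal sketch of PyTorch distributed data-parallel training.
    # The model, data, and NCCL backend are illustrative assumptions only.
    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # One process per GPU; rank and world size come from the launcher
        # (e.g. torchrun), which sets the required environment variables.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Hypothetical model standing in for the DNN models of the study.
        model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(),
                              nn.Linear(512, 10)).cuda()
        model = DDP(model, device_ids=[local_rank])

        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        criterion = nn.CrossEntropyLoss()

        # Dummy batch; a real run would use a DataLoader with a
        # DistributedSampler over the actual dataset.
        inputs = torch.randn(32, 1024, device="cuda")
        targets = torch.randint(0, 10, (32,), device="cuda")

        for _ in range(10):
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            # Gradients are all-reduced across processes during backward();
            # this communication step is sensitive to the interconnect.
            loss.backward()
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Such a script would typically be launched with one process per GPU on each node, for example via torchrun, so that the same code scales from a single node to a multi-node run over the cluster interconnect.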