Tuning ANNs Hyperparameters and Neural Architecture Search Using HPC
Citation information for this article was obtained from Scopus.
The article was published in a journal indexed in Web of Science and/or Scopus.
Date of the last search for the article in external sources: July 7, 2021.
Abstract: The rapid development of deep ANNs has caused the number of hyperparameters to grow constantly. As a consequence, various aspects of ANNs, such as inference time, efficient resource utilization, losses, and even training time, are strongly affected. In general, hyperparameter tuning methods are used to adapt well-known ANN models to new tasks or to tasks in similar areas without pre-training, or to synthesize new architectures for particular problems. In this article we compare different hyperparameter tuning methods, such as CoDeepNEAT, Naive Evolution, Tree-structured Parzen Estimators (TPE), and structured annealing with MorphNet post-tuning. We apply these methods to particular network architectures for image processing and HRM signal estimation. Adapting this technology to large networks requires substantial computational resources, so parallel implementations are necessary. This can be done by utilizing HPC with hybrid computational nodes. We also propose a new tool based on Microsoft NNI; it is used for tuner comparison and convergence analysis, and it runs different tuners in parallel on cluster nodes.
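For illustration, the sketch below shows how a single NNI trial script can request a hyperparameter set from the active tuner (e.g., TPE or Naive Evolution) and report the resulting metric back to the experiment. The parameter names ("lr", "batch_size") and the toy objective are assumptions made for this example; they are not the architectures or signals studied in the article.

# Minimal NNI trial sketch. The objective below is a placeholder:
# a real trial would train the ANN and return a validation loss.
import nni

def run_trial(lr, batch_size):
    # Placeholder objective for demonstration only.
    return (lr - 0.01) ** 2 + 1.0 / batch_size

if __name__ == "__main__":
    # Ask the tuner for the next hyperparameter set;
    # fall back to defaults when the script is run standalone.
    params = nni.get_next_parameter() or {}
    lr = params.get("lr", 0.01)
    batch_size = params.get("batch_size", 32)

    loss = run_trial(lr, batch_size)

    # Report the final metric so the tuner can update its model
    # and convergence can be compared across tuners.
    nni.report_final_result(loss)

The choice of tuner (TPE, Naive Evolution, Anneal, and others) and the trial concurrency across cluster nodes are specified in the NNI experiment configuration rather than in the trial code, which is what allows the same script to be reused when comparing tuners in parallel.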