Abstract: The sizes of modern deep artificial neural networks, and the amounts of data required to train them, make straightforward learning strategies — training from scratch for each new task — impractical. This makes transfer learning highly relevant. In this paper we experimentally study the efficiency of the Adversarial Discriminative Domain Adaptation (ADDA) method as a function of the amount of unlabeled data on several image recognition tasks. We estimate the efficiency of ADDA combined with fine-tuning, with the amount of labeled data varying from 0.2% to 14% of the training dataset. Fine-tuning after the transfer increases classification accuracy by 3%–8%, depending on the amount of labeled data used. To improve the efficiency of combining transfer learning and fine-tuning on especially small amounts of labeled data (less than 1% of the classical training dataset), algorithms that extract domain-specific knowledge from the labeled data and integrate it into transfer learning methods need to be developed. We also plan to study the applicability criteria of this approach.