Abstract: The paper addresses the problem of detecting adversarial attacks on machine learning models. Such attacks are understood as deliberate (specially crafted) data changes at one of the stages of the machine learning pipeline, intended either to prevent the machine learning system from operating or, on the contrary, to produce a result desired by the attacker. Adversarial attacks pose a serious threat to machine learning systems because they undermine any guarantees about the system's results and quality. Such guarantees are mandatory, for example, for the use of a machine learning (artificial intelligence) system in critical areas such as avionics, automated driving, special applications, etc. The article considers one possible detector for so-called evasion attacks.