![]() |
ИСТИНА |
Войти в систему Регистрация |
Интеллектуальная Система Тематического Исследования НАукометрических данных |
||
This article discusses the issues of testing large language models. Large language models are the most popular form of generative machine learning models. The simple and clear usage model has led to their enormous popularity. However, like other machine learning models, large language models are susceptible to adversarial attacks. One could even say that the success of large language models has greatly increased interest in the security of machine learning models themselves. This direction immediately turned out to affect all users of machine learning systems. This article discusses the use of ready-made datasets for adversarial testing of large language models.