ИСТИНА 

ИСТИНА — Intelligent System for Thematic Investigation of Scientometric Data

A wide family of machine learning algorithms is often called adaptive or data-driven methods, owing to their ability to adapt to the available set of data and to learn by example, requiring no physically grounded analytical or computational model and no a priori knowledge of the studied object. Such methods are often called biologically inspired, both by their origin and by the resemblance of their behaviour to that of data processing systems in living creatures. The scope of problems solved by such methods includes prediction, evaluation, classification, clustering, inverse problems, and other data analysis problems. Examples of such methods are artificial neural networks (ANN) and the method of partial least squares, or projection to latent structures (PLS). From a mathematical point of view, these methods are sophisticated approximation methods that use adaptively tuned combinations of relatively simple functions of a most general type.

Many physical methods are based on indirect measurements, and therefore they imply the solution of inverse problems (IP): determination of the sought-for parameters from the observed values. Such problems are often ill-conditioned or even ill-posed. That is why adaptive methods of IP solution based on approximation of the inverse function are in demand and are used efficiently. Attention is drawn to the main differences between ANN and PLS, and to the main shortcoming of PLS: it is a linear method (albeit the best linear one). Even with adequate nonlinear preprocessing of the data, PLS is often unable to build an approximation comparable in quality to that implemented by an ANN. The main advantage of PLS is its low computational cost. In the lecture, methodological aspects of using ANN are discussed. From the point of view of data processing methods, any IP can be given various formulations: as a regression, classification (for a discrete-valued IP), or optimization problem.
The key differences of ANN as a method of solving IP from alternative methods are discussed. When solving an IP, ANN can be used within one of several methodological approaches: "model-based", "experiment-based", and "quasi-model". The differences among these approaches, their properties, and their areas of application are described.

A separate question arises when the IP being solved is a multi-parameter one. The possible approaches to the order of parameter determination are autonomous determination of each parameter, simultaneous determination of all parameters, group determination (parameters are joined into groups, with simultaneous determination within each group), and stepwise determination (some of the already determined parameters are used as additional inputs for the determination of other parameters). Other useful auxiliary techniques discussed in the lecture are the cluster-based approach, in which the problem domain is separated into several subdomains and the IP is solved separately in each of them, and training with noise added to the training data, which increases the noise resilience of the solution.

It is stressed that with increasing complexity of a problem, linear methods begin to fail, while ANN turn out to be more resilient to this growing complexity. The general purpose of the lecture is to attract the attention of a wide audience of young scientists to the great opportunities opened up by the use of biologically inspired adaptive methods and by the latest methodological achievements in IP solution by ANN. The material is illustrated with examples of IP from two areas of physics: optical spectroscopy and electrical prospecting.
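The noise-injection technique mentioned above can be sketched in a few lines. The helper below (an illustrative sketch, not the lecture's implementation; the function name and defaults are assumptions) extends a training set with Gaussian-noised copies of the inputs while repeating the targets unchanged, so that a network trained on it learns to map noisy observations to the same sought-for parameters.

```python
# Noise-injection augmentation for IP training data: the inputs are
# replicated with added Gaussian noise, the targets stay unchanged.
# Illustrative sketch; names and defaults are assumptions.
import numpy as np

def augment_with_noise(X, y, n_copies=3, sigma=0.05, seed=0):
    """Return the training set plus `n_copies` Gaussian-noised copies of X."""
    rng = np.random.default_rng(seed)
    X_parts, y_parts = [X], [y]
    for _ in range(n_copies):
        X_parts.append(X + rng.normal(0.0, sigma, size=X.shape))
        y_parts.append(y)                     # same targets for noisy inputs
    return np.vstack(X_parts), np.concatenate(y_parts)

X = np.random.default_rng(1).normal(size=(100, 5))   # toy observed values
y = X.sum(axis=1)                                    # toy sought-for parameter
X_aug, y_aug = augment_with_noise(X, y)
print(X_aug.shape, y_aug.shape)                      # (400, 5) (400,)
```

Choosing `sigma` close to the expected measurement noise level is the usual heuristic: too little noise gives no extra resilience, too much washes out the inverse dependence being learned.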