The use of stability principle for kernel determination in relevance vector machines (article)
Citation information for this article was obtained from
Scopus
Date of the last search for the article in external sources: May 28, 2015
-
Authors:
Kropotov D.,
Vetrov D.,
Ptashko N.,
Vasiliev O.
-
Proceedings:
13th International Conference on Neural Information Processing, ICONIP 2006; Hong Kong, China
-
Series:
Lecture Notes in Computer Science
-
Volume:
4232
-
Year of publication:
2006
-
Publisher:
Springer-Verlag Berlin
-
Publisher location:
Heidelberger Platz 3, D-14197 Berlin, Germany
-
First page:
727
-
Last page:
736
-
Abstract:
The task of RBF kernel selection in Relevance Vector Machines (RVM) is considered. RVM exploits a probabilistic Bayesian learning framework that offers a number of advantages over state-of-the-art Support Vector Machines. In particular, RVM effectively avoids determination of the regularization coefficient C via evidence maximization. In the paper we show that RBF kernel selection in the Bayesian framework requires an extension of the algorithmic model. In the new model, integration over the posterior probability becomes intractable; therefore a point estimate of the posterior probability is used. In RVM the evidence value is calculated via the Laplace approximation. However, the extended model does not allow maximization of the posterior probability, as the dimension of the optimization parameter space becomes too high. Hence the Laplace approximation can no longer be used in the new model. We propose a local evidence estimation method that establishes a compromise between the accuracy and stability of the algorithm. In the paper we first briefly describe the maximal evidence principle, present a model of kernel algorithms as well as our approximations for evidence estimation, and then give results of an experimental evaluation. Both classification and regression cases are considered. © Springer-Verlag Berlin Heidelberg 2006.
-
Added to the system by:
Kropotov Dmitry Alexandrovich