Abstract: Stacking is historically one of the first ensemble learning methods. It combines several base models (lower-level models), built using entirely different classes of machine learning methods, by means of a “meta-learner” (a higher-level model) that takes the outputs of the base models as its inputs. This chapter demonstrates the ability of stacking to improve predictive performance by combining four base learners: partial least squares regression (PLS), M5P regression trees, multiple linear regression (MLR), and a nearest-neighbor model (IBk). It is important to note that the mixing weights of the meta-learner used to combine the base regression models should be non-negative, because the estimate of each base-level model should correlate positively with the target property. The chapter concludes that stacking several base learners leads to a considerable decrease in prediction error compared with that of the best individual base learner.