Abstract: This chapter illustrates two important Ensemble Learning (EL) approaches: bagging and boosting. The methods are demonstrated on the example of building classification models based on interpretable rules. Some general behaviors of these approaches are highlighted, in particular the situations in which one approach should be preferred over the other. In Weka, the EL algorithms are generally grouped into the class of meta methods, to which both Bagging and Boosting belong. The Bagging algorithm consists of training a pool of base models on training sets sampled from the same distribution, and combining them either by majority vote (for classification tasks) or by averaging (for regression tasks). The training sets for Bagging are obtained by resampling the original training set uniformly with replacement, i.e., all instances (chemical compounds) are drawn from the training set with the same probability.
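The bagging procedure described above (uniform bootstrap resampling with replacement, followed by majority vote for classification) can be sketched as follows. This is an illustrative Python sketch, not the Weka implementation; the function names (`bootstrap_sample`, `bagging_fit`, `bagging_predict`) and the toy majority-class base learner are assumptions introduced for demonstration.

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    # Resample len(data) instances uniformly WITH replacement:
    # every instance has the same probability of being drawn each time.
    return [rng.choice(data) for _ in data]

def bagging_fit(data, train_base, n_models, seed=0):
    # Train a pool of base models, each on its own bootstrap sample.
    rng = random.Random(seed)
    return [train_base(bootstrap_sample(data, rng)) for _ in range(n_models)]

def bagging_predict(models, x):
    # Classification: combine base predictions by majority vote.
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]

# Toy base learner (an assumption for illustration): it ignores the
# features and always predicts the majority class of its training sample.
def train_majority(sample):
    majority = Counter(label for _, label in sample).most_common(1)[0][0]
    return lambda x: majority

# Toy dataset: (features, class label) pairs, mostly class "A".
data = [((i,), "A") for i in range(8)] + [((i,), "B") for i in range(8, 10)]
models = bagging_fit(data, train_majority, n_models=11, seed=42)
prediction = bagging_predict(models, (0,))
```

For a regression task, the `Counter`-based vote in `bagging_predict` would be replaced by an average of the base models' numeric predictions.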