Abstract: This chapter examines the interpretable rules method, in which the selected rules are sensitive to any modification of the training data, even to the order of the records in the input file. Some machine learning methods allow the user to obtain easily interpretable models involving a relatively small number of attributes. Such models typically consist of a limited number of rules organized in a logical way. For small and/or unbalanced datasets, however, these methods may produce "unstable" models, meaning that small changes to the training data lead to significant changes in the selected rules. As a consequence, this may cause a problem with the model's interpretation. Nevertheless, predictions for the same test set instances made with different models (before and after the training data is varied) may be similar. The chapter concludes that reordering the data is sufficient to modify the interpretable rules model.
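The phenomenon described above can be illustrated with a minimal sketch (not the chapter's actual method): a toy greedy rule learner that breaks ties between equally accurate rules by the order in which candidate tests appear in the data. Reordering the training records then changes which rule is selected, yet both models predict the test instances identically. All names (`learn_one_rule`, `learn_model`, the toy dataset) are hypothetical and introduced only for this illustration.

```python
import random

# Toy dataset: each record is ({attribute: value}, class_label).
# The two classes are perfectly separable, and several candidate
# rules are equally accurate, so rule selection depends on order.
DATA = [
    ({"a": 1, "b": 1}, "pos"),
    ({"a": 1, "b": 1}, "pos"),
    ({"a": 0, "b": 0}, "neg"),
    ({"a": 0, "b": 0}, "neg"),
]

def learn_one_rule(data):
    """Greedy 1R-style step: pick the single attribute test with the
    highest accuracy; ties are broken by the order in which tests are
    first encountered in the data (hence order-sensitive)."""
    best = None  # (accuracy, attribute, value, predicted_class)
    for row, _ in data:
        for attr, val in row.items():
            covered = [lbl for r, lbl in data if r.get(attr) == val]
            if not covered:
                continue
            majority = max(set(covered), key=covered.count)
            acc = covered.count(majority) / len(covered)
            if best is None or acc > best[0]:  # strict '>': first tie wins
                best = (acc, attr, val, majority)
    return best

def learn_model(data):
    """A one-rule model: the selected rule plus a default class
    (majority label of the examples the rule does not cover)."""
    _, attr, val, cls = learn_one_rule(data)
    uncovered = [lbl for r, lbl in data if r.get(attr) != val]
    default = max(set(uncovered), key=uncovered.count) if uncovered else cls
    return (attr, val, cls, default)

def predict(model, row):
    attr, val, cls, default = model
    return cls if row.get(attr) == val else default

model_original = learn_model(DATA)

shuffled = list(DATA)
random.seed(1)              # reordering only; no record is changed
random.shuffle(shuffled)
model_shuffled = learn_model(shuffled)
```

Here `model_original` and `model_shuffled` contain different rules (the selected attribute test differs), yet `predict` returns the same label for every instance: the models disagree in form but not in behavior, mirroring the instability described in the chapter.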