Axis-parallel decision trees perform poorly on the multidimensional sparse data that are frequent inputs in many tasks. A straightforward solution is to build decision trees with oblique splits; however, most approaches to training them perform poorly. These models also overfit easily, so they should be combined into randomized ensembles. This paper proposes an algorithm for training kernel decision trees. At each stump, the algorithm optimizes a loss function with a margin-rescaling approach, jointly optimizing the margin and the impurity criterion. We evaluated the algorithm experimentally on several tasks, such as predicting the reactions of social media users and image recognition. The results show that ensembles trained with the proposed algorithm outperform other oblique and kernel forests on many datasets.
Devyatkin, D.A., Grigoriev, O.G.: Method of Training a Kernel Tree. Sci. Tech. Inf. Proc. 50, 390–396 (2023).
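The abstract describes kernel stumps whose splits are chosen by a criterion combining margin and impurity. The sketch below illustrates one way such a stump could look, under stated assumptions: it scores randomly sampled kernel weight vectors by a hand-rolled joint margin/Gini objective rather than the paper's margin-rescaling loss, and the names `fit_kernel_stump`, `gini`, and the candidate-sampling strategy are illustrative, not the authors' algorithm.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def gini(y):
    # Gini impurity of a binary label vector.
    if len(y) == 0:
        return 0.0
    p = np.bincount(y, minlength=2) / len(y)
    return 1.0 - (p ** 2).sum()

def fit_kernel_stump(X, y, gamma=1.0, lam=0.5, n_candidates=50, rng=None):
    """Pick a kernel split f(x) = K(x, X) @ w - b that trades off the
    functional margin against the weighted impurity of the two children.
    Candidate weights are sampled randomly; this is an illustrative
    stand-in for the paper's margin-rescaling optimization."""
    rng = rng or np.random.default_rng(0)
    K = rbf_kernel(X, X, gamma)
    ys = 2 * y - 1  # labels in {-1, +1}
    best, best_score = None, -np.inf
    for _ in range(n_candidates):
        w = rng.standard_normal(len(X))
        f = K @ w
        b = np.median(f)
        f = f - b
        left, right = y[f < 0], y[f >= 0]
        # Weighted impurity of the two children (lower is better).
        impurity = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        # Scale-normalized average margin of correctly split points.
        margin = np.maximum(ys * f, 0).mean() / (np.abs(f).mean() + 1e-12)
        score = margin - lam * impurity  # joint margin/impurity criterion
        if score > best_score:
            best_score, best = score, (w, b)
    return best

# Toy usage: two Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])
y = np.array([0] * 30 + [1] * 30)
w, b = fit_kernel_stump(X, y, gamma=0.5)
preds = (rbf_kernel(X, X, 0.5) @ w - b >= 0).astype(int)
print("train accuracy:", max((preds == y).mean(), ((1 - preds) == y).mean()))
```

In a full tree, such stumps would be fit recursively on each child partition, and several randomized trees would be bagged into the kind of kernel forest the abstract evaluates.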