21st AIAI 2025, 26 - 29 June 2025, Limassol, Cyprus

Adversarial Attacks on Trees: Size Matters

Drousiotis Efthyvoulos, Habibi Soodeh, Varsi Alessandro, Maskell Simon, Spirakis Paul, Lovett Tom

Abstract:

  Adversarial attacks pose a significant threat to machine learning models, including tree-based models. These attacks typically work by inducing small input perturbations that lead to large prediction errors. Although tree-based models are widely used for their interpretability and computational efficiency, recent studies have revealed that their vulnerability to attacks is strongly influenced by tree size. In this paper, we investigate the adversarial robustness of tree-based models by comparing fully greedy Random Forests (RF), Extreme Gradient Boosting (XGBoost), and Decision Trees (DT) with Bayesian decision trees constructed via a Sequential Monte Carlo (SMC) framework. Our theoretical analysis proves that unregularized trees grow to Θ(N) nodes, whereas imposing a Poisson prior on the number of leaves yields an exponential tail bound that regularizes tree growth. Experimentally, we evaluate the models under two attack scenarios, a Gaussian Noise Perturbation attack and a Black-Box Transferability attack, using multiple publicly available datasets. The results consistently show that SMC trees, which are substantially smaller and partition the feature space into larger regions, exhibit better robustness against adversarial perturbations. Our findings suggest that Bayesian alternatives to conventional greedy tree-building techniques are a promising approach to improving the security and interpretability of tree-based classifiers.
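
For intuition on the tail-bound claim, one standard bound of this type (an illustrative form obtained from a Chernoff argument, not necessarily the paper's exact statement) is the following: if the number of leaves L is given a Poisson(\lambda) prior, then for any k > \lambda,

    \Pr(L \ge k) \;\le\; e^{-\lambda}\left(\frac{e\lambda}{k}\right)^{k},

which decays super-exponentially in k and therefore sharply penalizes large trees, in contrast to the Θ(N) node count of unregularized greedy trees.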
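
To make the Gaussian Noise Perturbation scenario concrete, the minimal Python sketch below perturbs test inputs with zero-mean Gaussian noise and measures how often a tree-based classifier's prediction flips. This is an illustration only, not the authors' evaluation code: the dataset, the noise scales, and the use of scikit-learn's DecisionTreeClassifier are assumptions chosen for a self-contained demo.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical setup: any tree-based model (RF, XGBoost, ...) could be
    # substituted for the single decision tree used here.
    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

    def gaussian_noise_attack(model, X, sigma, seed=0):
        """Add zero-mean Gaussian noise to each input and return the fraction
        of predictions that flip -- a simple proxy for (non-)robustness."""
        rng = np.random.default_rng(seed)
        X_adv = X + rng.normal(scale=sigma, size=X.shape)
        return np.mean(model.predict(X) != model.predict(X_adv))

    # Noise scales are in raw feature units; in practice one would standardize
    # features first so that sigma is comparable across dimensions.
    for sigma in (0.05, 0.1, 0.5):
        flip = gaussian_noise_attack(model, X_te, sigma)
        print(f"sigma={sigma}: flip rate = {flip:.3f}")

A larger flip rate under the same sigma indicates a less robust model; comparing this rate across model families is one simple way to operationalize the robustness comparison described above.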
