Federated learning, which helps address data privacy and security problems,
has attracted increasing attention recently. However, existing federated
boosting models build decision trees sequentially from weak base learners,
resulting in redundant boosting rounds and high interactive communication
costs. In contrast, federated bagging models save time by building multiple
decision trees in parallel, but they suffer a loss in performance. Aiming to
achieve strong performance at a lower time cost,
we propose a novel model in the vertically federated setting, termed
Federated Gradient Boosting Forest (FedGBF). FedGBF combines the advantages
of boosting and bagging by building decision trees in parallel as the base
learner of each boosting round. FedGBF in turn raises the problem of
hyperparameter tuning, so we further propose Dynamic FedGBF, which
dynamically changes each forest's parameters and thus reduces model complexity.
Finally, experiments on benchmark datasets demonstrate the superiority of our
method.
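
To make the core idea concrete, below is a minimal, non-federated sketch of
boosting in which each round's base learner is a bagged forest rather than a
single tree, with a per-round shrinking of forest size to illustrate the
Dynamic variant. The use of scikit-learn's RandomForestRegressor, the squared
loss, and all concrete parameter values and schedules are illustrative
assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_gbf(X, y, n_rounds=5, learning_rate=0.1, dynamic=True):
    """Gradient boosting (squared loss) with a random forest per round."""
    init = float(np.mean(y))               # initial constant prediction
    prediction = np.full(len(y), init)
    forests = []
    for t in range(n_rounds):
        residuals = y - prediction         # pseudo-residuals for squared loss
        # Dynamic variant (assumed schedule): shrink later forests to
        # reduce overall model complexity.
        n_trees = max(10, 50 - 10 * t) if dynamic else 50
        forest = RandomForestRegressor(n_estimators=n_trees, max_depth=4,
                                       n_jobs=-1)  # trees built in parallel
        forest.fit(X, residuals)
        prediction = prediction + learning_rate * forest.predict(X)
        forests.append(forest)
    return init, forests

def predict_gbf(X, init, forests, learning_rate=0.1):
    """Sum the initial prediction and the shrunken per-round forest outputs."""
    pred = np.full(X.shape[0], init)
    for forest in forests:
        pred = pred + learning_rate * forest.predict(X)
    return pred

# Usage on synthetic data:
X = np.random.rand(200, 5)
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * np.random.randn(200)
init, forests = fit_gbf(X, y)
print(predict_gbf(X[:3], init, forests))
```

Because each round's forest trains its trees independently, the number of
sequential steps equals the number of boosting rounds rather than the total
number of trees, which is what cuts the interactive communication rounds in
the federated setting.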