Adversarial attacks attempt to disrupt the training, retraining, and use of artificial intelligence (AI) and machine learning (ML) models in large-scale distributed machine learning systems, posing security risks to their prediction outcomes. For example, attackers may poison a model by injecting inaccurate or misrepresentative training data, or by altering the model's parameters. In addition, Byzantine faults, including software, hardware, and network issues, occur in distributed systems and likewise degrade prediction outcomes. In this paper, we propose a novel distributed training algorithm, partial synchronous stochastic gradient descent (ParSGD), which defends against adversarial attacks and tolerates Byzantine faults. We demonstrate the effectiveness of our algorithm under three common adversarial attacks against ML models and a Byzantine fault during the training phase. Our results show that with ParSGD, ML models can still produce accurate predictions, as if they were neither under attack nor experiencing failures, even when almost half of the nodes are compromised or have failed. We report experimental evaluations of ParSGD in comparison with other algorithms.
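The abstract does not specify ParSGD's aggregation rule, so the following is only an illustrative sketch of one standard way a distributed SGD step can tolerate up to roughly half of the workers being compromised: replacing the usual gradient average with a coordinate-wise median. The function names (`robust_aggregate`, `sgd_step`) and all numbers are hypothetical and are not taken from the paper.

```python
import numpy as np

def robust_aggregate(gradients):
    """Aggregate worker gradients with a coordinate-wise median.

    The median tolerates up to (n - 1) // 2 arbitrarily corrupted
    (Byzantine) gradients out of n workers, matching the "almost
    half" tolerance described above. This is a generic robust
    aggregator, not necessarily the ParSGD rule itself.
    """
    stacked = np.stack(gradients)      # shape: (n_workers, n_params)
    return np.median(stacked, axis=0)  # robust per-coordinate estimate

def sgd_step(params, gradients, lr=0.01):
    """One synchronous SGD step using the robust aggregate."""
    return params - lr * robust_aggregate(gradients)

# Hypothetical example: 5 workers, 2 of which send poisoned gradients.
rng = np.random.default_rng(0)
params = np.zeros(4)
honest = [rng.normal(1.0, 0.1, size=4) for _ in range(3)]
poisoned = [np.full(4, -100.0) for _ in range(2)]
params = sgd_step(params, honest + poisoned)
print(params)  # stays near -0.01 per coordinate despite the poisoning
```

With a plain average, the two poisoned gradients would dominate the update; the median discards them as long as honest workers remain a majority, which is the intuition behind median-style Byzantine-robust training.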