For model privacy, local model parameters in federated learning shall be
obfuscated before being sent to the remote aggregator. This technique is
referred to as \emph{secure aggregation}. However, secure aggregation makes
model poisoning attacks such as backdooring easier to mount, because existing
anomaly detection methods mostly require access to plaintext local models. This
paper proposes SAFELearning, which supports backdoor detection under secure
aggregation.
We achieve this through two new primitives -- \emph{oblivious random grouping
(ORG)} and \emph{partial parameter disclosure (PPD)}. ORG partitions
participants into one-time random subgroups with group configurations oblivious
to participants; PPD allows secure partial disclosure of aggregated subgroup
models for anomaly detection without leaking individual model privacy.
SAFELearning can significantly reduce backdoor model accuracy without
jeopardizing the main task accuracy under common backdoor strategies. Extensive
experiments show that SAFELearning is robust against malicious and faulty
participants, whilst being more efficient than state-of-the-art secure
aggregation protocols in terms of both communication and computation costs.
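To illustrate the grouping-and-disclosure idea, the following is a minimal plaintext simulation, not the cryptographic protocol: participants' scalar updates stand in for model parameters, `oblivious_random_grouping` mimics ORG's one-time random partition (in the real protocol the configuration is hidden from participants), and only per-subgroup aggregates are examined, in the spirit of PPD. All function names and the z-score threshold are hypothetical choices for this sketch.

```python
import random
import statistics

def oblivious_random_grouping(participant_ids, group_size, seed):
    # Hypothetical ORG analogue: a fresh random partition per round.
    # In SAFELearning the group configuration would be oblivious to
    # participants; here it is simulated in the clear.
    rng = random.Random(seed)
    ids = participant_ids[:]
    rng.shuffle(ids)
    return [ids[i:i + group_size] for i in range(0, len(ids), group_size)]

def subgroup_aggregates(groups, updates):
    # PPD analogue: only these per-subgroup means are disclosed for
    # anomaly detection, never the individual updates.
    return [sum(updates[p] for p in g) / len(g) for g in groups]

def flag_anomalous(aggregates, z_thresh=1.5):
    # Flag subgroups whose aggregate deviates from the cross-group mean
    # by more than z_thresh standard deviations (illustrative threshold).
    mu = statistics.mean(aggregates)
    sd = statistics.pstdev(aggregates) or 1.0
    return [i for i, a in enumerate(aggregates)
            if abs(a - mu) / sd > z_thresh]

# Sixteen honest participants submit small updates; participant 7 submits
# an outsized (backdoor-like) update, making its subgroup stand out.
updates = {p: 0.1 for p in range(16)}
updates[7] = 5.0
groups = oblivious_random_grouping(list(range(16)), group_size=4, seed=42)
flagged = flag_anomalous(subgroup_aggregates(groups, updates))
```

Because each round uses a fresh random partition, a malicious participant cannot predict which honest peers will share (and thereby dilute or expose) its subgroup aggregate.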