Loughborough University

Maximizing uncertainty for federated learning via Bayesian Optimisation-based model poisoning

journal contribution
posted on 2025-03-05, 10:29 authored by Marios AristodemouMarios Aristodemou, Xiaolan Liu, Yuan Wang, Kostas KyriakopoulosKostas Kyriakopoulos, Sangarapillai LambotharanSangarapillai Lambotharan, Qingsong Wei
As we transition from Narrow Artificial Intelligence towards Artificial Super Intelligence, users are increasingly concerned about their privacy and the trustworthiness of machine learning (ML) technology. A common denominator for metrics of trustworthiness is the quantification of the uncertainty inherent in deep learning (DL) algorithms, specifically in the model parameters, input data, and model predictions. A common approach to addressing privacy-related issues in DL is to adopt distributed learning such as federated learning (FL), in which private raw data is not shared among users. Despite its privacy-preserving mechanisms, FL still faces challenges in trustworthiness. Specifically, malicious users can systematically craft malicious model parameters during training to compromise the model's predictive and generative capabilities, resulting in high uncertainty about its reliability. To demonstrate this malicious behaviour, we propose a novel model poisoning attack method named Delphi, which aims to maximise the uncertainty of the global model output. We achieve this by exploiting the relationship between the uncertainty and the model parameters of the first hidden layer of the local model. Delphi employs two types of optimisation, Bayesian Optimisation and Least Squares Trust Region, to search for the optimal poisoned model parameters, yielding the variants Delphi-BO and Delphi-LSTR. We quantify uncertainty using the KL divergence, minimising the distance between the predictive probability distribution and an uncertain distribution of the model output. Furthermore, we establish a mathematical proof of the attack's effectiveness in FL. Numerical results demonstrate that Delphi-BO induces a higher amount of uncertainty than Delphi-LSTR, highlighting the vulnerability of FL systems to model poisoning attacks.
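The abstract's objective — driving the global model's predictions towards an uncertain (here assumed uniform) distribution by minimising a KL divergence over the first-hidden-layer parameters — can be sketched as follows. This is a minimal illustration, not the paper's implementation; `set_first_hidden_layer` and `predict_proba` are hypothetical model hooks standing in for whatever framework the attacker uses.

```python
import numpy as np

def kl_to_uniform(probs: np.ndarray) -> float:
    """KL divergence from a predictive distribution to the uniform
    distribution over the same classes. Zero means maximally
    uncertain (uniform) predictions; larger means more confident."""
    k = probs.shape[-1]
    uniform = np.full(k, 1.0 / k)
    probs = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return float(np.sum(probs * np.log(probs / uniform)))

def attacker_objective(poisoned_first_layer, model, x_batch) -> float:
    """Hypothetical loss a Delphi-style attacker would minimise with
    Bayesian Optimisation or a trust-region solver: the average KL
    divergence of the model's predictions from uniform, evaluated
    after injecting the candidate first-hidden-layer parameters."""
    model.set_first_hidden_layer(poisoned_first_layer)  # assumed helper
    preds = model.predict_proba(x_batch)                # assumed API
    return float(np.mean([kl_to_uniform(p) for p in preds]))
```

A black-box optimiser would repeatedly call `attacker_objective` with candidate first-layer parameters and keep the candidate with the lowest value, since predictions closest to uniform carry the highest uncertainty.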

Funding

Pervasive Wireless Intelligence Beyond the Generations (PerCom)

Engineering and Physical Sciences Research Council


Platform Driving The Ultimate Connectivity

Engineering and Physical Sciences Research Council


TITAN Extension

Engineering and Physical Sciences Research Council


Royal Society Research "Machine Learning Enabled Efficient Communication Scheme for Metaverse Over Wireless Networks" [grant number: RG/R2/232525]

RIE2025 Industry Alignment fund - Industry Collaboration Project (IAF-ICP), Administered by A* STAR [grant number: I2301E0020]

History

School

  • Mechanical, Electrical and Manufacturing Engineering
  • Loughborough University, London

Published in

IEEE Transactions on Information Forensics and Security

Volume

20

Pages

2399–2411

Publisher

Institute of Electrical and Electronics Engineers

Version

  • AM (Accepted Manuscript)

Rights holder

© IEEE

Publisher statement

This accepted manuscript is made available under the Creative Commons Attribution licence (CC BY) under the JISC UK green open access agreement.

Acceptance date

2025-01-10

Publication date

2025-01-17

Copyright date

2025

ISSN

1556-6013

eISSN

1556-6021

Language

  • en

Depositor

Prof Lambo Lambotharan. Deposit date: 25 February 2025
