We study decentralized federated learning (DFL) in edge computing networks where edge nodes (ENs) collaboratively train their artificial intelligence (AI) models in a serverless manner without sharing local data. We address the following critical DFL challenges: i) scarce bandwidth resources of ENs; ii) a dynamic, heterogeneous edge environment; and iii) incentive provisioning and the complex tradeoff between DFL performance and training costs. To resolve these challenges, we develop a new model compression method in which ENs utilize dynamic, non-identical compression rates to improve the communication efficiency of DFL under time-varying, heterogeneous resource constraints. We show that our method can be formulated as a graphical Markov potential game in which ENs act as players deciding on their compression rates and the number of data samples used for model updates. Each EN is incentivized to participate in DFL through rewards based on its contribution to training. We prove that our game admits a dominant pure-strategy Nash equilibrium (NE) that maximizes its potential function, and we propose a dynamic distributed compression algorithm in which each EN finds its dominant strategy independently. We show that this algorithm converges to the Pareto-optimal NE, the most efficient solution of our game, enhancing DFL performance at minimal cost.
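As a minimal illustration of dynamic, non-identical compression rates, the sketch below uses top-k magnitude sparsification of each EN's model update, with a per-node rate standing in for the bandwidth-dependent compression factor. The operator, the rates, and the NumPy-based setup are illustrative assumptions, not the paper's actual compression scheme.

```python
import numpy as np

def compress_update(update, rate):
    """Keep only the top-`rate` fraction of entries (by magnitude)
    of a model update and zero out the rest.
    Hypothetical stand-in for the paper's compression operator."""
    flat = update.ravel()
    k = max(1, int(np.ceil(rate * flat.size)))           # entries to keep
    idx = np.argpartition(np.abs(flat), -k)[-k:]          # top-k indices
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(update.shape)

# Each EN chooses its own compression rate (assumed here to reflect
# its bandwidth budget), so rates are non-identical and may vary
# from round to round.
rng = np.random.default_rng(0)
updates = [rng.standard_normal(10) for _ in range(3)]     # toy local updates
rates = [0.1, 0.5, 0.9]                                    # illustrative per-node rates
compressed = [compress_update(u, r) for u, r in zip(updates, rates)]
```

Only the retained entries (and their indices) would be exchanged with neighboring ENs, which is where the bandwidth saving comes from.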
This accepted manuscript has been made available under the Creative Commons Attribution licence (CC BY) under the IEEE JISC UK green open access agreement.