Variable-length adaptive filtering within incremental learning algorithms for distributed networks

In this paper we propose the use of variable-length adaptive filtering within the context of incremental learning over distributed networks. Algorithms for such incremental learning strategies must have low computational complexity and require minimal inter-node communication compared with centralized schemes. To match the dynamics of the data across the network, we optimize the length of the adaptive filter used within each node by exploiting the statistics of the signals local to that node. In particular, we use a fractional tap-length solution to determine the length of the adaptive filter at each node, the coefficients of which are adapted with an incremental learning algorithm. Simulation studies confirm the convergence properties of the scheme, and the results are verified against a theoretical analysis of the excess mean square error and the mean square deviation.
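To make the idea concrete, the following is a minimal simulation sketch of fractional tap-length LMS combined with incremental learning over a ring of nodes. It is not the paper's exact algorithm: all parameter values (step size MU, tap-length gain GAMMA, leakage ALPHA, segment offset DELTA, node count, and the unknown system) are illustrative assumptions, and the regressors are drawn i.i.d. per node for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (values assumed, not taken from the paper):
M_TRUE = 8                        # true tap-length of the unknown system
w_o = rng.standard_normal(M_TRUE)  # unknown system to identify
N_NODES = 4                       # nodes in the incremental (ring) topology
MU = 0.01                         # LMS step size
GAMMA = 0.1                       # tap-length adaptation gain
ALPHA = 0.001                     # leakage pulling the tap-length down
DELTA = 1                         # offset for the segmented error comparison
L_MAX = 16                        # upper bound on the filter length

def node_update(w, lf, u, d):
    """One fractional tap-length LMS update at a single node."""
    L = int(lf)
    e_L = d - u[:L] @ w[:L]                      # full-length error
    e_seg = d - u[:L - DELTA] @ w[:L - DELTA]    # truncated-filter error
    w = w.copy()
    w[:L] += MU * e_L * u[:L]                    # LMS update on active taps
    # Fractional tap-length recursion: grow while the truncated filter
    # performs noticeably worse, shrink slowly (leakage) otherwise.
    lf = (lf - ALPHA) - GAMMA * (e_L**2 - e_seg**2)
    return w, min(max(lf, DELTA + 1.0), float(L_MAX))

w = np.zeros(L_MAX)               # estimate circulating around the ring
lf = [4.0] * N_NODES              # per-node fractional tap-lengths, start short

for cycle in range(20000):
    for k in range(N_NODES):      # estimate is passed from node to node
        u = rng.standard_normal(L_MAX)                  # node k's local regressor
        d = u[:M_TRUE] @ w_o + 0.01 * rng.standard_normal()  # local desired sample
        w, lf[k] = node_update(w, lf[k], u, d)

# Mean square deviation of the full padded estimate from the true system
msd = np.sum((w[:M_TRUE] - w_o)**2) + np.sum(w[M_TRUE:]**2)
print(f"per-node tap-lengths: {[int(x) for x in lf]}, MSD = {msd:.2e}")
```

In this sketch the per-node tap-lengths settle near the true system order (the truncated-error comparison pushes the length up while significant taps are missing, and the leakage term slowly shrinks it otherwise), while the circulating weight estimate converges in the mean square deviation sense.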