Loughborough University

Sample average approximation for stochastic optimization with dependent data: performance guarantees and tractability

Conference contribution
Posted on 2025-08-18, 15:32, authored by Yafei Wang, Bo Pan, Wei Tu, Peng Liu, Bei Jiang, Chao Gao, Wei Lu, Shangling Jui, Linglong Kong
Sample average approximation (SAA), a popular method for tractably solving stochastic optimization problems, enjoys strong asymptotic performance guarantees in settings with independent training samples. However, these guarantees are not known to hold generally with dependent samples, such as in online learning with time series data or distributed computing with Markovian training samples. In this paper, we show that SAA remains tractable when the distribution of unknown parameters is only observable through dependent instances and still enjoys asymptotic consistency and finite-sample guarantees. Specifically, we provide a rigorous probability error analysis to derive 1 − β confidence bounds for the out-of-sample performance of SAA estimators and show that these estimators are asymptotically consistent. We then, using monotone operator theory, study the performance of a class of stochastic first-order algorithms trained on a dependent source of data. We show that the approximation error for these algorithms is bounded and concentrates around zero, and establish deviation bounds for the iterates when the underlying stochastic process is φ-mixing. The algorithms presented can be used to handle numerically inconvenient loss functions, such as the sum of a smooth and a non-smooth function or sums of non-smooth functions with constraints. To illustrate the usefulness of our results, we present stochastic versions of popular algorithms, including stochastic proximal gradient descent (S-PGD) and the stochastic relaxed Peaceman–Rachford splitting algorithm (S-rPRS), together with numerical experiments.
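As an illustrative sketch only (not the authors' implementation), the setting of the abstract can be mimicked with stochastic proximal gradient descent (S-PGD) on a lasso-type SAA objective whose training samples come from a dependent AR(1) time series; all names, dimensions, and step sizes below are hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1 (handles the non-smooth l1 term)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

d, n, lam, eta, rho = 5, 2000, 0.1, 0.01, 0.8
x_true = np.array([1.0, -2.0, 0.0, 0.0, 3.0])

# Dependent covariates: AR(1) process a_t = rho * a_{t-1} + noise,
# a simple stand-in for the phi-mixing data the paper analyzes.
a = np.zeros((n, d))
for t in range(1, n):
    a[t] = rho * a[t - 1] + rng.normal(size=d)
b = a @ x_true + 0.1 * rng.normal(size=n)

# S-PGD: one proximal gradient step per (dependent) sample,
# splitting the objective into a smooth squared loss and an l1 penalty.
x = np.zeros(d)
for t in range(n):
    grad = (a[t] @ x - b[t]) * a[t]          # gradient of the smooth part
    x = soft_threshold(x - eta * grad, eta * lam)

print(np.round(x, 2))  # iterate stays close to x_true despite dependence
```

The point of the sketch is the splitting structure: the smooth part enters through its gradient while the non-smooth l1 term is handled exactly via its proximal operator, which is the pattern the paper's guarantees cover for dependent data streams.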

Funding

Natural Sciences and Engineering Research Council of Canada (NSERC)

Natural Sciences and Engineering Research Council


University of Alberta/Huawei Joint Innovation Collaboration

Huawei Technologies Canada Co., Ltd.

Canada Research Chair in Statistical Learning

History

School

  • Science

Department

  • Mathematical Sciences

Volume

36

Issue

4

Pages

3859–3867

Source

The Thirty-Sixth AAAI Conference on Artificial Intelligence

Publisher

Association for the Advancement of Artificial Intelligence / AAAI Press

Version

  • VoR (Version of Record)

Rights holder

© Association for the Advancement of Artificial Intelligence (www.aaai.org)

Publisher statement

This is a conference paper presented at the 36th Annual AAAI Conference on Artificial Intelligence and published openly by AAAI Press. © Association for the Advancement of Artificial Intelligence. All Rights Reserved.

Publication date

2022-06-28

Copyright date

2022

ISBN

9781577358763

ISSN

2159-5399

eISSN

2374-3468

Language

  • en

Location

Online

Event dates

22nd February 2022 - 1st March 2022

Depositor

Dr Peng Liu. Deposit date: 3 October 2024
