All versus one: An empirical comparison on retrained and incremental machine learning for modeling performance of adaptable software
Tao Chen, Loughborough University, 2019
Conference contribution
Keywords: Performance modeling; Self-adaptive system; Machine learning; Software runtime
https://repository.lboro.ac.uk/articles/conference_contribution/All_versus_one_An_empirical_comparison_on_retrained_and_incremental_machine_learning_for_modeling_performance_of_adaptable_software/9876320

Given the ever-increasing complexity of adaptable software systems and their commonly hidden internal information (e.g., software running in the public cloud), machine learning based performance modeling has gained momentum for evaluating, understanding and predicting software performance, which facilitates better-informed self-adaptation. As performance data accumulates during the run of the software, updating the performance models becomes necessary. To this end, there are two conventional modeling methods: retrained modeling, which always discards the old model and retrains a new one using all available data; and incremental modeling, which retains the existing model and tunes it using a single newly arrived data sample. Generally, the literature on machine learning based performance modeling for adaptable software chooses either of these methods according to a general belief, but provides insufficient evidence or references to justify that choice. This paper is the first to report on a comprehensive empirical study that examines both modeling methods under distinct domains of adaptable software, 5 performance indicators, 8 learning algorithms and settings, covering a total of 1,360 different conditions. Our findings challenge the general belief, which is shown to be only partially correct, and reveal some of the important, statistically significant factors that are often overlooked in existing work, providing evidence-based insights on the choice.
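
To make the contrast between the two update strategies concrete, the following is a minimal sketch (not taken from the paper; the scikit-learn estimator, feature dimensions and data are illustrative assumptions only). Retrained modeling refits from scratch on all accumulated data, while incremental modeling applies a single-sample update to the existing model.

import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)

# Hypothetical performance data: configuration/workload features -> observed performance.
X_old = rng.random((100, 4))
y_old = rng.random(100)
x_new, y_new = rng.random((1, 4)), rng.random(1)

# Retrained modeling: discard the old model and refit on all available data.
retrained = SGDRegressor(max_iter=1000)
retrained.fit(np.vstack([X_old, x_new]), np.concatenate([y_old, y_new]))

# Incremental modeling: keep the existing model and tune it with the newly arrived sample only.
incremental = SGDRegressor(max_iter=1000)
incremental.fit(X_old, y_old)          # model built from earlier runtime data
incremental.partial_fit(x_new, y_new)  # single-sample update on arrival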