Factored Reinforcement Learning for Auto-scaling in Tandem Queues
Apprentissage par renforcement factorisé pour l'autoscaling dans un modèle de file en tandem
Abstract
As today's networking systems rely increasingly on virtualisation, efficient auto-scaling of resources becomes critical for controlling both performance and energy consumption. In this paper, we study techniques to learn optimal auto-scaling policies in a distributed network when parts of the system dynamics are unknown. Reinforcement Learning methods have been applied to solve auto-scaling problems; however, they can run into computational and convergence issues as the problem scale grows. On the other hand, distributed networks have relational structures with local dependencies between physical and virtual resources. We can exploit these structures to overcome the convergence issues by using a factored representation of the system. We consider a distributed network in the form of a tandem queue composed of two nodes. The objective of the auto-scaling problem is to find policies that achieve a good trade-off between quality of service (QoS) and operating costs. We develop a factored Reinforcement Learning algorithm, named FMDP online, to find optimal auto-scaling policies. We evaluate our algorithm in a simulated environment, compare it with existing Reinforcement Learning methods, and show its relevance in terms of policy efficiency and convergence speed.
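To make the setting concrete, the sketch below simulates a two-node tandem queue where the action scales the number of active servers at each node and the reward trades QoS (queue lengths, rejections) against operating cost (active servers). It is a minimal illustration under assumed parameters: the class name, arrival and service rates, and cost weights are all illustrative assumptions, not the paper's FMDP online algorithm or experimental configuration.

```python
"""Minimal sketch of a two-node tandem queue with per-node auto-scaling.
All names and parameter values here are illustrative assumptions."""
import random


class TandemQueueEnv:
    """Two queues in series: jobs served at node 1 move on to node 2.
    The action scales the number of active servers at each node."""

    def __init__(self, arrival_rate=0.6, service_rates=(0.4, 0.5),
                 max_servers=5, queue_cap=20,
                 holding_cost=1.0, energy_cost=0.5, reject_cost=10.0):
        self.arrival_rate = arrival_rate      # probability of an arrival per step
        self.service_rates = service_rates    # per-server completion probabilities
        self.max_servers = max_servers
        self.queue_cap = queue_cap
        self.holding_cost = holding_cost      # QoS penalty per waiting job
        self.energy_cost = energy_cost        # operating cost per active server
        self.reject_cost = reject_cost        # penalty for rejecting an arrival
        self.state = None

    def reset(self):
        # State: (queue length, active servers) for each of the two nodes.
        self.state = [0, 1, 0, 1]
        return tuple(self.state)

    def step(self, action):
        """action = (delta1, delta2), each in {-1, 0, +1}: scale servers down/keep/up."""
        q1, s1, q2, s2 = self.state
        s1 = min(self.max_servers, max(1, s1 + action[0]))
        s2 = min(self.max_servers, max(1, s2 + action[1]))

        rejected = 0
        # External arrival at node 1 (Bernoulli approximation of a Poisson flow).
        if random.random() < self.arrival_rate:
            if q1 < self.queue_cap:
                q1 += 1
            else:
                rejected += 1

        # Service completions at node 1; finished jobs join node 2.
        done1 = sum(random.random() < self.service_rates[0] for _ in range(s1))
        done1 = min(done1, q1)
        q1 -= done1
        q2 = min(self.queue_cap, q2 + done1)

        # Service completions at node 2; finished jobs leave the system.
        done2 = sum(random.random() < self.service_rates[1] for _ in range(s2))
        q2 -= min(done2, q2)

        self.state = [q1, s1, q2, s2]
        # Cost trades QoS (queues, rejections) against energy (active servers).
        cost = (self.holding_cost * (q1 + q2)
                + self.energy_cost * (s1 + s2)
                + self.reject_cost * rejected)
        return tuple(self.state), -cost


if __name__ == "__main__":
    env = TandemQueueEnv()
    env.reset()
    total = 0.0
    for _ in range(1000):
        action = (random.choice([-1, 0, 1]), random.choice([-1, 0, 1]))
        _, reward = env.step(action)
        total += reward
    print("average reward of a random scaling policy:", total / 1000)
```

An RL agent (factored or not) would replace the random action choice with a learned policy over the four state variables; the local dependencies between the two nodes are what a factored representation would exploit.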