Journal article in Neural Computing and Applications, 2022

Policy-based optimization: single-step policy gradient method seen as an evolution strategy

Abstract

This research reports on the recent development of black-box optimization methods based on single-step deep reinforcement learning (DRL) and on their conceptual similarity to evolution strategy (ES) techniques. It formally introduces policy-based optimization (PBO), a policy-gradient method that relies on a policy network to describe the density function of its forthcoming evaluations, and uses covariance estimation to steer the policy improvement process. The specifics of the PBO algorithm are detailed, and its connection to evolution strategies (especially the covariance matrix adaptation evolution strategy, CMA-ES) is discussed. Relevance is assessed by benchmarking PBO against classical ES techniques on analytic function minimization problems, and by optimizing various parametric control laws for the Lorenz attractor. Given the scarce existing literature on the topic, this contribution establishes PBO as a valid, versatile black-box optimization technique and opens the way to multiple future improvements building on the inherent flexibility of the neural network approach.
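As a rough illustration of the single-step policy-gradient idea summarized above, the sketch below minimizes a black-box function with a stateless Gaussian search distribution updated by a REINFORCE-style gradient on normalized costs. This is a minimal sketch under assumed details, not the authors' PBO implementation: the paper's policy network and covariance estimation are replaced by a plain diagonal Gaussian with free parameters, and all function and parameter names are illustrative.

import torch

def single_step_pg_minimize(f, dim, pop_size=32, iters=200, lr=0.05):
    # Free parameters of a diagonal Gaussian search distribution
    # (a stand-in for the policy network described in the abstract).
    mean = torch.zeros(dim, requires_grad=True)
    log_std = torch.zeros(dim, requires_grad=True)
    opt = torch.optim.Adam([mean, log_std], lr=lr)
    for _ in range(iters):
        dist = torch.distributions.Normal(mean, log_std.exp())
        x = dist.sample((pop_size,))               # one action per agent, no state
        costs = torch.tensor([f(xi) for xi in x])  # black-box evaluations
        # Normalized advantage (baseline subtraction plus scaling),
        # analogous to fitness shaping in ES: lower cost, larger weight.
        adv = -(costs - costs.mean()) / (costs.std() + 1e-8)
        # Single-step REINFORCE loss: shift density toward low-cost samples.
        loss = -(dist.log_prob(x).sum(dim=1) * adv).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mean.detach()

# Example: minimize a shifted sphere function in 4 dimensions.
best = single_step_pg_minimize(lambda x: float(((x - 1.0) ** 2).sum()), dim=4)
print(best)  # should approach the optimum at (1, 1, 1, 1)

Because the environment is stateless and each episode lasts a single step, the policy-gradient update reduces to reweighting sampled candidates by their relative cost, which is exactly the structural parallel with ES that the paper draws.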

Dates and versions

hal-03432655, version 1 (17-11-2021)

Identifiers

HAL Id: hal-03432655
DOI: 10.1007/s00521-022-07779-0

Cite

Jonathan Viquerat, Régis Duvigneau, Philippe Meliga, A. Kuhnle, Elie Hachem. Policy-based optimization: single-step policy gradient method seen as an evolution strategy. Neural Computing and Applications, 2022. DOI: 10.1007/s00521-022-07779-0. HAL: hal-03432655.