Preprint, working paper. Year: 2025

A Floyd-Warshall Approach to Value Computation in Markov Decision Processes

Abstract

Value and policy iteration are classical algorithms for maximizing the average discounted reward of an MDP. They rely on a breadth-first exploration of the future of each state to update its value and possibly change the action policy at that state. This paper revisits this paradigm and examines a depth-first search strategy. It reformulates the average reward computation as an integral over (future) paths, which is better expressed in the formalism of weighted automata. Policy evaluation can then be solved by a Floyd-Warshall algorithm, which gathers in one pass the rewards along possibly infinite runs. This reformulation opens the way to new approximation schemes for the value function. The same formalism also gives access to other quantities of interest, such as the gradient of the average reward with respect to model or policy parameters, or the variance of the reward. The behavior and performance of this value estimation scheme are illustrated on several benchmarks.
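To make the path-summation idea concrete, here is a minimal sketch (not the paper's own algorithm or code) of the standard algebraic-path view the abstract points to: over the real semiring, the Kleene closure A* = I + A + A^2 + ... = (I - A)^{-1} can be computed by a Floyd-Warshall-style elimination that processes one intermediate state at a time, and applying it to A = γP under a fixed policy yields the value function V = (γP)* r. The function names and the small 3-state chain below are illustrative assumptions.

```python
import numpy as np

def kleene_closure(A):
    """Floyd-Warshall / Kleene elimination over the real semiring.
    Returns A* = I + A + A^2 + ... = (I - A)^{-1}, assuming the series
    converges (e.g. A = gamma * P with gamma < 1 and P stochastic)."""
    n = A.shape[0]
    C = A.astype(float).copy()
    for k in range(n):
        star_kk = 1.0 / (1.0 - C[k, k])   # geometric sum of the loops at pivot state k
        row_k = C[k, :].copy()
        col_k = C[:, k].copy()
        # every path i -> j may now pass through state k any number of times
        C += np.outer(col_k, row_k) * star_kk
    return np.eye(n) + C

def evaluate_policy(P, r, gamma):
    """Policy evaluation via the path closure: V = (gamma P)* r = (I - gamma P)^{-1} r."""
    return kleene_closure(gamma * P) @ r

# Tiny 3-state chain under a fixed policy (made-up numbers, for illustration only).
P = np.array([[0.1, 0.9, 0.0],
              [0.0, 0.2, 0.8],
              [0.5, 0.0, 0.5]])
r = np.array([1.0, 0.0, 2.0])
V = evaluate_policy(P, r, gamma=0.9)
print(V)  # agrees with np.linalg.solve(np.eye(3) - 0.9 * P, r)
```

In this sketch each elimination step sums, in closed form, the contribution of all runs that loop through the pivot state, which mirrors the depth-first, whole-path aggregation described in the abstract, in contrast with the one-step backups of value iteration.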

Main file
main-long.pdf (688.59 KB)
Origin: files produced by the author(s)

Dates and versions

hal-04883133, version 1 (13-01-2025)

License

Identifiers

  • HAL Id: hal-04883133, version 1

Cite

Aymeric Côme, Éric Fabre, Loïc Hélouët. A Floyd-Warshall Approach to Value Computation in Markov Decision Processes. 2025. ⟨hal-04883133⟩