Efficient reinforcement learning with partial observables for fluid flow control

Akira Kubo and Masaki Shimizu
Phys. Rev. E 105, 065101 – Published 8 June 2022

Abstract

Even if the trajectory of a viscous flow system stays within a low-dimensional subspace of the state space, reinforcement learning (RL) requires many observables in an active control problem. This is because, in the usual RL framework, the observables are assumed to follow a policy-independent Markov decision process, and full observation of the system is required to satisfy this assumption. Although RL under a partially observable condition is generally a difficult task, we construct an algorithm consistent with this condition by using the low-dimensional property of viscous flow. Using typical examples of active flow control, we show that our algorithm is more stable and efficient than existing RL algorithms, even with a small number of observables.
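The article text is behind a subscription, so the authors' algorithm is not reproduced here. As a hedged illustration of the general idea stated in the abstract (recovering a Markov description from few sensors by exploiting low-dimensional dynamics), the sketch below applies delay embedding, a standard device for partially observed systems in the spirit of Takens' theorem; it is not the paper's method. The environment interface, the action_dim attribute, and the (obs, reward, done) step contract are assumptions made for this example.

import numpy as np
from collections import deque


class DelayEmbeddingWrapper:
    """Augments a few-sensor environment with observation/action history.

    The returned observation is the concatenation of the last k
    (observation, action) pairs. When the flow evolves on a
    low-dimensional subspace, such a delay embedding can restore an
    (approximately) Markov description from partial measurements.
    """

    def __init__(self, env, k=4):
        # `env` is assumed to expose reset() -> obs and
        # step(action) -> (obs, reward, done), plus an `action_dim`
        # attribute; this interface is an assumption for the sketch.
        self.env = env
        self.k = k
        self.obs_hist = deque(maxlen=k)
        self.act_hist = deque(maxlen=k)

    def _augmented(self):
        # Flatten both histories into a single vector for the policy.
        return np.concatenate(list(self.obs_hist) + list(self.act_hist))

    def reset(self):
        obs = np.atleast_1d(np.asarray(self.env.reset(), dtype=float))
        zero_act = np.zeros(self.env.action_dim)
        for _ in range(self.k):  # pad the history at episode start
            self.obs_hist.append(obs)
            self.act_hist.append(zero_act)
        return self._augmented()

    def step(self, action):
        obs, reward, done = self.env.step(action)
        self.obs_hist.append(np.atleast_1d(np.asarray(obs, dtype=float)))
        self.act_hist.append(np.atleast_1d(np.asarray(action, dtype=float)))
        return self._augmented(), reward, done

With the embedding length k chosen at or above the effective dimension of the attracting subspace, an off-the-shelf RL algorithm can then be trained on the augmented observation as if the task were fully observed.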

  • Received 11 February 2021
  • Accepted 18 May 2022

DOI: https://doi.org/10.1103/PhysRevE.105.065101

©2022 American Physical Society

Physics Subject Headings (PhySH)

Fluid Dynamics, Nonlinear Dynamics

Authors & Affiliations

Akira Kubo and Masaki Shimizu*

  • Graduate School of Engineering Science, Osaka University, Toyonaka 560-0043, Japan

  • *shimizu@me.es.osaka-u.ac.jp

Article Text (Subscription Required)

References (Subscription Required)

Issue
Issue

Vol. 105, Iss. 6 — June 2022
