Bibliography
Conference Paper (international conference)
Policy Learning via Fully Probabilistic Design
In: DYNALIFE WG1-WG2 Interaction Meeting "Data driven evidence: theoretical models and complex biological data", p. 52.
Conference: DYNALIFE Interaction Meeting "Data driven evidence: theoretical models and complex biological data" (Thessaloniki, GR, 2024-06-05)
Project: CA21169, EU-COST
Keywords: fully probabilistic design, imitation learning, Kullback-Leibler divergence, learning from demonstration, optimal policy.
Full text: https://library.utia.cas.cz/separaty/2025/AS/guy-0604532.pdf
Abstract (eng): Applying the formalism of fully probabilistic design (FPD), we propose a new general data-driven approach for finding a stochastic policy from demonstrations. The approach infers a policy directly from data, without interaction with the expert or use of any reinforcement signal. The expert's actions generally need not be optimal. The proposed approach learns an optimal policy by minimising the Kullback-Leibler divergence between the probabilistic description of the actual agent-environment behaviour and the distribution describing the targeted behaviour of the optimised closed loop. We demonstrate the approach on simulated examples and show that the learned policy: i) converges to the optimised policy obtained by FPD; ii) achieves better performance than the optimal FPD policy whenever mismodelling is present.
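A minimal numerical sketch of the core idea stated in the abstract, not the paper's actual method or code: a discrete stochastic policy is scored by the Kullback-Leibler divergence between the joint (state, action) behaviour it induces and a target behaviour distribution, and the KL-minimising policy is the target's conditional. All names (`target`, `kl_to_target`, the dimensions) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 3, 2

# Target ("ideal") joint behaviour over (state, action), e.g. distilled
# from expert demonstrations; here just a random made-up distribution.
target = rng.dirichlet(np.ones(n_states * n_actions)).reshape(n_states, n_actions)
state_marginal = target.sum(axis=1)  # p(s) under the target behaviour

def kl_to_target(policy):
    """KL( behaviour(policy) || target ) over the (s, a) joint,
    with the state marginal held fixed at the target's marginal."""
    joint = state_marginal[:, None] * policy  # p(s) * pi(a|s)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / target[mask])))

# With a fixed state marginal, the KL minimiser is the target's
# conditional pi(a|s) = target(s, a) / p(s); check numerically
# against a uniform policy.
optimal = target / state_marginal[:, None]
uniform = np.full((n_states, n_actions), 1.0 / n_actions)

assert kl_to_target(optimal) < kl_to_target(uniform)
print(f"KL(optimal) = {kl_to_target(optimal):.3e}")  # ~0
```

This only illustrates the divergence-minimisation criterion on a toy discrete joint; the paper's FPD formulation optimises the full closed-loop behaviour and is learned from demonstration data rather than from a known target.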
: IN
: 10201