
Deep reinforcement learning with modulated Hebbian plus Q-network architecture

journal contribution
posted on 18.10.2021, 10:53 by Pawel Ladosz, Eseoghene Ben-Iwhiwhu, Jeff DickJeff Dick, Nicholas Ketz, Soheil Kolouri, Jeffrey L. Krichmar, Praveen K. Pilly, Andrea SoltoggioAndrea Soltoggio
In this article, we consider a subclass of partially observable Markov decision process (POMDP) problems which we term confounding POMDPs. In these types of POMDPs, temporal difference (TD)-based reinforcement learning (RL) algorithms struggle, as the TD error cannot be easily derived from observations. We solve these types of problems using a new bio-inspired neural architecture that combines a modulated Hebbian network (MOHN) with a deep Q-network (DQN), which we call the modulated Hebbian plus Q-network architecture (MOHQA). The key idea is to use a Hebbian network with rarely correlated bio-inspired neural traces to bridge temporal delays between actions and rewards when confounding observations and sparse rewards result in inaccurate TD errors. In MOHQA, the DQN learns low-level features and control, while the MOHN contributes to high-level decisions by associating rewards with past states and actions. Thus, the proposed architecture combines two modules with significantly different learning algorithms, a Hebbian associative network and a classical DQN pipeline, exploiting the advantages of both. Simulations on a set of POMDPs and on the Malmo environment show that the proposed algorithm improved on DQN's results and even outperformed control tests with advantage actor-critic (A2C), quantile regression DQN with long short-term memory (QRDQN + LSTM), Monte Carlo policy gradient (REINFORCE), and aggregated memory for reinforcement learning (AMRL) algorithms on the most difficult POMDPs with confounding stimuli and sparse rewards.

Funding

United States Air Force Research Laboratory (AFRL) and Defense Advanced Research Projects Agency (DARPA) under Contract No. FA8750-18-C-0103

Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2020R1A6A1A03040570)

History

School

  • Science

Department

  • Computer Science

Published in

IEEE Transactions on Neural Networks and Learning Systems

Publisher

IEEE

Version

AM (Accepted Manuscript)

Rights holder

© IEEE

Publisher statement

Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Acceptance date

16/08/2021

Publication date

2021-09-24

Copyright date

2021

ISSN

2162-237X

eISSN

2162-2388

Language

en

Depositor

Dr Andrea Soltoggio. Deposit date: 14 October 2021