In this course, we will explore both classic logic-based techniques and more recent neural network (NN)-based methods, examining how they can be combined to enhance the capabilities of intelligent agents. Our focus will be on applying these approaches to non-Markovian Reinforcement Learning (RL). In this context, Linear Temporal Logic (LTL) has proven to be a powerful tool for specifying tasks and environments that do not satisfy the Markov property, addressing the expressive limitations of Markov Decision Processes (MDPs). However, solving such environments remains challenging, particularly when observations are non-symbolic (such as images or continuous sensor readings) or when prior knowledge about the task is incomplete or missing. We will review recent advances in the field, with special emphasis on works that integrate logic and neural networks. Topics covered include Deep RL, temporal RL, Restraining Bolts and Reward Machines, deep learning for sequential data, automata learning, Neurosymbolic AI, Neurosymbolic Reward Machines, Symbol Grounding, transfer learning across LTL tasks, LTL and natural language, integrating LTL knowledge into generative NNs, and more.
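To give a concrete flavor of one of the topics above, here is a minimal sketch of the Reward Machine idea: a finite automaton over propositional symbols that makes a non-Markovian task ("visit A, then B") Markovian once its state is tracked alongside the environment state. All names and the transition format below are our own illustrative choices, not taken from any specific paper or library.

```python
# Minimal Reward Machine sketch (illustrative; names are hypothetical).
# Task: "reach A, then reach B" — the reward depends on history, so it is
# non-Markovian over raw observations, but Markovian in (env state, RM state).

class RewardMachine:
    def __init__(self, transitions, initial, terminal):
        # transitions: (state, symbol) -> (next_state, reward)
        self.transitions = transitions
        self.state = initial
        self.terminal = terminal

    def step(self, symbol):
        """Advance on a propositional symbol; return the reward emitted."""
        self.state, reward = self.transitions.get(
            (self.state, symbol), (self.state, 0.0))  # self-loop otherwise
        return reward

    def done(self):
        return self.state in self.terminal

# "visit A then B": u0 --A/0--> u1 --B/1--> u2 (terminal)
rm = RewardMachine(
    transitions={("u0", "A"): ("u1", 0.0), ("u1", "B"): ("u2", 1.0)},
    initial="u0",
    terminal={"u2"},
)
rewards = [rm.step(s) for s in ["B", "A", "B"]]
print(rewards)   # seeing B before A yields no reward
print(rm.done())
```

In an RL loop, a labeling function would map each raw observation (e.g. an image) to the propositional symbols fed to `step`, and the agent would learn over the product of environment and machine states.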