An Overview of the Free Energy Principle and Related Research

Authors: Zhengquan Zhang [1], Feng Xu [2]
Affiliations:
[1] Key Laboratory of Information Science of Electromagnetic Waves, Fudan University, Shanghai, P.R.C. (zqzhang22@m.fudan.edu.cn)
[2] Key Laboratory of Information Science of Electromagnetic Waves, Fudan University, Shanghai, P.R.C. (fengxu@fudan.edu.cn)
Journal: Neural Computation
Date published: 2024 Feb 28
Pages: 1-59


The free energy principle (FEP) and its corollary, the active inference framework, serve as theoretical foundations in the domain of neuroscience, explaining the genesis of intelligent behavior. The principle states that perception, learning, and decision making within an agent are all driven by the objective of minimizing free energy, which manifests in the following behaviors: learning and employing a generative model of the environment to interpret observations, thereby achieving perception, and selecting actions that maintain a stable preferred state and minimize uncertainty about the environment, thereby achieving decision making. This fundamental principle can be used to explain how the brain processes perceptual information, learns about the environment, and selects actions. Two pivotal tenets are that the agent employs a generative model for perception and planning and that interaction with the world (and other agents) enhances the performance of the generative model and augments perception. With the evolution of control theory and deep learning tools, agents based on the FEP have been instantiated in various ways across different domains, guiding the design of a multitude of generative models and decision-making algorithms. This letter first introduces the basic concepts of the FEP, followed by its historical development and its connections with other theories of intelligence; it then delves into the specific application of the FEP to perception and decision making, encompassing both low-dimensional simple situations and high-dimensional complex situations. The letter compares the FEP with model-based reinforcement learning to show that the FEP provides a better objective function, which we illustrate with numerical studies of Dreamer3 in which expected information gain is added to the standard objective function. In a complementary fashion, existing reinforcement learning and deep learning algorithms can also help implement FEP-based agents. Finally, we discuss the various capabilities that agents need in complex environments and argue that the FEP can help agents acquire these capabilities.
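The abstract states the two objectives only in words. For orientation, the standard formulations from the active inference literature are sketched below; the notation is generic and may differ from that used in the letter. For perception and learning, the agent minimizes the variational free energy of its observations o under an approximate posterior q(s) over hidden states s and a generative model p(o, s):

    F = \mathbb{E}_{q(s)}\left[\ln q(s) - \ln p(o, s)\right]
      = D_{\mathrm{KL}}\left[q(s) \,\|\, p(s \mid o)\right] - \ln p(o)

Minimizing F drives q(s) toward the true posterior (perception) while maximizing the model evidence ln p(o) (learning). For decision making, each policy \pi is scored by its expected free energy:

    G(\pi) = -\mathbb{E}_{q(o, s \mid \pi)}\left[\ln q(s \mid o, \pi) - \ln q(s \mid \pi)\right]
             - \mathbb{E}_{q(o \mid \pi)}\left[\ln p(o)\right]

The first term is the negative expected information gain (uncertainty reduction) and the second is the expected log preference over outcomes (maintaining preferred states), so minimizing G yields exactly the two decision-making behaviors described above.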
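The Dreamer3 comparison in the letter hinges on augmenting a model-based reinforcement learning objective with an expected-information-gain term. The paper's own implementation is not reproduced here; as a minimal, hypothetical sketch, the bonus can be approximated by disagreement across an ensemble of learned dynamics models (in the spirit of Plan2Explore), with all names and dimensions below invented for illustration:

    import torch
    import torch.nn as nn

    class Dynamics(nn.Module):
        """One member of an ensemble of latent dynamics models s' = f(s, a)."""
        def __init__(self, state_dim=8, action_dim=2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + action_dim, 64), nn.ELU(),
                nn.Linear(64, state_dim),
            )

        def forward(self, state, action):
            return self.net(torch.cat([state, action], dim=-1))

    def info_gain_bonus(ensemble, state, action):
        """Disagreement (variance) across ensemble predictions of the next
        latent state: a common proxy for expected information gain."""
        preds = torch.stack([m(state, action) for m in ensemble])  # (K, B, D)
        return preds.var(dim=0).mean(dim=-1)                       # (B,)

    def augmented_objective(reward, ensemble, state, action, beta=0.1):
        """Extrinsic reward (preferred states) plus a weighted epistemic
        bonus (uncertainty reduction), mirroring the two terms of
        expected free energy."""
        return reward + beta * info_gain_bonus(ensemble, state, action)

    # Usage: score a batch of imagined transitions.
    ensemble = [Dynamics() for _ in range(5)]
    state, action = torch.randn(16, 8), torch.randn(16, 2)
    reward = torch.randn(16)
    print(augmented_objective(reward, ensemble, state, action).shape)  # torch.Size([16])

The beta coefficient trades off pragmatic (reward-seeking) and epistemic (uncertainty-reducing) value, paralleling the two terms of the expected free energy above.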


PMID: 38457757 DOI: 10.1162/neco_a_01642
