Ted Werbel@tedx_ai
OpenAI is about to blow your mind with Active Inference... So what is it?
> First introduced in the early 2000s in a series of papers by theoretical neuroscientist Karl Friston, active inference is a theory of how the brain uses statistical inference and a generative world model to predict sensory inputs and guide actions that minimize prediction error - helping explain human perception, action and learning
> Perception updates our generative world model to reduce prediction error, while action changes the environment to better match our predictions - two routes to the same goal of minimizing prediction error (toy sketch of this loop below)
> With enough compute, advances in continual learning and information retrieval with causal grounding, and layers of active-inference-style methods like graph-of-thought (GoT), algorithm-of-thoughts (AoT), chain-of-verification (CoV) and Monte Carlo tree search (MCTS), we may be inching closer to a generative, continually learning model that operates at near-human levels of cognition ✨
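Here's a toy sketch of that perception/action loop (my own illustration with a made-up 1D Gaussian world, not code from any of Friston's papers):

```python
import numpy as np

# Toy active inference loop: the agent believes its observation is roughly
# Normal(mu, fixed variance), where mu is its estimate of the hidden state.
# Both updates below shrink the same prediction error.

rng = np.random.default_rng(0)
mu = 0.0             # agent's belief about the hidden state
env_state = 5.0      # actual state of the environment
lr_perception = 0.1  # how fast beliefs change
lr_action = 0.1      # how fast actions change the world

for _ in range(200):
    obs = env_state + rng.normal(scale=0.1)  # noisy sensory input
    error = obs - mu                         # prediction error

    # Perception: revise the model so it predicts the input better
    mu += lr_perception * error

    # Action: change the environment so it matches the prediction better
    env_state -= lr_action * error

print(f"belief={mu:.2f}, environment={env_state:.2f}")  # they meet in the middle
```

Perception moves the model toward the world, action moves the world toward the model - both reduce prediction error, which is the core of the idea.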
I suspect we might start to see the emergence of "energy-based" models (EBMs) that operate more dynamically and learn continuously, with hot-swappable memory partitions that evolve over time.
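To make the "energy-based" part concrete, here's the general EBM idea in miniature (a standard textbook-style sketch with a made-up energy function, not a claim about any specific upcoming model): inference means scoring candidates with an energy function and picking the lowest-energy one.

```python
import numpy as np

def energy(query_vec: np.ndarray, candidate_vec: np.ndarray) -> float:
    """Hypothetical energy function: lower energy = better query/answer fit."""
    return float(np.sum((query_vec - candidate_vec) ** 2))

def ebm_select(query_vec, candidate_vecs):
    """EBM-style inference: minimize energy over candidate answers."""
    energies = [energy(query_vec, c) for c in candidate_vecs]
    return int(np.argmin(energies)), energies

query = np.array([1.0, 0.5])
candidates = [np.array([0.9, 0.4]), np.array([0.1, 0.9]), np.array([1.2, 0.6])]
best, scores = ebm_select(query, candidates)
print(best, [round(s, 3) for s in scores])  # index of the lowest-energy candidate
```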
They will intelligently route queries and scale the compute spent on more sophisticated active inference to the complexity of each query. These models will also let users explicitly set how much energy / reasoning strength to use at inference time (rough sketch of this routing below).
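A rough sketch of what that routing could look like (function names, thresholds and the reasoning_strength override are all hypothetical, just to illustrate the idea):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReasoningBudget:
    samples: int       # candidate thoughts to expand (e.g. GoT branches)
    search_depth: int  # how deep to search (e.g. MCTS rollout depth)

def estimate_complexity(query: str) -> float:
    """Placeholder heuristic; a real router would use a learned difficulty model."""
    return min(1.0, len(query.split()) / 100)

def route_compute(query: str, reasoning_strength: Optional[float] = None) -> ReasoningBudget:
    """Scale inference-time compute to the query; reasoning_strength (0-1) is a user override."""
    level = reasoning_strength if reasoning_strength is not None else estimate_complexity(query)
    if level < 0.3:
        return ReasoningBudget(samples=1, search_depth=1)    # cheap, direct answer
    if level < 0.7:
        return ReasoningBudget(samples=4, search_depth=3)    # moderate deliberation
    return ReasoningBudget(samples=16, search_depth=8)       # heavy search

print(route_compute("What is 2 + 2?"))
print(route_compute("Design a fault-tolerant consensus protocol", reasoning_strength=0.9))
```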
I'll be posting more soon on how this works under the hood with layered graph-of-thought (GoT) and algorithms like Monte Carlo tree search (MCTS)... For more on active inference, check out these incredible papers:
> Friston, K. (2003). "Learning and inference in the brain."
> Friston, K. (2005). "A theory of cortical responses."
> Friston, K. (2006). "Free-energy principle for perception and action."