Dr. JEFF BECK - The probability approach to AI

Published 2023-10-12
Support us! www.patreon.com/mlst
MLST Discord: discord.gg/aNPkGUQtc5

Note: We have had some feedback that the audio is a bit low on this for some folks - we have fixed this in the podcast version here: podcasters.spotify.com/pod/show/machinelearningstr…

Dr. Jeff Beck is a computational neuroscientist studying probabilistic reasoning (decision making under uncertainty) in humans and animals, with an emphasis on neural representations of uncertainty and cortical implementations of probabilistic inference and learning. His research incorporates information-theoretic and hierarchical statistical analysis of neural and behavioural data, as well as reinforcement learning and active inference.

www.linkedin.com/in/jeff-beck-6b5085196/
scholar.google.com/citations?user=RJE1lmUAAAAJ&hl=…

Interviewer: Dr. Tim Scarfe

TOC
00:00:00 Intro
00:00:51 Bayesian / Knowledge
00:14:57 Active inference
00:18:58 Mediation
00:23:44 Philosophy of mind / science
00:29:25 Optimisation
00:42:54 Emergence
00:56:38 Steering emergent systems
01:04:31 Work plan
01:06:06 Representations/Core knowledge

#activeinference

All Comments (21)
  • @jordan13589
    Wrapping myself in my Markov blanket hoping AGI pursues environmental equilibrium 🤗
  • @SymEof
    One of the most profound discussions about cognition available on YouTube. Truly excellent.
  • @BrianMosleyUK
    I've been starved of content from this channel, this is so satisfying!
  • @luke2642
    I really enjoyed this. Great questions, if a little leading, but Beck was just fantastic in answering, thinking on his feet too. The way he framed empiricism, prediction, models... everything, it's just great! And then to top it off he's got the humanity, the self awareness of his Quaker/Buddhist trousers (gotta respect that maintaining hope and love are axiomatic for sanity during the human condition) without any compromise on the scientific method!
  • @Blacky372
    Thanks! I am grateful that such great content is freely available for everyone to enjoy.
  • @Lolleka
    I switched to the Bayesian framework of thinking a few years ago. There is no going back; it is just too good.
  • @siarez
    Great questioning Tim!
  • @rastgo4432
    Very awesome, I hope the episodes get even longer ❤
  • @Daniel-Six
    Love listening to Beck riff on the hidden melodies of the mind. Dude can really shred the scales from minute to macroscopic in the domain of cognition.
  • @ffedericoni
    Epic episode! I am already expanding my horizons by learning Pyro and Lenia.
  • @GaHaus
    Totally epic, a bit out of my depth, but really expanding my horizons. I really liked the answer to the question about believing in things beyond materialism, and that Dr Beck didn't instantly jump to support only scientific materialism. Non-materialist thinking is incredibly important to many people around the world and can bring us so much meaning.
  • @kd192
    Incredible discussion.. thanks for sharing
  • Interesting discussion. Thank you. Does anyone have a reference on the Bayesian interpretation of self-attention in transformers?
  • @Blacky372
    Great talk! Thank you very much for doing this interview. One minor thing: I would have preferred to hear Jeff's thoughts flow for longer without interjections in some parts of the video.
  • @dr.mikeybee
    You've hit another one out of the park. Great episode!
  • @ntesla66
    That was truly eye-opening... the epiphany I had around the 45-minute mark was that there are two schools of approach to training, just like the two schools of physics: the general relativists, whose mathematical foundations are in tensors and linear algebra, and the quantum physicists, founded in statistics. One is a vector approach needing a coordinate system, and the other uses Hamilton's action principle. Tensors or the calculus of variations.
  • @eskelCz
    What was the name of the cellular automaton "toy" he mentioned? Particle Len... ? :)
  • 🎯 Key Takeaways for quick navigation:
    00:00 🧠 Brain's Probabilistic Reasoning
    - The brain's implementation of probabilistic reasoning is a focal point of computational neuroscience.
    - The Bayesian brain hypothesis examines how human and animal behaviors align with Bayesian inference.
    - Neural circuits encode and manipulate probability distributions, reflecting the brain's operations.
    02:19 📊 Bayesian Analysis and Model Selection
    - Bayesian analysis provides a principled framework for reasoning under uncertainty.
    - Model selection involves choosing the best-fitting model based on empirical data and the models under consideration.
    - The Occam's razor effect aids in selecting the most plausible model among alternatives.
    06:13 🤖 Active Inference Framework
    - Active inference involves agents dynamically updating models while interacting with the environment.
    - It incorporates optimal experimental design, guiding agents to seek the most informative data.
    - It contrasts with traditional machine learning by incorporating continuous model refinement during interaction.
    09:34 🌐 Universality of Cognitive Priors
    - Cognitive priors shape cognitive processes, reflecting evolutionary adaptation and cultural influences.
    - The debate on universal versus situated priors explores the extent to which priors transcend specific contexts.
    - Cognitive priors facilitate rapid inference by providing a foundation for reasoning and decision-making.
    14:20 💭 Epistemological Considerations
    - Science prioritizes prediction and data compression over absolute truth, acknowledging inherent uncertainty.
    - Models serve as predictive tools rather than absolute representations of reality, subject to continuous refinement.
    - Probabilistic reasoning emphasizes uncertainty and the conditional nature of knowledge, challenging notions of binary truth.
    19:11 🗣️ Language as Mediation in Communication
    - Language serves as a mediation pattern for communication.
    - Communicating complex models involves a trade-off between representational fidelity and communicability.
    - Grounding models in predictions facilitates communication between agents with different internal models.
    22:03 🌐 Mediation through Prediction
    - Communication between agents relies on prediction as a common language.
    - Interactions and communication are mediated by the environment.
    - The pragmatic utility of philosophy of mind lies in predicting behavior.
    24:24 🧠 Materialism, Philosophy, and Predictive Behavior
    - The pragmatic perspective in science prioritizes prediction over philosophical debates.
    - Beliefs are compartmentalized by context, such as scientific work versus personal philosophy.
    - Philosophy of mind serves the practical purpose of predicting behavior.
    29:46 🧭 Tractable Bayesian Inference for Large Models
    - Exploring tractable Bayesian inference for scaling up large models.
    - Gradient-free learning offers an alternative to traditional gradient descent.
    - Transformer models, like the self-attention mechanism, fall within the class amenable to gradient-free learning.
    36:56 🎓 Encoding representations in vector space
    - Gradient-free optimization and its trade-off with limited model accessibility.
    - The importance of Autograd in simplifying gradient computations.
    - The accessibility of gradient-descent learning for any loss function versus the limitations of other learning approaches.
    39:18 🔄 Time complexity of gradient-free optimization
    - Comparing the time complexity of gradient-free optimization to algorithms like the Kalman filter.
    - Discussion of a continual-learning mindset and the measurement of dynamics over time.
    40:19 🧠 Markov blanket detection algorithm
    - Overview of the Markov blanket detection algorithm for identifying agents in dynamic systems.
    - Explanation of how dynamics-based modeling aids in identifying and categorizing objects in simulations.
    - Use of dimensionality-reduction techniques to cluster particles and identify interacting objects.
    43:10 🔍 Emergence and self-organization in artificial life systems
    - Discussion of emergence and self-organization in artificial life systems like Particle Lenia.
    - Exploration of the challenges in modeling complex functional dynamics and the role of emergent phenomena.
    - Comparison of modeling approaches focusing on bottom-up emergence versus top-down abstraction.
    49:02 🎯 Role of reward functions in active inference
    - Comparison between active inference and reinforcement learning in defining agents and motivating behavior.
    - Critique of the normative solution to the problem of value-function selection and the dangers of specifying reward functions.
    - Emphasis on achieving homeostatic equilibrium as a more stable approach in active inference.
    52:20 🛠️ Modeling levels of abstraction and overcoming brittleness
    - Discussion of modeling different levels of abstraction in complex systems and addressing brittleness.
    - Exploration of emergent properties and goals in agent-based modeling.
    - Consideration of the trade-offs in modeling approaches and the role of self-organization in overcoming brittleness.
    55:08 🏠 Active inference and homeostasis
    - Active inference involves steering emergent systems towards target macroscopic behaviors, often resembling homeostatic equilibrium.
    - Agents are imbued with a definition of homeostatic equilibrium, leading to stable interactions within a system.
    - Transitioning agents from a state of homeostasis to accomplishing specific tasks poses challenges for maintaining system stability.
    56:34 🔄 Steerable multi-agent systems
    - Gradient-descent training on CNN weights can produce coherent global outputs, illustrating macroscopic optimization.
    - Outer loops in multi-agent systems steer agents toward fixed objectives without resorting to traditional reward functions.
    - Manipulating agents' internal states or boundaries can guide them to perform specific tasks without disrupting system equilibrium.
    59:00 🎯 Guiding agents' behaviors
    - Speculative approaches to guiding agents' behaviors include incorporating desired tasks into their definitions of self.
    - Avoiding brittleness in agent behaviors involves maintaining flexibility and adaptability over time.
    - Alternatives to altering agents' definitions of self include creating specialized agents for specific tasks, akin to natural selection processes.
  • @kennethgarcia25
    Objectives? Trajectories? How we define things in relation to our aims is, in a sense, what's important.