IS THE MIND REALLY FLAT?

Published 2024-03-01
Nick Chater is Professor of Behavioural Science at Warwick Business School, where he works on rationality and language using a range of theoretical and experimental approaches. We discuss his books The Mind is Flat and The Language Game.

Please support us on Patreon - patreon.com/mlst - Access the private Discord, networking, and early access to content.
MLST Discord: discord.gg/machine-learning-street-talk-mlst-93735…
twitter.com/MLStreetTalk

Part 2 on Patreon now:
www.patreon.com/posts/language-game-1-98661649

Would you like to sponsor MLST? Please contact [email protected]

Buy The Language Game:
amzn.to/3SRHjPm

Buy The Mind is Flat:
amzn.to/3P3BUUC

Find Nick:
www.wbs.ac.uk/about/person/nick-chater/
twitter.com/nickjchater?lang=en

TOC:
00:00:00 The mind of Anna Karenina
00:05:38 Our brain is like the Shoggoth
00:09:26 Brain simulations are incoherent
00:12:32 The world is gnarly
00:19:56 Human moral status
00:23:28 Living a hallucination
00:25:37 Colour perception
00:28:12 Universal knowledge? / rationalism
00:31:33 Math realism
00:35:13 Bayesian brain?
00:39:53 Language game Kick off - Charades
00:49:13 Evolution of language
00:53:54 Intelligence in the memesphere
00:58:21 Creativity
01:04:41 Language encoding and overloading
01:09:54 Analogical reasoning
01:13:25 Language is complex
01:14:19 Language evolution/decline
01:17:23 Is language knowledge?
01:19:53 Chomsky
01:23:36 Theories of everything
01:26:29 Prof Bishop's comments on the book
01:31:09 Singularity

Interviewer: Dr. Tim Scarfe
www.linkedin.com/in/ecsquizor/

Pod version: podcasters.spotify.com/pod/show/machinelearningstr…

All Comments (21)
  • @Brian-oz8io
    I remember an experiment they did in New York where they placed several groups of people who couldn’t communicate with other groups in different parts of the city and they were supposed to find a way to meet each other. All but one of the groups decided the most obvious place and time to meet would be the Empire State Building at noon. Humans will always find creative ways to communicate and understand each other.
  • @cs-vk4rn
    Wouldn't the discoveries of FunSearch and AlphaFold qualify as new knowledge?
  • The output of the GNoME AI discovered 380,000 new stable materials that human materials science was previously unaware of; prior to this, our known catalogue of stable materials was 48,000. The AI-run lab that GNoME controls has successfully synthesized 41 of these new materials. I fail to see how this is not new knowledge. Plus, look at it from this perspective: a common claim is that an AI can only ever equal what it has ingested, hence no AI can exceed a human specialist. However, an AI can now be an expert biologist, expert physicist, expert poet, expert carpenter, expert accountant, etc. all at once. So even if we assume (unverified) that an AI can't exceed a human specialist, what happens when we have a system that provides expertise in 1,000 human domains simultaneously and cross-references them in its modelling? No human can do this. Such a system can model information and cross-references in a way no human can. It seems to me that a large portion of the goalpost moving is being done by a form of human intellectual supremacism.
  • The topic of retrospection is a perfect example. When we are asked why we said something wrong, we make it up on the fly - just like LLMs. There is a big body of research demonstrating that we have no clue why we do things, and we don't even know what we are going to do until after we have decided. We need to stop using standards we don't meet as metrics for AI abilities.
  • @longline
    Great to see Nick Chater here! I'd love you to speak with Lisa Feldman Barrett too. Also firmly from the school of prediction models, but studies emotion as the output of prediction error. Super relevant, concomitant, and interesting.
  • @dylan_curious
    The CEO of DeepMind just talked about this. In summary, it's about simulating the answers using various pathways of learned knowledge, and then comparing those simulations before giving your answer.
  • @artbytravissmith
    I do not know if it is because I work as a digital artist and often work in 3D modelling viewports, but I did not find what he described difficult. Once I looked at an image of a hexagonal pinwheel I could instantly see it as an illusion of a cube from an orthographic perspective, one I had seen in 3D modelling viewports tens of thousands of times, especially when viewing cubes in wireframe. In my mind's eye, I then rotated a wireframe cube into the position of the illusion demonstrated by the hexagonal pinwheel. Am I missing something? It wasn't difficult. Maybe this is because artists need to train their mind's eye beyond what most humans have to, especially artists who value drawing/sculpting/painting from the mind's eye and understand the concept of 'drawing through' and the rules of perspective. Artists who learn sight-sizing might struggle more, as they are not trained to think in terms of volumes and instead focus on copying what they see by comparing positive and negative spaces.
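An aside on the geometry this comment describes: viewed orthographically straight down its body diagonal, a cube's outline really is a regular hexagon, with the two corners on the diagonal stacked at the centre. A minimal NumPy sketch (not from the episode, purely illustrative):

```python
# Illustrative only: project a cube's vertices orthographically along its body
# diagonal and confirm the outline is a regular hexagon (the "pinwheel" view).
import numpy as np
from itertools import product

verts = np.array(list(product([-1.0, 1.0], repeat=3)))   # the 8 cube corners

d = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)             # body diagonal (view axis)
u = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)            # one axis of the image plane
v = np.cross(d, u)                                        # the other image-plane axis

proj = np.stack([verts @ u, verts @ v], axis=1)           # orthographic projection

radii = np.linalg.norm(proj, axis=1).round(3)
angles = np.degrees(np.arctan2(proj[:, 1], proj[:, 0])).round(1)
for r, a in sorted(zip(radii, angles)):
    print(r, a)  # two corners at radius 0 (centre), six at equal radius, 60 degrees apart
```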
  • @davidfirth
    The ability to draw pretty pictures and the ability to imagine and create new things are completely separate art forms, and it's weird that we lump them all together.
  • Every system which can do trial and error can create new knowledge. A chess computer can do it. Even simple programs that check whether a number is prime can create new knowledge. LLMs can do it if they are not forced into a chatbot environment.
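On the primality example: trial division is the simplest "trial and error" procedure that can settle a fact which may never have been checked before. A minimal illustrative sketch in Python (not from the episode):

```python
# Trial division: "trial and error" that can settle a fact (the primality of a
# specific number) which may never have been checked by anyone before.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # a divisor found by trial: n is composite
        d += 1
    return True  # no divisor up to sqrt(n): n is prime

# An arbitrary candidate; the program's verdict was not encoded anywhere in advance.
print(1_000_003, is_prime(1_000_003))
```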
  • @user-yv6xw7ns3o
    I love this podcast. Another phenomenal episode. Very thoughtful and relevant.
  • @damianlewis7550
    Do GFlowNets now solve the intractability problem of the probability distribution in an EM?
  • @Gredias
    Fantastic episode! Though, every time a guest ever utters the words "large language models will never be able to...", you should probably ask for a concrete thing which, if the guest saw the LLM doing it, would cause them to change their mind... :) I can imagine LLMs playing charades fairly successfully in the future!
  • @zrebbesh
    I dunno. I'm in research, and TBH a lot of our "Profoundly Creative" ideas are in fact the kind of things that LLMs do when they get stuff wrong. 99% of the time of course it's crap, but that last bit sometimes results in a good idea. The talent of a "creative" researcher, at least IME, is mostly reinterpreting bad ideas, mistakes, and poor communication of abstract ideas, just trying to make sense of them, until we come up with something the person could possibly have intended to say instead, that might actually be right. And LLMs definitely get stuff wrong and then reinterpret. They just don't notice yet when that last "gaslighting attempt" (out of tens of thousands) is something that could really work.
  • @GodbornNoven
    Akin to asking why the earth is flat. The flaw is in the question itself: it makes a statement, a statement that is absolutely false.
  • @longline
    I'd say (re agency and charades) that you need the feedback between action and sampling, turning your head to see the source of a sound, to integrate multimodal data with the highest salience, at the lowest computational cost, for usable predictions (that match our typical usage of predictions). So GPT in control of its own robot is probably enough for charades with humans, and creativity, etc., if we're limiting the domain to "creative like us". Two GPTs that talk to each other and can choose their own googling will be able to play charades with each other... but we won't understand. And they'll be creative, but we won't understand their domain and their nuance. "You've done it wrong, that's just white noise," we'll say. Wolfram alluded to this, the concept spaces they've got that don't fit our ideas of valid outcomes. Like that.
  • @andersbodin1551
    What is he talking about? I can spin a cube in my head until its two diagonal corners align and it looks like a pinwheel.