So back in February I set up a local reading group for cognitive science. I’d wanted to learn more about the less… silicon… aspects of intelligence for a while, and figured getting together with a group of like-minded people was a good route to that. And I was surprised at how dead the existing meetups were, post-COVID. So I started one.

So far, so good. We've tended to gather a group of 4-5 each month, mostly face-to-face (we've had a couple of dial-ins). We alternate between books and papers, to give us more time for the former. The group is mostly the same each month, with maybe one or two new faces - we've not been great at retaining new arrivals. Conversations frequently drift back to large language models, a topic I try to avoid given my day job: it's hard to keep track of what's public and what's not when your entire working life is spent in this stuff. Karl Friston also shows up, metaphorically, most months; we need to spend some proper time with his work in 2024.

  • March: Being You: A New Science of Consciousness by Anil Seth. A physicalist (and, more importantly, Sussex prof) presents an agnostic view of consciousness, emotion, free will, and AI. There's a comforting confession that no one understands Friston on Free Energy, and I enjoyed both the explanation of Integrated Information Theory and the takedown of those free-will experiments showing that the brain's intention to act can be measured before the intention becomes conscious.
  • April: The neural architecture of language: Integrative modeling converges on predictive processing. The researchers pass sentences through language models (up to GPT-1) and through people (reading sentences while in an fMRI scanner). They found they could build predictors both ways, from the hidden layers of each to the other, suggesting strong correlations - especially for models trained on next-word prediction - which hints that it's a core capability and that there are parallels between how artificial neural networks and the brain do it. (A rough sketch of the encoding-model idea appears after this list.)
  • May: Mind in Motion: How Action Shapes Thought, by Barbara Tversky. I had trouble with this one, frequently finding myself more confused after reading the author’s explanations.
  • June: Building Machines That Learn and Think Like People. It dates from the pre-transformer era, so it's a bit dated now. Lots of desire for intuitive-physics and intuitive-psychology abilities, and an emphasis on compositionality as a core cognitive ability (which I can imagine, but which they present as necessary).
  • July: we took the month off, as I was in the UK.
  • August: The Experience Machine, another Sussex connection: Andy Clark's latest. He takes the predictive processing theories laid out in his previous works a bit further, talking explicitly about precision weighting - an attention mechanism - and connecting sensing and acting more overtly. (A toy sketch of precision weighting also follows the list.) A few chapters then explore the theory's implications elsewhere, with a focus on psychiatry, medicine, and broader societal issues.
  • September: Whatever next? Predictive brains, situated agents and the future of cognitive science. A 2013 paper from Andy Clark; interesting to have read it after the book, as I had a real sense of the core ideas being worked through here. One detail I enjoyed: his prediction (on p12) that each level of processing requires distinct units for representations and for error. This is exactly what they found in /energy-efficiency/.
  • October: Seven and a Half Lessons About the Brain, by Lisa Feldman Barrett. Her name had come up in past discussions and readings so many times that it felt necessary. I was a bit lost in this one, and wondered if something had been lost or added in translation between her academic work and this (more pop-science) book. She starts by taking down the triune (lizard/mammal/human) brain model, wanders through brains-as-networks and pruning/tuning (with a mother/daughter relationship presented as an example, which I found plausible but not convincing), then moves through social/cultural implications to the idea of brains creating reality.
  • November: Biological underpinnings for lifelong learning machines, a literature review of the ways biology might inspire and address issues with systems that need lifelong learning (i.e. those beyond today's common training/inference split). Broad and interesting - their reference to memories being maintained even while brain tissue is remodelled must rule out some mechanisms for storage? But their metaphor of forgetting as "memory locations being overwritten" seemed a bit too silicon-like… There's a theme of replay of training data throughout (random activations in the hippocampus used for rehearsal/generative replay), and working out what needs to be stored for replay seems important. (Again, a minimal sketch follows the list.)
  • December: Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain by Grace Lindsay. I was surprised how much I enjoyed this exploration of maths in biology: when I started it, it felt like another pop-neuroscience introduction, but I found a lot of material I'd not encountered elsewhere. It starts from the basics of neuron firing, zooms out to network assemblies, moves into the visual system, on to information encoding and the motor system, and finally into real-world interactions, Bayesian predictions, and reinforcement learning, ending with grand unified theories (Integrated Information Theory, Thousand Brains, and the like, of which the author is quite openly sceptical). I was pleased to see Pandemonium architectures called out - their messiness still feels both biologically plausible and underexplored, given our leap to the clean abstractions of matrix multiplication.
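
As an aside, here's roughly what the April paper's encoding-model approach looks like in practice: fit a linear map from a language model's hidden states to fMRI responses, and score it on held-out sentences. This is my own minimal sketch with made-up data and shapes, not the paper's actual pipeline - in the real study the hidden states would come from a model like GPT, and the voxel responses from a scanner.

```python
# A toy version of the encoding-model idea, with synthetic stand-in data:
# one hidden-state vector per sentence, and voxel responses that are a
# noisy linear function of them.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sentences, hidden_dim, n_voxels = 200, 768, 50

hidden_states = rng.normal(size=(n_sentences, hidden_dim))
voxel_responses = (hidden_states @ rng.normal(size=(hidden_dim, n_voxels)) * 0.1
                   + rng.normal(size=(n_sentences, n_voxels)))

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, voxel_responses, test_size=0.25, random_state=0)

# Cross-validated ridge regression: the predictor from model activations to
# brain activity. Swapping X and y gives the map in the other direction.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_train, y_train)

# Score: per-voxel correlation between predicted and observed responses
# on held-out sentences.
pred = encoder.predict(X_test)
scores = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out voxel correlation: {np.mean(scores):.2f}")
```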
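
Precision weighting, as Clark describes it, is easier to see in a toy update rule: a belief is nudged by prediction errors, with each error scaled by its precision (inverse variance), so "attending" to the senses just means up-weighting the sensory error. This is my own construction, not Clark's or Friston's formal treatment.

```python
# A toy precision-weighted update: the belief is pulled by two prediction
# errors, each scaled by its precision. High sensory precision means
# "attend to the input"; high prior precision means "trust expectations".
def update_belief(belief, observation, prior,
                  sensory_precision, prior_precision, lr=0.1):
    sensory_error = observation - belief  # mismatch with the senses
    prior_error = prior - belief          # drift away from the prior
    return belief + lr * (sensory_precision * sensory_error
                          + prior_precision * prior_error)

belief, prior, observation = 0.0, 0.0, 1.0
for _ in range(50):
    belief = update_belief(belief, observation, prior,
                           sensory_precision=1.0, prior_precision=0.1)
print(f"final belief: {belief:.2f}")  # ~0.91: pulled towards the observation
```

Raising prior_precision instead would hold the belief near zero - the same mechanism, just with attention allocated the other way.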
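
And the generative-replay idea from the November review, in miniature: when learning a new task, interleave fresh data with samples replayed from a generative model of past tasks, rather than storing the raw data. The Generator and train_task below are hypothetical stand-ins for the mechanisms the review surveys, not anything from the paper itself.

```python
# A miniature generative-replay loop: no raw data from earlier tasks is
# kept; a (very crude) generative model replays noisy variants of a few
# absorbed exemplars while the learner trains on a new task.
import random

class Generator:
    """Stand-in for a generative model of past experience."""
    def __init__(self):
        self.exemplars = []

    def absorb(self, batch):
        # Keep only a couple of exemplars per task, not the whole dataset.
        self.exemplars.extend(random.sample(batch, k=min(2, len(batch))))

    def replay(self, n):
        # "Hippocampal replay": regenerate noisy variants of old experience.
        return [x + random.gauss(0, 0.05)
                for x in random.choices(self.exemplars, k=n)]

def train_task(learner_update, new_data, generator, replay_ratio=0.5):
    # Interleave replayed pseudo-samples with the new task's data, so old
    # structure keeps being rehearsed while new structure is learned.
    n_replay = int(len(new_data) * replay_ratio)
    replayed = generator.replay(n_replay) if generator.exemplars else []
    mixed = new_data + replayed
    for x in random.sample(mixed, len(mixed)):  # shuffled pass over the mix
        learner_update(x)
    generator.absorb(new_data)  # so this task can be replayed later

# Usage: two sequential "tasks"; task A's raw data is gone by task B.
seen, gen = [], Generator()
train_task(seen.append, [0.0, 0.1, 0.2], gen)  # task A
train_task(seen.append, [5.0, 5.1, 5.2], gen)  # task B, with A replayed
print(sorted(round(x, 1) for x in seen))
```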

A fun year, I think. Towards the end I was definitely losing energy a bit (mostly thanks to the day job), but reading back, I'm happy with the ground we covered.