Rethinking computational approaches to the mind

Fundamental challenges and future perspectives

One-day Online Symposium

21st October 2022


FIND RECORDINGS ON YOUTUBE HERE


Schedule - Speakers - Organizers

This one-day online event brought together researchers with expertise in various areas such as complexity science, machine learning & artificial intelligence, information theory & data science, as well as computational/theoretical neuroscience & philosophy to explore different computational approaches in the study of the “mind” (in brains and/or machines):

  1. What are those approaches essentially about?
  2. What are major benefits & caveats?
  3. Do different approaches speak to, complement, or contradict each other?
  4. What are the current challenges in computational approaches to understand the mind, and what could bring progress?

The event was organized within the “Sensation and Perception to Awareness: Leverhulme Doctoral Scholarship Programme” at the University of Sussex (find more info here), and comprised a set of talks followed by a panel discussion (see schedule).

If you have any questions regarding the symposium, please feel free to write to us at: computational.mind@proton.me

All attendees and speakers at the symposium were required to agree with the code of conduct.

All talks, including the panel discussion, are available on YouTube HERE. Click “Slides” below each speaker’s talk title to view and download the presentation slides.

Schedule

TIME (UTC+1) Speaker Talk Title
2:00-2:30pm Joseph Lizier Enabling tools to model information processing in complex systems
2:30-3:00pm Romain Brette Computation in the Brain
3:00-3:30pm Gaël Varoquaux Model predictions advance science more than modeling ingredients
3:30-4:00pm BREAK
4:00-4:30pm Konrad Kording Causality in the Brain
4:30-5:00pm Jessica Flack Collective Computation in Nature
5:00-5:30pm Melanie Mitchell Why AI is Harder Than We Think
5:30-5:45pm BREAK
5:45-6:45pm PANEL DISCUSSION

Speakers

Joseph Lizier (He/Him)


Talk title: Enabling tools to model information processing in complex systems

Slides

Abstract: The space-time dynamics of interactions in complex systems are often described using terminology of information processing, or distributed computation, in particular with reference to information being stored, transferred and modified in these systems. In this talk, I will introduce an information-theoretic framework -- information dynamics -- that we use to model each of these operations on information within a complex system, and their dynamics in space and time. Not only does this framework quantitatively align with natural qualitative descriptions of information processing in neural and other systems, it provides multiple complementary perspectives on how, where and why a system is exhibiting complexity. Specifically, I will describe tools we have produced to enable quantitative analysis of such information processing, including both theoretical advances (such as how to measure information flows between spike trains) and software toolkits (including JIDT and IDTxl). I will focus specifically on the interaction between theory, enabling tools and applications, and how we have addressed methodological challenges at their intersection.
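The information dynamics measures mentioned above are implemented in the JIDT and IDTxl toolkits. As a rough illustration of the kind of quantity such tools compute — and emphatically a sketch of the idea, not those toolkits’ actual API — here is a minimal plug-in estimate of transfer entropy (history length 1) between two binary time series, in plain Python:

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target):
    """Plug-in estimate (in bits) of transfer entropy source -> target,
    with history length 1, for two binary time series."""
    # Triples (y_t, y_prev, x_prev) observed along the series.
    triples = list(zip(target[1:], target[:-1], source[:-1]))
    n = len(triples)
    c_yyx = Counter(triples)                            # (y_t, y_prev, x_prev)
    c_yx = Counter((yp, xp) for _, yp, xp in triples)   # (y_prev, x_prev)
    c_yy = Counter((y, yp) for y, yp, _ in triples)     # (y_t, y_prev)
    c_y = Counter(yp for _, yp, _ in triples)           # y_prev
    te = 0.0
    for (y, yp, xp), c in c_yyx.items():
        # log ratio of p(y_t | y_prev, x_prev) to p(y_t | y_prev),
        # written purely in terms of counts
        te += (c / n) * np.log2(c * c_y[yp] / (c_yy[(y, yp)] * c_yx[(yp, xp)]))
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 20000)

y_coupled = np.roll(x, 1)            # y copies x with a one-step lag
y_indep = rng.integers(0, 2, 20000)  # y ignores x entirely

te_coupled = transfer_entropy(x, y_coupled)  # close to 1 bit
te_indep = transfer_entropy(x, y_indep)      # close to 0 bits
```

The coupled pair shows roughly one bit of information transferred per step, while the independent pair shows close to zero — the basic signature these toolkits quantify (with far more careful estimators) in neural and other complex systems.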

Bio
Associate Professor Joseph Lizier (PhD, 2010) is a member of the School of Computer Science in the Faculty of Engineering at The University of Sydney (since 2015). His research studies the dynamics of information processing in biological and bio-inspired complex systems and networks, focussing on fundamental theoretical advances as well as applications to neural systems and collective animal behaviour. He is a lead developer of the JIDT and IDTxl toolboxes, which use information theory to characterise the dynamics of information flows and the effective structure of complex systems time series. A/Prof. Lizier teaches in the University's Master of Complex Systems degree and is a deputy director of the Centre for Complex Systems. He is a member of the editorial boards of Entropy, Complexity, Theory in Biosciences, Complex Systems, and Frontiers in Robotics and AI. He has held postdoctoral positions at CSIRO and the Max Planck Institute in Leipzig, and worked in the telecommunications industry for 10 years, including at Seeker Wireless and Telstra Research Laboratories.

Personal Website - Twitter - Google Scholar - GitHub


Romain Brette (He/Him)


Talk title: Computation in the brain

Slides

Abstract: It is often taken for granted that brains compute over neural representations. Traditionally, this means that properties of neural activity play the role of variables in correspondence with properties of things in the world, while brain processes play the role of algorithms that manipulate those variables. This claim is based on implicit assumptions, which require closer examination. The first one is that all behavior is computational. To the extent that “computational” is meaningful, this is false. The second one is neural reductionism: the idea that alleged psychological units (such as the percept of a face) must correspond to neurophysiological units (activity of specific neurons or groups of neurons), rather than to a mode of activity of the brain. The third one is that neural representations correspond to neurophysiological states, which can then be governed by computational processes. But neural representations are not brain states: they are experimental measurements of firing rates over an extended period after a presented stimulus; that is, they are already properties of brain processes. This confusion undermines the coherence of neurocomputationalism.

Bio
Romain Brette is a theoretical neuroscientist in Paris who has worked in cellular biophysics and systems neuroscience, specifically on modelling auditory perception. This work has led him to reflect on the foundational concepts of neural modeling and computational neuroscience, such as neural codes, computation and information.

Personal Website - Twitter - Google Scholar - GitHub


Gaël Varoquaux (He/Him)


Talk title: Model predictions advance science more than modeling ingredients

Slides

Abstract: Brain sciences are in a difficult position: there is a profusion of theories, an increasing amount of data on brain and behavior, and yet no emerging framework to link them. One roadblock to how we integrate empirical evidence in scientific theories is that we fit models to data, assuming that these models are correct, and then reason on their ingredients. This methodology can easily lead to circular reasoning. It also encourages idealized experiments on well-controlled instantiations of theoretical constructs, such as mental processes, leading to models with little external validity: no ability to conclude on observations outside the given experimental paradigm. I believe that, for the time being, we need to put less focus on idealized models and strong claims on mental constructs, their validity and organization. Rather, we should focus on models that generalize across many experimental settings, and criticize models more on their predictions than on their ingredients. This agenda has been growing with the use of machine-learning in neuroscience and can lead to more robust empirical evidence. The way forward may lie more on direct fits to behaviour rather than dissociated mental categories, though it requires putting aside short-term promises of a tidy and aesthetic model of brain function.
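The contrast between judging models on their ingredients versus on their predictions can be made concrete with a toy example (plain NumPy, not any specific neuroscience pipeline): a many-parameter polynomial “theory” flatters the data it was fit to, while held-out prediction error exposes its lack of external validity.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 45)
y = 2.0 * x + rng.normal(0.0, 0.5, 45)  # the true process is just linear + noise

train, test = slice(0, 15), slice(15, 45)

def mses(degree):
    """Fit a polynomial 'theory' on the training set, then score it twice:
    on the data it was fit to, and on held-out data."""
    coefs = np.polyfit(x[train], y[train], degree)
    in_sample = np.mean((y[train] - np.polyval(coefs, x[train])) ** 2)
    out_of_sample = np.mean((y[test] - np.polyval(coefs, x[test])) ** 2)
    return in_sample, out_of_sample

simple = mses(1)     # few ingredients: an honest, generalizing fit
flexible = mses(10)  # many ingredients: flatters the training data
```

The flexible model always wins on in-sample fit (reasoning on its fitted ingredients would look impressive), yet the simple model wins on out-of-sample prediction — the criterion the abstract argues should carry the weight.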

Bio
Gaël Varoquaux is a Research Director at Inria (National Institute for Research in Digital Science and Technology, Paris), where he leads the Soda team and the scikit-learn operations at the Inria Foundation. He is a core contributor to scientific computing in Python (scikit-learn, nilearn, and dirty-cat, amongst others). His research focuses on the use of machine learning in the public and mental health sector, and on its potential for understanding cognition and brain activity.

Personal Website - Twitter - Google Scholar - GitHub


Konrad Kording (He/Him)


Talk title: Causality in the brain

Slides

Abstract: Konrad Kording will introduce the conceptual role that causation plays in most neuroscientists' ways of thinking about the brain – both in the context of medical applications and in the context of basic research. He will contrast this conceptual role with the limits on scientists' ability to get at causality experimentally. Perturbation, the one method that reliably establishes causality, does not scale to high dimensions, which makes understanding a complex system with many interacting parts highly problematic. He will end with an overview of emerging approaches, in particular connectomics, that may change the current status quo in neural causality research.
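The core point — that perturbation, unlike observation, cuts confounding paths — can be sketched in a toy simulation (an illustrative, assumption-laden example, not an analysis from the talk): a hidden confounder makes an observed signal look strongly predictive of behaviour even though it has no causal effect at all, and only randomized perturbation reveals this.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Observational regime: a hidden confounder h drives both the recorded
# signal x and the behaviour y; x itself has NO causal effect on y.
h = rng.normal(size=n)
x_obs = h + 0.3 * rng.normal(size=n)
y_obs = 2.0 * h + 0.3 * rng.normal(size=n)

# Perturbation regime: x is set at random (a crude "do" operation),
# severing the path through h.
x_do = rng.normal(size=n)
y_do = 2.0 * h + 0.3 * rng.normal(size=n)

slope_obs = slope(x_obs, y_obs)  # strongly non-zero, yet purely spurious
slope_do = slope(x_do, y_do)     # near zero: the true causal effect
```

Doing this for one variable is easy; the scaling problem the abstract raises is that a brain-sized system would require such randomized perturbations across an enormous number of interacting dimensions at once.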

Bio
Dr. Kording tries to understand how the world, and in particular the brain, works using data. Early research in his lab focused on computational neuroscience, in particular movement. As those approaches matured, the focus shifted to discovering how new data sources and emerging data-analysis methods can enable new possibilities. The lab's current focus is causality in data-science applications – how do we know how things work if we cannot randomize? – but it is also excited about understanding how the brain does credit assignment. Its working style is transdisciplinary, with collaboration on virtually every project.

Personal Website - Twitter - Google Scholar - GitHub


Jessica Flack (She/Her)


Talk title: Collective Computation in Nature

Abstract: Jessica Flack will discuss how error and subjectivity in information processing and the universal collective property of biological systems shape the foundations of computation and micro-macro relations in nature. Flack will introduce three concepts she and her collaborators have been developing: collective coarse-graining as a downward causation mechanism, hourglass emergence, and the information theoretic concept of channel switching.
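One generic, much-simplified illustration of collective coarse-graining — a sketch of the general idea, not of Flack's specific models — is a majority vote over noisy micro-level reads: the macroscopic variable computed by the collective becomes increasingly robust to individual error and subjectivity as more processors are pooled.

```python
import numpy as np

rng = np.random.default_rng(3)

def macro_error(n_agents, p_correct=0.6, trials=5000):
    """Each of n_agents noisy processors reads a binary environmental state
    correctly with probability p_correct; the macroscopic variable is their
    majority vote — a collective coarse-graining of the micro-level reads."""
    correct = rng.random((trials, n_agents)) < p_correct
    majority_correct = correct.sum(axis=1) > n_agents / 2
    return 1.0 - majority_correct.mean()

# Error rate of the macro variable falls as the coarse-graining
# pools more noisy micro-reads.
errors = {n: macro_error(n) for n in (1, 11, 101)}
```

A single barely-better-than-chance processor fails about 40% of the time, while the pooled macro variable of 101 such processors almost never does — a toy version of how noisy information processors can collectively compute robust, ordered macroscopic states.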

Bio
Jessica Flack is a professor at the Santa Fe Institute, Director of SFI's Collective Computation Group, and a Chief Editor at Collective Intelligence. Flack considers herself a computational Platonist who is interested in the foundations of computation in nature, the origins of biological space and timescales, and micro-macro maps in information processing systems. She works across levels of biological organization, from cells to populations of neurons to animal societies to markets. The goal of Flack’s research is to discover the computational principles that allow nature to overcome subjectivity due to noisy information processing, reduce uncertainty, and compute robust, ordered states. Central ideas include 1) complexity begets complexity (it’s complexity all the way down), 2) noisy information processors compute their macroscopic worlds through collective coarse-graining in evolutionary and/or learning time, 3) collective coarse-graining gives rise to effective downward causation and facilitates the consolidation of new organizational scales, and 4) emergence is an hourglass process in which compression of microscale complexity produces an information bottleneck and subsequent macroscopic expansion. This work draws on evolutionary theory, cognitive neuroscience and behavior, statistical mechanics, information theory, dynamical systems, and theoretical computer science for concepts and methods, and also involves re-thinking first-principles reasoning for information processing systems.

Personal Website - Twitter


Melanie Mitchell (She/Her)


Talk title: Why AI is Harder Than We Think

Slides

Abstract: Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI Spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI Winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this talk I will discuss some fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I will also speculate on what kinds of new ideas and new science will be needed for the grand challenge of making AI systems more robust, general, and adaptable—in short, more intelligent.

Bio
Melanie Mitchell is the Davis Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. Her latest book is Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux).

Personal Website - Twitter - Google Scholar


Organizers


The symposium was organized within the ‘Sensation and Perception to Awareness: Leverhulme Doctoral Scholarship Programme’ directed by Jamie Ward and Anil Seth at the University of Sussex. The programme brings together researchers from across neuroscience, philosophy, psychology, robotics, and the arts, with the aim of advancing our understanding of interactions between sensation, perception, and awareness in humans, animals, and machines. As part of the programme, doctoral researchers are encouraged to be involved in seminar and conference organisation. For this symposium, the organizers are:

Tomasz Korbak (He/Him)

In his doctoral project, Tomek is working on deep reinforcement learning and generative models with Chris Buckley and Anil Seth. He focuses on probabilistic approaches to control, such as active inference and control-as-inference, and controllable generative modelling.

Personal Website - Twitter - Google Scholar - Github - LinkedIn


Federico Micheli (He/Him)

Federico is a PhD student at the University of Sussex, working under the supervision of Dr. Peter Lush, Prof. Warrick Roseboom and Prof. Anil Seth. His interests span consciousness science broadly. In his project, he’s working on the cognitive penetrability of perception thesis, using insights from experimental hypnosis research to answer outstanding questions on the origin and nature of perceptual illusions.

Twitter


Nadine Spychala (She/Her)

Nadine is a doctoral researcher in computational neuroscience and complex systems, validating information-theoretic measures of complexity and emergence in both simulated and empirical data. Her work mixes mathematics, machine learning, and neuroscience, as well as philosophy. She cares about open and reproducible research (and, in this context, good research software) that is aligned with an ethical research culture and incentives.

Personal Website - Twitter - Google Scholar - GitHub - LinkedIn