Will Mayner

consciousness · AI · scientific computing

Publications

Lead Author

Intrinsic Cause-Effect Power: The Tradeoff between Differentiation and Specification

Mayner, W. G. P., Marshall, W., Tononi, G.

[Preprint] 2025

Abstract

Integrated information theory (IIT) starts from the existence of consciousness and characterizes its essential properties: every experience is intrinsic, specific, unitary, definite, and structured. IIT then formulates existence and its essential properties operationally in terms of cause-effect power of a substrate of units. Here we address IIT's operational requirements for existence by considering that, to have cause-effect power, to have it intrinsically, and to have it specifically, substrate units in their actual state must both (i) ensure the intrinsic availability of a repertoire of cause-effect states, and (ii) increase the probability of a specific cause-effect state. We showed previously that requirement (ii) can be assessed by the intrinsic difference of a state's probability from maximal differentiation. Here we show that requirement (i) can be assessed by the intrinsic difference from maximal specification. These points and their consequences for integrated information are illustrated using simple systems of micro units. When applied to macro units and systems of macro units such as neural systems, a tradeoff between differentiation and specification is a necessary condition for intrinsic existence, i.e., for consciousness.

Intrinsic Meaning, Perception, and Matching

Mayner, W. G. P., Juel, B. E., Tononi, G.

[Preprint] 2024

Abstract

Integrated information theory (IIT) argues that the substrate of consciousness is a maximally irreducible complex of units. Together, subsets of the complex specify a cause-effect structure, composed of distinctions and their relations, which accounts in full for the quality of experience. The feeling of a specific experience is also its meaning for the subject, which is thus defined intrinsically, regardless of whether the experience occurs in a dream or is triggered by processes in the environment. Here we extend IIT's framework to characterize the relationship between intrinsic meaning, extrinsic stimuli, and causal processes in the environment, illustrated using a simple model of a sensory hierarchy. We argue that perception should be considered as a structured interpretation, where a stimulus from the environment acts merely as a trigger for the complex's state and the structure is provided by the complex's intrinsic connectivity. We also propose that perceptual differentiation—the richness and diversity of structures triggered by representative sequences of stimuli—quantifies the meaningfulness of different environments to a complex. In adaptive systems, this reflects the "matching" between intrinsic meanings and causal processes in an environment.

Integrated Information Theory (IIT) 4.0: Formulating the Properties of Phenomenal Existence in Physical Terms

Albantakis, L., Barbosa, L., Findlay, G., Grasso, M., Haun, A. M., Marshall, W., Mayner, W. G. P., Zaeemzadeh, A., Boly, M., Juel, B. E., Sasai, S., Fujii, K., David, I., Hendren, J., Lang, J. P., Tononi, G.

PLOS Computational Biology 19(10), e1011465, 2023 (co-lead author)

Abstract

This paper presents Integrated Information Theory (IIT) 4.0. IIT aims to account for the properties of experience in physical (operational) terms. It identifies the essential properties of experience (axioms), infers the necessary and sufficient properties that its substrate must satisfy (postulates), and expresses them in mathematical terms. In principle, the postulates can be applied to any system of units in a state to determine whether it is conscious, to what degree, and in what way. IIT offers a parsimonious explanation of empirical evidence, makes testable predictions concerning both the presence and the quality of experience, and permits inferences and extrapolations. IIT 4.0 incorporates several developments of the past ten years, including a more accurate formulation of the axioms as postulates and mathematical expressions, the introduction of a unique measure of intrinsic information that is consistent with the postulates, and an explicit assessment of causal relations. By fully unfolding a system’s irreducible cause–effect power, the distinctions and relations specified by a substrate can account for the quality of experience.

Measuring Stimulus-Evoked Neurophysiological Differentiation in Distinct Populations of Neurons in Mouse Visual Cortex

Mayner, W. G. P., Marshall, W., Billeh, Y. N., Gandhi, S. R., Caldejon, S., Cho, A., Griffin, F., Hancock, N., Lambert, S., Lee, E. K., Luviano, J. A., Mace, K., Nayan, C., Nguyen, T. V., North, K., Seid, S., Williford, A., Cirelli, C., Groblewski, P. A., Lecoq, J., Tononi, G., Koch, C., Arkhipov, A.

eNeuro 9(1), 2022

Abstract

Despite significant progress in understanding neural coding, it remains unclear how the coordinated activity of large populations of neurons relates to what an observer actually perceives. Since neurophysiological differences must underlie differences among percepts, differentiation analysis—quantifying distinct patterns of neurophysiological activity—has been proposed as an “inside-out” approach that addresses this question. This methodology contrasts with “outside-in” approaches such as feature tuning and decoding analyses, which are defined in terms of extrinsic experimental variables. Here, we used two-photon calcium imaging in mice of both sexes to systematically survey stimulus-evoked neurophysiological differentiation (ND) in excitatory neuronal populations in layers (L)2/3, L4, and L5 across five visual cortical areas (primary, lateromedial, anterolateral, posteromedial, and anteromedial) in response to naturalistic and phase-scrambled movie stimuli. We find that unscrambled stimuli evoke greater ND than scrambled stimuli specifically in L2/3 of the anterolateral and anteromedial areas, and that this effect is modulated by arousal state and locomotion. By contrast, decoding performance was far above chance and did not vary substantially across areas and layers. Differentiation also differed within the unscrambled stimulus set, suggesting that differentiation analysis may be used to probe the ethological relevance of individual stimuli.

PyPhi: A Toolbox for Integrated Information Theory

Mayner, W. G. P., Marshall, W., Albantakis, L., Findlay, G., Marchman, R., Tononi, G.

PLOS Computational Biology 14(7), e1006343, 2018

Abstract

Integrated information theory provides a mathematical framework to fully characterize the cause-effect structure of a physical system. Here, we introduce PyPhi, a Python software package that implements this framework for causal analysis and unfolds the full cause-effect structure of discrete dynamical systems of binary elements. The software allows users to easily study these structures, serves as an up-to-date reference implementation of the formalisms of integrated information theory, and has been applied in research on complexity, emergence, and certain biological questions. We first provide an overview of the main algorithm and demonstrate PyPhi’s functionality in the course of analyzing an example system, and then describe details of the algorithm’s design and implementation. PyPhi can be installed with Python’s package manager via the command ‘pip install pyphi’ on Linux and macOS systems equipped with Python 3.4 or higher. PyPhi is open-source and licensed under the GPLv3; the source code is hosted on GitHub at https://github.com/wmayner/pyphi. Comprehensive and continually updated documentation is available at https://pyphi.readthedocs.io. The pyphi-users mailing list can be joined at https://groups.google.com/forum/#!forum/pyphi-users. A web-based graphical interface to the software is available at http://integratedinformationtheory.org/calculate.html.

Co-Author

Consciousness or Pseudo-Consciousness? A Clash of Two Paradigms

Tononi, G., Albantakis, L., Barbosa, L., Boly, M., Cirelli, C., Comolatti, R., Ellia, F., Findlay, G., Casali, A. G., Grasso, M., Haun, A. M., Hendren, J., Hoel, E., Koch, C., Maier, A., Marshall, W., Massimini, M., Mayner, W. G., Oizumi, M., Szczotka, J., Tsuchiya, N., Zaeemzadeh, A.

Nature Neuroscience 28(4), 694–702, 2025

Abstract

Integrated information theory (IIT) starts from consciousness, which is subjective, and accounts for its presence and quality in objective, testable terms. Attempts to label as ‘pseudoscientific’ a theory distinguished by decades of conceptual, mathematical, and empirical developments expose a crisis in the dominant computational-functionalist paradigm, which is challenged by IIT’s consciousness-first paradigm.

Dissociating Artificial Intelligence from Artificial Consciousness

Findlay, G., Marshall, W., Albantakis, L., David, I., Mayner, W. G., Koch, C., Tononi, G.

[Preprint] 2025

Abstract

Developments in machine learning and computing power suggest that artificial general intelligence is within reach. This raises the question of artificial consciousness: if a computer were to be functionally equivalent to a human, being able to do all we do, would it experience sights, sounds, and thoughts, as we do when we are conscious? Answering this question in a principled manner can only be done on the basis of a theory of consciousness that is grounded in phenomenology and that states the necessary and sufficient conditions for any system, evolved or engineered, to support subjective experience. Here we employ Integrated Information Theory (IIT), which provides principled tools to determine whether a system is conscious, to what degree, and the content of its experience. We consider pairs of systems constituted of simple Boolean units, one of which—a basic stored-program computer—simulates the other with full functional equivalence. By applying the principles of IIT, we demonstrate that (i) two systems can be functionally equivalent without being phenomenally equivalent, and (ii) this conclusion is not dependent on the simulated system's function. We further demonstrate that, according to IIT, it is possible for a digital computer to simulate our behavior, possibly even by simulating the neurons in our brain, without replicating our experience. This contrasts sharply with computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness.

Sleep and Wake in a Model of the Thalamocortical System with Martinotti Cells

Bugnon, T., Mayner, W. G. P., Cirelli, C., Tononi, G.

European Journal of Neuroscience 59(4), 703–736, 2024

Abstract

The mechanisms leading to the alternation between active (UP) and silent (DOWN) states during sleep slow waves (SWs) remain poorly understood. Previous models have explained the transition to the DOWN state by a progressive failure of excitation because of the build-up of adaptation currents or synaptic depression. However, these models are at odds with recent studies suggesting a role for presynaptic inhibition by Martinotti cells (MaCs) in generating SWs. Here, we update a classical large-scale model of sleep SWs to include MaCs and propose a different mechanism for the generation of SWs. In the wake mode, the network exhibits irregular and selective activity with low firing rates (FRs). Following an increase in the strength of background inputs and a modulation of synaptic strength and potassium leak potential mimicking the reduced effect of acetylcholine during sleep, the network enters a sleep-like regime in which local increases of network activity trigger bursts of MaC activity, resulting in strong disfacilitation of the local network via presynaptic GABAB1a-type inhibition. This model replicates findings on slow wave activity (SWA) during sleep that challenge previous models, including low and skewed FRs that are comparable between the wake and sleep modes, higher synchrony of transitions to DOWN states than to UP states, the possibility of triggering SWs by optogenetic stimulation of MaCs, and the local dependence of SWA on synaptic strength. Overall, this work points to a role for presynaptic inhibition by MaCs in the generation of DOWN states during sleep.

A Survey of Neurophysiological Differentiation across Mouse Visual Brain Areas and Timescales

Gandhi, S. R., Mayner, W. G. P., Marshall, W., Billeh, Y. N., Bennett, C., Gale, S. D., Mochizuki, C., Siegle, J. H., Olsen, S., Tononi, G., Koch, C., Arkhipov, A.

Frontiers in Computational Neuroscience 17, 2023

Abstract

Neurophysiological differentiation (ND), a measure of the number of distinct activity states that a neural population visits over a time interval, has been used as a correlate of meaningfulness or subjective perception of visual stimuli. ND has largely been studied in non-invasive human whole-brain recordings where spatial resolution is limited. However, it is likely that perception is supported by discrete neuronal populations rather than the whole brain. Therefore, here we use Neuropixels recordings from the mouse brain to characterize the ND metric across a wide range of temporal scales, within neural populations recorded at single-cell resolution in localized regions. Using the spiking activity of thousands of simultaneously recorded neurons spanning 6 visual cortical areas and the visual thalamus, we show that the ND of stimulus-evoked activity of the entire visual cortex is higher for naturalistic stimuli relative to artificial ones. This finding holds in most individual areas throughout the visual hierarchy. Moreover, for animals performing an image change detection task, ND of the entire visual cortex (though not individual areas) is higher for successful detection compared to failed trials, consistent with the assumed perception of the stimulus. Together, these results suggest that ND computed on cellular-level neural recordings is a useful tool highlighting cell populations that may be involved in subjective perception.

System Integrated Information

Marshall, W., Grasso, M., Mayner, W. G. P., Zaeemzadeh, A., Barbosa, L. S., Chastain, E., Findlay, G., Sasai, S., Albantakis, L., Tononi, G.

Entropy 25(2), 334, 2023

Abstract

Integrated information theory (IIT) starts from consciousness itself and identifies a set of properties (axioms) that are true of every conceivable experience. The axioms are translated into a set of postulates about the substrate of consciousness (called a complex), which are then used to formulate a mathematical framework for assessing both the quality and quantity of experience. The explanatory identity proposed by IIT is that an experience is identical to the cause–effect structure unfolded from a maximally irreducible substrate (a Φ-structure). In this work we introduce a definition for the integrated information of a system (φs) that is based on the existence, intrinsicality, information, and integration postulates of IIT. We explore how notions of determinism, degeneracy, and fault lines in the connectivity impact system-integrated information. We then demonstrate how the proposed measure identifies complexes as systems, the φs of which is greater than the φs of any overlapping candidate systems.

IIT, Half Masked and Half Disfigured

Tononi, G., Boly, M., Grasso, M., Hendren, J., Juel, B. E., Mayner, W. G., Marshall, W., Koch, C.

Behavioral and Brain Sciences 45(e60), 1–19, 2022

Abstract

The target article misrepresents the foundations of integrated information theory (IIT) and ignores many essential publications. It thus falls to this lead commentary to outline the axioms and postulates of IIT and correct major misconceptions. The commentary also explains why IIT starts from phenomenology and why it predicts that only select physical substrates can support consciousness. Finally, it highlights that IIT's account of experience – a cause–effect structure quantified by integrated information – has nothing to do with “information transfer.”

Computing Integrated Information (Φ) in Discrete Dynamical Systems with Multi-Valued Elements

Gomez, J. D., Mayner, W. G. P., Beheler-Amass, M., Tononi, G., Albantakis, L.

Entropy 23(1), 6, 2021

Abstract

Integrated information theory (IIT) provides a mathematical framework to characterize the cause-effect structure of a physical system and its amount of integrated information (Φ). An accompanying Python software package (“PyPhi”) was recently introduced to implement this framework for the causal analysis of discrete dynamical systems of binary elements. Here, we present an update to PyPhi that extends its applicability to systems constituted of discrete, but multi-valued elements. This allows us to analyze and compare general causal properties of random networks made up of binary, ternary, quaternary, and mixed nodes. Moreover, we apply the developed tools for causal analysis to a simple non-binary regulatory network model (p53-Mdm2) and discuss commonly used binarization methods in light of their capacity to preserve the causal structure of the original system with multi-valued elements.

Dissociating Intelligence from Consciousness in Artificial Systems – Implications of Integrated Information Theory

Findlay, G., Marshall, W., Albantakis, L., Mayner, W. G. P., Koch, C., Tononi, G.

Proceedings of the 2019 Towards Conscious AI Systems Symposium, AAAI SSS-19, 2019

Abstract

Recent years have seen dramatic advancements in artificial intelligence (AI). How we interact with AI systems will depend on whether we think they are conscious entities with the ability to experience, for example, pain and pleasure. We address this question within the framework of integrated information theory (IIT), a general, quantitative theory of consciousness that allows extrapolations to non-biological systems. We demonstrate (manuscript submitted elsewhere) an important implication of IIT: that computer systems with traditional hardware architectures would not share our experiences, even if they were to replicate our cognitive functions or simulate our brains in ultra-fine detail.