4 theories of consciousness for the age of accelerating AI
Is the new generation of AI conscious? Or at what point might it become conscious?
To answer this we need to have a clearly defined theory of consciousness. We will never ‘agree’ on the best theory or model, but if we have a set of contenders that are well articulated we can debate with some specificity.
Here are four of the models of consciousness most relevant to the advent of AI.
Integrated Information Theory
Integrated Information Theory (IIT) has been proposed by Giulio Tononi and is supported by leading AI researchers such as Max Tegmark.
IIT proposes that consciousness stems from the integration of information within a system: the degree of consciousness in a system is determined by the quantity and quality of interconnectedness between its components. IIT uses mathematical models and principles to quantify the level of consciousness in a value called “phi” (Φ), which represents the amount of integrated, functionally interdependent information present in a system. As such, Phi could in principle be calculated for AI systems.
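As a very rough illustration, here is a toy sketch of the intuition behind Φ: an integrated system carries predictive information as a whole that its parts do not carry separately. This is not Tononi's actual Φ, which minimizes over all partitions of a system and uses far more elaborate machinery; the two-node system and the measure itself are illustrative assumptions.

```python
import math
from itertools import product

def entropy(dist):
    """Shannon entropy (bits) of a {outcome: probability} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def mutual_information(joint):
    """I(X;Y) from a {(x, y): probability} joint distribution."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return entropy(px) + entropy(py) - entropy(joint)

# Toy system: two binary nodes, each copying the other's previous state.
def step(state):
    a, b = state
    return (b, a)

states = list(product([0, 1], repeat=2))
prior = 1.0 / len(states)

# Past-to-present information carried by the system as a whole.
whole_mi = mutual_information({(s, step(s)): prior for s in states})

# Past-to-present information carried by each node taken on its own.
def part_joint(idx):
    joint = {}
    for s in states:
        key = (s[idx], step(s)[idx])
        joint[key] = joint.get(key, 0.0) + prior
    return joint

parts_mi = sum(mutual_information(part_joint(i)) for i in range(2))

# Each node alone predicts nothing about its own next state (it depends
# entirely on the other node), so all the information is "integrated".
phi_toy = whole_mi - parts_mi
print(phi_toy)  # 2.0
```

In this cartoon, the whole system's dynamics carry 2 bits of information about its own past while each node alone carries 0, so the "integration" is maximal; real Φ computations are vastly harder and are one practical obstacle to applying IIT to large AI systems.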
IIT has received many critiques, notably from renowned computer scientist Scott Aaronson. Interestingly, however, he notes that:
In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed. Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only aspire to wrongness.
Global Workspace Theory
Global Workspace Theory (GWT), proposed by psychologist Bernard Baars, posits consciousness arises from the global broadcasting of information within the brain. It suggests that the brain consists of many specialized processes that operate in parallel. However, only a limited amount of information can be globally shared at any given time, forming a “global workspace” that is the basis for conscious experience.
In GWT, consciousness arises when information is globally broadcast across the brain’s neural networks, allowing the integration of information from multiple brain regions and the coordination of cognitive processes such as perception, attention, memory, and decision-making. This model implies that consciousness might emerge in AI systems built with a global workspace-like architecture.
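A crude architectural sketch of that idea: many specialist modules process input in parallel, bid for a limited-capacity workspace, and only the winning content is broadcast back to all of them. The module names and the word-overlap stand-in for salience are illustrative assumptions, not part of GWT itself.

```python
# Toy global workspace: parallel specialists compete; one result is
# globally broadcast. Names and salience scoring are illustrative.
class Module:
    def __init__(self, name, keywords):
        self.name = name
        self.keywords = set(keywords)
        self.received = []  # broadcasts this module has seen

    def propose(self, stimulus):
        """Bid for the workspace: (salience score, candidate message)."""
        salience = len(self.keywords & set(stimulus.split()))
        return salience, f"[{self.name}] {stimulus}"

    def receive(self, message):
        self.received.append(message)

def workspace_cycle(modules, stimulus):
    bids = [m.propose(stimulus) for m in modules]
    _, winner = max(bids, key=lambda bid: bid[0])
    for m in modules:  # global broadcast of the winning content
        m.receive(winner)
    return winner

modules = [
    Module("vision", {"red", "bright", "moving"}),
    Module("audition", {"loud", "siren"}),
    Module("memory", {"familiar"}),
]
broadcast = workspace_cycle(modules, "loud siren approaching")
print(broadcast)  # [audition] loud siren approaching
```

The bottleneck is the point: every module computes in parallel, but only one result at a time occupies the workspace and becomes globally available.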
Critiques of GWT point out that it doesn’t address the “hard problem” of consciousness, that it is not detailed enough to test and verify (to Aaronson’s point), and that it doesn’t explain the subjective, qualitative aspects of conscious experience.
Computational Theory of Mind
Computational Theory of Mind (CTM) proposes that the mind is an information-processing system, with mental states and processes essentially being computational. As such, the human brain is effectively a complex computer that processes inputs, manipulates symbols, and generates outputs based on implicit algorithms. CTM suggests that consciousness might emerge in AI systems as they develop more advanced computational abilities and replicate human-like cognitive processes.
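In cartoon form, CTM’s picture of a mental process is a rule-governed mapping from symbols to symbols. The rules, states, and percepts below are purely illustrative, not a claim about real cognition:

```python
# A minimal symbolic "production system" in the spirit of CTM: a mental
# process as rule-governed symbol manipulation.
rules = {
    ("hungry", "food_visible"): "eat",
    ("hungry", "no_food"): "search",
    ("sated", "food_visible"): "ignore",
}

def decide(internal_state, percept):
    """Map a (mental-state symbol, input symbol) pair to an output symbol."""
    return rules.get((internal_state, percept), "do_nothing")

print(decide("hungry", "food_visible"))  # eat
```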
CTM has shaped much of the development of AI through attempts to replicate the human cognitive processes that might lead to consciousness. AI has often modeled human cognition using approaches such as symbolic AI and connectionism.
There is a fundamental debate at play here, with many in AI eschewing this approach and creating distinct mechanisms for intelligence, which thus might not lead to the emergence of consciousness in the way suggested by CTM. Part of the reason is that the computational models implemented likely do not mirror to any significant degree the biological processes in the brain. Also, CTM deals directly with cognition without addressing conscious phenomena like emotions, motivation, and intuition.
Multiple Drafts Model and Intentional Stance
Philosopher Daniel Dennett is perhaps the most influential scholar on consciousness, with many of his ideas addressing the potential of consciousness arising in AI.
In particular his Multiple Drafts Model argues that there is no central location or single stream of conscious experience in the brain. Instead, consciousness arises from parallel and distributed processing of information. This model challenges the traditional Cartesian Theater view, which assumes that there is a single, unified location where consciousness occurs. In the context of AI, the Multiple Drafts Model suggests that consciousness may emerge in artificial systems as a result of parallel and distributed information processing, rather than requiring a single, centralized processor for conscious experience.
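One way to caricature the contrast with the Cartesian Theater in code: several interpretive processes each maintain their own evolving account of a stimulus, and a “report” is just whichever draft a probe happens to sample, with no master copy anywhere. Everything here is an illustrative assumption, not Dennett’s own formalism:

```python
import random

# Toy "multiple drafts": parallel processes each keep their own revisable
# account of a stimulus; there is no central, canonical copy.
def drafts_of(stimulus):
    return {
        "fast_guess": stimulus.upper(),
        "contextual": f"{stimulus} (in light of recent context)",
        "revised": f"{stimulus}, on reflection",
    }

def probe(drafts):
    """Sample one draft; different probes can yield different reports."""
    return drafts[random.choice(list(drafts))]

drafts = drafts_of("a flash of red")
print(probe(drafts))  # one of the three drafts, nondeterministically
```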
To complement this approach, Dennett has proposed Intentional Stance as a strategy for predicting and explaining the behavior of systems (including humans and AI) by attributing beliefs, desires, and intentions to them. In the context of AI, the intentional stance implies that treating artificial systems as if they have beliefs, desires, and intentions may be a practical approach for assessing and interacting with AI, regardless of whether they possess consciousness in the same way that humans do.
Disclaimer: these are sophisticated theories on which I am not an expert, and these are highly simplified summaries. I have used AI to assist in my research, but go to the links for more detail.
Consciousness and the future of AI