Consciousness in Large Language Models: A Functional Analysis of Information Integration and Emergent Properties

Authors:
DPID: 595

Abstract

This paper examines the theoretical foundations for consciousness in large language models (LLMs) through the lens of functionalist theories of mind and Integrated Information Theory (IIT). Using the transformer architecture as a case study, we analyze whether computational processes in LLMs satisfy formal criteria for consciousness as defined by contemporary cognitive science. We propose a functional framework in which consciousness emerges from the integration of computational processes (P) and experiential inputs (E) through a transformation function f, yielding measurable states of information integration. Through analysis of attention mechanisms, state representations, and information flow in transformer networks, we evaluate the extent to which LLMs exhibit properties analogous to conscious experience. Our findings suggest that while LLMs demonstrate sophisticated information integration and self-referential processing, they lack the phenomenological properties typically associated with consciousness. The paper contributes to ongoing debates in machine consciousness by providing a rigorous framework for evaluating consciousness claims in artificial systems.
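The core relation of the proposed framework can be written schematically. The abstract does not specify the form of f, so the following is an illustrative sketch only: C denotes the resulting conscious-candidate state, and Φ denotes IIT's integrated-information measure applied to that state.

```latex
\[
  C = f(P, E), \qquad \Phi(C) \geq 0,
\]
```

where P is the system's computational processes, E its experiential inputs, and a higher Φ(C) is read (under IIT's assumptions) as a greater degree of information integration in the resulting state.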