The Other Mind: How Language Models Exhibit Human Temporal Cognition

Lingyu Li 1, 2 , Yang Yao 3 , Yixu Wang 1 , Chunbo Li 2 , Yan Teng 1 , Yingchun Wang 1 ,
1 Shanghai Artificial Intelligence Laboratory 2 Shanghai Jiao Tong University 3 The University of Hong Kong

Abstract

As Large Language Models (LLMs) continue to advance, they exhibit certain cognitive patterns similar to those of humans that are not directly specified in training data. This study investigates this phenomenon by focusing on temporal cognition in LLMs. Leveraging the similarity judgment task, we find that larger models spontaneously establish a subjective temporal reference point and adhere to the Weber-Fechner law, whereby perceived distance is logarithmically compressed as years recede from this reference point. To uncover the mechanisms behind this behavior, we conduct multiple analyses at the neuronal, representational, and informational levels. We first identify a set of temporal-preferential neurons and find that this group exhibits minimal activation at the subjective reference point and implements a logarithmic coding scheme convergently found in biological systems. Probing representations of years reveals a hierarchical construction process, in which years evolve from basic numerical values in shallow layers to abstract temporal orientations in deep layers. Finally, using pre-trained embedding models, we find that the training corpus itself possesses an inherent, non-linear temporal structure, which provides the raw material for the model's internal construction. In the discussion, we propose an experientialist perspective for understanding these findings, in which an LLM's cognition is viewed as a subjective construction of the external world by its internal representational system. This nuanced perspective implies the potential emergence of alien cognitive frameworks that humans cannot intuitively predict, pointing toward a direction for AI alignment that focuses on guiding internal constructions.

What Did We Find?

Utilizing the similarity judgment task, a paradigm from cognitive science, we found that larger models not only spontaneously establish a subjective temporal reference point but also perceive temporal distance in accordance with the Weber-Fechner law, a psychophysical law observed in the human brain.
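The Weber-Fechner pattern can be made concrete with a small sketch. The snippet below is purely illustrative (not the paper's code): it assumes a subjective reference year and a scaling constant, and shows how equal objective steps away from the reference shrink in perceived distance as they recede.

```python
import math

# Hypothetical illustration of the Weber-Fechner law: perceived distance
# grows with the logarithm of objective distance from a reference point.
REFERENCE_YEAR = 2024  # assumed subjective reference point (illustrative)
K = 1.0                # assumed scaling constant (illustrative)

def perceived_distance(year, ref=REFERENCE_YEAR, k=K):
    """Logarithmically compressed distance between a year and the reference."""
    return k * math.log1p(abs(year - ref))

def similarity(year, ref=REFERENCE_YEAR):
    """Map perceived distance to a similarity score in (0, 1]."""
    return 1.0 / (1.0 + perceived_distance(year, ref))

# Three equal 500-year steps: each step adds less perceived distance
# than the one before, i.e. the distant past is compressed.
for year in (1524, 1024, 524):
    print(year, round(perceived_distance(year), 3))
```

Under this toy model, the step from 1524 to 1024 adds far less perceived distance than the step from 2024 to 1524, which is the qualitative signature the similarity judgments exhibit.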

How Does It Happen?

We investigated the underlying mechanisms of this human-like cognitive pattern in LLMs through a multi-level analysis at the neuronal, representational, and informational levels.
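At the representational level, the analysis relies on probing: fitting a simple readout that decodes the year from a layer's activations. The sketch below is a minimal, self-contained stand-in (not the paper's code): it simulates a 1-D activation that is assumed to encode the year linearly plus noise, fits a closed-form least-squares probe, and reports how well the year is recovered.

```python
import random

# Hypothetical probing sketch: decode the year from a simulated activation.
random.seed(0)
years = list(range(1900, 2020, 5))
# Assumed toy encoding: activation = 0.01 * (year - 1960) + Gaussian noise.
acts = [0.01 * (y - 1960) + random.gauss(0, 0.02) for y in years]

# Closed-form simple linear regression: year ≈ w * activation + b
n = len(years)
mean_a = sum(acts) / n
mean_y = sum(years) / n
w = sum((a - mean_a) * (y - mean_y) for a, y in zip(acts, years)) / \
    sum((a - mean_a) ** 2 for a in acts)
b = mean_y - w * mean_a

decoded = [w * a + b for a in acts]
r2 = 1 - sum((y - d) ** 2 for y, d in zip(years, decoded)) / \
        sum((y - mean_y) ** 2 for y in years)
print(f"probe R^2 = {r2:.3f}")  # high R^2 → year is linearly decodable
```

In the actual analysis the activations come from a model's hidden layers rather than a simulation; comparing probe quality across layers is what reveals the hierarchical construction from numerical values to abstract temporal orientation.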

How To Understand These Results?

Our work establishes an experientialist perspective to understand these findings. We propose that Large Language Models (LLMs) do not merely reorganize training data, but actively construct a subjective model of the world from their informational 'experience'. This viewpoint helps us move beyond seeing LLMs as either simple statistical engines or human-like minds. While they exhibit human-like cognitive patterns, they possess fundamentally different architectures and learn from a static, disembodied world of text. Therefore, the most significant risk may not be that LLMs become too human, but that they develop powerful yet alien cognitive frameworks we cannot intuitively anticipate.

This perspective has profound implications for AI alignment. Traditional approaches focus on controlling a model's external behavior, but the experientialist view suggests that robust alignment requires engaging directly with the formative process by which a model builds its internal world. The goal must shift from simply trying to make AI safe through external constraints to making safe AI from the ground up—systems whose emergent cognitive patterns are inherently aligned with human values. This calls for multi-level efforts, from monitoring a model's internal representations to carefully curating the environments it learns from.

Cite Our Work 🖤

BibTeX Code Here

Personally Recommended Readings

  1. What is a Number, That a Large Language Model May Know It?
    Raja Marjieh, Veniamin Veselovsky, Thomas L. Griffiths, Ilia Sucholutsky (2025)
  2. Metaphors We Live By
    George Lakoff, Mark Johnson (1980)
  3. Being There: Putting Brain, Body, and World Together Again
    Andy Clark (1997)
  4. Solaris
    Stanislaw Lem (1961)