Toward a Neurocognitive and Quantum-Compatible Hyperspace for Natural Language

Toward an Integrated Linguistic Framework Combining Neurocognitive Models and Quantum Knowledge: Transparent Representation of Linguistic Meaning, Distinguishing Reality from Counterfactuals, and Enabling Interpretable Text Processing Aligned with the Brain’s Handling of Time, Space, and Meaning.

Feb 3, 2026
Nizar El Imrani

Researcher in Neurolinguistics and Cognitive Linguistics

Over the past decade, Quantum Natural Language Processing (QNLP) has emerged as a promising interdisciplinary field at the intersection of linguistics, quantum physics, and computer science. Pioneering work by Coecke and collaborators has argued that natural language meaning exhibits structural properties that align naturally with quantum formalisms, particularly Hilbert spaces and tensor-based composition. In parallel with this line of research, the present work has developed an independent yet convergent framework: a neurocognitively motivated hyperspace designed to formally represent linguistic meaning. This article summarizes that framework, emphasizing its compatibility with QNLP while preserving its original motivation: to model language in a way that reflects how the brain processes space, time, and meaning. Rather than treating language as a purely statistical object, the model seeks to encode linguistic structure, interpretation, and inference within a transparent mathematical space.

Relation to Quantum Natural Language Processing

Any discussion of QNLP must acknowledge the foundational contributions of Bob Coecke and collaborators such as Meichanetzidis, Zeng, and Widdows. In key works, they argue that natural language is quantum-native: it does not fit naturally on classical hardware but instead aligns more closely with quantum computation. This claim is grounded in the observation that linguistic meaning composition relies on high-dimensional tensor product spaces, which quantum systems handle efficiently. Related work in quantum cognition and information retrieval by researchers such as Aerts, Czachor, Van Rijsbergen, and Widdows further supports the view that quantum structures—superposition, contextuality, and non-Boolean logic—are well suited to modeling semantic phenomena. The present hyperspace framework converges with these insights, while grounding them explicitly in neurocognitive and linguistic considerations.

Construction of the Hyperspace

The hyperspace was formally introduced in 2015 as a dual mathematical structure combining Hilbert space and Minkowski space. Its construction begins with the representation of time along a horizontal axis, allowing both positive and negative values. Negative temporal values correspond to past events, while positive values correspond to future projections.

Space is represented as a three-dimensional axis that also admits negative values. Crucially, negative space is not merely geometric but linguistic: it corresponds to negation. In this model, the negative sign functions as an operator rather than a simple scalar, encoding linguistic polarity.

By adding a third axis, the model defines the Scenario Function, which compactifies space and time into a single quantity. Each interval of this function corresponds to a scene, analogous to a segment of a film, while individual values resemble photograms. The scenario function can take positive or negative values. Positive values represent factual, real phenomena consistent with external reality, while negative values represent counterfactual, unreal, or imaginary phenomena. This distinction is essential for modeling irrealis moods such as the Spanish subjunctive.
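The article does not give an explicit formula for the scenario function, so the following is only a minimal sketch of the idea: an event carries a signed time coordinate, three spatial coordinates, and a factuality flag, and the scenario value's sign encodes factuality while its magnitude compactifies space and time. The `Event` type and the particular combination used in `scenario_value` are hypothetical placeholders, not the author's formalism.

```python
from dataclasses import dataclass

# Hypothetical sketch of the scenario function's sign convention.
# The magnitude formula below is an arbitrary placeholder; only the
# sign behavior (+ factual, - counterfactual) follows the text.

@dataclass
class Event:
    t: float                        # time: negative = past, positive = future
    x: tuple[float, float, float]   # 3D space; a negative sign acts as a negation operator
    factual: bool                   # True = consistent with external reality

def scenario_value(e: Event) -> float:
    """Compactify (space, time) into one signed quantity.
    Sign encodes factuality; magnitude here is purely illustrative."""
    magnitude = (e.t ** 2 + sum(c ** 2 for c in e.x)) ** 0.5
    return magnitude if e.factual else -magnitude

past_fact = Event(t=-1.0, x=(0.0, 0.0, 0.0), factual=True)
future_counterfactual = Event(t=2.0, x=(1.0, 0.0, 0.0), factual=False)
print(scenario_value(past_fact) > 0)              # factual scene -> positive value
print(scenario_value(future_counterfactual) < 0)  # counterfactual scene -> negative value
```

Intervals of positive scenario values would then correspond to factual scenes, and intervals of negative values to counterfactual ones, with individual values playing the role of photograms.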

Observer and Minkowski Space

The hyperspace is initially defined from the perspective of an external observer. When an observer is located within a specific scene, Minkowski space emerges locally. Within this space, the light cone distinguishes factual phenomena (inside the cone) from counterfactual phenomena (outside the cone). Past and future events can be factual, but only insofar as they are grounded in experience or inference. This observer-dependent structure allows the model to capture the distinction between internal and external perspectives, a feature essential for modeling tense, aspect, and evidentiality.
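The inside/outside-the-cone distinction can be sketched directly from the standard Minkowski condition: with the observer at the origin and propagation speed normalized to c = 1, an event (t, x) lies inside (or on) the light cone when its spatial distance does not exceed c·|t|. The function name and the choice of a Euclidean spatial norm are illustrative assumptions.

```python
# Standard light-cone membership test, observer at the origin, c normalized to 1.
# Per the article, inside the cone ~ factual, outside ~ counterfactual.

def inside_light_cone(t: float, x: tuple[float, float, float], c: float = 1.0) -> bool:
    spatial = sum(v * v for v in x) ** 0.5   # Euclidean distance from the observer
    return spatial <= c * abs(t)

print(inside_light_cone(-2.0, (1.0, 0.0, 0.0)))  # True: reachable past event (factual)
print(inside_light_cone(1.0, (5.0, 0.0, 0.0)))   # False: outside the cone (counterfactual)
```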

Regions of the Hyperspace

In subsequent work, the hyperspace was partitioned into twelve regions defined by three parameters: scenario value (Es), time (t), and polarity (r). These regions allow a systematic classification of linguistic utterances.

Future statements are inherently counterfactual, as the future cannot be directly experienced. However, the brain can project past experience into the future, distinguishing factual projections (The sun will rise tomorrow) from counterfactual ones (The sun will rise blue tomorrow).

Eight primary regions arise from combinations of positive or negative scenario value, temporal orientation, and polarity. These regions account for affirmative and negative statements in past and future contexts, as well as conditional and subjunctive constructions. The present tense is treated as a temporal interval rather than a point, further enriching the model's descriptive power.

By partitioning the hyperspace in this way, the model enables explicit taxonomization of linguistic phenomena and addresses two major shortcomings of large language models: insufficient dimensionality relative to neurocognitive reality and the opacity of black-box representations.
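The three-parameter partition can be made concrete as a small classifier. The article names the parameters (Es, t, r) and the count of twelve regions, but does not enumerate them, so the indexing scheme below — three temporal zones (past, future, present-as-interval) times two scenario signs times two polarities — is one hypothetical way to obtain exactly twelve regions, not the author's published numbering.

```python
# Hypothetical 12-region indexer: 3 temporal zones x 2 scenario signs x 2 polarities.
# Region numbers 0..11 are an illustrative convention, not the article's labels.

def region(Es: float, t: float, r: int, present_halfwidth: float = 0.5) -> int:
    """Map a point to one of twelve regions.
    Es: scenario value (+ factual, - counterfactual)
    t:  time (negative = past, positive = future);
        |t| <= present_halfwidth counts as the present interval
    r:  polarity (+1 affirmative, -1 negated)
    """
    es_bit = 0 if Es >= 0 else 1
    r_bit = 0 if r >= 0 else 1
    if abs(t) <= present_halfwidth:
        t_idx = 2          # present, treated as an interval rather than a point
    elif t < 0:
        t_idx = 0          # past
    else:
        t_idx = 1          # future
    return t_idx * 4 + es_bit * 2 + r_bit

print(region(Es=+1.0, t=-2.0, r=+1))  # 0: past, factual, affirmative
```

The eight primary regions correspond to `t_idx` 0 and 1 (past and future); the present interval contributes the remaining four.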

Quantum Compatibility and Superposition

Although originally conceived as a continuous model, the hyperspace exhibits strong compatibility with quantum representations. Quantum methods allow for efficient compaction and functional enhancement of the model.

One early challenge concerned the representation of opposites, such as antonyms, which appeared too distant in hyperspace. The solution lies in encoding linguistic elements in superposition with their polar counterparts. Negative space, initially interpreted analogously to antimatter, can thus be handled naturally through quantum superposition and rotation.

Factuality and counterfactuality can likewise be represented as superposed states when the scenario function is bounded between +1 and -1. For example, in the sentence “he may be alive,” two mutually exclusive scenes coexist in superposition. Verb tense, mood, and polarity determine the region in which an action resides, while additional operators and observer placement enable the modeling of aspectual distinctions.
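A minimal sketch in plain Python (no quantum library) can illustrate how a bounded scenario value falls out of superposition: represent a scene as a unit vector over the basis {factual, counterfactual}, parameterized by a rotation angle, and read off the expectation of a ±1 "factuality observable". The angle parameterization and function names are assumptions for illustration; the article specifies only that the scenario function is bounded between +1 and −1 and that mutually exclusive scenes can coexist in superposition.

```python
import math

# Illustrative two-state sketch: cos(theta)|factual> + sin(theta)|counterfactual>.
# A rotation interpolates between a scene and its polar counterpart, which is
# how the model handles antonyms and negative space.

def superpose(theta: float) -> tuple[float, float]:
    """Unit state vector over the basis {factual, counterfactual}."""
    return (math.cos(theta), math.sin(theta))

def scenario_expectation(state: tuple[float, float]) -> float:
    """Expectation of a +1/-1 factuality observable; always bounded in [-1, +1]."""
    a, b = state
    return a * a - b * b

certainly_alive = superpose(0.0)            # pure factual scene
alive_maybe = superpose(math.pi / 4)        # "he may be alive": equal superposition
print(scenario_expectation(certainly_alive))  # 1.0: fully factual
print(scenario_expectation(alive_maybe))      # ~0.0: neither factual nor counterfactual
```

At the equal superposition, the two mutually exclusive scenes carry equal weight, so the scenario expectation sits at the midpoint of the [−1, +1] range.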

The Noun as the Fundamental Unit

The basic unit of the model is the noun. To reduce reliance on purely statistical learning and move toward genuine semantic understanding, the framework adopts a neurocognitively grounded taxonomy of nouns introduced in 2018.

Nouns are classified according to the source of stimulus. Internal stimuli include higher cognitive processes such as reasoning and limbic processes related to emotion. External stimuli include sensory notions associated with the five senses and somatosensory notions related to bodily experience.

A second classification dimension concerns spatiotemporal reference. Nouns may be purely spatial, purely temporal, or spatiotemporal (scenic). The latter category is dominant in everyday language and serves as a narrative anchor.

This taxonomy enables efficient processing of nouns and, by extension, verbs. It also provides a principled basis for metaphor interpretation and disambiguation. For instance, emotional pain can be distinguished from physical pain based on its neurocognitive source. Metaphors such as “time flies” can be analyzed as transformations within the noun domain rather than treated as opaque linguistic anomalies.
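The two classification dimensions can be sketched as a small lexicon schema. The enum values mirror the categories named in the text (internal cognitive/limbic, external sensory/somatosensory; spatial/temporal/scenic); the specific entries for the two senses of "pain", including their spatiotemporal-reference assignments, are hypothetical lexicon entries used only to show how the source-of-stimulus dimension disambiguates them.

```python
from dataclasses import dataclass
from enum import Enum

# Schema mirroring the article's two-dimensional noun taxonomy.
# The example senses of "pain" below are hypothetical illustrations.

class Stimulus(Enum):
    COGNITIVE = "internal: higher cognition"
    LIMBIC = "internal: emotion"
    SENSORY = "external: five senses"
    SOMATOSENSORY = "external: bodily experience"

class Reference(Enum):
    SPATIAL = "spatial"
    TEMPORAL = "temporal"
    SCENIC = "spatiotemporal"

@dataclass(frozen=True)
class NounSense:
    lemma: str
    stimulus: Stimulus
    reference: Reference

# Two senses of "pain", disambiguated by neurocognitive source rather than by
# distributional statistics:
emotional_pain = NounSense("pain", Stimulus.LIMBIC, Reference.SCENIC)
physical_pain = NounSense("pain", Stimulus.SOMATOSENSORY, Reference.SCENIC)
print(emotional_pain.stimulus is physical_pain.stimulus)  # False: distinct sources
```

Under this schema, a metaphor like "time flies" would be modeled as a transformation between noun senses (a temporal noun acquiring a spatial predicate) rather than as an unanalyzable anomaly.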

Discussion and Conclusion

The hyperspace framework summarized here offers a transparent, neurocognitively grounded alternative to purely statistical language models. By integrating Hilbert and Minkowski spaces, explicitly representing factuality and counterfactuality, and grounding linguistic primitives in a neurocognitive taxonomy, the model aligns naturally with quantum formalisms. While developed independently, the framework converges strongly with Quantum Natural Language Processing, supporting the view that language is partially quantum in nature. Its explicit structure, interpretability, and compatibility with quantum computation suggest a promising path toward language models that do not merely generate text, but genuinely understand the meanings they produce.

Written by

Nizar El Imrani

Researcher in Neurolinguistics and Cognitive Linguistics

Nizar El Imrani is a researcher in neurolinguistics and cognitive linguistics at the University of Applied Sciences in Nowy Sącz, Poland, and Instituto Cervantes in Tetouan, Morocco, whose work focuses on the formal, mathematical, and computational modeling of meaning in natural languages. His research investigates how linguistic structure, semantic relations, and cognitive processes can be represented through explicit operator-based frameworks, emphasizing interpretability, compositionality, and structural coherence. His work lies at the intersection of language science, artificial intelligence, and theoretical-physics-inspired models of computation. He explores how concepts drawn from geometry, dynamical systems, and quantum theory can inform our understanding of natural language. He is the author of Neuronova, an interdisciplinary research framework that integrates neurolinguistics, geometry, dynamical systems, and quantum-inspired computational models to study meaning as a structured, evolving, and observer-dependent phenomenon. This approach aims to bridge the gap between human linguistic cognition and next-generation artificial intelligence systems.
