Advancing the frontier of intelligence. Our work spans recursive memory systems, temporal reasoning, world model synthesis, and the decomposition of cognition itself.
We propose RMA, a novel architecture where memory does not merely store; it learns from its own access patterns. Unlike attention mechanisms, which are stateless between forward passes, RMA maintains a recursive memory graph that evolves with each interaction. The memory graph is parameterized by self-referential attention heads that track their own retrieval history, allowing the system to learn access-pattern eigenvalues that optimize future retrievals. We demonstrate a 47% improvement in long-horizon task completion over transformer baselines across six benchmark suites, with particularly strong gains on multi-step reasoning and persistent-state tasks.
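To make the feedback loop concrete, here is a minimal PyTorch sketch of a memory that conditions retrieval on its own access history. The class name `RecursiveMemory`, the fixed slot layout, and the log-count retrieval bias are our own illustrative choices under stated assumptions, not the published RMA design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecursiveMemory(nn.Module):
    """Illustrative stand-in for RMA's recursive memory graph: retrieval
    logits are biased by the memory's own access history."""

    def __init__(self, num_slots: int = 64, dim: int = 128):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.values = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        # Running record of how often each slot was read: the "retrieval
        # history" that feeds back into future retrievals.
        self.register_buffer("access_counts", torch.zeros(num_slots))
        # Learned scalar controlling how strongly history biases reads.
        self.history_gate = nn.Parameter(torch.tensor(0.1))

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (batch, dim) -> retrieved value: (batch, dim)
        logits = query @ self.keys.t() / self.keys.shape[1] ** 0.5
        # Recursive step: frequently accessed slots receive a retrieval
        # bias, so the memory's past usage shapes its future behavior.
        logits = logits + self.history_gate * torch.log1p(self.access_counts)
        attn = F.softmax(logits, dim=-1)
        # Update access history (detached: history is state in this
        # sketch, not a differentiable path).
        self.access_counts += attn.detach().sum(dim=0)
        return attn @ self.values

memory = RecursiveMemory()
out = memory(torch.randn(4, 128))  # repeated calls shift future retrievals
```

The key property this sketch captures is statefulness between forward passes: two identical queries issued at different times can retrieve differently, because the access history has changed in between.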
Current large language models process time as sequential tokens, collapsing rich temporal structure into flat positional encodings. Temporal Intelligence Graphs (TIG) introduce a graph-based temporal representation in which events are nodes, causal links are directed edges, and prediction emerges from graph traversal rather than from next-token generation. Each node carries a time-aware embedding that encodes not just content but also temporal context, duration, and causal weight. TIG-augmented models show a 3.2x improvement on causal reasoning benchmarks and demonstrate emergent abilities in counterfactual reasoning and temporal abstraction.
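A toy sketch of prediction-as-traversal, assuming the simplest possible node structure. The `Event` fields and the greedy highest-weight traversal are hypothetical simplifications for illustration; the actual TIG embeddings and traversal policy are richer than this.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A TIG-style node: content plus temporal context (fields assumed)."""
    name: str
    start: float          # event time
    duration: float       # how long the event lasts
    causes: dict = field(default_factory=dict)  # name -> (Event, weight)

def add_causal_edge(src: Event, dst: Event, weight: float) -> None:
    # Directed edge: src causally influences dst with the given weight.
    src.causes[dst.name] = (dst, weight)

def predict(start: Event, horizon: int) -> list:
    """Prediction as graph traversal: follow the highest-weight causal
    edge for up to `horizon` steps instead of sampling next tokens."""
    path, node = [start.name], start
    for _ in range(horizon):
        if not node.causes:
            break
        node, _w = max(node.causes.values(), key=lambda e: e[1])
        path.append(node.name)
    return path

rain = Event("rain", start=0.0, duration=2.0)
wet = Event("wet_road", start=0.5, duration=6.0)
crash = Event("accident", start=1.0, duration=0.2)
add_causal_edge(rain, wet, 0.9)
add_causal_edge(wet, crash, 0.3)
print(predict(rain, horizon=3))  # ['rain', 'wet_road', 'accident']
```

Counterfactual queries fall out naturally in this representation: delete or re-weight an edge and re-run the traversal, something flat positional encodings cannot express directly.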
World Model Synthesis (WMS) enables artificial intelligence systems to build and maintain internal world models: not explicit knowledge graphs, but continuous latent spaces that simulate reality. The core innovation is the Latent World Tensor, a high-dimensional continuous representation that captures physical laws, social dynamics, and market structures as emergent properties of training. We show that WMS-equipped agents predict physical outcomes, social behavior, and market movements more accurately than specialized models, suggesting that general intelligence may require an internal simulation of the world it inhabits.
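A minimal sketch of the encode-simulate-decode pattern the paragraph describes, assuming a simple residual transition in latent space. `LatentWorldModel` and its layer sizes are illustrative placeholders, not the Latent World Tensor itself.

```python
import torch
import torch.nn as nn

class LatentWorldModel(nn.Module):
    """Sketch of a WMS-style agent: observations are encoded into a
    continuous latent state, rolled forward by a learned transition,
    and decoded into predicted outcomes."""

    def __init__(self, obs_dim: int = 32, latent_dim: int = 256):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.Tanh())
        # Stand-in for the Latent World Tensor: a learned transition
        # applied entirely in latent space.
        self.transition = nn.Linear(latent_dim, latent_dim)
        self.decode = nn.Linear(latent_dim, obs_dim)

    def rollout(self, obs: torch.Tensor, steps: int) -> torch.Tensor:
        # Simulate `steps` of world dynamics without returning to
        # observation space until decoding.
        z = self.encode(obs)
        predictions = []
        for _ in range(steps):
            z = z + torch.tanh(self.transition(z))  # residual dynamics
            predictions.append(self.decode(z))
        return torch.stack(predictions, dim=1)  # (batch, steps, obs_dim)

world = LatentWorldModel()
future = world.rollout(torch.randn(8, 32), steps=5)  # predicted trajectory
```

The point of the sketch is the rollout loop: prediction happens by simulating forward inside the latent space, which is what distinguishes a world model from a one-step predictor.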
We propose Eigenintelligence, the decomposition of intelligence into orthogonal cognitive modes: analytical, creative, empathetic, strategic, and predictive. Each mode is characterized by its own activation function, routing logic, and representational geometry within the latent space. Eigenintelligence Gradient Activation (EGA) allows dynamic mode switching mid-inference, enabling a single model to operate across the full cognitive spectrum without mode collapse. Empirically, EGA-equipped models outperform both specialist and mixture-of-experts baselines on composite intelligence benchmarks, suggesting that intelligence is not monolithic but decomposes into a finite basis of cognitive eigenvectors.
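As a rough illustration of state-dependent mode switching, here is a PyTorch sketch in the spirit of a soft mixture-of-experts router. The `EigenGate` name, one linear expert per mode, and per-step re-routing are assumptions for the example; EGA's actual activation functions and geometry are not specified here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

MODES = ["analytical", "creative", "empathetic", "strategic", "predictive"]

class EigenGate(nn.Module):
    """Sketch of EGA-style routing: one transform per cognitive mode,
    with a router that re-weights the modes at every step, so the
    active mode can change mid-inference."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in MODES])
        self.router = nn.Linear(dim, len(MODES))

    def forward(self, x: torch.Tensor) -> tuple:
        # Mode weights depend on the current hidden state, so the
        # dominant mode can shift from one call to the next.
        weights = F.softmax(self.router(x), dim=-1)          # (batch, modes)
        outputs = torch.stack([e(x) for e in self.experts])  # (modes, batch, dim)
        mixed = torch.einsum("bm,mbd->bd", weights, outputs)
        return mixed, weights

gate = EigenGate()
h = torch.randn(2, 128)
for step in range(3):  # mode weights drift as the state evolves
    h, w = gate(h)
    print(step, [f"{m}:{v:.2f}" for m, v in zip(MODES, w[0].tolist())])
```

Unlike a hard mixture-of-experts, the soft weighting here lets several modes stay partially active at once, which is one plausible reading of operating "across the full cognitive spectrum without mode collapse."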
We are looking for researchers who think beyond current paradigms. If you work on memory, temporal reasoning, world models, or cognitive architecture, we want to talk.
Contact Research →