Our Research
Exploring how artificial systems can develop genuine long-term memory and continuous learning capabilities.
How can AI systems transform temporary experiences into lasting knowledge? We study the hippocampus-neocortex consolidation loop and its implications for artificial memory systems.
Time-continuous neural networks that naturally adapt to temporal patterns. Unlike static transformers, which process inputs in fixed discrete steps, liquid networks have inherent time constants that let their dynamics adapt to the timescale of the input.
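To make the idea of an inherent time constant concrete, here is a minimal sketch of one Euler-integration step of a liquid-style cell. This is an illustrative simplification, not our production architecture: the input-dependent time constant rule and all parameter names are assumptions for the example.

```python
import numpy as np

def ltc_step(x, u, W_in, W_rec, b, tau, dt=0.05):
    """One Euler step of a liquid-time-constant-style cell.

    x:   hidden state vector
    u:   input vector at this instant
    tau: base time constants, one per unit

    The effective time constant is modulated by the current drive,
    so the cell speeds up or slows down its own dynamics (a toy
    version of the input-dependent dynamics in liquid networks).
    """
    f = np.tanh(W_in @ u + W_rec @ x + b)  # nonlinear drive
    tau_eff = tau / (1.0 + np.abs(f))      # input-dependent time constant (illustrative)
    dx = (-x + f) / tau_eff                # relax toward the drive
    return x + dt * dx
```

Because `tau_eff` shrinks when the drive is strong, the state reacts quickly to salient inputs and relaxes slowly otherwise, which is the qualitative behavior the blurb above refers to.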
What creates the sense of continuity in experience? We explore how information integration across time might give rise to coherent experience in artificial systems.
Forgetting isn't a bug - it's a feature. We study how selective decay of information enables generalization and prevents catastrophic interference in learning systems.
When you learn something new, it doesn't immediately become a permanent part of who you are. Instead, the hippocampus acts as a temporary storage buffer, holding new information in a labile state.
During sleep - particularly during slow-wave sleep - this information is "replayed" at high speed. The hippocampus reactivates the neural patterns associated with recent experiences, and this reactivation triggers dialogue with the neocortex.
Gradually, over days and weeks, the knowledge that was initially hippocampus-dependent becomes integrated into the broader cortical network. The memory becomes more stable, more abstract, and more connected to existing knowledge structures.
This is why "sleeping on it" actually helps you learn. It's why studies suggest that students who review material shortly before sleep retain it better than those who review earlier in the day. And it's the biological process we're trying to emulate in artificial systems.
We present NeuralSleep, a novel architecture for enabling long-term memory in artificial neural networks through simulated sleep cycles. Our approach mimics the hippocampus-neocortex consolidation loop observed in biological systems.
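As a rough illustration of the consolidation loop described above (not the NeuralSleep algorithm itself; the class and its replay rule are hypothetical), a simulated sleep cycle can be sketched as a fast, labile buffer whose contents are replayed several times into a slow long-term store:

```python
import random

class ReplayConsolidator:
    """Toy hippocampus-neocortex loop.

    A fast buffer holds recent experiences in a labile state; a
    'sleep' phase replays them repeatedly, in shuffled order,
    strengthening traces in a stable long-term store, then clears
    the buffer.
    """

    def __init__(self, replay_passes=3):
        self.buffer = []      # labile, hippocampus-like store
        self.long_term = {}   # stable, neocortex-like store: item -> strength
        self.replay_passes = replay_passes

    def experience(self, item):
        self.buffer.append(item)

    def sleep(self):
        # Replay recent experiences several times in random order,
        # incrementally strengthening their long-term traces.
        for _ in range(self.replay_passes):
            random.shuffle(self.buffer)
            for item in self.buffer:
                self.long_term[item] = self.long_term.get(item, 0) + 1
        self.buffer.clear()
```

The shuffled, repeated replay mirrors the interleaved reactivation seen during slow-wave sleep: each pass nudges the long-term store a little, rather than overwriting it in one step.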
A technical overview of MemoryCore, the production implementation of NeuralSleep. Features Working → Episodic → Semantic memory tiers with Bull queue-based consolidation cycles.
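MemoryCore's production consolidation runs on Bull queues in Node; setting the queue machinery aside, the tier-promotion rule might look like the following Python sketch. The thresholds and field names here are illustrative assumptions, not MemoryCore's actual values:

```python
from dataclasses import dataclass

WORKING, EPISODIC, SEMANTIC = "working", "episodic", "semantic"

@dataclass
class Memory:
    content: str
    tier: str = WORKING
    access_count: int = 0

def consolidate(memories, episodic_threshold=3, semantic_threshold=10):
    """One consolidation pass: promote frequently accessed memories
    one step up the Working -> Episodic -> Semantic hierarchy.
    Thresholds are hypothetical, for illustration only.
    """
    for m in memories:
        if m.tier == WORKING and m.access_count >= episodic_threshold:
            m.tier = EPISODIC
        elif m.tier == EPISODIC and m.access_count >= semantic_threshold:
            m.tier = SEMANTIC
    return memories
```

In the real system a scheduled queue job would run a pass like this periodically, so memories climb at most one tier per cycle rather than jumping straight to semantic storage.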
An exploration of how Ebbinghaus's forgetting curve applies to artificial systems, and why selective forgetting is essential for generalization and preventing catastrophic interference.
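The forgetting curve itself is simple to state: retention decays roughly exponentially, R = exp(-t / S), where S is the stability of the trace. A minimal sketch of how selective decay could be applied in an artificial system (the threshold rule is our illustrative assumption, not a claim about Ebbinghaus's data):

```python
import math

def retention(t_hours, stability):
    """Ebbinghaus-style exponential forgetting: R = exp(-t / S).
    Larger stability S (e.g. after consolidation) means slower decay."""
    return math.exp(-t_hours / stability)

def should_forget(t_hours, stability, threshold=0.2):
    """Drop a trace once retention falls below a threshold --
    selective decay that frees capacity and limits interference."""
    return retention(t_hours, stability) < threshold
```

Consolidation then amounts to raising S for memories worth keeping, so the same decay rule prunes incidental details while sparing generalizable knowledge.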
We start with neuroscience. What do we know about how biological systems accomplish the task? What are the key mechanisms?
We translate biological insights into computational frameworks. Not literal replicas, but functional analogues that capture the essential dynamics.
We test our ideas in real applications. Project Luna is our primary testbed, but we also run smaller experiments across different domains.
We publish our findings openly, including negative results. Science advances through transparency and reproducibility.
We're always looking for researchers, institutions, and organizations interested in advancing the science of temporal intelligence.