MemoryCore

Active Development

MemoryCore is a production implementation of NeuralSleep's theoretical architecture. It provides a three-tier memory consolidation system that mirrors biological memory formation - transforming temporary experiences into lasting structural knowledge.

Architecture

  • Three-Tier Memory System

    Working (Redis) → Episodic (PostgreSQL) → Semantic (PostgreSQL) with progressive consolidation

  • Temporal Dynamics (LTC Approximation)

    Exponential moving averages with time constants spanning multiple scales (100 ms to 1 day)

  • Bull Queue Consolidation

    Immediate (session end), Daily (2 AM), Weekly (3 AM Sunday) consolidation cycles

  • Pattern Extraction

    Error clusters, success sequences, performance trends, learning style detection
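
The EMA-based temporal dynamics above can be sketched in a few lines. This is a minimal illustration in Python (MemoryCore itself is TypeScript); the class and parameter names are invented, not MemoryCore's actual API:

```python
import math

class MultiTimescaleEMA:
    """Approximate liquid time-constant dynamics with one EMA per timescale."""

    def __init__(self, time_constants_s=(0.1, 1.0, 600.0, 86400.0)):
        # 100 ms (working) up to 1 day (semantic), per the tier descriptions
        self.taus = time_constants_s
        self.state = [0.0] * len(time_constants_s)

    def update(self, value, dt_s):
        # alpha = 1 - exp(-dt/tau): fast taus track the input, slow taus smooth it
        for i, tau in enumerate(self.taus):
            alpha = 1.0 - math.exp(-dt_s / tau)
            self.state[i] += alpha * (value - self.state[i])
        return list(self.state)

ema = MultiTimescaleEMA()
for _ in range(10):
    states = ema.update(1.0, dt_s=0.1)
# after 1 s of constant input, the 100 ms trace has nearly converged to 1.0
# while the 1-day trace has barely moved
```

The same update rule serves every tier; only the time constant changes, which is what lets one mechanism cover working, episodic, and semantic dynamics.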

// Three-Tier Memory Flow
// 1. Working Memory (Redis) - Session state
POST /api/memory/session/start
POST /api/memory/interaction
POST /api/memory/session/end

// 2. Consolidation triggers automatically
// Immediate: Working → Episodic
// Daily: Episodic → Semantic patterns
// Weekly: Deep semantic updates

// 3. Query user's learned model
GET /api/memory/user/:id/model
GET /api/memory/user/:id/recommendations

Tech Stack

TypeScript · Redis · PostgreSQL · Bull Queues · Node.js · Express

Three Memory Tiers

  1. Working Memory: Real-time session state in Redis. Time constants: 100ms-1s. High plasticity.

  2. Episodic Memory: Recent events in PostgreSQL. Time constants: 1s-10min. Pattern extraction.

  3. Semantic Memory: User models and mastery. Time constants: 10min-1day. Permanent structure.

Consolidation Cycles

  • Immediate: Working → Episodic after each session ends
  • Daily: Episodic → Semantic pattern promotion at 2 AM
  • Weekly: Deep semantic updates and pruning at 3 AM Sunday
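
As a toy illustration of the daily Episodic → Semantic step, promotion can be thought of as counting recurring patterns. The threshold, schema, and names below are invented for the sketch, not MemoryCore's real rules:

```python
from collections import Counter

def promote_to_semantic(episodic_events, min_occurrences=3):
    """Toy daily consolidation: recurring episodic patterns become semantic.

    `episodic_events` is a list of dicts with a "pattern" key; any pattern
    seen at least `min_occurrences` times is promoted. (Illustrative only --
    the threshold and schema are assumptions.)
    """
    counts = Counter(e["pattern"] for e in episodic_events)
    return sorted(p for p, n in counts.items() if n >= min_occurrences)

events = [{"pattern": "confuses-similar-characters"}] * 4 + [{"pattern": "one-off-typo"}]
promote_to_semantic(events)  # → ["confuses-similar-characters"]
```

One-off events fall away; only repeated structure survives the cycle, which is the point of consolidation.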

NeuralSleep

Theoretical Framework

NeuralSleep is our theoretical framework for understanding consciousness through temporal integration. It proposes that genuine memory requires structural modification rather than storage and retrieval - a principle MemoryCore implements in practice.

Core Principles

Memory as structural modification: Past experiences shape present processing through integrated weight updates, not database lookups. The system genuinely changes with each consolidation cycle.

Multi-timescale integration: Working memory (100ms-1s), Episodic memory (1s-10min), and Semantic memory (10min-1day) operate on different time constants, approximating Liquid Time-Constant networks.
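
In equation form, each memory trace x relaxes toward its input u with time constant τ; for input held constant over a step Δt, the exact discrete solution is an exponential moving average (notation here is illustrative):

```latex
\tau \frac{dx}{dt} = -x + u(t), \qquad
x_{t+\Delta t} = x_t + \left(1 - e^{-\Delta t/\tau}\right)\left(u_t - x_t\right)
```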

Sleep-like consolidation: Periodic offline processing (immediate, daily, weekly) transforms temporary experiences into permanent structural changes - mirroring biological memory consolidation.

Luna Chat v7

Open Source

Luna Chat is an AI-powered personal assistant with multi-agent capabilities, persistent memory, and extensible abilities. It routes between multiple LLM providers and uses specialized agents for focused tasks - from research to coding to creative writing.

Key Features

  • Multi-Agent System

    Five specialized agents (researcher, coder, writer, analyst, planner) powered by Claude CLI

  • Multi-Model Routing

    Seamlessly routes between OpenAI, Anthropic, and other LLM providers

  • Persistent Memory

    Long-term memory with facts, preferences, and conversation history

  • Extensible Abilities

    Calendar, email, documents, code execution, web search, and knowledge base

  • Persona System

    Customizable personality with mood tracking for natural conversations

Agent System

  • Researcher: Deep research and fact-finding for complex questions
  • Coder: Code writing, debugging, and review
  • Writer: Creative and professional content creation
  • Analyst: Data analysis and calculations
  • Planner: Task breakdown and project planning

// API Endpoints
// Chat Sessions
POST /api/chat/sessions
POST /api/chat/sessions/:id/send
GET /api/chat/sessions/:id/messages

// Agent System
GET /api/abilities/agents
POST /api/abilities/agents/execute
POST /api/abilities/agents/orchestrate
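
A client-side sketch for the agent endpoints might look like the following. The host, token, and payload field names are assumptions for illustration, not the documented schema:

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000"  # hypothetical dev host

def build_request(path, payload, jwt="dev-token"):
    """Build an authenticated JSON POST (JWT auth, per the tech stack)."""
    return urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {jwt}",
        },
        method="POST",
    )

# Field names below are guesses for illustration:
req = build_request(
    "/api/abilities/agents/execute",
    {"agent": "researcher", "task": "Find sources on spaced repetition"},
)
# urllib.request.urlopen(req) would send it against a running server
```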

Tech Stack

Node.js · TypeScript · PostgreSQL · pgvector · Redis · Claude CLI · JWT

Study with Luna

Beta Testing

Study with Luna is our real-world testbed for temporal AI - a language learning assistant that genuinely adapts to each user over time. Not through better prompts or larger context windows, but through actual structural learning.

What Makes Luna Different

  • Persistent Learning

    Luna remembers your learning style, vocabulary gaps, and progress across sessions

  • Adaptive Curriculum

    Lessons evolve based on what you struggle with and what you've mastered

  • Spaced Repetition

    Intelligent review scheduling based on forgetting curve science

  • Natural Conversation

    Practice through genuine dialogue, not rote memorization
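
As a sketch of the forgetting-curve idea behind that review scheduling - a generic exponential-forgetting model with invented constants, not Luna's actual algorithm:

```python
import math

def next_review_interval(stability_days, target_recall=0.9):
    """Schedule the next review for when predicted recall decays to the target.

    Forgetting curve: recall = exp(-t / stability). Solving for t gives
    t = -stability * ln(target_recall).
    """
    return -stability_days * math.log(target_recall)

def review(stability_days, correct, growth=2.5, penalty=0.5):
    # Correct answers stretch the memory's stability; misses shrink it.
    return stability_days * (growth if correct else penalty)

s = 1.0
s = review(s, correct=True)         # stability grows to 2.5 days
interval = next_review_interval(s)  # about 0.26 days at a 90% recall target
```

Each successful review pushes the next one further out; each miss pulls the schedule back in, which is the core of spaced repetition.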

studywithluna.com

Luna: Good morning! I noticed you mixed up 休 and 体 yesterday - they share the same radical but have different meanings. Want to practice similar characters?

User: Yes! I keep confusing characters that look alike.

Luna: Great! I've grouped characters by their radicals so you can see the patterns. Let's start with the 人 (person) radical family.

Currently Supporting

Mandarin Chinese

Japanese support planned

System Architecture

  • Frontend: React + TypeScript + Material-UI
  • Backend: FastAPI + SQLAlchemy + Celery
  • Database: PostgreSQL + Redis
  • Deploy: Docker Compose + Nginx

AutoMusic

Active Development

Spotify recommends what other people listen to. AutoMusic learns from your behavior - what you skip, what you replay, what you rate. No crowd-sourced algorithms pushing popular tracks. Just your taste, refined over time.

Key Features

  • Smart Playback Tracking

    Monitors what you listen to and learns from your behavior patterns

  • AI-Powered Recommendations

    Generates personalized playlists based on mood, activity, and preferences

  • Analytics Dashboard

    Visualize your listening patterns and discover trends over time

  • Real-time Spotify Integration

    Seamless OAuth 2.0 authentication and WebSocket sync

Recommendation Engine

  • Playback Tracking: Monitors listening behavior in real time
  • Rating System: Like/dislike feedback trains the model
  • Context-Aware: Different playlists for workout, focus, chill
  • ML Backend: Collaborative filtering with implicit feedback

Tech Stack

Python · FastAPI · React · TypeScript · PostgreSQL · Redis · Celery · Docker
Music Recommendations Pipeline

Active Development

A reproducible pipeline for building music recommendations at scale. Combines Discogs metadata (50M+ releases) with Yambda interaction data to train collaborative filtering models that power personalized recommendations.

Pipeline Stages

  • Discogs XML Parsing

    Stream large XML dumps to normalized Parquet tables (artists, labels, masters, releases)

  • Yambda Interaction Processing

    Build implicit feedback matrices from listening data with weighted interactions

  • ALS Model Training

    Train collaborative filtering models using the implicit library with GPU support

  • Recommendation Serving

    Generate personalized recommendations with user/item factor matrices

# Pipeline Commands
# 1. Parse Discogs data to Parquet
python scripts/discogs_to_parquet.py

# 2. Build Yambda interaction matrix
python scripts/yambda_build_interactions.py

# 3. Train ALS model
python scripts/train_als.py --factors 128 --gpu

# 4. Generate recommendations
python scripts/test_recommendations.py --top-n 20
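
The training step relies on the implicit library; as a hedged illustration of what one alternating pass does, here is a toy dense formulation (not the library's sparse, confidence-weighted solver, and not the pipeline's actual code):

```python
import numpy as np

def als_epoch(R, U, V, reg=0.1):
    """One alternating-least-squares pass on a dense interaction matrix R.

    Solves a ridge regression for each user row of U, then each item row
    of V. (Toy sketch: the `implicit` library uses the sparse,
    confidence-weighted formulation instead.)
    """
    k = U.shape[1]
    eye = reg * np.eye(k)
    for u in range(R.shape[0]):
        U[u] = np.linalg.solve(V.T @ V + eye, V.T @ R[u])
    for i in range(R.shape[1]):
        V[i] = np.linalg.solve(U.T @ U + eye, U.T @ R[:, i])
    return U, V

rng = np.random.default_rng(0)
R = (rng.random((20, 30)) > 0.7).astype(float)  # toy binary feedback matrix
U, V = rng.normal(size=(20, 8)), rng.normal(size=(30, 8))
for _ in range(10):
    U, V = als_epoch(R, U, V)
err = np.linalg.norm(R - U @ V.T)  # reconstruction error after fitting
```

The learned U and V are the user/item factor matrices the serving stage multiplies to rank candidate tracks.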

Interaction Weighting

  • played_ratio >= 50%: weight 1.0
  • played_ratio > 100%: weight 1.5
  • like action: adds +1.5 to weight
  • Multiple listens: summed
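
The rules above can be sketched as a scoring function. This is a sketch under assumptions: plays below 50% completion are treated as zero weight (the rules don't state that case explicitly), and the field names are invented:

```python
def interaction_weight(played_ratio, liked=False):
    """Weight one listen per the rules above.

    played_ratio > 1.0 means the listen ran past the track length
    (replayed/looped). Sub-50% plays contributing nothing is an
    assumption, not a stated rule.
    """
    if played_ratio < 0.5:
        weight = 0.0
    elif played_ratio > 1.0:
        weight = 1.5
    else:
        weight = 1.0
    if liked:
        weight += 1.5
    return weight

def total_weight(listens):
    # Multiple listens of the same track are summed.
    return sum(interaction_weight(ratio, liked) for ratio, liked in listens)

# e.g. one full play, one liked replay, one 20% skip: 1.0 + 3.0 + 0.0
w = total_weight([(0.9, False), (1.2, True), (0.2, False)])  # → 4.0
```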

Tech Stack

Python · Polars · Parquet · implicit · DuckDB · YAML

Memory Usage

  • Discogs parsing: ~2-4 GB (streaming XML)
  • Yambda processing: ~10-20 GB (lazy eval)
  • ALS training: ~5-15 GB (sparse matrices)

System Architecture

How It All Fits Together

  • 🌙 NeuralSleep: Theoretical Framework
  • 🗄️ MemoryCore: Implementation Layer
  • 🤖 Luna Chat v7: AI Assistant
  • 📚 Study with Luna: Language Learning

NeuralSleep provides the theoretical foundation for consciousness through temporal integration. MemoryCore implements these principles with a three-tier memory system. Luna Chat v7 and Study with Luna demonstrate the complete system in production - one as a multi-agent AI assistant, the other as an adaptive language tutor.

Want to build with us?

Our infrastructure is open source. Whether you want to contribute, integrate our tools, or just explore the code - we'd love to have you.