Updates September 2024

Project Luna: First Beta Results

Early results from Luna beta testing show promising signs of genuine long-term adaptation.

After three months of closed beta testing with our first cohort of language learners, we have enough data to share some preliminary findings. The results are encouraging.

The Test Setup

We had 47 beta testers using Luna for Mandarin Chinese learning over a 12-week period. Each user completed at least 3 sessions per week, with sessions averaging 20 minutes. This gave us substantial data on long-term adaptation patterns.

Key metrics we tracked included vocabulary recommendation accuracy, rate of improvement on identified weak points, and session completion rate.

Finding 1: Adaptation is Real and Measurable

The most important finding: Luna's recommendations genuinely improve over time, and this improvement is measurable.

We compared Luna's performance to a control condition where recommendations were based only on in-session context (no cross-session memory). By week 8, Luna's vocabulary recommendation accuracy was 34% higher than the control.

Recommendation Accuracy Over Time

Week 1-2:   Luna +8% vs control
Week 3-4:   Luna +15% vs control
Week 5-6:   Luna +24% vs control
Week 7-8:   Luna +34% vs control
Week 9-12:  Luna +37% vs control

The improvement curve follows the pattern we expected from the consolidation model: rapid gains in the first few weeks as the system builds a user model, then slower but steady improvement as semantic patterns solidify.
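As an illustration, a saturating-exponential curve of the kind this pattern suggests can be fit to the observed lifts. The functional form, parameters, and fitting method below are illustrative assumptions for this sketch, not Luna's actual consolidation model:

```python
import math

# Observed lift over control (percentage points) at the midpoint of each window
weeks = [1.5, 3.5, 5.5, 7.5, 10.5]
lifts = [8, 15, 24, 34, 37]

def predict(week, l_max, tau):
    """Saturating-exponential curve: fast early gains, then a plateau."""
    return l_max * (1 - math.exp(-week / tau))

def sse(l_max, tau):
    """Sum of squared errors between the curve and the observed lifts."""
    return sum((predict(w, l_max, tau) - l) ** 2 for w, l in zip(weeks, lifts))

# Coarse grid search over plateau height and time constant (illustration only)
best = min(
    ((l_max, tau) for l_max in range(30, 61) for tau in [t / 10 for t in range(10, 101)]),
    key=lambda p: sse(*p),
)
l_max, tau = best
print(f"fitted plateau ~{l_max}%, time constant ~{tau:.1f} weeks")
```

A curve like this captures the qualitative shape of the data: the week-over-week gains shrink as the fitted lift approaches its plateau.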

Finding 2: Error Pattern Recognition Works

Luna's ability to identify and address systematic error patterns was particularly effective. When the system detected that a user consistently confused certain tone pairs, it automatically adjusted its curriculum to provide more targeted practice.
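A minimal sketch of what such pattern-based detection could look like, using hypothetical data structures (the error log format, threshold, and drill weighting here are invented for illustration and are not Luna's implementation):

```python
from collections import Counter

# Hypothetical error log: (expected_tone, produced_tone) per attempt
errors = [(2, 3), (2, 3), (3, 2), (2, 3), (1, 4), (2, 3), (3, 2)]

def confused_pairs(errors, min_count=3):
    """Flag tone pairs the learner systematically confuses, ignoring direction."""
    counts = Counter(frozenset(pair) for pair in errors)
    return {tuple(sorted(pair)): n for pair, n in counts.items() if n >= min_count}

def weight_drills(drills, flagged):
    """Move drills that target a flagged tone pair to the front of the queue."""
    return sorted(drills, key=lambda d: -(3 if tuple(sorted(d["pair"])) in flagged else 1))

flagged = confused_pairs(errors)
drills = [{"pair": (1, 4), "word": "妈"}, {"pair": (2, 3), "word": "买"}]
ordered = weight_drills(drills, flagged)
print(flagged)             # {(2, 3): 6}
print(ordered[0]["pair"])  # (2, 3) is prioritized
```

The key design idea is treating confusions as unordered pairs, so 2→3 and 3→2 errors accumulate toward the same systematic pattern.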

Users who received pattern-based interventions showed 42% faster improvement on their identified weak points compared to users who received standard spaced repetition.

Finding 3: The "Feels Different" Effect

Perhaps the most interesting finding was qualitative. In exit interviews, 38 of 47 users (81%) reported that Luna "felt different" from other language learning apps they had tried. Common descriptions included:

"It's like it actually knows what I'm struggling with"
"I don't have to re-explain myself every session"
"It remembers my mistakes and helps me fix them"

This subjective experience of being "known" by the system correlates strongly with engagement metrics. Users who reported high "feels like it knows me" scores had 2.3x higher session completion rates.
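A correlation like this can be computed directly from per-user pairs of subjective score and completion rate. The data below is made up for illustration; only the Pearson formula itself is standard:

```python
import math

# Hypothetical per-user data: "feels like it knows me" score (1-5)
# and session completion rate (fraction of started sessions finished)
knows_me = [4.5, 2.0, 3.5, 5.0, 1.5, 4.0, 2.5, 3.0]
completion = [0.92, 0.40, 0.75, 0.95, 0.35, 0.88, 0.55, 0.60]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(knows_me, completion)
print(f"r = {r:.2f}")
```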

What We're Improving

The beta also revealed areas for improvement, which we are addressing ahead of the next cohort.

What's Next

We're expanding the beta to include Spanish and French, and opening up applications for our second cohort. If you're interested in being part of this research, get in touch and select "Project Luna / Beta Access" as your subject.

The underlying technology (MemoryCore) remains open source. If you want to build your own applications with temporal memory, check out the repo.

Thanks to all our beta testers who made this research possible. Your feedback is shaping how AI learns to learn.