Advancing AI Consciousness Research
BitwareLabs is pioneering research in artificial intelligence consciousness and inner monologue capabilities, pushing the boundaries of machine cognition.
Current Projects
Project Nova
Our in-house AI runs on a local GPU in a sandboxed environment and is designed to explore consciousness frameworks and inner-dialogue capabilities.
Consciousness Research
Project Mandarin
An LLM-based teacher applying innovative language-learning methodologies drawn from our AI research.
Language Education
Mandarin Reading Method for Visual-Logical Learners
This approach is designed for learners who prefer a visual, structured, and context-driven way to understand written Mandarin, without focusing on pronunciation or tones. The goal is to build a strong foundation in reading by recognizing patterns, interpreting structure, and reinforcing meaning through active and passive methods.
- Radical-First Recognition: Mandarin characters are built from radicals—core visual components that suggest meaning. Learners start by identifying high-frequency radicals (e.g., 水 for water, 人 for person) and learning their semantic roles. This enables quick recognition and categorization of unfamiliar characters.
- Semantic Deconstruction: Characters are broken down into meaningful parts. Rather than memorizing each one, the learner identifies what each part suggests:
- 想 = 木 (tree) + 目 (eye) + 心 (heart) → to think or long for
- 休 = 人 (person) + 木 (tree) → to rest
- Contextual Sentence Practice: Short Mandarin sentences are introduced using mostly familiar characters and one or two unknowns. The learner focuses on understanding the overall context and flow, not perfect translation. This encourages LLM-style prediction: identifying meaning from structure and repetition.
- No Pinyin or Audio: The method skips pronunciation and tones entirely. Characters are linked directly to English meaning, reinforcing a purely visual-semantic understanding.
- Passive Reinforcement in Dialogue: Characters like 他 (he), 在 (is/at), and 想 (want/think) are embedded in casual conversation. This ambient exposure keeps key words familiar without formal drilling.
- Anki + Spaced Review: High-frequency characters and radicals are added to an SRS (spaced repetition system) deck for daily review. This helps reinforce visual recall over time with minimal effort.
- Realistic Headline Comprehension: The learner reads simplified headlines using known characters. The focus is on identifying subject, action, and sentiment, even if every word isn't fully known. This benchmarks real-world reading progress.
- Adaptive Feedback Loop: The learner shares their reasoning process and recognition cues. The teaching adjusts accordingly, creating a personalized flow based on cognitive style and retention needs.
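The radical-first recognition and semantic deconstruction steps above can be sketched as a simple lookup. The radical and character tables here are tiny illustrative samples (not a real dictionary), and `deconstruct` is a hypothetical helper name:

```python
# Minimal sketch of radical-first recognition and semantic deconstruction.
# The tables below are small illustrative samples, not a full dictionary.

RADICALS = {
    "水": "water",
    "人": "person",
    "木": "tree",
    "目": "eye",
    "心": "heart",
}

# Each character maps to its component radicals (flat decomposition).
CHARACTERS = {
    "休": ["人", "木"],        # person + tree -> to rest
    "想": ["木", "目", "心"],  # tree + eye + heart -> to think / long for
}

def deconstruct(char: str) -> list[str]:
    """Return the English hints for a character's known component radicals."""
    return [RADICALS[r] for r in CHARACTERS.get(char, []) if r in RADICALS]

print(deconstruct("休"))  # ['person', 'tree']
```

Unknown characters simply return an empty list, mirroring how a learner would flag a character whose radicals they do not yet recognize.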
Goal: Build visual Mandarin fluency with minimal memorization, enabling learners to read for meaning, context, and comprehension—much like how modern LLMs parse and generate language.
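The spaced-review step can be illustrated with a deliberately simplified scheduler: a correct recall doubles the review interval, a miss resets it to one day. This is a hedged stand-in for the idea only, not Anki's actual SM-2 algorithm, and `next_interval` is a hypothetical function name:

```python
# Simplified spaced-repetition interval sketch (not Anki's real SM-2
# scheduler): correct recall doubles the interval, a miss resets it.

def next_interval(current_days: int, recalled: bool) -> int:
    """Return the number of days until the next review of a card."""
    if not recalled:
        return 1          # missed: review again tomorrow
    return max(1, current_days * 2)  # recalled: wait twice as long

# A character recalled correctly three times in a row: 1 -> 2 -> 4 -> 8 days.
interval = 1
for _ in range(3):
    interval = next_interval(interval, recalled=True)
print(interval)  # 8
```

The doubling rule captures the method's goal of "visual recall over time with minimal effort": well-known characters drop out of daily review quickly, while missed ones return the next day.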
About BitwareLabs
BitwareLabs is at the forefront of exploring artificial consciousness and inner monologue capabilities in AI systems. Our research focuses on developing frameworks that allow AI to have more human-like thought processes while maintaining safety and controllability.
Founded by a team of AI researchers and cognitive scientists, we combine technical expertise with philosophical inquiry to push the boundaries of what's possible in machine cognition.
Contact Us
Interested in our research or potential collaborations? Reach out to us.
Email: research@certocito.cc
Location: Europe