Brain Simulator is a developmental cognitive agent prototype. It is designed to accumulate experiences over time and update internal state through:
- neurochemical dynamics
- autobiographical memory
- identity/development updates
- attention-based thought selection
- reflection and narrative updates
- Core chemistry model with four chemicals: dopamine, cortisol, oxytocin, serotonin
- Chemical interactions, decay, noise, and clamping
- Dynamic identity traits: competence, social_value, resilience, intelligence
- Development tracking: experience points, reflection depth, maturity
- Global Workspace attention model (`Thought` + `GlobalWorkspace`)
- Consciousness score based on focus stability + development/reflection
- Autobiographical memory with event recording and memory-thought proposals
- Narrative engine that updates a short self-narrative from recent events
- Self-reflection with regret and wisdom tracking
- Synthetic environment that generates one life event per simulation cycle
- Perception pipelines:
  - `brain.perceive(event)` for structured life events
  - `brain.observe_perception(modality, content, source)` for text-modality input
  - `brain.receive_visual_signal(signal)` for structured visual signals
  - `brain.receive_hearing_signal(signal)` for structured hearing signals
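The chemical dynamics listed above (decay, noise, and clamping) can be sketched roughly as follows. This is a minimal illustration of the idea, not the repository's actual implementation; the function name, decay/noise constants, and baseline value are all assumptions:

```python
import random

# Illustrative four-chemical update step with decay toward a baseline,
# additive noise, and clamping to [0, 1]. Names and constants are
# assumptions, not the repository's real chemistry model.
CHEMICALS = ("dopamine", "cortisol", "oxytocin", "serotonin")

def tick_chemistry(levels, decay=0.05, noise=0.01, baseline=0.5, rng=random):
    """Decay each chemical toward baseline, add small noise, clamp to [0, 1]."""
    out = {}
    for name in CHEMICALS:
        v = levels[name]
        v += (baseline - v) * decay          # decay toward baseline
        v += rng.uniform(-noise, noise)      # small stochastic drift
        out[name] = min(1.0, max(0.0, v))    # clamp to the valid range
    return out

levels = {"dopamine": 0.9, "cortisol": 0.2, "oxytocin": 0.5, "serotonin": 0.4}
levels = tick_chemistry(levels)
```

With these example constants, an elevated chemical (dopamine at 0.9) drifts back toward the baseline each tick, while a depressed one (cortisol at 0.2) drifts upward.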
`main.py` supports:

- `--mode simulate` (default)
- `--mode live`
- `--cycles <int>`
- `--deterministic`
Examples:
```bash
python main.py --mode simulate --cycles 100
python main.py --mode live
python main.py --mode simulate --cycles 500 --deterministic
```

In simulate mode:
- `Simulator` generates one synthetic developmental event each cycle.
- The event is sent to `brain.perceive(...)`.
- Optional scenario events (from `simulation/scenarios.py`) are injected.
- The brain runs `tick()` and outputs updated state (and an optional decision if an engine is attached).
You can feed external sensors later without changing the core loop.
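The simulate-mode loop described above can be sketched as follows. `StubSimulator` and `StubBrain` are stand-ins written here to show only the loop shape; the real `Simulator` and `VirtualBrain` classes live in the repository and have richer behavior:

```python
# Minimal stand-ins mimicking the simulate-mode loop shape only;
# these are NOT the repository's Simulator / VirtualBrain classes.
class StubSimulator:
    def generate_event(self, cycle):
        # One synthetic developmental event per cycle.
        return {"content": f"event {cycle}", "category": "neutral",
                "valence": 0.0, "intensity": 0.1,
                "source": "simulated", "timestamp": float(cycle)}

class StubBrain:
    def __init__(self):
        self.perceived = []
    def perceive(self, event):
        self.perceived.append(event)
    def tick(self):
        # Stand-in for the real update; returns a tiny state snapshot.
        return {"cycle_count": len(self.perceived)}

sim, brain = StubSimulator(), StubBrain()
for cycle in range(3):                  # e.g. --cycles 3
    event = sim.generate_event(cycle)   # one synthetic event per cycle
    brain.perceive(event)               # feed the event to the brain
    state = brain.tick()                # update state and read it back
```

Because the loop only calls `perceive(...)` and `tick()`, an external sensor feed can replace `generate_event` without touching the loop itself.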
```python
brain.perceive({
    "content": "You are praised for trying.",
    "category": "praise",
    "valence": 0.8,
    "intensity": 0.7,
    "source": "simulated",
    "timestamp": 0.0,
})
```

```python
brain.receive_visual_signal({
    "objects": ["person", "bottle"],
    "attributes": {"person": ["red"]},
    "relations": [{"from": "person", "rel": "near", "to": "bottle"}],
    "motion_level": 0.4,
    "confidence": 0.9,
    "source": "camera_pipeline",
    "timestamp": 0.0,
})
```

```python
brain.receive_hearing_signal({
    "transcript": "Good job, keep trying",
    "speaker_type": "caregiver",
    "sentiment": 0.7,
    "prosody_intensity": 0.6,
    "keywords": ["praise", "support"],
    "source": "audio_pipeline",
    "timestamp": 0.0,
})
```

`brain.get_state()` returns, among others:
- chemical values
- identity snapshot (`identity_*`)
- development snapshot (`development_*`)
- recent perceptions
- learned concepts
- wisdom
- self narrative
- consciousness score
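Since the snapshot exposes prefixed keys like `identity_*` and `development_*`, a caller can regroup them by prefix. The concrete keys and values below are made up for illustration; only the prefix convention comes from the list above:

```python
# Hypothetical flattened snapshot in the shape get_state() is described
# as returning; the specific keys and values here are invented.
state = {
    "dopamine": 0.62,
    "cortisol": 0.31,
    "identity_competence": 0.55,
    "identity_resilience": 0.48,
    "development_experience_points": 120,
    "development_maturity": 0.3,
    "consciousness_score": 0.41,
}

def group_by_prefix(state, prefix):
    """Collect flattened keys like 'identity_*' into a nested dict."""
    return {k[len(prefix):]: v for k, v in state.items() if k.startswith(prefix)}

identity = group_by_prefix(state, "identity_")
development = group_by_prefix(state, "development_")
```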
- Decision engine exists (`decision/decision_engine.py`) and is attention-driven, but `main.py` currently initializes `VirtualBrain` without attaching a decision engine.
- This repository is a research/development prototype, not a production chatbot stack.