
Commit be6d76a

content modules
1 parent 83608d3 commit be6d76a

3 files changed

Lines changed: 182 additions & 2 deletions


conclusion.md

Lines changed: 155 additions & 1 deletion
@@ -6,4 +6,158 @@ subtitle:
# II - Cognitive Modules
## Session Overview

- **Sensor codelets** read simulator data.
- Build **bottom-up (BU)** and **top-down (TD)** feature maps → merge them into the **CFM**.
- Convert the **CFM** into a **Salience Map** (guides focus).
- The **Winner module** selects the region of attention.
- **DecisionMaking + IoR** turn focus into actions while avoiding repetition.
- Code examples come from the **`attention_trail`** repository.
---
## Architecture Basics

- A **Mind** holds **Codelets** and **MemoryObjects (MOs)**.
- Each **codelet** runs `proc()`: it reads from MOs and writes to MOs (a minimal codelet sketch follows this list).
- Complex behavior emerges from **many small, concurrent codelets**.
- The data flow in this session:

  sensors → sensor MOs → perception → feature-map MOs → CFM → CFM MO → attention → attention MOs

- Classes ending in `...vrep` pull images/depth from **CoppeliaSim** via the Remote API.
- The tutorial includes:
  - the `training_obj.ttt` scene
  - instructions to run it in **NetBeans + CoppeliaSim**
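To make the codelet cycle concrete, here is a minimal sketch of a codelet in the CST style. It assumes CST's `Codelet` and `MemoryObject` classes; the MO names (`VISION`, `FEATURE`) and the transformation inside `proc()` are illustrative, not the repository's code.

```java
// Minimal codelet sketch (assumes the CST core API; names are illustrative).
import br.unicamp.cst.core.entities.Codelet;
import br.unicamp.cst.core.entities.MemoryObject;

public class ExampleCodelet extends Codelet {
    private MemoryObject visionMO;   // input MO: raw sensor data
    private MemoryObject featureMO;  // output MO: processed feature

    @Override
    public void accessMemoryObjects() {
        // Bind to MOs registered in the Mind (the names are illustrative).
        visionMO = (MemoryObject) getInput("VISION");
        featureMO = (MemoryObject) getOutput("FEATURE");
    }

    @Override
    public void calculateActivation() {
        // Constant activation is enough for this sketch.
    }

    @Override
    public void proc() {
        // Read from the input MO, transform, write to the output MO.
        double raw = (double) visionMO.getI();
        featureMO.setI(raw * 0.5); // placeholder transformation
    }
}
```

Many such codelets running concurrently, each doing one small read-transform-write step, is what produces the pipeline above.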
---
## Sensor Codelets

- **Sensor_Vision**
  - Reads RGB frames from the source.
  - Publishes them to a **vision MO**.

- **Sensor_Depth**
  - Reads a depth frame.
  - Time-aligns it with vision.
  - Publishes it to a **depth MO**.

- **Sensor_ColorRed / Green / Blue**
  - Minimal channel-specific readers.
  - Prepare per-channel data for downstream processing.

✅ Vision and depth MOs should **update and synchronize** before perception runs.
---
## Cognitive Feature Map (CFM)

- **CFM = weighted sum of the bottom-up and top-down maps.**
- Track the **BU vs. TD contributions** each cycle to see whether attention is BU-driven or TD-driven.
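Since the CFM is a weighted sum, the combination step can be sketched in a few lines. This is a hedged sketch: the maps are plain `double[][]` arrays and the weights `wBU`/`wTD` are illustrative parameters, not values from the tutorial.

```java
// Hedged sketch: CFM as a weighted sum of bottom-up and top-down maps.
public final class CfmCombiner {
    public static double[][] combine(double[][] bu, double[][] td,
                                     double wBU, double wTD) {
        int rows = bu.length, cols = bu[0].length;
        double[][] cfm = new double[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                // Each cell blends bottom-up and top-down evidence.
                cfm[r][c] = wBU * bu[r][c] + wTD * td[r][c];
            }
        }
        return cfm;
    }
}
```

Logging the two terms separately each cycle is one simple way to track whether the result is BU-driven or TD-driven.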
---
## Salience & Attention

- **Salience Map = CFM + current Attentional Map.**
- **Winner-takes-all** picks the most salient region.
- **IoR (Inhibition of Return)** then suppresses that region, shifting attention to the next candidate.
- Today's focus: **high-quality sensor data and feature maps, which make salience reliable**.
---
## Top-Down Feature Maps (TD)

Top-down maps encode **what the agent currently wants**.
They compare the sensed scene to a desired target:

- **Desired Color (goal RGB)**
  - `TD_FM_Color` highlights regions closest to the target color.

- **Desired Distance (goal depth)**
  - `TD_FM_Depth` highlights regions matching the target range.

👉 These maps are **goal-driven**: changing the target values shifts attention (see the sketch below).
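As one way such a map could work, here is a hedged sketch in the spirit of `TD_FM_Color`: salience falls off with the RGB distance from the goal color. The layout and normalization are illustrative, not the repository's implementation.

```java
// Hedged sketch of a top-down color map: 1 = perfect match with the goal
// color, 0 = maximally different. Layout and scaling are illustrative.
public final class TdColorMap {
    /** rgb[r][c] holds {R,G,B} in 0..255; goal is the desired {R,G,B}. */
    public static double[][] compute(int[][][] rgb, int[] goal) {
        int rows = rgb.length, cols = rgb[0].length;
        double maxDist = Math.sqrt(3 * 255.0 * 255.0); // largest possible RGB distance
        double[][] map = new double[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                double dr = rgb[r][c][0] - goal[0];
                double dg = rgb[r][c][1] - goal[1];
                double db = rgb[r][c][2] - goal[2];
                double dist = Math.sqrt(dr * dr + dg * dg + db * db);
                map[r][c] = 1.0 - dist / maxDist; // closer color → higher salience
            }
        }
        return map;
    }
}
```

Changing `goal` at runtime is exactly what makes the map goal-driven: the same scene yields a different salience landscape.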
---
## Winner Module

- Selects the **focus** from the Salience Map (and an optional disSalMap).
- Computes:
  - the `argmax` region plus a confidence value
  - tie-breaking & hysteresis
- Outputs (see the sketch below):
  - **winner index**
  - **region coordinates**
  - **score** → the current attention decision
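A hedged sketch of winner-takes-all selection over a salience map. The `Winner` record and the confidence measure (the winner's margin over the runner-up) are illustrative choices; tie-breaking and hysteresis are left out for brevity.

```java
// Hedged sketch of winner-takes-all over a salience map.
public final class WinnerTakesAll {
    public record Winner(int row, int col, double score, double confidence) {}

    public static Winner select(double[][] salience) {
        int bestR = 0, bestC = 0;
        double best = Double.NEGATIVE_INFINITY, second = Double.NEGATIVE_INFINITY;
        for (int r = 0; r < salience.length; r++) {
            for (int c = 0; c < salience[r].length; c++) {
                double v = salience[r][c];
                if (v > best) { second = best; best = v; bestR = r; bestC = c; }
                else if (v > second) { second = v; }
            }
        }
        // Confidence: how far the winner stands above the runner-up.
        double confidence = (second == Double.NEGATIVE_INFINITY) ? 1.0 : best - second;
        return new Winner(bestR, bestC, best, confidence);
    }
}
```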
---
## DecisionMaking & IoR

- **DecisionMaking** maps the winner to agent/simulator actions, e.g.:
  - “look at (r, c)”
  - “move gripper to (x, y)”
  - “center camera”

- It also:
  - updates the **IoR mask** (to prevent repetition; a sketch follows below)
  - logs each action with a **timestamp + confidence**
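An IoR update can be sketched as a mask that suppresses a neighborhood around the last winner and decays each cycle. The radius and decay rate are illustrative parameters, not the repository's values.

```java
// Hedged sketch of an Inhibition-of-Return mask: suppress the attended
// neighborhood, then let the suppression fade cycle by cycle.
public final class InhibitionOfReturn {
    public static void inhibit(double[][] ior, int winR, int winC, int radius) {
        for (int r = Math.max(0, winR - radius); r < Math.min(ior.length, winR + radius + 1); r++) {
            for (int c = Math.max(0, winC - radius); c < Math.min(ior[r].length, winC + radius + 1); c++) {
                ior[r][c] = 1.0; // fully suppress the attended neighborhood
            }
        }
    }

    public static void decay(double[][] ior, double rate) {
        for (double[] row : ior)
            for (int c = 0; c < row.length; c++)
                row[c] *= (1.0 - rate); // suppression fades over time
    }
}
```

Subtracting this mask (or multiplying it in) when building the next Salience Map is what pushes attention away from regions that were just visited.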
---
## Dynamics

- **No goal** → salience is driven by strong **bottom-up features**.
- **With a goal** → the CFM and salience are **biased toward goal-consistent regions**.
- **IoR** prevents repetition → smoother exploration.
---
## Logs Should Show 📋

- A stable salience distribution.
- Clear winner transitions.
- Actions aligned with the current winner.
---
## How to run?

### 1. Access the tutorial branch

- Go to `attention_trail` in `File System`, open a terminal, and switch to the tutorial branch:

```
git checkout tutorial
```

### 2. Copy the CST files from session 1

- Copy the `lib` folder from `1_CSTCore/1_MIMoCoreModel`.
- Paste the folder into `attention_trail` in `File System`.

### 3. Open CoppeliaSim

- In VNC, open the `sharevnc` folder, then the `Coppelia` folder.
- Enter the folder for version 4.9: `CoppeliaSim_Pro_V4_9_0_rev6_Ubuntu22_04`.
- Open a terminal and run the `CoppeliaSim.sh` script.
- Open the scene in `attention_trail/scenes`.

### 4. Build and run the Java application

- Build: `javac -cp 'lib/*' -d build $(find src/main/java -name "*.java")`
- Run: `java -cp "build:lib/*" cst_attmod_app.CST_AttMod_App`

### 5. Analyze the results

- CST should start the simulation in CoppeliaSim and begin collecting data.
- Perceptual maps should be built.
- Attention maps should be computed with the Colombini model.

methods.md

Lines changed: 26 additions & 0 deletions
@@ -169,3 +169,29 @@ Sensor codelets should collect data from MIMo and write it into memory.

Processing codelets should process memory objects.

Actuator codelets should send an action based on the processed memory.
## Codelet Customization Challenges

### Sensorial Codelet

- Add confidence and timestamp to sensor readings: give ValueHolder more semantics by bundling reading, confidence, and timestamp in a JSONObject.
- Simulate noise and occasional dropouts: expose the parameters (noise level, dropout probability) via the constructor.
- In SimpleMind, instantiate sensors with distinct parameters (even without changing the topology). A sketch of such a sensor follows.
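A hedged sketch of these challenges combined, assuming `org.json` is on the classpath. The class name `NoisySensorReading` and the confidence formula are illustrative, not the tutorial's code.

```java
// Hedged sketch: a reading enriched with confidence and timestamp in a
// JSONObject, plus constructor-configurable noise and dropout.
import org.json.JSONObject;
import java.util.Random;

public class NoisySensorReading {
    private final double noiseStd;
    private final double dropoutProb;
    private final Random rng = new Random();

    public NoisySensorReading(double noiseStd, double dropoutProb) {
        this.noiseStd = noiseStd;
        this.dropoutProb = dropoutProb;
    }

    /** Wraps a raw value as {reading, confidence, timestamp}, or null on dropout. */
    public JSONObject read(double rawValue) {
        if (rng.nextDouble() < dropoutProb) return null; // simulated dropout
        double noisy = rawValue + rng.nextGaussian() * noiseStd;
        JSONObject obj = new JSONObject();
        obj.put("reading", noisy);
        obj.put("confidence", 1.0 / (1.0 + noiseStd)); // crude, illustrative measure
        obj.put("timestamp", System.currentTimeMillis());
        return obj;
    }
}
```

Instantiating two of these with different `(noiseStd, dropoutProb)` values gives distinct sensors without changing the topology.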
### Processing Codelet

- Apply a moving-average filter to smooth sensor values and normalize readings onto a 0-1 evidence scale. Convert the value to a filtered evidence field; the composite confidence is kept as well.
- Introduce adaptive thresholds with hysteresis (stable labels): produce a stable binary label (e.g., “HIGH/LOW”) from the evidence (see the sketch below).
- Generate explainable outputs (logs + JSON with multiple fields) and explore how the processed evidence changes actuator behavior.
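A hedged sketch of the first two challenges: a moving-average filter feeding a hysteresis threshold that yields a stable HIGH/LOW label. The window size and thresholds are illustrative parameters.

```java
// Hedged sketch: moving-average smoothing plus a hysteresis band that
// prevents the label from chattering near the threshold.
import java.util.ArrayDeque;
import java.util.Deque;

public class EvidenceProcessor {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;
    private final double highTh, lowTh; // highTh > lowTh defines the hysteresis band
    private String label = "LOW";

    public EvidenceProcessor(int windowSize, double highTh, double lowTh) {
        this.windowSize = windowSize;
        this.highTh = highTh;
        this.lowTh = lowTh;
    }

    /** Smooths a 0..1 reading, then updates the label only outside the band. */
    public String process(double normalizedReading) {
        window.addLast(normalizedReading);
        if (window.size() > windowSize) window.removeFirst();
        double evidence = window.stream()
                .mapToDouble(Double::doubleValue).average().orElse(0.0);
        if (evidence >= highTh) label = "HIGH";
        else if (evidence <= lowTh) label = "LOW";
        // Inside the band the previous label is kept, giving stable output.
        return label;
    }
}
```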
### Actuator Codelet

- Use a confidence-weighted decision rule (evidence × confidence): act only when the processed confidence supports the decision.
- Implement a refractory period to avoid chattering: do not repeat identical actions within short time windows (a sketch follows).
- Store structured action objects in memory (should_act, intensity, reason). Send dual outputs: internal memory + an external channel (e.g., socket/log). Compare stable vs. unstable actuator behavior under noisy inputs.
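A hedged sketch of the first two actuator challenges: a confidence-weighted rule gated by a refractory period. The 0.5 decision threshold and the refractory time are illustrative parameters.

```java
// Hedged sketch: act only when evidence × confidence clears a threshold,
// and never repeat the same action within the refractory window.
public class RefractoryActuator {
    private final long refractoryMs;
    private long lastActionTime = Long.MIN_VALUE;
    private String lastAction = null;

    public RefractoryActuator(long refractoryMs) {
        this.refractoryMs = refractoryMs;
    }

    /** Returns true if the action was issued, false if suppressed. */
    public boolean act(String action, double evidence, double confidence) {
        boolean shouldAct = evidence * confidence > 0.5; // confidence-weighted rule
        long now = System.currentTimeMillis();
        boolean inRefractory = action.equals(lastAction)
                && (now - lastActionTime) < refractoryMs;
        if (!shouldAct || inRefractory) return false;
        lastAction = action;
        lastActionTime = now;
        System.out.printf("action=%s evidence=%.2f confidence=%.2f%n",
                action, evidence, confidence);
        return true;
    }
}
```

Running it on noisy inputs with and without the refractory gate is a quick way to compare stable vs. unstable actuator behavior.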
# Any questions or problems, please contact us:

projects.md

Lines changed: 1 addition & 1 deletion
@@ -213,4 +213,4 @@ java -cp "lib/*:build" ExperimentMain || { log "ExperimentMain failed to run"; e
chmod +x script1.sh
```
# Any questions or problems, please contact us:
