From 237e627c1505b164324863166f8f7ab6b4073021 Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 8 Mar 2026 11:17:01 +0800 Subject: [PATCH 01/17] docs(spec): add spec for tier1-numeric-ops Spec artifacts: - research.md: feasibility analysis and codebase exploration - requirements.md: user stories and acceptance criteria - design.md: architecture and technical decisions - tasks.md: POC-first implementation plan (18 tasks, 4 phases) Ready for implementation. Co-Authored-By: Claude Opus 4.6 --- specs/tier1-numeric-ops/.progress.md | 25 ++ specs/tier1-numeric-ops/design.md | 433 ++++++++++++++++++++++++ specs/tier1-numeric-ops/requirements.md | 116 +++++++ specs/tier1-numeric-ops/research.md | 138 ++++++++ specs/tier1-numeric-ops/tasks.md | 170 ++++++++++ 5 files changed, 882 insertions(+) create mode 100644 specs/tier1-numeric-ops/.progress.md create mode 100644 specs/tier1-numeric-ops/design.md create mode 100644 specs/tier1-numeric-ops/requirements.md create mode 100644 specs/tier1-numeric-ops/research.md create mode 100644 specs/tier1-numeric-ops/tasks.md diff --git a/specs/tier1-numeric-ops/.progress.md b/specs/tier1-numeric-ops/.progress.md new file mode 100644 index 0000000..f84034f --- /dev/null +++ b/specs/tier1-numeric-ops/.progress.md @@ -0,0 +1,25 @@ +# tier1-numeric-ops + +## Original Goal + +Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. Add POW opcode (ast.Pow → POW, MSL pow()), math module functions as new opcodes (SQRT/SIN/COS/EXP/LOG/ABS/MIN/MAX) wired through full pipeline (opcodes→VM→bytecode→Metal kernel→transpiler→disasm→compiler), math.pi and math.e as constants, random.uniform(a,b) and random.gauss(mu,sigma) in transpiler, and chained comparisons (a < b < c). All new opcodes need emoji mappings, OPS_WITH_ARG updates, VM execution, bytecode encoding, Metal kernel dispatch, C compiler output, disassembler support, and transpiler visitor updates. 
Use the KB (scripts/kb search) for reference on related design decisions. Full end-to-end implementation, no stopping. + +## Progress + +- [x] Spec generation +- [ ] Implementation + +## Learnings + +- Opcode pipeline is 7 files deep: opcodes.py -> parser (auto) -> vm.py -> bytecode.py -> vm.metal -> gpu.py -> compiler.py. Disasm is auto via reverse map. +- EMOJI_TO_OP dict ordering matters for prefix matching -- multi-codepoint emoji (with variation selectors, like ⬇️ vs ⬇) must be listed BEFORE their bare versions per KB #13, #21. +- Currently 37 opcodes with 8 taking arguments. All 9 new math ops are stack-only (no argument needed). +- Bytecode range 0x10-0x14 is arithmetic. New math ops extend this to 0x15-0x1D, keeping arithmetic ops contiguous. +- MSL has all needed math functions natively: pow, sqrt, sin, cos, exp, log, abs, min, max. C has them via math.h. +- The GPU uses float32 while CPU uses float64 -- precision differences are expected and acceptable. +- random.uniform and random.gauss don't need new opcodes -- they're inline-expanded from existing ops + new math ops in the transpiler. +- The CMP_GT emoji (📐) is already taken -- chose a different emoji (💪) for ABS to avoid the conflict. +- Transpiler already allows import math and import random but doesn't handle math.* calls yet -- only random.random() is handled. +- C compiler needs math.h added to both numeric and mixed preambles, and fabs() (not abs()) for float absolute value in C. +- vm.py already imports random but not math -- need to add import math for SQRT/SIN/COS/EXP/LOG. +- Chained comparisons require careful stack manipulation: DUP+ROT to save intermediate values, AND to combine results, SWAP to position saved values for next comparison.
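The prefix-ordering learning above can be illustrated with a minimal greedy matcher. This is a hypothetical, simplified sketch (the real tokenizer lives in parser.py and the real map in opcodes.py; `DEMO_MAP` and `match_op` are illustrative names only):

```python
# Greedy prefix matching over an emoji-to-opcode map (simplified sketch).
# "\u2b07\ufe0f" is the two-codepoint down arrow: base char + variation selector.
DEMO_MAP = {
    "\u2b07\ufe0f": "MIN",  # VS form first: the longer prefix must win
    "\u2b07": "MIN",        # bare form second
    "\u2b06\ufe0f": "MAX",
    "\u2b06": "MAX",
}

def match_op(text):
    # The first map key that prefixes `text` wins, so listing the bare form
    # first would consume one codepoint and strand the trailing U+FE0F.
    for emoji, op in DEMO_MAP.items():
        if text.startswith(emoji):
            return op, len(emoji)
    raise ValueError("no opcode at start of: " + repr(text[:2]))

assert match_op("\u2b07\ufe0f 3") == ("MIN", 2)  # consumes both codepoints
assert match_op("\u2b07 3") == ("MIN", 1)        # bare form still matches
```

Reversing the dict order makes the first assertion fail with a consumed length of 1, which is exactly the dual-mapping bug KB #13/#21 warn about.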
diff --git a/specs/tier1-numeric-ops/design.md b/specs/tier1-numeric-ops/design.md new file mode 100644 index 0000000..b14201a --- /dev/null +++ b/specs/tier1-numeric-ops/design.md @@ -0,0 +1,433 @@ +--- +spec: tier1-numeric-ops +phase: design +created: 2026-03-08 +generated: auto +--- + +# Design: tier1-numeric-ops + +## Overview + +Additive extension of the EmojiASM opcode set with 9 math opcodes, following the established per-opcode pipeline pattern. Each new opcode mirrors the existing RANDOM opcode's integration pattern across all 7 pipeline layers. Transpiler additions use the existing `visit_Call` / `visit_BinOp` visitor pattern. + +## Architecture + +```mermaid +graph LR + A[opcodes.py
<br/>Op enum + emoji map] --> B[parser.py<br/>auto via EMOJI_TO_OP] + B --> C[vm.py<br/>match/case dispatch] + B --> D[bytecode.py<br/>OP_MAP + stack effects] + D --> E[vm.metal<br/>switch/case + MSL math] + D --> F[gpu.py<br/>GPU_OPCODES mirror] + B --> G[compiler.py<br/>C code emission] + B --> H[disasm.py<br/>auto via OP_TO_EMOJI] + I[transpiler.py<br/>
ast visitors] --> B +``` + +## Components + +### Component 1: Opcode Definitions (opcodes.py) + +**Purpose**: Single source of truth for Op enum values and emoji mappings. + +**Changes**: + +```python +# Add after RANDOM = auto() in Op IntEnum: +POW = auto() +SQRT = auto() +SIN = auto() +COS = auto() +EXP = auto() +LOG = auto() +ABS = auto() +MIN = auto() +MAX = auto() + +# Add to EMOJI_TO_OP dict: +"🔋": Op.POW, +"🌱": Op.SQRT, +"📈": Op.SIN, +"📉": Op.COS, +"🚀": Op.EXP, +"📓": Op.LOG, +"💪": Op.ABS, +"⬇️": Op.MIN, +"⬇": Op.MIN, # variation selector variant +"⬆️": Op.MAX, +"⬆": Op.MAX, # variation selector variant +``` + +No OPS_WITH_ARG changes needed — all new ops are stack-only (no argument). + +### Component 2: VM Execution (vm.py) + +**Purpose**: Execute new opcodes in the Python VM. + +**Stack effects**: + +| Opcode | Pops | Pushes | Operation | +|--------|------|--------|-----------| +| POW | 2 (a, b) | 1 | `a ** b` | +| SQRT | 1 (a) | 1 | `math.sqrt(a)` | +| SIN | 1 (a) | 1 | `math.sin(a)` | +| COS | 1 (a) | 1 | `math.cos(a)` | +| EXP | 1 (a) | 1 | `math.exp(a)` | +| LOG | 1 (a) | 1 | `math.log(a)` | +| ABS | 1 (a) | 1 | `abs(a)` | +| MIN | 2 (a, b) | 1 | `min(a, b)` | +| MAX | 2 (a, b) | 1 | `max(a, b)` | + +**Implementation pattern** (follows existing binary ops like SUB): + +Binary ops (POW, MIN, MAX): +```python +case Op.POW: + b, a = self._pop(), self._pop() + self._push(a ** b) +``` + +Unary ops (SQRT, SIN, COS, EXP, LOG, ABS): +```python +case Op.SQRT: + a = self._pop() + self._push(math.sqrt(a)) +``` + +**Error handling**: `math.sqrt` of a negative raises `ValueError` in Python — catch it and re-raise as VMError. `math.log(0)` raises `ValueError` — same treatment. + +### Component 3: Bytecode Encoding (bytecode.py) + +**Purpose**: Map new Op enum values to GPU bytecode numbers.
+ +**Bytecode allocation** (extends arithmetic range 0x10-0x1D): + +```python +# Add to OP_MAP: +Op.POW: 0x15, +Op.SQRT: 0x16, +Op.SIN: 0x17, +Op.COS: 0x18, +Op.EXP: 0x19, +Op.LOG: 0x1A, +Op.ABS: 0x1B, +Op.MIN: 0x1C, +Op.MAX: 0x1D, +``` + +**Stack effects** (add to `_STACK_EFFECTS`): + +```python +Op.POW: -1, # pops 2, pushes 1 +Op.SQRT: 0, # pops 1, pushes 1 +Op.SIN: 0, +Op.COS: 0, +Op.EXP: 0, +Op.LOG: 0, +Op.ABS: 0, +Op.MIN: -1, # pops 2, pushes 1 +Op.MAX: -1, # pops 2, pushes 1 +``` + +### Component 4: Metal Kernel (metal/vm.metal) + +**Purpose**: GPU execution of new opcodes using MSL native math functions. + +**Constant declarations**: + +```metal +// Math functions (extends arithmetic range) +constant uint8_t OP_POW = 0x15; +constant uint8_t OP_SQRT = 0x16; +constant uint8_t OP_SIN = 0x17; +constant uint8_t OP_COS = 0x18; +constant uint8_t OP_EXP = 0x19; +constant uint8_t OP_LOG = 0x1A; +constant uint8_t OP_ABS = 0x1B; +constant uint8_t OP_MIN = 0x1C; +constant uint8_t OP_MAX = 0x1D; +``` + +**Switch cases** (follow existing binary/unary patterns): + +Binary math ops (POW, MIN, MAX) — follow OP_MUL pattern: +```metal +case OP_POW: { + if (sp < 2) { status[tid] = STATUS_ERROR; running = false; break; } + sp--; + stack[sp - 1] = pow(stack[sp - 1], stack[sp]); + break; +} +``` + +Unary math ops (SQRT, SIN, COS, EXP, LOG, ABS) — follow OP_NOT pattern (single operand): +```metal +case OP_SQRT: { + if (sp < 1) { status[tid] = STATUS_ERROR; running = false; break; } + stack[sp - 1] = sqrt(stack[sp - 1]); + break; +} +``` + +### Component 5: GPU Glue (gpu.py) + +**Purpose**: Mirror bytecode OP_MAP in GPU_OPCODES dict for validation. + +```python +# Add to GPU_OPCODES: +"POW": 0x15, +"SQRT": 0x16, +"SIN": 0x17, +"COS": 0x18, +"EXP": 0x19, +"LOG": 0x1A, +"ABS": 0x1B, +"MIN": 0x1C, +"MAX": 0x1D, +``` + +### Component 6: C Compiler (compiler.py) + +**Purpose**: Emit C code for new opcodes. 
+ +**Preamble change**: Add `#include <math.h>` to both `_PREAMBLE_NUMERIC` and `_PREAMBLE_MIXED`. + +**Emission patterns** (in `_emit_inst`): + +Binary ops (numeric-only path): +```python +elif op == Op.POW: + if numeric_only: + A(' { double b=POP(),a=POP(); PUSH_N(pow(a,b)); }') + else: + A(' { Val b=POP(),a=POP(); PUSH_N(pow(a.num,b.num)); }') +``` + +Unary ops (numeric-only path): +```python +elif op == Op.SQRT: + if numeric_only: + A(' { double a=POP(); PUSH_N(sqrt(a)); }') + else: + A(' { Val a=POP(); PUSH_N(sqrt(a.num)); }') +``` + +### Component 7: Transpiler (transpiler.py) + +**Purpose**: Compile Python math expressions to EmojiASM opcodes. + +#### 7a: Power operator (`**`) + +In `visit_BinOp`, replace the `ast.Pow` error with: +```python +if isinstance(node.op, ast.Pow): + self.visit(node.left) + self.visit(node.right) + self._emit(Op.POW, node=node) + return +``` + +Also add to `_BINOP_MAP`: +```python +ast.Pow: Op.POW, +``` + +And `_AUGOP_MAP`: +```python +ast.Pow: Op.POW, +``` + +#### 7b: Math module functions + +In `visit_Call`, add handling for `math.func(x)` attribute calls: +```python +# math.sqrt(x), math.sin(x), etc. +if (isinstance(node.func, ast.Attribute) + and isinstance(node.func.value, ast.Name) + and node.func.value.id == "math"): + math_ops = { + "sqrt": (Op.SQRT, 1), + "sin": (Op.SIN, 1), + "cos": (Op.COS, 1), + "exp": (Op.EXP, 1), + "log": (Op.LOG, 1), + } + if node.func.attr in math_ops: + op, nargs = math_ops[node.func.attr] + if len(node.args) != nargs: + raise TranspileError(...)
+ self.visit(node.args[0]) + self._emit(op, node=node) + return +``` + +#### 7c: Builtins (abs, min, max) + +In `visit_Call`, add handling: +```python +# abs(x) +if isinstance(node.func, ast.Name) and node.func.id == "abs": + self.visit(node.args[0]) + self._emit(Op.ABS, node=node) + return + +# min(a, b), max(a, b) +if isinstance(node.func, ast.Name) and node.func.id in ("min", "max"): + if len(node.args) != 2: + raise TranspileError("min()/max() requires exactly 2 arguments") + self.visit(node.args[0]) + self.visit(node.args[1]) + self._emit(Op.MIN if node.func.id == "min" else Op.MAX, node=node) + return +``` + +#### 7d: Math constants + +In `visit_Attribute`, add handling for `math.pi` and `math.e`: +```python +def visit_Attribute(self, node: ast.Attribute): + if (isinstance(node.value, ast.Name) and node.value.id == "math"): + if node.attr == "pi": + self._emit(Op.PUSH, 3.141592653589793, node=node) + return + if node.attr == "e": + self._emit(Op.PUSH, 2.718281828459045, node=node) + return + # existing pass-through +``` + +#### 7e: random.uniform(a, b) and random.gauss(mu, sigma) + +**uniform(a, b)** = `a + (b - a) * random()`: +```python +# Inline expansion: +self.visit(node.args[0]) # a (kept for final ADD) +self.visit(node.args[1]) # b +self.visit(node.args[0]) # a again +self._emit(Op.SUB) # b - a +self._emit(Op.RANDOM) # random() +self._emit(Op.MUL) # (b - a) * random() +self._emit(Op.ADD) # a + (b - a) * random() +``` + +**gauss(mu, sigma)** = Box-Muller transform: +`mu + sigma * sqrt(-2 * log(u1)) * cos(2 * pi * u2)` +```python +# Inline expansion using new opcodes: +self._emit(Op.RANDOM) # u1 +self._emit(Op.LOG) # log(u1) +self._emit(Op.PUSH, -2.0) +self._emit(Op.MUL) # -2 * log(u1) +self._emit(Op.SQRT) # sqrt(-2 * log(u1)) +self._emit(Op.RANDOM) # u2 +self._emit(Op.PUSH, 6.283185307179586) # 2*pi +self._emit(Op.MUL) # 2*pi*u2 +self._emit(Op.COS) # cos(2*pi*u2) +self._emit(Op.MUL) # sqrt(...) * cos(...) 
+self.visit(node.args[1]) # sigma +self._emit(Op.MUL) # sigma * standard_normal +self.visit(node.args[0]) # mu +self._emit(Op.ADD) # mu + sigma * ... +``` + +#### 7f: Chained comparisons + +In `visit_Compare`, replace the error with generalized chained comparison support: + +For `a op1 b op2 c op3 d`: +1. Visit a +2. For each (op_i, comparator_i): + a. Visit comparator_i + b. If not last: DUP, ROT (save value for next comparison) + c. Emit comparison op for op_i + d. If not first: emit AND to combine with previous result + e. If not last: SWAP (bring saved value back to top for next pair) + +```python +def visit_Compare(self, node: ast.Compare): + self.visit(node.left) + + if len(node.ops) == 1: + # Simple comparison (unchanged) + self.visit(node.comparators[0]) + self._emit_cmp_op(node.ops[0], node) + return + + # Chained: a op1 b op2 c ... + for i, (cmp_op, comparator) in enumerate(zip(node.ops, node.comparators)): + self.visit(comparator) + is_last = (i == len(node.ops) - 1) + + if not is_last: + self._emit(Op.DUP, node=node) # save value for next comparison + self._emit(Op.ROT, node=node) # bring previous value to top + + self._emit_cmp_op(cmp_op, node) + + if i > 0: + self._emit(Op.AND, node=node) # combine with previous result + + if not is_last: + self._emit(Op.SWAP, node=node) # bring saved value back to top +``` + +Extract comparison emission to helper `_emit_cmp_op()` for reuse. + +## Data Flow + +1. Python source -> `ast.parse()` -> AST nodes +2. `visit_BinOp(Pow)` -> `Op.POW` instruction +3. `visit_Call(math.sqrt)` -> `Op.SQRT` instruction +4. `visit_Attribute(math.pi)` -> `Op.PUSH 3.14159...` instruction +5. `visit_Compare(chained)` -> multiple CMP + AND instructions +6. 
Instructions -> VM match/case dispatch OR bytecode -> Metal kernel OR C compiler + +## Technical Decisions + +| Decision | Options | Choice | Rationale | +|----------|---------|--------|-----------| +| Bytecode range | New range 0x70+ vs extend 0x1x | Extend 0x15-0x1D | Math ops are arithmetic — keep with arithmetic range | +| uniform/gauss | New opcodes vs inline expansion | Inline expansion | No new opcodes needed; uses existing + new math ops | +| ABS emoji | 📐 (conflict) vs 💪 | 💪 | 📐 already used for CMP_GT | +| MIN/MAX emoji | Various | ⬇️/⬆️ | Intuitive direction arrows, with variation selector variants | +| Chained cmp | Desugar in AST vs emit inline | Emit inline | Follows existing transpiler pattern; no AST rewriting | +| math.h include | Always vs conditional | Always | Trivial cost, simplifies logic | + +## File Structure + +| File | Action | Purpose | +|------|--------|---------| +| `emojiasm/opcodes.py` | Modify | Add 9 Op enum values + emoji mappings | +| `emojiasm/vm.py` | Modify | Add 9 match/case arms + `import math` | +| `emojiasm/bytecode.py` | Modify | Add 9 OP_MAP entries + _STACK_EFFECTS | +| `emojiasm/metal/vm.metal` | Modify | Add 9 opcode constants + switch cases | +| `emojiasm/gpu.py` | Modify | Add 9 GPU_OPCODES entries | +| `emojiasm/compiler.py` | Modify | Add 9 emit cases + `#include <math.h>` | +| `emojiasm/transpiler.py` | Modify | POW binop, math calls, constants, uniform/gauss, chained cmp | +| `emojiasm/disasm.py` | No change | Auto via OP_TO_EMOJI reverse map | +| `docs/REFERENCE.md` | Modify | Document new opcodes | +| `tests/test_emojiasm.py` | Modify | Tests for new EmojiASM opcodes | +| `tests/test_transpiler.py` | Modify | Tests for transpiler features | +| `tests/test_bytecode.py` | Modify | Tests for bytecode encoding | + +## Error Handling + +| Error | Handling | User Impact | +|-------|----------|-------------| +| `sqrt(negative)` | VM raises VMError | "SQRT of negative number" | +| `log(0)` or `log(negative)` | VM raises
VMError | "LOG domain error" | +| `min()`/`max()` wrong arg count | TranspileError at compile time | "min()/max() requires exactly 2 arguments" | +| `math.unknown_func()` | TranspileError | "Unsupported math function: unknown_func" | +| GPU NaN/Inf from bad math | MSL returns NaN/Inf naturally | Result contains NaN/Inf (no crash) | + +## Existing Patterns to Follow + +- **Op enum**: New entries go after `RANDOM = auto()` in `opcodes.py:43` +- **EMOJI_TO_OP**: Add after `"🎲": Op.RANDOM` at line 87 +- **VM dispatch**: New match arms after `case Op.RANDOM:` at line 286, using same `self._pop()`/`self._push()` pattern +- **Bytecode OP_MAP**: Add after `Op.RANDOM: 0x60` at line 67-68 +- **Metal kernel**: Add opcode constants after `OP_RANDOM = 0x60` at line 62, switch cases after RANDOM case at line 621 +- **GPU_OPCODES**: Add after `"RANDOM": 0x60` at line 71 +- **C compiler**: Add `elif` arms after `Op.RANDOM` case at line 309, following same numeric_only/mixed branching pattern +- **Test pattern**: `run()` helper in test_emojiasm.py, `run_py()` helper in test_transpiler.py diff --git a/specs/tier1-numeric-ops/requirements.md b/specs/tier1-numeric-ops/requirements.md new file mode 100644 index 0000000..8233444 --- /dev/null +++ b/specs/tier1-numeric-ops/requirements.md @@ -0,0 +1,116 @@ +--- +spec: tier1-numeric-ops +phase: requirements +created: 2026-03-08 +generated: auto +--- + +# Requirements: tier1-numeric-ops + +## Summary + +Add 9 new numeric opcodes (POW, SQRT, SIN, COS, EXP, LOG, ABS, MIN, MAX), transpiler support for `math.*` functions/constants, `random.uniform`/`random.gauss`, and chained comparisons. All opcodes wired through full pipeline: opcodes, parser, VM, bytecode, Metal kernel, GPU glue, C compiler, disassembler. + +## User Stories + +### US-1: Power operator +As a transpiler user, I want to write `x ** 2` in Python and have it compile to a POW opcode so that exponentiation works natively. 
+ +**Acceptance Criteria**: +- AC-1.1: `print(2 ** 10)` transpiles and outputs `1024` +- AC-1.2: `print(4 ** 0.5)` transpiles and outputs `2.0` +- AC-1.3: POW opcode works in direct EmojiASM (`📥 2 📥 10 🔋 🖨️`) and outputs `1024` +- AC-1.4: POW compiles through bytecode, Metal kernel, and C compiler + +### US-2: Math module functions +As a transpiler user, I want to call `math.sqrt(x)`, `math.sin(x)`, `math.cos(x)`, `math.exp(x)`, `math.log(x)`, `abs(x)`, `min(a,b)`, `max(a,b)` and have them compile to dedicated opcodes. + +**Acceptance Criteria**: +- AC-2.1: `math.sqrt(16)` outputs `4.0` +- AC-2.2: `math.sin(0)` outputs `0.0` +- AC-2.3: `math.cos(0)` outputs `1.0` +- AC-2.4: `math.exp(0)` outputs `1.0` +- AC-2.5: `math.log(1)` outputs `0.0` +- AC-2.6: `abs(-5)` outputs `5` +- AC-2.7: `min(3, 7)` outputs `3` +- AC-2.8: `max(3, 7)` outputs `7` +- AC-2.9: All 8 opcodes work in direct EmojiASM with their assigned emoji +- AC-2.10: All 8 opcodes compile through bytecode, Metal kernel, and C compiler + +### US-3: Math constants +As a transpiler user, I want `math.pi` and `math.e` to resolve to their numeric values. + +**Acceptance Criteria**: +- AC-3.1: `print(math.pi)` outputs `3.141592653589793` +- AC-3.2: `print(math.e)` outputs `2.718281828459045` +- AC-3.3: Constants usable in expressions: `print(math.pi * 2)` outputs correct value + +### US-4: Random distribution functions +As a transpiler user, I want `random.uniform(a, b)` and `random.gauss(mu, sigma)` to compile correctly. + +**Acceptance Criteria**: +- AC-4.1: `random.uniform(1, 10)` returns a value in [1, 10) +- AC-4.2: `random.gauss(0, 1)` returns a float (standard normal sample) +- AC-4.3: Both work on CPU (VM) and GPU (Metal kernel) +- AC-4.4: Implemented via inline expansion using existing opcodes (RANDOM, PUSH, MUL, ADD, SQRT, LOG, COS, SIN) + +### US-5: Chained comparisons +As a transpiler user, I want `a < b < c` to compile correctly instead of raising an error. 
**Acceptance Criteria**: +- AC-5.1: `print(1 < 2 < 3)` outputs `1` +- AC-5.2: `print(1 < 3 < 2)` outputs `0` +- AC-5.3: `print(1 < 2 < 3 < 4)` outputs `1` (3+ comparisons) +- AC-5.4: Mixed comparison ops work: `print(1 <= 2 < 3)` outputs `1` +- AC-5.5: Works in if conditions: `if 0 < x < 10:` compiles correctly + +## Functional Requirements + +| ID | Requirement | Priority | Source | +|----|-------------|----------|--------| +| FR-1 | Add POW opcode (emoji, enum, VM, bytecode, Metal, C compiler) | Must | US-1 | +| FR-2 | Add SQRT opcode through full pipeline | Must | US-2 | +| FR-3 | Add SIN opcode through full pipeline | Must | US-2 | +| FR-4 | Add COS opcode through full pipeline | Must | US-2 | +| FR-5 | Add EXP opcode through full pipeline | Must | US-2 | +| FR-6 | Add LOG opcode through full pipeline | Must | US-2 | +| FR-7 | Add ABS opcode through full pipeline | Must | US-2 | +| FR-8 | Add MIN opcode through full pipeline | Must | US-2 | +| FR-9 | Add MAX opcode through full pipeline | Must | US-2 | +| FR-10 | Transpiler: `ast.Pow` -> POW opcode | Must | US-1 | +| FR-11 | Transpiler: `math.sqrt/sin/cos/exp/log` -> opcodes | Must | US-2 | +| FR-12 | Transpiler: `abs()` builtin -> ABS opcode | Must | US-2 | +| FR-13 | Transpiler: `min(a,b)` and `max(a,b)` builtins -> opcodes | Must | US-2 | +| FR-14 | Transpiler: `math.pi` -> PUSH 3.141592653589793 | Must | US-3 | +| FR-15 | Transpiler: `math.e` -> PUSH 2.718281828459045 | Must | US-3 | +| FR-16 | Transpiler: `random.uniform(a,b)` inline expansion | Should | US-4 | +| FR-17 | Transpiler: `random.gauss(mu,sigma)` inline expansion | Should | US-4 | +| FR-18 | Transpiler: chained comparisons support | Must | US-5 | +| FR-19 | C compiler preamble adds `#include <math.h>` | Must | FR-1..9 | +| FR-20 | Update `_uses_strings()` to NOT flag new math ops as string-using | Must | FR-1..9 | +| FR-21 | Update `_STACK_EFFECTS` in bytecode.py for new opcodes | Must | FR-1..9 | +| FR-22 | Update `docs/REFERENCE.md` with new
opcodes | Should | US-1,2 | + +## Non-Functional Requirements + +| ID | Requirement | Category | +|----|-------------|----------| +| NFR-1 | All existing 448+ tests continue to pass | Regression | +| NFR-2 | GPU opcode validation (`validate_opcodes()`) passes with new ops | Consistency | +| NFR-3 | MSL math functions use float precision (consistent with GPU numeric path) | Precision | +| NFR-4 | C compiler math functions use double precision (consistent with existing numeric path) | Precision | + +## Out of Scope + +- New opcodes for `random.randint`, `random.choice`, `random.shuffle` +- Bitwise operators (AND, OR, XOR, SHIFT) +- Complex number support +- `math.floor`, `math.ceil`, `math.round` (can be added later) +- String math (e.g., repeating strings with `*`) +- `from math import sqrt` style direct function import + +## Dependencies + +- Existing opcode pipeline (opcodes.py, vm.py, bytecode.py, vm.metal, gpu.py, compiler.py, disasm.py) +- C compiler requires `<math.h>` and `-lm` flag on Linux (macOS links math automatically) +- MSL `<metal_stdlib>` already included in vm.metal diff --git a/specs/tier1-numeric-ops/research.md b/specs/tier1-numeric-ops/research.md new file mode 100644 index 0000000..eabf6c4 --- /dev/null +++ b/specs/tier1-numeric-ops/research.md @@ -0,0 +1,138 @@ +--- +spec: tier1-numeric-ops +phase: research +created: 2026-03-08 +generated: auto +--- + +# Research: tier1-numeric-ops + +## Executive Summary + +Adding 9 new opcodes (POW, SQRT, SIN, COS, EXP, LOG, ABS, MIN, MAX) plus transpiler support for math constants, random.uniform/gauss, and chained comparisons. All math ops have direct MSL hardware equivalents (`pow()`, `sqrt()`, `sin()`, etc.) making GPU cost near-zero. The existing opcode pipeline pattern (opcodes.py -> parser -> vm.py -> bytecode.py -> vm.metal -> gpu.py -> compiler.py -> disasm.py) is well-established with 37 existing opcodes providing clear templates.
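Of the planned transpiler features, chained comparisons involve the trickiest stack choreography. A toy stack machine can sanity-check the DUP/ROT/compare/SWAP/AND sequence for `a < b < c` before any transpiler code is written. This is a sketch with assumed semantics, not the real VM: ROT here tucks the top element beneath the next two, and binary ops pop `b` then `a` and push `a OP b`:

```python
# Toy stack machine to sanity-check the chained-comparison lowering.
# Assumptions (not the real VM): ROT moves the top element beneath the
# next two (x1 x2 x3 -- x3 x1 x2); binary ops pop b then a, push a OP b.
def run(ops):
    s = []
    for op, *arg in ops:
        if op == "PUSH": s.append(arg[0])
        elif op == "DUP": s.append(s[-1])
        elif op == "SWAP": s[-1], s[-2] = s[-2], s[-1]
        elif op == "ROT": s[-3:] = [s[-1], s[-3], s[-2]]
        elif op == "LT": b, a = s.pop(), s.pop(); s.append(int(a < b))
        elif op == "AND": b, a = s.pop(), s.pop(); s.append(int(bool(a) and bool(b)))
    return s

def chain_lt(a, b, c):
    # a < b < c  =>  push a, push b, DUP, ROT, LT, SWAP, push c, LT, AND
    return run([("PUSH", a), ("PUSH", b), ("DUP",), ("ROT",),
                ("LT",), ("SWAP",), ("PUSH", c), ("LT",), ("AND",)])[0]

assert chain_lt(1, 2, 3) == 1
assert chain_lt(1, 3, 2) == 0
assert chain_lt(3, 1, 2) == 0
```

If the VM's actual ROT rotates the other direction, the same harness still applies with the sequence adjusted; the point is that the expansion is mechanically verifiable in isolation.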
+ +## Codebase Analysis + +### Existing Pipeline Pattern (Per-Opcode) + +Each opcode touches 7 files in a consistent pattern: + +| Layer | File | What to add | Pattern | +|-------|------|-------------|---------| +| 1. Enum | `opcodes.py` | `Op.POW = auto()` in IntEnum | After `Op.RANDOM` (line 43) | +| 2. Emoji map | `opcodes.py` | `"🔋": Op.POW` in EMOJI_TO_OP | After `🎲` (line 87) | +| 3. VM dispatch | `vm.py` | `case Op.POW:` match arm | After `Op.RANDOM` (line 286) | +| 4. Bytecode | `bytecode.py` | `Op.POW: 0x15` in OP_MAP, entry in _STACK_EFFECTS | After `Op.RANDOM: 0x60` | +| 5. Metal kernel | `metal/vm.metal` | `case OP_POW:` switch arm + constant | After `OP_RANDOM` section | +| 6. GPU glue | `gpu.py` | `"POW": 0x15` in GPU_OPCODES | After `"RANDOM"` | +| 7. C compiler | `compiler.py` | `elif op == Op.POW:` in _emit_inst | After `Op.RANDOM` (line 309) | +| 8. Disasm | `disasm.py` | Automatic via OP_TO_EMOJI reverse map | No change needed | + +### Current Opcode Allocation + +Bytecode ranges currently used (from `bytecode.py` OP_MAP): +- `0x01-0x06`: Stack ops (PUSH, POP, DUP, SWAP, OVER, ROT) +- `0x10-0x14`: Arithmetic (ADD, SUB, MUL, DIV, MOD) +- `0x20-0x25`: Comparison/Logic (EQ, LT, GT, AND, OR, NOT) +- `0x30-0x36`: Control flow (JMP, JZ, JNZ, CALL, RET, HALT, NOP) +- `0x40-0x41`: Memory (STORE, LOAD) +- `0x50-0x51`: I/O (PRINT, PRINTLN) +- `0x60`: Random (RANDOM) + +**Proposed allocation for new ops:** +- `0x15`: POW (extends arithmetic range) +- `0x16-0x1D`: SQRT, SIN, COS, EXP, LOG, ABS, MIN, MAX (math functions in arithmetic range) + +### Current Emoji Usage (37 opcodes, some with variants) + +Used: 📥📤➕➖✖️✖➗🔢📢🖨️🖨💬📋🔀🫴🔄👉🤔😤🟰📏📐🤝🤙🚫💾📂📞📲🎤🔟🛑💤🧵✂️✂🔍🔁🔤🎲 + +**Proposed emoji for new opcodes:** + +| Opcode | Emoji | Rationale | +|--------|-------|-----------| +| POW | `🔋` | Power/battery = power | +| SQRT | `🌱` | Root/sprout for square root | +| SIN | `📈` | Sine wave → chart going up | +| COS | `📉` | Cosine wave → chart going down | +| EXP | `🚀` |
Exponential growth → rocket | +| LOG | `📓` | Log → logbook/notebook | +| ABS | `📐` CONFLICT → `💪` | Absolute value → strength/magnitude | +| MIN | `⬇️` | Minimum → down arrow | +| MAX | `⬆️` | Maximum → up arrow | + +Note: `📐` is already used for CMP_GT. Using `💪` for ABS instead. + +### Transpiler Current State + +- `visit_BinOp`: handles +, -, *, //, %, explicit error for `**` (ast.Pow) at line 463-467 +- `visit_Call`: handles `print()` and `random.random()` (lines 537-600) +- `visit_Compare`: explicit rejection of chained comparisons at line 497-501 +- `visit_Import`/`visit_ImportFrom`: allows `random` and `math` modules (lines 385-408) +- `_BINOP_MAP`: maps ast operators to Op enum values + +### KB Findings (Key) + +- **KB #1**: VM dispatches via match/case chain — new ops add new arms +- **KB #22**: Currently 8 opcodes take args (PUSH, JMP, JZ, JNZ, CALL, STORE, LOAD, PRINTS). None of the new math ops need args — all are stack-only +- **KB #23**: 31 opcodes across 6 categories — adding 9 more for math +- **KB #21**: Variation selectors on some emoji (✖️/✖, ✂️/✂) — new emoji should be checked for variants +- **KB #129**: MSL uses float (32-bit) vs C compiler double — math functions differ in precision +- **KB #16**: Numeric-only compiler path uses `double _stk[4096]` — new math ops fit numeric-only path +- **KB #87**: MSL has no goto — C compiler uses goto for labels, fine for math ops which are inline +- **KB #102**: Full stack-based VM can run as Metal compute kernel with switch dispatch + +### MSL Native Functions Available + +All target math functions have direct MSL equivalents (from `<metal_stdlib>`): +- `pow(float, float)` — power +- `sqrt(float)` — square root +- `sin(float)`, `cos(float)` — trig +- `exp(float)`, `log(float)` — exponential/natural log +- `abs(float)` — absolute value (also `fabs()`) +- `min(float, float)`, `max(float, float)` — min/max + +C standard library equivalents (for compiler.py): `pow()`, `sqrt()`, `sin()`, `cos()`, `exp()`, `log()`, `fabs()`,
`fmin()`, `fmax()` — require `#include <math.h>`. + +### RANDOM Implementation Reference + +Current `RANDOM` implementation for extending to uniform/gauss: +- VM (line 285-286): `self._push(random.random())` +- Metal kernel (lines 610-621): Uses Philox-4x32-10 PRNG via `philox_random(rng)` +- C compiler (line 309): `PUSH_N((double)rand() / (double)RAND_MAX);` +- Transpiler (lines 544-560): Handles `random.random()` as attribute call or bare import + +`random.uniform(a, b)` = `a + (b-a) * random()` — no new opcode needed, transpiler inlines +`random.gauss(mu, sigma)` = Box-Muller: `mu + sigma * sqrt(-2*ln(u1)) * cos(2*pi*u2)` — inline expansion needs the new SQRT, LOG, COS opcodes + +### Chained Comparisons + +Current: `visit_Compare` raises `TranspileError` for `len(node.ops) > 1`. +Strategy from issue: `a < b < c` compiles to: +1. visit a +2. visit b +3. DUP (save b for second comparison) +4. ROT (tuck the saved b under: stack is now [b_copy, a, b]) +5. CMP_LT (compare a < b: stack is [b_copy, result1]) +6. SWAP (bring b_copy to top: stack is [result1, b_copy]) +7. visit c (stack is [result1, b_copy, c]) +8. CMP_LT (compare b < c: stack is [result1, result2]) +9. AND (combine: stack is [final_result]) + +This generalizes to N comparisons by repeating the pattern. + +## Feasibility Assessment + +| Aspect | Assessment | Notes | +|--------|------------|-------| +| Technical Viability | High | All ops have direct MSL/C equivalents. Pipeline pattern well-established. | +| Effort Estimate | M | 9 new opcodes x 7 files + transpiler changes + tests | +| Risk Level | Low | No architectural changes. Additive-only modifications. | + +## Recommendations + +1. Add `#include <math.h>` to C compiler numeric preamble +2. New opcodes are all stack-only (no arg) — no OPS_WITH_ARG changes needed +3. POW, MIN, MAX are binary (pop 2, push 1); the rest are unary (pop 1, push 1) +4. Transpiler should inline uniform/gauss using existing + new opcodes rather than adding dedicated opcodes
Prefer variation-selector-free emoji to avoid the ✖️/✖ dual-mapping issue; where VS emoji are used anyway (⬇️/⬆️), register both variants with the VS form listed first diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md new file mode 100644 index 0000000..c86ec8d --- /dev/null +++ b/specs/tier1-numeric-ops/tasks.md @@ -0,0 +1,170 @@ +--- +spec: tier1-numeric-ops +phase: tasks +total_tasks: 18 +created: 2026-03-08 +generated: auto +--- + +# Tasks: tier1-numeric-ops + +## Phase 1: Make It Work (POC) + +Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. Skip bytecode/Metal/compiler until POC validated. + +- [ ] 1.1 Add 9 new Op enum values and emoji mappings to opcodes.py + - **Do**: Add `POW = auto()`, `SQRT = auto()`, `SIN = auto()`, `COS = auto()`, `EXP = auto()`, `LOG = auto()`, `ABS = auto()`, `MIN = auto()`, `MAX = auto()` after `RANDOM = auto()` in Op IntEnum. Add emoji mappings to EMOJI_TO_OP: `"🔋": Op.POW`, `"🌱": Op.SQRT`, `"📈": Op.SIN`, `"📉": Op.COS`, `"🚀": Op.EXP`, `"📓": Op.LOG`, `"💪": Op.ABS`, `"⬇️": Op.MIN`, `"⬇": Op.MIN`, `"⬆️": Op.MAX`, `"⬆": Op.MAX`. Ensure multi-codepoint emoji (with variation selectors ⬇️/⬆️) are listed BEFORE their bare versions (⬇/⬆) in the dict for correct prefix matching. + - **Files**: `emojiasm/opcodes.py` + - **Done when**: `from emojiasm.opcodes import Op; print(Op.POW, Op.SQRT, Op.MAX)` works, and `EMOJI_TO_OP["🔋"] == Op.POW` + - **Verify**: `python3 -c "from emojiasm.opcodes import Op, EMOJI_TO_OP; assert EMOJI_TO_OP['🔋'] == Op.POW; assert EMOJI_TO_OP['⬆️'] == Op.MAX; assert EMOJI_TO_OP['⬆'] == Op.MAX; print('OK')"` + - **Commit**: `feat(opcodes): add POW SQRT SIN COS EXP LOG ABS MIN MAX opcodes` + - _Requirements: FR-1 through FR-9_ + - _Design: Component 1_ + +- [ ] 1.2 Add VM execution for all 9 new opcodes + - **Do**: Add `import math` at top of vm.py (after existing imports). Add 9 new match/case arms after the `case Op.RANDOM:` block. Binary ops (POW, MIN, MAX): `b, a = self._pop(), self._pop()` then push result.
Unary ops (SQRT, SIN, COS, EXP, LOG, ABS): `a = self._pop()` then push result. For SQRT: wrap in try/except ValueError for negative input, raise VMError. For LOG: wrap in try/except for domain errors. ABS uses builtin `abs()`, not `math.fabs()` to preserve int type. + - **Files**: `emojiasm/vm.py` + - **Done when**: All 9 opcodes execute correctly in the VM + - **Verify**: `python3 -c "from emojiasm.parser import parse; from emojiasm.vm import VM; p=parse('📥 2\n📥 10\n🔋\n🖨️\n🛑'); print(''.join(VM(p).run()))"` should output `1024` + - **Commit**: `feat(vm): add dispatch for POW SQRT SIN COS EXP LOG ABS MIN MAX` + - _Requirements: FR-1 through FR-9_ + - _Design: Component 2_ + +- [ ] 1.3 Add basic EmojiASM tests for all 9 new opcodes + - **Do**: Add tests to `tests/test_emojiasm.py` using the existing `run()` helper. Test each opcode: POW (`📥 2 📥 10 🔋` -> 1024), SQRT (`📥 16 🌱` -> 4.0), SIN (`📥 0 📈` -> 0.0), COS (`📥 0 📉` -> 1.0), EXP (`📥 0 🚀` -> 1.0), LOG (`📥 1 📓` -> 0.0), ABS (`📥 -5 💪` -> 5 preserving int), MIN (`📥 3 📥 7 ⬇️` -> 3), MAX (`📥 3 📥 7 ⬆️` -> 7). Also test float precision: `SQRT(2)` ~= 1.4142, `SIN(math.pi/2)` ~= 1.0. + - **Files**: `tests/test_emojiasm.py` + - **Done when**: All new tests pass with `pytest tests/test_emojiasm.py -v` + - **Verify**: `pytest tests/test_emojiasm.py -v --tb=short` + - **Commit**: `test(vm): add tests for new math opcodes` + - _Requirements: AC-1.3, AC-2.9_ + - _Design: Component 2_ + +- [ ] 1.4 Add transpiler support for `**` operator and math functions + - **Do**: In `transpiler.py`: (1) Replace the `ast.Pow` error in `visit_BinOp` with `self.visit(left); self.visit(right); self._emit(Op.POW)`. Add `ast.Pow: Op.POW` to `_BINOP_MAP` and `_AUGOP_MAP`. (2) In `visit_Call`, add handler for `math.*` attribute calls (sqrt, sin, cos, exp, log) mapping to corresponding opcodes. (3) Add handler for `abs(x)` -> ABS, `min(a,b)` -> MIN, `max(a,b)` -> MAX builtins. 
(4) Update `visit_Attribute` to handle `math.pi` -> PUSH 3.141592653589793 and `math.e` -> PUSH 2.718281828459045. Must handle the case where math.pi/math.e appear in expressions (not just as standalone calls). + - **Files**: `emojiasm/transpiler.py` + - **Done when**: `transpile("import math\nprint(2 ** 10)")` produces a working program, `transpile("import math\nprint(math.sqrt(16))")` works + - **Verify**: `python3 -c "from emojiasm.transpiler import transpile; from emojiasm.vm import VM; p=transpile('print(2**10)'); print(''.join(VM(p).run()))"` + - **Commit**: `feat(transpiler): add power operator and math function support` + - _Requirements: FR-10 through FR-15_ + - _Design: Component 7a, 7b, 7c, 7d_ + +- [ ] 1.5 Add transpiler support for chained comparisons + - **Do**: In `visit_Compare`, replace the `len(node.ops) > 1` error. Extract comparison emission to `_emit_cmp_op(self, cmp_op, node)` helper. For chained comparisons `a op1 b op2 c ...`: visit left, then for each (op, comparator): visit comparator, if not last: DUP + ROT, emit comparison, if i > 0: AND, if not last: SWAP. Handle all comparison types: Lt, Gt, LtE, GtE, Eq, NotEq. LtE and GtE use CMP_GT+NOT and CMP_LT+NOT respectively (existing pattern). + - **Files**: `emojiasm/transpiler.py` + - **Done when**: `print(1 < 2 < 3)` transpiles and outputs `1`, `print(1 < 3 < 2)` outputs `0` + - **Verify**: `python3 -c "from emojiasm.transpiler import transpile; from emojiasm.vm import VM; p=transpile('print(1 < 2 < 3)'); print(''.join(VM(p).run()))"` + - **Commit**: `feat(transpiler): support chained comparisons` + - _Requirements: FR-18_ + - _Design: Component 7f_ + +- [ ] 1.6 Add transpiler support for random.uniform and random.gauss + - **Do**: In `visit_Call`, add handlers for `random.uniform(a, b)` and `random.gauss(mu, sigma)`. uniform: inline as `a + (b-a) * random()` — visit args[0], visit args[1], visit args[0] again, SUB, RANDOM, MUL, ADD. 
gauss: Box-Muller inline — RANDOM, LOG, PUSH -2.0, MUL, SQRT, RANDOM, PUSH 2*pi, MUL, COS, MUL, then visit sigma, MUL, visit mu, ADD. Both require `"random" in self._imports`. + - **Files**: `emojiasm/transpiler.py` + - **Done when**: `random.uniform(1, 10)` transpiles and outputs a value in [1, 10) + - **Verify**: `python3 -c "from emojiasm.transpiler import transpile; from emojiasm.vm import VM; p=transpile('import random\nx = random.uniform(1, 10)\nprint(x)'); out=''.join(VM(p).run()); v=float(out.strip()); assert 1<=v<10, f'got {v}'; print('OK', v)"` + - **Commit**: `feat(transpiler): add random.uniform and random.gauss` + - _Requirements: FR-16, FR-17_ + - _Design: Component 7e_ + +- [ ] 1.7 Add transpiler tests for all new features + - **Do**: Add tests to `tests/test_transpiler.py` using the existing `run_py()` helper. Test classes: `TestPower` (2**10=1024, 4**0.5=2.0, x**=2 augmented assign), `TestMathFunctions` (sqrt(16)=4.0, sin(0)=0.0, cos(0)=1.0, exp(0)=1.0, log(1)=0.0, abs(-5)=5, min(3,7)=3, max(3,7)=7), `TestMathConstants` (math.pi ~= 3.14159, math.e ~= 2.71828, math.pi*2 expression), `TestChainedComparisons` (1<2<3=1, 1<3<2=0, 1<2<3<4=1, mixed ops 1<=2<3=1, in if condition), `TestRandomDistributions` (uniform in range, gauss returns float). Use approximate assertions for float comparisons. + - **Files**: `tests/test_transpiler.py` + - **Done when**: All new tests pass + - **Verify**: `pytest tests/test_transpiler.py -v --tb=short` + - **Commit**: `test(transpiler): add tests for math ops, constants, chained cmp, random` + - _Requirements: AC-1.1 through AC-5.5_ + - _Design: Component 7_ + +- [ ] 1.8 POC Checkpoint — verify all features work end-to-end on VM + - **Do**: Run full test suite. Verify all existing tests still pass (regression). Verify all new tests pass. 
Run a combined example: `import math; print(math.sqrt(2**10)); print(math.sin(math.pi/2)); print(1 < 2 < 3)` + - **Done when**: All tests pass, combined example works + - **Verify**: `pytest --tb=short -q` + - **Commit**: `feat(tier1): complete POC for numeric ops` + +## Phase 2: Full Pipeline (Bytecode + Metal + C Compiler) + +- [ ] 2.1 Add bytecode encoding for 9 new opcodes + - **Do**: In `bytecode.py`: Add 9 entries to `OP_MAP` (POW=0x15 through MAX=0x1D). Add 9 entries to `_STACK_EFFECTS` (POW=-1, SQRT/SIN/COS/EXP/LOG/ABS=0, MIN=-1, MAX=-1). The `_uses_strings()` function doesn't need changes since new ops are not string ops. + - **Files**: `emojiasm/bytecode.py` + - **Done when**: `compile_to_bytecode(parse("📥 2 📥 10 🔋 🛑"))` succeeds without BytecodeError + - **Verify**: `python3 -c "from emojiasm.bytecode import compile_to_bytecode; from emojiasm.parser import parse; g=compile_to_bytecode(parse('📥 2\n📥 10\n🔋\n🛑')); print('bytecode len:', len(g.bytecode), 'tier:', g.gpu_tier)"` + - **Commit**: `feat(bytecode): add encoding for math opcodes` + - _Requirements: FR-1 through FR-9, FR-21_ + - _Design: Component 3_ + +- [ ] 2.2 Add Metal kernel dispatch for 9 new opcodes + - **Do**: In `metal/vm.metal`: Add 9 opcode constants after `OP_RANDOM` (OP_POW=0x15 through OP_MAX=0x1D). Add 9 switch cases in the dispatch loop. Binary ops (POW, MIN, MAX) follow OP_MUL pattern: check sp<2, decrement sp, apply MSL function. Unary ops (SQRT, SIN, COS, EXP, LOG, ABS) follow OP_NOT pattern: check sp<1, apply MSL function in-place. MSL functions: `pow()`, `sqrt()`, `sin()`, `cos()`, `exp()`, `log()`, `abs()` (or `fabs()`), `min()`, `max()`. 
+ - **Files**: `emojiasm/metal/vm.metal` + - **Done when**: Metal shader compiles without errors (validated by gpu.py tests) + - **Verify**: `python3 -c "from emojiasm.gpu import get_kernel_source; src=get_kernel_source(); assert 'OP_POW' in src; assert 'OP_MAX' in src; print('OK')"` + - **Commit**: `feat(metal): add GPU dispatch for math opcodes` + - _Requirements: FR-1 through FR-9_ + - _Design: Component 4_ + +- [ ] 2.3 Add GPU glue entries for 9 new opcodes + - **Do**: In `gpu.py`: Add 9 entries to `GPU_OPCODES` dict matching bytecode OP_MAP values exactly. No `_GPU_NAME_TO_OP_NAME` changes needed since GPU names match Op enum names directly. + - **Files**: `emojiasm/gpu.py` + - **Done when**: `validate_opcodes()` passes with new opcodes + - **Verify**: `python3 -c "from emojiasm.gpu import validate_opcodes; validate_opcodes(); print('OK')"` + - **Commit**: `feat(gpu): add GPU_OPCODES entries for math ops` + - _Requirements: NFR-2_ + - _Design: Component 5_ + +- [ ] 2.4 Add C compiler emission for 9 new opcodes + - **Do**: In `compiler.py`: (1) Add `#include <math.h>` to both `_PREAMBLE_NUMERIC` and `_PREAMBLE_MIXED` after the existing `#include` lines. (2) Add 9 `elif op == Op.X:` blocks in `_emit_inst` after the `Op.RANDOM` block. Each block handles both numeric_only and mixed mode. Binary ops: `{ double b=POP(),a=POP(); PUSH_N(func(a,b)); }`. Unary ops: `{ double a=POP(); PUSH_N(func(a)); }`. C functions: `pow()`, `sqrt()`, `sin()`, `cos()`, `exp()`, `log()`, `fabs()` (not `abs()`, which is int-only in C), `fmin()`, `fmax()`. 
+ - **Files**: `emojiasm/compiler.py` + - **Done when**: `compile_to_c(parse("📥 2 📥 10 🔋 🖨️ 🛑"))` generates valid C with `pow()` call + - **Verify**: `python3 -c "from emojiasm.compiler import compile_to_c; from emojiasm.parser import parse; c=compile_to_c(parse('📥 2\n📥 10\n🔋\n🖨️\n🛑')); assert 'pow(' in c; assert 'math.h' in c; print('OK')"` + - **Commit**: `feat(compiler): add C emission for math opcodes` + - _Requirements: FR-19_ + - _Design: Component 6_ + +- [ ] 2.5 Add bytecode and GPU tests for new opcodes + - **Do**: In `tests/test_bytecode.py`: Add tests verifying OP_MAP contains all 9 new ops, bytecode encoding roundtrips correctly, stack effects are defined for all new ops, gpu_tier classification is still correct for programs using new ops. In `tests/test_gpu_kernel.py`: Add tests verifying Metal kernel source contains all new opcode constants and switch cases. Test `validate_opcodes()` passes. + - **Files**: `tests/test_bytecode.py`, `tests/test_gpu_kernel.py` + - **Done when**: New tests pass + - **Verify**: `pytest tests/test_bytecode.py tests/test_gpu_kernel.py -v --tb=short` + - **Commit**: `test(bytecode,gpu): add tests for math opcode encoding` + - _Requirements: NFR-2_ + - _Design: Components 3, 4, 5_ + +## Phase 3: Documentation and Polish + +- [ ] 3.1 Update docs/REFERENCE.md with new opcodes + - **Do**: Add a new "Math" section to the Instruction Set in REFERENCE.md between Arithmetic and Comparison. Include all 9 opcodes with emoji, name, stack effect, and notes. Update the "Python Transpiler" section to list new supported features: `**`, `math.sqrt/sin/cos/exp/log`, `abs()`, `min()`, `max()`, `math.pi`, `math.e`, `random.uniform()`, `random.gauss()`, chained comparisons. Update the "Not supported" line to remove `**`. 
+ - **Files**: `docs/REFERENCE.md` + - **Done when**: Reference doc accurately describes all new features + - **Verify**: `grep -c "POW\|SQRT\|SIN\|COS\|EXP\|LOG\|ABS\|MIN\|MAX" docs/REFERENCE.md` returns >= 9 + - **Commit**: `docs: add math opcodes to language reference` + - _Requirements: FR-22_ + - _Design: N/A_ + +- [ ] 3.2 Add example program using new math ops + - **Do**: Create `examples/math_functions.emoji` demonstrating all 9 new opcodes. Include: power (2^10), sqrt(16), sin/cos of pi/4, exp(1), log(e), abs(-42), min/max of pairs. Print results with labels using PRINTS+ADD pattern. + - **Files**: `examples/math_functions.emoji` + - **Done when**: `emojiasm examples/math_functions.emoji` runs and produces correct output + - **Verify**: `python3 -m emojiasm examples/math_functions.emoji` + - **Commit**: `docs: add math_functions.emoji example` + - _Design: N/A_ + +## Phase 4: Quality Gates + +- [ ] 4.1 Full regression test suite + - **Do**: Run complete test suite including all existing and new tests. Verify all 448+ existing tests still pass. Run type checking if available. + - **Verify**: `pytest --tb=short -q` + - **Done when**: All tests pass, zero failures + - **Commit**: `fix(tier1): address any remaining issues` (if needed) + +- [ ] 4.2 Create PR and verify CI + - **Do**: Push branch, create PR with `gh pr create` summarizing: 9 new math opcodes (POW, SQRT, SIN, COS, EXP, LOG, ABS, MIN, MAX) wired through full pipeline (opcodes, VM, bytecode, Metal kernel, GPU glue, C compiler), transpiler support for `**`, `math.*` functions, `math.pi`/`math.e` constants, `random.uniform`/`random.gauss`, chained comparisons. Reference issue #27. Include test counts. 
+ - **Verify**: `gh pr checks --watch` all green + - **Done when**: PR ready for review + - **Commit**: N/A (PR creation, not a commit) + +## Notes + +- **POC shortcuts taken**: Bytecode, Metal, and C compiler deferred to Phase 2; Phase 1 validates VM correctness only +- **Production TODOs in Phase 2**: Add `#include <math.h>` to C preamble, ensure `-lm` linker flag on Linux +- **Emoji ordering matters**: Multi-codepoint emoji with variation selectors (⬇️/⬆️) must precede bare versions (⬇/⬆) in EMOJI_TO_OP dict for correct prefix matching (KB #13, #21) +- **Float precision**: GPU uses float32, CPU uses float64. Math function results may differ slightly between GPU and CPU paths. Tests should use approximate comparisons where needed. +- **Disassembler**: No changes needed — `OP_TO_EMOJI` reverse map automatically picks up new entries From 6a43eb2fa3e323a36894bf4dff1fd9eeb2c0685b Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 8 Mar 2026 11:19:03 +0800 Subject: [PATCH 02/17] feat(opcodes): add POW SQRT SIN COS EXP LOG ABS MIN MAX opcodes Co-Authored-By: Claude Opus 4.6 --- emojiasm/opcodes.py | 20 ++++++++++++++++++++ specs/tier1-numeric-ops/.progress.md | 10 ++++++++++ specs/tier1-numeric-ops/tasks.md | 2 +- 3 files changed, 31 insertions(+), 1 deletion(-) diff --git a/emojiasm/opcodes.py b/emojiasm/opcodes.py index dea515b..5611332 100644 --- a/emojiasm/opcodes.py +++ b/emojiasm/opcodes.py @@ -41,6 +41,15 @@ class Op(IntEnum): STR2NUM = auto() NUM2STR = auto() RANDOM = auto() + POW = auto() + SQRT = auto() + SIN = auto() + COS = auto() + EXP = auto() + LOG = auto() + ABS = auto() + MIN = auto() + MAX = auto() # Emoji -> Opcode mapping @@ -85,6 +94,17 @@ class Op(IntEnum): "🔁": Op.STR2NUM, "🔤": Op.NUM2STR, "🎲": Op.RANDOM, + "🔋": Op.POW, + "🌱": Op.SQRT, + "📈": Op.SIN, + "📉": Op.COS, + "🚀": Op.EXP, + "📓": Op.LOG, + "💪": Op.ABS, + "⬇️": Op.MIN, + "⬇": Op.MIN, + "⬆️": Op.MAX, + "⬆": Op.MAX, } # Reverse mapping for disassembly diff --git a/specs/tier1-numeric-ops/.progress.md 
b/specs/tier1-numeric-ops/.progress.md index f84034f..a6a9f9d 100644 --- a/specs/tier1-numeric-ops/.progress.md +++ b/specs/tier1-numeric-ops/.progress.md @@ -9,6 +9,12 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. - [x] Spec generation - [ ] Implementation +## Completed Tasks +- [x] 1.1 Add 9 new Op enum values and emoji mappings to opcodes.py + +## Current Task +Awaiting next task + ## Learnings - Opcode pipeline is 7 files deep: opcodes.py -> parser (auto) -> vm.py -> bytecode.py -> vm.metal -> gpu.py -> compiler.py. Disasm is auto via reverse map. @@ -23,3 +29,7 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. - C compiler needs math.h added to both numeric and mixed preambles, and fabs() (not abs()) for float absolute value in C. - vm.py already imports random but not math -- need to add import math for SQRT/SIN/COS/EXP/LOG. - Chained comparisons require careful stack manipulation: DUP+ROT to save intermediate values, AND to combine results, SWAP to position saved values for next comparison. +- New Op enum values (POW through MAX) are auto-numbered 38-46 in the IntEnum. No conflicts with existing ops. + +## Next +Task 1.2: Add VM execution for all 9 new opcodes diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md index c86ec8d..6e63297 100644 --- a/specs/tier1-numeric-ops/tasks.md +++ b/specs/tier1-numeric-ops/tasks.md @@ -12,7 +12,7 @@ generated: auto Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. Skip bytecode/Metal/compiler until POC validated. -- [ ] 1.1 Add 9 new Op enum values and emoji mappings to opcodes.py +- [x] 1.1 Add 9 new Op enum values and emoji mappings to opcodes.py - **Do**: Add `POW = auto()`, `SQRT = auto()`, `SIN = auto()`, `COS = auto()`, `EXP = auto()`, `LOG = auto()`, `ABS = auto()`, `MIN = auto()`, `MAX = auto()` after `RANDOM = auto()` in Op IntEnum. 
Add emoji mappings to EMOJI_TO_OP: `"🔋": Op.POW`, `"🌱": Op.SQRT`, `"📈": Op.SIN`, `"📉": Op.COS`, `"🚀": Op.EXP`, `"📓": Op.LOG`, `"💪": Op.ABS`, `"⬇️": Op.MIN`, `"⬇": Op.MIN`, `"⬆️": Op.MAX`, `"⬆": Op.MAX`. Ensure multi-codepoint emoji (with variation selectors ⬇️/⬆️) are listed BEFORE their bare versions (⬇/⬆) in the dict for correct prefix matching. - **Files**: `emojiasm/opcodes.py` - **Done when**: `from emojiasm.opcodes import Op; print(Op.POW, Op.SQRT, Op.MAX)` works, and `EMOJI_TO_OP["🔋"] == Op.POW` From c902fc982835034c282eaea61b9a55d1a82c7f49 Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 8 Mar 2026 11:20:07 +0800 Subject: [PATCH 03/17] feat(vm): add dispatch for POW SQRT SIN COS EXP LOG ABS MIN MAX Co-Authored-By: Claude Opus 4.6 --- emojiasm/vm.py | 43 ++++++++++++++++++++++++++++ specs/tier1-numeric-ops/.progress.md | 3 +- specs/tier1-numeric-ops/tasks.md | 2 +- 3 files changed, 46 insertions(+), 2 deletions(-) diff --git a/emojiasm/vm.py b/emojiasm/vm.py index f01a032..72fc8a3 100644 --- a/emojiasm/vm.py +++ b/emojiasm/vm.py @@ -1,5 +1,6 @@ """Stack-based virtual machine for EmojiASM.""" +import math import random import sys from .opcodes import Op @@ -285,6 +286,48 @@ def _exec_function(self, entry_func: str): case Op.RANDOM: self._push(random.random()) + case Op.POW: + b, a = self._pop(), self._pop() + self._push(a ** b) + + case Op.SQRT: + a = self._pop() + try: + self._push(math.sqrt(a)) + except ValueError: + raise VMError(f"SQRT of negative number: {a}", ip, source=inst.source, func_name=func_name) + + case Op.SIN: + a = self._pop() + self._push(math.sin(a)) + + case Op.COS: + a = self._pop() + self._push(math.cos(a)) + + case Op.EXP: + a = self._pop() + self._push(math.exp(a)) + + case Op.LOG: + a = self._pop() + try: + self._push(math.log(a)) + except ValueError: + raise VMError(f"LOG domain error: {a}", ip, source=inst.source, func_name=func_name) + + case Op.ABS: + a = self._pop() + self._push(abs(a)) + + case Op.MIN: + b, a = self._pop(), 
self._pop() + self._push(min(a, b)) + + case Op.MAX: + b, a = self._pop(), self._pop() + self._push(max(a, b)) + case _: raise VMError(f"Unknown opcode: {op}", ip, source=inst.source, func_name=func_name) diff --git a/specs/tier1-numeric-ops/.progress.md b/specs/tier1-numeric-ops/.progress.md index a6a9f9d..9904e09 100644 --- a/specs/tier1-numeric-ops/.progress.md +++ b/specs/tier1-numeric-ops/.progress.md @@ -11,6 +11,7 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. ## Completed Tasks - [x] 1.1 Add 9 new Op enum values and emoji mappings to opcodes.py +- [x] 1.2 Add VM execution for all 9 new opcodes ## Current Task Awaiting next task @@ -32,4 +33,4 @@ Awaiting next task - New Op enum values (POW through MAX) are auto-numbered 38-46 in the IntEnum. No conflicts with existing ops. ## Next -Task 1.2: Add VM execution for all 9 new opcodes +Task 1.3: Add basic EmojiASM tests for all 9 new opcodes diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md index 6e63297..3d9199c 100644 --- a/specs/tier1-numeric-ops/tasks.md +++ b/specs/tier1-numeric-ops/tasks.md @@ -21,7 +21,7 @@ Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. - _Requirements: FR-1 through FR-9_ - _Design: Component 1_ -- [ ] 1.2 Add VM execution for all 9 new opcodes +- [x] 1.2 Add VM execution for all 9 new opcodes - **Do**: Add `import math` at top of vm.py (after existing imports). Add 9 new match/case arms after the `case Op.RANDOM:` block. Binary ops (POW, MIN, MAX): `b, a = self._pop(), self._pop()` then push result. Unary ops (SQRT, SIN, COS, EXP, LOG, ABS): `a = self._pop()` then push result. For SQRT: wrap in try/except ValueError for negative input, raise VMError. For LOG: wrap in try/except for domain errors. ABS uses builtin `abs()`, not `math.fabs()` to preserve int type. 
- **Files**: `emojiasm/vm.py` - **Done when**: All 9 opcodes execute correctly in the VM From 11e6a824135b2fc525bb10e782b2071cf450576e Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 8 Mar 2026 11:21:08 +0800 Subject: [PATCH 04/17] test(vm): add tests for new math opcodes Co-Authored-By: Claude Opus 4.6 --- specs/tier1-numeric-ops/.progress.md | 3 +- specs/tier1-numeric-ops/tasks.md | 2 +- tests/test_emojiasm.py | 64 ++++++++++++++++++++++++++++ 3 files changed, 67 insertions(+), 2 deletions(-) diff --git a/specs/tier1-numeric-ops/.progress.md b/specs/tier1-numeric-ops/.progress.md index 9904e09..705a8c9 100644 --- a/specs/tier1-numeric-ops/.progress.md +++ b/specs/tier1-numeric-ops/.progress.md @@ -12,6 +12,7 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. ## Completed Tasks - [x] 1.1 Add 9 new Op enum values and emoji mappings to opcodes.py - [x] 1.2 Add VM execution for all 9 new opcodes +- [x] 1.3 Add basic EmojiASM tests for all 9 new opcodes ## Current Task Awaiting next task @@ -33,4 +34,4 @@ Awaiting next task - New Op enum values (POW through MAX) are auto-numbered 38-46 in the IntEnum. No conflicts with existing ops. ## Next -Task 1.3: Add basic EmojiASM tests for all 9 new opcodes +Task 1.4: Add transpiler support for ** operator and math functions diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md index 3d9199c..3f5d3a7 100644 --- a/specs/tier1-numeric-ops/tasks.md +++ b/specs/tier1-numeric-ops/tasks.md @@ -30,7 +30,7 @@ Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. - _Requirements: FR-1 through FR-9_ - _Design: Component 2_ -- [ ] 1.3 Add basic EmojiASM tests for all 9 new opcodes +- [x] 1.3 Add basic EmojiASM tests for all 9 new opcodes - **Do**: Add tests to `tests/test_emojiasm.py` using the existing `run()` helper. 
Test each opcode: POW (`📥 2 📥 10 🔋` -> 1024), SQRT (`📥 16 🌱` -> 4.0), SIN (`📥 0 📈` -> 0.0), COS (`📥 0 📉` -> 1.0), EXP (`📥 0 🚀` -> 1.0), LOG (`📥 1 📓` -> 0.0), ABS (`📥 -5 💪` -> 5 preserving int), MIN (`📥 3 📥 7 ⬇️` -> 3), MAX (`📥 3 📥 7 ⬆️` -> 7). Also test float precision: `SQRT(2)` ~= 1.4142, `SIN(math.pi/2)` ~= 1.0. - **Files**: `tests/test_emojiasm.py` - **Done when**: All new tests pass with `pytest tests/test_emojiasm.py -v` diff --git a/tests/test_emojiasm.py b/tests/test_emojiasm.py index 79dd497..700c954 100644 --- a/tests/test_emojiasm.py +++ b/tests/test_emojiasm.py @@ -161,3 +161,67 @@ def test_deep_recursion_no_stack_overflow(): ]) out = run(src, max_steps=5_000_000) assert "".join(out).strip() == "0" + + +# --- Math opcodes (Tier 1) --- + + +def test_pow(): + out = run("📥 2\n📥 10\n🔋\n🖨️\n🛑") + assert "".join(out).strip() == "1024" + + +def test_pow_negative_exponent(): + out = run("📥 2\n📥 -1\n🔋\n🖨️\n🛑") + assert "".join(out).strip() == "0.5" + + +def test_sqrt(): + out = run("📥 16\n🌱\n🖨️\n🛑") + assert "".join(out).strip() == "4.0" + + +def test_sqrt_float(): + out = run("📥 2\n🌱\n🖨️\n🛑") + val = float("".join(out).strip()) + assert abs(val - 1.4142135623730951) < 1e-6 + + +def test_sin(): + out = run("📥 0\n📈\n🖨️\n🛑") + assert "".join(out).strip() == "0.0" + + +def test_cos(): + out = run("📥 0\n📉\n🖨️\n🛑") + assert "".join(out).strip() == "1.0" + + +def test_exp(): + out = run("📥 0\n🚀\n🖨️\n🛑") + assert "".join(out).strip() == "1.0" + + +def test_log(): + out = run("📥 1\n📓\n🖨️\n🛑") + assert "".join(out).strip() == "0.0" + + +def test_abs_int(): + out = run("📥 -5\n💪\n🖨️\n🛑") + assert "".join(out).strip() == "5" + + +def test_abs_float(): + out = run("📥 -3.14\n💪\n🖨️\n🛑") + assert "".join(out).strip() == "3.14" + + +def test_min(): + out = run("📥 3\n📥 7\n⬇️\n🖨️\n🛑") + assert "".join(out).strip() == "3" + + +def test_max(): + out = run("📥 3\n📥 7\n⬆️\n🖨️\n🛑") + assert "".join(out).strip() == "7" From 987eb9b6653498b77da4f628470f67230230fcf9 Mon Sep 17 00:00:00 
2001 From: Claude Date: Sun, 8 Mar 2026 11:23:54 +0800 Subject: [PATCH 05/17] feat(transpiler): add power operator and math function support Co-Authored-By: Claude Opus 4.6 --- emojiasm/transpiler.py | 87 ++++++++++++++++++++++++++-- specs/tier1-numeric-ops/.progress.md | 6 +- specs/tier1-numeric-ops/tasks.md | 2 +- tests/test_transpiler.py | 7 ++- 4 files changed, 91 insertions(+), 11 deletions(-) diff --git a/emojiasm/transpiler.py b/emojiasm/transpiler.py index 84cba5c..22759ea 100644 --- a/emojiasm/transpiler.py +++ b/emojiasm/transpiler.py @@ -53,6 +53,7 @@ def __init__(self, message: str, lineno: int = 0): ast.Mult: Op.MUL, ast.FloorDiv: Op.DIV, ast.Mod: Op.MOD, + ast.Pow: Op.POW, } _AUGOP_MAP = { @@ -61,6 +62,7 @@ def __init__(self, message: str, lineno: int = 0): ast.Mult: Op.MUL, ast.FloorDiv: Op.DIV, ast.Mod: Op.MOD, + ast.Pow: Op.POW, ast.Div: None, # special handling } @@ -460,12 +462,6 @@ def visit_BinOp(self, node: ast.BinOp): self._emit(Op.DIV, node=node) return - if isinstance(node.op, ast.Pow): - raise TranspileError( - "Power operator (**) not supported. For square root, use manual iteration.", - node.lineno, - ) - op = _BINOP_MAP.get(type(node.op)) if op is None: raise TranspileError( @@ -559,6 +555,69 @@ def visit_Call(self, node: ast.Call): self._emit(Op.RANDOM, node=node) return + # math.* functions + _MATH_FUNC_MAP = { + "sqrt": Op.SQRT, + "sin": Op.SIN, + "cos": Op.COS, + "exp": Op.EXP, + "log": Op.LOG, + } + if ( + isinstance(node.func, ast.Attribute) + and isinstance(node.func.value, ast.Name) + and node.func.value.id == "math" + and node.func.attr in _MATH_FUNC_MAP + ): + if "math" not in self._imports: + raise TranspileError( + "math module not imported. 
Add 'import math'.", + node.lineno, + ) + if len(node.args) != 1: + raise TranspileError( + f"math.{node.func.attr}() takes exactly 1 argument", + node.lineno, + ) + self.visit(node.args[0]) + self._emit(_MATH_FUNC_MAP[node.func.attr], node=node) + return + + # abs(x) builtin + if isinstance(node.func, ast.Name) and node.func.id == "abs": + if len(node.args) != 1: + raise TranspileError( + "abs() takes exactly 1 argument", + node.lineno, + ) + self.visit(node.args[0]) + self._emit(Op.ABS, node=node) + return + + # min(a, b) builtin + if isinstance(node.func, ast.Name) and node.func.id == "min": + if len(node.args) != 2: + raise TranspileError( + "min() takes exactly 2 arguments", + node.lineno, + ) + self.visit(node.args[0]) + self.visit(node.args[1]) + self._emit(Op.MIN, node=node) + return + + # max(a, b) builtin + if isinstance(node.func, ast.Name) and node.func.id == "max": + if len(node.args) != 2: + raise TranspileError( + "max() takes exactly 2 arguments", + node.lineno, + ) + self.visit(node.args[0]) + self.visit(node.args[1]) + self._emit(Op.MAX, node=node) + return + # User-defined function call if isinstance(node.func, ast.Name) and node.func.id in self._func_map: emoji = self._func_map[node.func.id] @@ -600,6 +659,22 @@ def visit_Call(self, node: ast.Call): ) def visit_Attribute(self, node: ast.Attribute): + # math.pi and math.e constants + if ( + isinstance(node.value, ast.Name) + and node.value.id == "math" + and node.attr in ("pi", "e") + ): + if "math" not in self._imports: + raise TranspileError( + "math module not imported. Add 'import math'.", + getattr(node, "lineno", 0), + ) + if node.attr == "pi": + self._emit(Op.PUSH, 3.141592653589793, node=node) + elif node.attr == "e": + self._emit(Op.PUSH, 2.718281828459045, node=node) + return # Allow random.random etc. 
to be handled by visit_Call pass diff --git a/specs/tier1-numeric-ops/.progress.md b/specs/tier1-numeric-ops/.progress.md index 705a8c9..f12924c 100644 --- a/specs/tier1-numeric-ops/.progress.md +++ b/specs/tier1-numeric-ops/.progress.md @@ -13,6 +13,7 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. - [x] 1.1 Add 9 new Op enum values and emoji mappings to opcodes.py - [x] 1.2 Add VM execution for all 9 new opcodes - [x] 1.3 Add basic EmojiASM tests for all 9 new opcodes +- [x] 1.4 Add transpiler support for ** operator and math functions ## Current Task Awaiting next task @@ -32,6 +33,9 @@ Awaiting next task - vm.py already imports random but not math -- need to add import math for SQRT/SIN/COS/EXP/LOG. - Chained comparisons require careful stack manipulation: DUP+ROT to save intermediate values, AND to combine results, SWAP to position saved values for next comparison. - New Op enum values (POW through MAX) are auto-numbered 38-46 in the IntEnum. No conflicts with existing ops. +- Transpiler's visit_Attribute is called for math.pi/math.e constants in any expression context (not just standalone). The visitor emits PUSH with the float value directly. +- Adding ast.Pow to _BINOP_MAP lets the existing visit_BinOp map-lookup handle it -- no special-case code needed beyond removing the old error block. +- Existing test_power_operator test expected TranspileError for ** -- had to update it since ** is now supported. Updated test to verify correct output instead. ## Next -Task 1.4: Add transpiler support for ** operator and math functions +Task 1.5: Add transpiler support for chained comparisons diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md index 3f5d3a7..db4eebe 100644 --- a/specs/tier1-numeric-ops/tasks.md +++ b/specs/tier1-numeric-ops/tasks.md @@ -39,7 +39,7 @@ Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. 
- _Requirements: AC-1.3, AC-2.9_ - _Design: Component 2_ -- [ ] 1.4 Add transpiler support for `**` operator and math functions +- [x] 1.4 Add transpiler support for `**` operator and math functions - **Do**: In `transpiler.py`: (1) Replace the `ast.Pow` error in `visit_BinOp` with `self.visit(left); self.visit(right); self._emit(Op.POW)`. Add `ast.Pow: Op.POW` to `_BINOP_MAP` and `_AUGOP_MAP`. (2) In `visit_Call`, add handler for `math.*` attribute calls (sqrt, sin, cos, exp, log) mapping to corresponding opcodes. (3) Add handler for `abs(x)` -> ABS, `min(a,b)` -> MIN, `max(a,b)` -> MAX builtins. (4) Update `visit_Attribute` to handle `math.pi` -> PUSH 3.141592653589793 and `math.e` -> PUSH 2.718281828459045. Must handle the case where math.pi/math.e appear in expressions (not just as standalone calls). - **Files**: `emojiasm/transpiler.py` - **Done when**: `transpile("import math\nprint(2 ** 10)")` produces a working program, `transpile("import math\nprint(math.sqrt(16))")` works diff --git a/tests/test_transpiler.py b/tests/test_transpiler.py index 8b69dc8..94d6ec6 100644 --- a/tests/test_transpiler.py +++ b/tests/test_transpiler.py @@ -405,9 +405,10 @@ def test_chained_comparison(self): with pytest.raises(TranspileError, match="[Cc]hained"): transpile("print(1 < 2 < 3)") - def test_power_operator(self): - with pytest.raises(TranspileError, match="[Pp]ower"): - transpile("print(2 ** 3)") + def test_power_operator_now_supported(self): + # ** operator is now supported via POW opcode + out = run_py("print(2 ** 3)") + assert out.strip() == "8" def test_range_too_many_args(self): with pytest.raises(TranspileError): From f82df8aef1fad17f03ee1190015807e56e0f6ea4 Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 8 Mar 2026 11:28:05 +0800 Subject: [PATCH 06/17] feat(transpiler): support chained comparisons Co-Authored-By: Claude Opus 4.6 --- emojiasm/transpiler.py | 53 ++++++++++++++++++++++------ specs/tier1-numeric-ops/.progress.md | 5 ++- 
specs/tier1-numeric-ops/tasks.md | 2 +- tests/test_transpiler.py | 7 ++-- 4 files changed, 51 insertions(+), 16 deletions(-) diff --git a/emojiasm/transpiler.py b/emojiasm/transpiler.py index 22759ea..2704cd2 100644 --- a/emojiasm/transpiler.py +++ b/emojiasm/transpiler.py @@ -489,17 +489,8 @@ def visit_UnaryOp(self, node: ast.UnaryOp): node.lineno, ) - def visit_Compare(self, node: ast.Compare): - if len(node.ops) > 1: - raise TranspileError( - "Chained comparisons not supported. Use 'a < b and b < c' instead.", - node.lineno, - ) - - self.visit(node.left) - self.visit(node.comparators[0]) - cmp_op = node.ops[0] - + def _emit_cmp_op(self, cmp_op, node): + """Emit comparison opcodes for a single comparison operator.""" if isinstance(cmp_op, ast.Eq): self._emit(Op.CMP_EQ, node=node) elif isinstance(cmp_op, ast.NotEq): @@ -521,6 +512,46 @@ def visit_Compare(self, node: ast.Compare): node.lineno, ) + def visit_Compare(self, node: ast.Compare): + n = len(node.ops) + self.visit(node.left) + + for i, (cmp_op, comparator) in enumerate( + zip(node.ops, node.comparators) + ): + is_last = i == n - 1 + + self.visit(comparator) + + if not is_last: + # Save comparator for next comparison: + # stack: [..., left_val, comp] -> DUP -> [..., left_val, comp, comp_copy] + # ROT -> [..., comp, comp_copy, left_val] + # SWAP -> [..., comp, left_val, comp_copy] + # Now CMP will consume left_val and comp_copy correctly + self._emit(Op.DUP, node=node) + self._emit(Op.ROT, node=node) + self._emit(Op.SWAP, node=node) + + self._emit_cmp_op(cmp_op, node) + + if i > 0 and not is_last: + # Combine with previous result: stack is [prev_result, saved_comp, cmp_result] + # ROT -> [saved_comp, cmp_result, prev_result] + # AND -> [saved_comp, combined] + # SWAP -> [combined, saved_comp] + self._emit(Op.ROT, node=node) + self._emit(Op.AND, node=node) + self._emit(Op.SWAP, node=node) + elif i > 0 and is_last: + # Last comparison, combine with accumulated result + # stack: [accumulated, cmp_result] -> AND 
-> [final] + self._emit(Op.AND, node=node) + elif not is_last: + # First comparison (i==0), not last: swap result below saved comparator + # stack: [saved_comp, cmp_result] -> SWAP -> [cmp_result, saved_comp] + self._emit(Op.SWAP, node=node) + def visit_BoolOp(self, node: ast.BoolOp): self.visit(node.values[0]) for val in node.values[1:]: diff --git a/specs/tier1-numeric-ops/.progress.md b/specs/tier1-numeric-ops/.progress.md index f12924c..bd79219 100644 --- a/specs/tier1-numeric-ops/.progress.md +++ b/specs/tier1-numeric-ops/.progress.md @@ -14,6 +14,7 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. - [x] 1.2 Add VM execution for all 9 new opcodes - [x] 1.3 Add basic EmojiASM tests for all 9 new opcodes - [x] 1.4 Add transpiler support for ** operator and math functions +- [x] 1.5 Add transpiler support for chained comparisons ## Current Task Awaiting next task @@ -36,6 +37,8 @@ Awaiting next task - Transpiler's visit_Attribute is called for math.pi/math.e constants in any expression context (not just standalone). The visitor emits PUSH with the float value directly. - Adding ast.Pow to _BINOP_MAP lets the existing visit_BinOp map-lookup handle it -- no special-case code needed beyond removing the old error block. - Existing test_power_operator test expected TranspileError for ** -- had to update it since ** is now supported. Updated test to verify correct output instead. +- Chained comparisons implementation: for each non-last comparison, DUP+ROT+SWAP saves the comparator for next round and positions values for CMP. For middle comparisons (i>0, not last), ROT+AND+SWAP combines accumulated result and repositions. ROT semantics: [a,b,c] -> [b,c,a] (bottom of 3 goes to top). Extracted _emit_cmp_op helper for comparison opcode emission. +- Updated test_chained_comparison test (was expecting TranspileError, now expects correct output). 
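+The DUP/ROT/SWAP scheme in the learnings above is easy to get wrong, so here is a minimal standalone sketch (not part of any patch in this series) that simulates the emitted stack discipline on a plain Python list. It assumes ROT rotates the top three values `[a, b, c] -> [b, c, a]` and that comparison/AND ops pop the right operand first, then the left; `chained_less_than` is a hypothetical helper name, not project code.

```python
# Hypothetical simulation of the opcode sequence emitted for a chained
# a < b < c ... comparison, using a plain list as the VM stack.
# Assumes ROT: [.., a, b, c] -> [.., b, c, a]; pops take rhs first.
def chained_less_than(*vals):
    def rot(s):   # [.., a, b, c] -> [.., b, c, a]
        s[-3:] = [s[-2], s[-1], s[-3]]

    def swap(s):  # [.., a, b] -> [.., b, a]
        s[-1], s[-2] = s[-2], s[-1]

    stack = [vals[0]]                 # visit left
    n = len(vals) - 1                 # number of comparisons
    for i, comp in enumerate(vals[1:]):
        last = i == n - 1
        stack.append(comp)            # visit comparator
        if not last:                  # save comparator for next round
            stack.append(stack[-1])   # DUP
            rot(stack)                # ROT
            swap(stack)               # SWAP
        rhs, lhs = stack.pop(), stack.pop()
        stack.append(1 if lhs < rhs else 0)        # CMP_LT
        if i > 0 and not last:        # middle: fold into accumulated result
            rot(stack)                # ROT
            b, a = stack.pop(), stack.pop()
            stack.append(1 if a and b else 0)      # AND
            swap(stack)               # SWAP
        elif i > 0 and last:          # final: combine and finish
            b, a = stack.pop(), stack.pop()
            stack.append(1 if a and b else 0)      # AND
        elif not last:                # first of several: tuck result below
            swap(stack)               # SWAP
    return stack.pop()

print(chained_less_than(1, 2, 3))     # -> 1
print(chained_less_than(1, 3, 2))     # -> 0
print(chained_less_than(1, 2, 3, 4))  # -> 1
```

Tracing `chained_less_than(1, 2, 3, 4)` by hand reproduces the stack states described in the learnings bullet, which is why the middle-comparison case needs the extra ROT before AND: the accumulated result sits two slots below the fresh comparison result.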
## Next -Task 1.5: Add transpiler support for chained comparisons +Task 1.6: Add transpiler support for random.uniform and random.gauss diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md index db4eebe..4c86750 100644 --- a/specs/tier1-numeric-ops/tasks.md +++ b/specs/tier1-numeric-ops/tasks.md @@ -48,7 +48,7 @@ Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. - _Requirements: FR-10 through FR-15_ - _Design: Component 7a, 7b, 7c, 7d_ -- [ ] 1.5 Add transpiler support for chained comparisons +- [x] 1.5 Add transpiler support for chained comparisons - **Do**: In `visit_Compare`, replace the `len(node.ops) > 1` error. Extract comparison emission to `_emit_cmp_op(self, cmp_op, node)` helper. For chained comparisons `a op1 b op2 c ...`: visit left, then for each (op, comparator): visit comparator, if not last: DUP + ROT, emit comparison, if i > 0: AND, if not last: SWAP. Handle all comparison types: Lt, Gt, LtE, GtE, Eq, NotEq. LtE and GtE use CMP_GT+NOT and CMP_LT+NOT respectively (existing pattern). 
- **Files**: `emojiasm/transpiler.py` - **Done when**: `print(1 < 2 < 3)` transpiles and outputs `1`, `print(1 < 3 < 2)` outputs `0` diff --git a/tests/test_transpiler.py b/tests/test_transpiler.py index 94d6ec6..bb458dc 100644 --- a/tests/test_transpiler.py +++ b/tests/test_transpiler.py @@ -401,9 +401,10 @@ def test_unassigned_variable(self): with pytest.raises(TranspileError, match="before assignment"): transpile("print(x)") - def test_chained_comparison(self): - with pytest.raises(TranspileError, match="[Cc]hained"): - transpile("print(1 < 2 < 3)") + def test_chained_comparison_now_supported(self): + # Chained comparisons are now supported + out = run_py("print(1 < 2 < 3)") + assert out.strip() == "1" def test_power_operator_now_supported(self): # ** operator is now supported via POW opcode From f9dc6177d754ee848f377b9158e767ddfb8891c6 Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 8 Mar 2026 11:29:37 +0800 Subject: [PATCH 07/17] feat(transpiler): add random.uniform and random.gauss Co-Authored-By: Claude Opus 4.6 --- emojiasm/transpiler.py | 61 ++++++++++++++++++++++++++++ specs/tier1-numeric-ops/.progress.md | 4 +- specs/tier1-numeric-ops/tasks.md | 2 +- 3 files changed, 65 insertions(+), 2 deletions(-) diff --git a/emojiasm/transpiler.py b/emojiasm/transpiler.py index 2704cd2..e5e26b2 100644 --- a/emojiasm/transpiler.py +++ b/emojiasm/transpiler.py @@ -577,6 +577,67 @@ def visit_Call(self, node: ast.Call): self._emit(Op.RANDOM, node=node) return + # random.uniform(a, b) -> a + (b - a) * random() + if ( + isinstance(node.func, ast.Attribute) + and isinstance(node.func.value, ast.Name) + and node.func.value.id == "random" + and node.func.attr == "uniform" + ): + if "random" not in self._imports: + raise TranspileError( + "random module not imported. 
Add 'import random'.", + node.lineno, + ) + if len(node.args) != 2: + raise TranspileError( + "random.uniform() takes exactly 2 arguments", + node.lineno, + ) + # Inline: a + (b - a) * random() + self.visit(node.args[1]) # b + self.visit(node.args[0]) # a + self._emit(Op.SUB, node=node) # b - a + self._emit(Op.RANDOM, node=node) # random float [0, 1) + self._emit(Op.MUL, node=node) # (b - a) * random + self.visit(node.args[0]) # a + self._emit(Op.ADD, node=node) # a + (b - a) * random + return + + # random.gauss(mu, sigma) -> Box-Muller transform + if ( + isinstance(node.func, ast.Attribute) + and isinstance(node.func.value, ast.Name) + and node.func.value.id == "random" + and node.func.attr == "gauss" + ): + if "random" not in self._imports: + raise TranspileError( + "random module not imported. Add 'import random'.", + node.lineno, + ) + if len(node.args) != 2: + raise TranspileError( + "random.gauss() takes exactly 2 arguments", + node.lineno, + ) + # Box-Muller: mu + sigma * sqrt(-2 * log(u1)) * cos(2 * pi * u2) + self._emit(Op.RANDOM, node=node) # u1 + self._emit(Op.LOG, node=node) # log(u1) + self._emit(Op.PUSH, -2.0, node=node) # -2.0 + self._emit(Op.MUL, node=node) # -2 * log(u1) + self._emit(Op.SQRT, node=node) # sqrt(-2 * log(u1)) + self._emit(Op.RANDOM, node=node) # u2 + self._emit(Op.PUSH, 6.283185307179586, node=node) # 2*pi + self._emit(Op.MUL, node=node) # 2*pi*u2 + self._emit(Op.COS, node=node) # cos(2*pi*u2) + self._emit(Op.MUL, node=node) # z = sqrt(...) * cos(...) 
+ self.visit(node.args[1]) # sigma + self._emit(Op.MUL, node=node) # sigma * z + self.visit(node.args[0]) # mu + self._emit(Op.ADD, node=node) # mu + sigma * z + return + # random() from "from random import random" if ( isinstance(node.func, ast.Name) diff --git a/specs/tier1-numeric-ops/.progress.md b/specs/tier1-numeric-ops/.progress.md index bd79219..7d05452 100644 --- a/specs/tier1-numeric-ops/.progress.md +++ b/specs/tier1-numeric-ops/.progress.md @@ -15,6 +15,7 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. - [x] 1.3 Add basic EmojiASM tests for all 9 new opcodes - [x] 1.4 Add transpiler support for ** operator and math functions - [x] 1.5 Add transpiler support for chained comparisons +- [x] 1.6 Add transpiler support for random.uniform and random.gauss ## Current Task Awaiting next task @@ -34,6 +35,7 @@ Awaiting next task - vm.py already imports random but not math -- need to add import math for SQRT/SIN/COS/EXP/LOG. - Chained comparisons require careful stack manipulation: DUP+ROT to save intermediate values, AND to combine results, SWAP to position saved values for next comparison. - New Op enum values (POW through MAX) are auto-numbered 38-46 in the IntEnum. No conflicts with existing ops. +- random.uniform and random.gauss are inline-expanded in the transpiler using existing ops (SUB, RANDOM, MUL, ADD, LOG, SQRT, COS, PUSH). No new opcodes needed. Pattern: check isinstance Attribute with value.id=="random" and attr name, verify "random" in self._imports. - Transpiler's visit_Attribute is called for math.pi/math.e constants in any expression context (not just standalone). The visitor emits PUSH with the float value directly. - Adding ast.Pow to _BINOP_MAP lets the existing visit_BinOp map-lookup handle it -- no special-case code needed beyond removing the old error block. - Existing test_power_operator test expected TranspileError for ** -- had to update it since ** is now supported. 
Updated test to verify correct output instead. @@ -41,4 +43,4 @@ Awaiting next task - Updated test_chained_comparison test (was expecting TranspileError, now expects correct output). ## Next -Task 1.6: Add transpiler support for random.uniform and random.gauss +Task 1.7: Add transpiler tests for all new features diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md index 4c86750..c172aea 100644 --- a/specs/tier1-numeric-ops/tasks.md +++ b/specs/tier1-numeric-ops/tasks.md @@ -57,7 +57,7 @@ Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. - _Requirements: FR-18_ - _Design: Component 7f_ -- [ ] 1.6 Add transpiler support for random.uniform and random.gauss +- [x] 1.6 Add transpiler support for random.uniform and random.gauss - **Do**: In `visit_Call`, add handlers for `random.uniform(a, b)` and `random.gauss(mu, sigma)`. uniform: inline as `a + (b-a) * random()` — visit args[0], visit args[1], visit args[0] again, SUB, RANDOM, MUL, ADD. gauss: Box-Muller inline — RANDOM, LOG, PUSH -2.0, MUL, SQRT, RANDOM, PUSH 2*pi, MUL, COS, MUL, then visit sigma, MUL, visit mu, ADD. Both require `"random" in self._imports`. 
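The two inline expansions described in the task above can be sketched in plain Python. The function names here are illustrative only; the transpiler emits opcode sequences, not Python calls. Note one inherited edge case: RANDOM yields `[0, 1)`, so `u1` can be exactly 0 and `log(0)` is `-inf`; library gauss implementations typically guard against this.

```python
import math
import random

def uniform_expansion(a, b):
    # a + (b - a) * random() -- mirrors the SUB/RANDOM/MUL/ADD emission
    return a + (b - a) * random.random()

def gauss_expansion(mu, sigma):
    # Box-Muller, one variate: mu + sigma*sqrt(-2*log(u1))*cos(2*pi*u2)
    u1, u2 = random.random(), random.random()
    return mu + sigma * math.sqrt(-2.0 * math.log(u1)) * math.cos(
        2.0 * math.pi * u2
    )

random.seed(42)  # deterministic for the range check below
assert all(1.0 <= uniform_expansion(1, 10) < 10.0 for _ in range(1000))
assert math.isfinite(gauss_expansion(0, 1))
```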
- **Files**: `emojiasm/transpiler.py` - **Done when**: `random.uniform(1, 10)` transpiles and outputs a value in [1, 10) From 68a4d97f3dfac53410c1a9195fd8e75f7d009576 Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 8 Mar 2026 11:30:56 +0800 Subject: [PATCH 08/17] test(transpiler): add tests for math ops, constants, chained cmp, random Co-Authored-By: Claude Opus 4.6 --- specs/tier1-numeric-ops/.progress.md | 5 +- specs/tier1-numeric-ops/tasks.md | 2 +- tests/test_transpiler.py | 102 +++++++++++++++++++++++++++ 3 files changed, 107 insertions(+), 2 deletions(-) diff --git a/specs/tier1-numeric-ops/.progress.md b/specs/tier1-numeric-ops/.progress.md index 7d05452..afa5717 100644 --- a/specs/tier1-numeric-ops/.progress.md +++ b/specs/tier1-numeric-ops/.progress.md @@ -16,10 +16,13 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. - [x] 1.4 Add transpiler support for ** operator and math functions - [x] 1.5 Add transpiler support for chained comparisons - [x] 1.6 Add transpiler support for random.uniform and random.gauss +- [x] 1.7 Add transpiler tests for all new features ## Current Task Awaiting next task + + ## Learnings - Opcode pipeline is 7 files deep: opcodes.py -> parser (auto) -> vm.py -> bytecode.py -> vm.metal -> gpu.py -> compiler.py. Disasm is auto via reverse map. @@ -43,4 +46,4 @@ Awaiting next task - Updated test_chained_comparison test (was expecting TranspileError, now expects correct output). ## Next -Task 1.7: Add transpiler tests for all new features +Task 1.8: POC Checkpoint — verify all features work end-to-end on VM diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md index c172aea..5d1721d 100644 --- a/specs/tier1-numeric-ops/tasks.md +++ b/specs/tier1-numeric-ops/tasks.md @@ -66,7 +66,7 @@ Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. 
- _Requirements: FR-16, FR-17_ - _Design: Component 7e_ -- [ ] 1.7 Add transpiler tests for all new features +- [x] 1.7 Add transpiler tests for all new features - **Do**: Add tests to `tests/test_transpiler.py` using the existing `run_py()` helper. Test classes: `TestPower` (2**10=1024, 4**0.5=2.0, x**=2 augmented assign), `TestMathFunctions` (sqrt(16)=4.0, sin(0)=0.0, cos(0)=1.0, exp(0)=1.0, log(1)=0.0, abs(-5)=5, min(3,7)=3, max(3,7)=7), `TestMathConstants` (math.pi ~= 3.14159, math.e ~= 2.71828, math.pi*2 expression), `TestChainedComparisons` (1<2<3=1, 1<3<2=0, 1<2<3<4=1, mixed ops 1<=2<3=1, in if condition), `TestRandomDistributions` (uniform in range, gauss returns float). Use approximate assertions for float comparisons. - **Files**: `tests/test_transpiler.py` - **Done when**: All new tests pass diff --git a/tests/test_transpiler.py b/tests/test_transpiler.py index bb458dc..108f211 100644 --- a/tests/test_transpiler.py +++ b/tests/test_transpiler.py @@ -478,3 +478,105 @@ def test_nested_function_calls(self): def test_complex_expression(self): src = "x = 10\ny = 3\nprint((x + y) * (x - y) // 2)" assert run_py(src).strip() == "45" + + +# ── Power operator ─────────────────────────────────────────────────────── + + +class TestPower: + def test_power_int(self): + assert run_py("print(2 ** 10)").strip() == "1024" + + def test_power_float(self): + assert run_py("print(4 ** 0.5)").strip() == "2.0" + + def test_power_augassign(self): + assert run_py("x = 3\nx **= 2\nprint(x)").strip() == "9" + + +# ── Math functions ─────────────────────────────────────────────────────── + + +class TestMathFunctions: + def test_sqrt(self): + assert run_py("import math\nprint(math.sqrt(16))").strip() == "4.0" + + def test_sin_zero(self): + assert run_py("import math\nprint(math.sin(0))").strip() == "0.0" + + def test_cos_zero(self): + assert run_py("import math\nprint(math.cos(0))").strip() == "1.0" + + def test_exp_zero(self): + assert run_py("import 
math\nprint(math.exp(0))").strip() == "1.0" + + def test_log_one(self): + assert run_py("import math\nprint(math.log(1))").strip() == "0.0" + + def test_abs_negative_int(self): + assert run_py("print(abs(-5))").strip() == "5" + + def test_abs_negative_float(self): + assert run_py("print(abs(-3.14))").strip() == "3.14" + + def test_min(self): + assert run_py("print(min(3, 7))").strip() == "3" + + def test_max(self): + assert run_py("print(max(3, 7))").strip() == "7" + + +# ── Math constants ─────────────────────────────────────────────────────── + + +class TestMathConstants: + def test_pi(self): + out = run_py("import math\nx = math.pi\nprint(x)").strip() + assert out.startswith("3.14159") + + def test_e(self): + out = run_py("import math\nx = math.e\nprint(x)").strip() + assert out.startswith("2.71828") + + def test_pi_in_expression(self): + out = run_py("import math\nprint(2 * math.pi)").strip() + val = float(out) + assert abs(val - 6.283185307179586) < 0.001 + + +# ── Chained comparisons ───────────────────────────────────────────────── + + +class TestChainedComparisons: + def test_ascending_true(self): + assert run_py("print(1 < 2 < 3)").strip() == "1" + + def test_ascending_false(self): + assert run_py("print(1 < 3 < 2)").strip() == "0" + + def test_triple_chain(self): + assert run_py("print(1 < 2 < 3 < 4)").strip() == "1" + + def test_mixed_ops(self): + assert run_py("print(1 <= 2 < 3)").strip() == "1" + + def test_in_if(self): + src = "x = 5\nif 1 < x < 10:\n print(1)\nelse:\n print(0)" + assert run_py(src).strip() == "1" + + +# ── Random distributions ──────────────────────────────────────────────── + + +class TestRandomDistributions: + def test_uniform_in_range(self): + src = "import random\nx = random.uniform(1, 10)\nprint(x)" + val = float(run_py(src).strip()) + assert 1.0 <= val < 10.0 + + def test_gauss_returns_float(self): + src = "import random\nx = random.gauss(0, 1)\nprint(x)" + out = run_py(src).strip() + # Should produce a float (contains a dot) + 
val = float(out) + assert isinstance(val, float) From 63b9caafe9eab2e1fb79eeda7ada7600b250e236 Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 8 Mar 2026 11:32:03 +0800 Subject: [PATCH 09/17] feat(tier1): complete POC for numeric ops 749 tests pass, zero failures. All 9 math opcodes (POW, SQRT, SIN, COS, EXP, LOG, ABS, MIN, MAX) working end-to-end through VM and transpiler. Co-Authored-By: Claude Opus 4.6 --- specs/tier1-numeric-ops/.progress.md | 4 +++- specs/tier1-numeric-ops/tasks.md | 2 +- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/specs/tier1-numeric-ops/.progress.md b/specs/tier1-numeric-ops/.progress.md index afa5717..4157532 100644 --- a/specs/tier1-numeric-ops/.progress.md +++ b/specs/tier1-numeric-ops/.progress.md @@ -17,6 +17,7 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. - [x] 1.5 Add transpiler support for chained comparisons - [x] 1.6 Add transpiler support for random.uniform and random.gauss - [x] 1.7 Add transpiler tests for all new features +- [x] 1.8 POC Checkpoint — verify all features work end-to-end on VM ## Current Task Awaiting next task @@ -44,6 +45,7 @@ Awaiting next task - Existing test_power_operator test expected TranspileError for ** -- had to update it since ** is now supported. Updated test to verify correct output instead. - Chained comparisons implementation: for each non-last comparison, DUP+ROT+SWAP saves the comparator for next round and positions values for CMP. For middle comparisons (i>0, not last), ROT+AND+SWAP combines accumulated result and repositions. ROT semantics: [a,b,c] -> [b,c,a] (bottom of 3 goes to top). Extracted _emit_cmp_op helper for comparison opcode emission. - Updated test_chained_comparison test (was expecting TranspileError, now expects correct output). +- POC checkpoint: 749 tests pass (up from 448+ baseline), zero failures. Combined transpiler example (sqrt(2**10)=32.0, sin(pi/2)=1.0, chained cmp 1<2<3=1) works correctly end-to-end. 
## Next -Task 1.8: POC Checkpoint — verify all features work end-to-end on VM +Task 2.1: Add bytecode encoding for 9 new opcodes diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md index 5d1721d..4a91136 100644 --- a/specs/tier1-numeric-ops/tasks.md +++ b/specs/tier1-numeric-ops/tasks.md @@ -75,7 +75,7 @@ Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. - _Requirements: AC-1.1 through AC-5.5_ - _Design: Component 7_ -- [ ] 1.8 POC Checkpoint — verify all features work end-to-end on VM +- [x] 1.8 POC Checkpoint — verify all features work end-to-end on VM - **Do**: Run full test suite. Verify all existing tests still pass (regression). Verify all new tests pass. Run a combined example: `import math; print(math.sqrt(2**10)); print(math.sin(math.pi/2)); print(1 < 2 < 3)` - **Done when**: All tests pass, combined example works - **Verify**: `pytest --tb=short -q` From 9ccf93adc27c8f2d950aa2cc86a05ae709c1c315 Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 8 Mar 2026 11:33:04 +0800 Subject: [PATCH 10/17] feat(bytecode): add encoding for math opcodes Co-Authored-By: Claude Opus 4.6 --- emojiasm/bytecode.py | 18 ++++++++++++++++++ specs/tier1-numeric-ops/.progress.md | 3 ++- specs/tier1-numeric-ops/tasks.md | 2 +- 3 files changed, 21 insertions(+), 2 deletions(-) diff --git a/emojiasm/bytecode.py b/emojiasm/bytecode.py index 45b8817..8237a9a 100644 --- a/emojiasm/bytecode.py +++ b/emojiasm/bytecode.py @@ -42,6 +42,15 @@ Op.MUL: 0x12, Op.DIV: 0x13, Op.MOD: 0x14, + Op.POW: 0x15, + Op.SQRT: 0x16, + Op.SIN: 0x17, + Op.COS: 0x18, + Op.EXP: 0x19, + Op.LOG: 0x1A, + Op.ABS: 0x1B, + Op.MIN: 0x1C, + Op.MAX: 0x1D, # Comparison & Logic Op.CMP_EQ: 0x20, Op.CMP_LT: 0x21, @@ -247,6 +256,15 @@ def _build_string_table(program: Program) -> tuple[dict[str, int], list[str]]: Op.MUL: -1, Op.DIV: -1, Op.MOD: -1, + Op.POW: -1, # pops 2, pushes 1 + Op.SQRT: 0, # pops 1, pushes 1 + Op.SIN: 0, + Op.COS: 0, + Op.EXP: 0, + Op.LOG: 0, + 
Op.ABS: 0, + Op.MIN: -1, # pops 2, pushes 1 + Op.MAX: -1, # pops 2, pushes 1 Op.CMP_EQ: -1, Op.CMP_LT: -1, Op.CMP_GT: -1, diff --git a/specs/tier1-numeric-ops/.progress.md b/specs/tier1-numeric-ops/.progress.md index 4157532..f3a642c 100644 --- a/specs/tier1-numeric-ops/.progress.md +++ b/specs/tier1-numeric-ops/.progress.md @@ -18,6 +18,7 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. - [x] 1.6 Add transpiler support for random.uniform and random.gauss - [x] 1.7 Add transpiler tests for all new features - [x] 1.8 POC Checkpoint — verify all features work end-to-end on VM +- [x] 2.1 Add bytecode encoding for 9 new opcodes ## Current Task Awaiting next task @@ -48,4 +49,4 @@ Awaiting next task - POC checkpoint: 749 tests pass (up from 448+ baseline), zero failures. Combined transpiler example (sqrt(2**10)=32.0, sin(pi/2)=1.0, chained cmp 1<2<3=1) works correctly end-to-end. ## Next -Task 2.1: Add bytecode encoding for 9 new opcodes +Task 2.2: Add Metal kernel dispatch for 9 new opcodes diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md index 4a91136..dc5eb31 100644 --- a/specs/tier1-numeric-ops/tasks.md +++ b/specs/tier1-numeric-ops/tasks.md @@ -83,7 +83,7 @@ Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. ## Phase 2: Full Pipeline (Bytecode + Metal + C Compiler) -- [ ] 2.1 Add bytecode encoding for 9 new opcodes +- [x] 2.1 Add bytecode encoding for 9 new opcodes - **Do**: In `bytecode.py`: Add 9 entries to `OP_MAP` (POW=0x15 through MAX=0x1D). Add 9 entries to `_STACK_EFFECTS` (POW=-1, SQRT/SIN/COS/EXP/LOG/ABS=0, MIN=-1, MAX=-1). The `_uses_strings()` function doesn't need changes since new ops are not string ops. 
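The `_STACK_EFFECTS` entries above feed static depth analysis. A minimal sketch of that bookkeeping (illustrative names; the real analysis is `_analyze_max_stack_depth` in `bytecode.py`, and a full checker would model pops and pushes separately rather than only the net effect):

```python
# Net stack effect per opcode, matching the new _STACK_EFFECTS entries.
STACK_EFFECTS = {
    "PUSH": +1,                        # pushes 1
    "POW": -1, "MIN": -1, "MAX": -1,   # pop 2, push 1
    "SQRT": 0, "SIN": 0, "COS": 0,     # pop 1, push 1
    "EXP": 0, "LOG": 0, "ABS": 0,
}

def max_depth(ops):
    """Track net depth and its peak across a straight-line op sequence."""
    depth = peak = 0
    for op in ops:
        depth += STACK_EFFECTS[op]
        peak = max(peak, depth)
    return peak

# PUSH 2, PUSH 10, POW, SQRT: peak depth 2, net depth 1 at the end
assert max_depth(["PUSH", "PUSH", "POW", "SQRT"]) == 2
```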
- **Files**: `emojiasm/bytecode.py` - **Done when**: `compile_to_bytecode(parse("📥 2 📥 10 🔋 🛑"))` succeeds without BytecodeError From 9696c08a6f3ced73cdcd314ac0b9050c014020f3 Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 8 Mar 2026 11:34:09 +0800 Subject: [PATCH 11/17] feat(metal): add GPU dispatch for math opcodes Co-Authored-By: Claude Opus 4.6 --- emojiasm/metal/vm.metal | 106 +++++++++++++++++++++++++++ specs/tier1-numeric-ops/.progress.md | 3 +- specs/tier1-numeric-ops/tasks.md | 2 +- 3 files changed, 109 insertions(+), 2 deletions(-) diff --git a/emojiasm/metal/vm.metal b/emojiasm/metal/vm.metal index 10c8224..2503ff4 100644 --- a/emojiasm/metal/vm.metal +++ b/emojiasm/metal/vm.metal @@ -61,6 +61,17 @@ constant uint8_t OP_PRINTLN = 0x51; // Random constant uint8_t OP_RANDOM = 0x60; +// Math +constant uint8_t OP_POW = 0x15; +constant uint8_t OP_SQRT = 0x16; +constant uint8_t OP_SIN = 0x17; +constant uint8_t OP_COS = 0x18; +constant uint8_t OP_EXP = 0x19; +constant uint8_t OP_LOG = 0x1A; +constant uint8_t OP_ABS = 0x1B; +constant uint8_t OP_MIN = 0x1C; +constant uint8_t OP_MAX = 0x1D; + // ── Status codes ──────────────────────────────────────────────────────── constant uint32_t STATUS_OK = 0; @@ -620,6 +631,101 @@ kernel void emojiasm_vm( break; } + // ── Math ────────────────────────────────────────────────────────── + + case OP_POW: { + if (sp < 2) { + status[tid] = STATUS_ERROR; + running = false; + break; + } + sp--; + stack[sp - 1] = pow(stack[sp - 1], stack[sp]); + break; + } + + case OP_SQRT: { + if (sp < 1) { + status[tid] = STATUS_ERROR; + running = false; + break; + } + stack[sp - 1] = sqrt(stack[sp - 1]); + break; + } + + case OP_SIN: { + if (sp < 1) { + status[tid] = STATUS_ERROR; + running = false; + break; + } + stack[sp - 1] = sin(stack[sp - 1]); + break; + } + + case OP_COS: { + if (sp < 1) { + status[tid] = STATUS_ERROR; + running = false; + break; + } + stack[sp - 1] = cos(stack[sp - 1]); + break; + } + + case OP_EXP: { + if (sp < 1) { + 
status[tid] = STATUS_ERROR; + running = false; + break; + } + stack[sp - 1] = exp(stack[sp - 1]); + break; + } + + case OP_LOG: { + if (sp < 1) { + status[tid] = STATUS_ERROR; + running = false; + break; + } + stack[sp - 1] = log(stack[sp - 1]); + break; + } + + case OP_ABS: { + if (sp < 1) { + status[tid] = STATUS_ERROR; + running = false; + break; + } + stack[sp - 1] = fabs(stack[sp - 1]); + break; + } + + case OP_MIN: { + if (sp < 2) { + status[tid] = STATUS_ERROR; + running = false; + break; + } + sp--; + stack[sp - 1] = min(stack[sp - 1], stack[sp]); + break; + } + + case OP_MAX: { + if (sp < 2) { + status[tid] = STATUS_ERROR; + running = false; + break; + } + sp--; + stack[sp - 1] = max(stack[sp - 1], stack[sp]); + break; + } + // ── Unknown opcode ────────────────────────────────────────────── default: { diff --git a/specs/tier1-numeric-ops/.progress.md b/specs/tier1-numeric-ops/.progress.md index f3a642c..e4ff0b9 100644 --- a/specs/tier1-numeric-ops/.progress.md +++ b/specs/tier1-numeric-ops/.progress.md @@ -19,6 +19,7 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. - [x] 1.7 Add transpiler tests for all new features - [x] 1.8 POC Checkpoint — verify all features work end-to-end on VM - [x] 2.1 Add bytecode encoding for 9 new opcodes +- [x] 2.2 Add Metal kernel dispatch for 9 new opcodes ## Current Task Awaiting next task @@ -49,4 +50,4 @@ Awaiting next task - POC checkpoint: 749 tests pass (up from 448+ baseline), zero failures. Combined transpiler example (sqrt(2**10)=32.0, sin(pi/2)=1.0, chained cmp 1<2<3=1) works correctly end-to-end. 
## Next -Task 2.2: Add Metal kernel dispatch for 9 new opcodes +Task 2.3: Add GPU glue entries for 9 new opcodes diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md index dc5eb31..5b57168 100644 --- a/specs/tier1-numeric-ops/tasks.md +++ b/specs/tier1-numeric-ops/tasks.md @@ -92,7 +92,7 @@ Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. - _Requirements: FR-1 through FR-9, FR-21_ - _Design: Component 3_ -- [ ] 2.2 Add Metal kernel dispatch for 9 new opcodes +- [x] 2.2 Add Metal kernel dispatch for 9 new opcodes - **Do**: In `metal/vm.metal`: Add 9 opcode constants after `OP_RANDOM` (OP_POW=0x15 through OP_MAX=0x1D). Add 9 switch cases in the dispatch loop. Binary ops (POW, MIN, MAX) follow OP_MUL pattern: check sp<2, decrement sp, apply MSL function. Unary ops (SQRT, SIN, COS, EXP, LOG, ABS) follow OP_NOT pattern: check sp<1, apply MSL function in-place. MSL functions: `pow()`, `sqrt()`, `sin()`, `cos()`, `exp()`, `log()`, `abs()` (or `fabs()`), `min()`, `max()`. 
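The binary/unary split described above is the whole dispatch pattern: binary ops guard `sp < 2` and decrement `sp`; unary ops guard `sp < 1` and rewrite the top in place. A Python mirror of that pattern (a sketch; the real kernel works on a flat per-thread float stack in MSL, not a Python list):

```python
import math

BINARY = {"POW": math.pow, "MIN": min, "MAX": max}
UNARY = {"SQRT": math.sqrt, "SIN": math.sin, "COS": math.cos,
         "EXP": math.exp, "LOG": math.log, "ABS": abs}

def step(stack, op):
    """Apply one math op; return False on underflow (STATUS_ERROR)."""
    if op in BINARY:
        if len(stack) < 2:          # the kernel's `if (sp < 2)` guard
            return False
        b = stack.pop()             # the kernel's `sp--`
        stack[-1] = BINARY[op](stack[-1], b)
    else:
        if len(stack) < 1:          # the kernel's `if (sp < 1)` guard
            return False
        stack[-1] = UNARY[op](stack[-1])
    return True

stack = [2.0, 10.0]
assert step(stack, "POW") and stack == [1024.0]
assert step(stack, "SQRT") and stack == [32.0]
assert not step([], "SIN")  # underflow -> error status
```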
- **Files**: `emojiasm/metal/vm.metal` - **Done when**: Metal shader compiles without errors (validated by gpu.py tests) From 50c76e423310e04d4343392608d98be65317d487 Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 8 Mar 2026 11:34:52 +0800 Subject: [PATCH 12/17] feat(gpu): add GPU_OPCODES entries for math ops Co-Authored-By: Claude Opus 4.6 --- emojiasm/gpu.py | 10 ++++++++++ specs/tier1-numeric-ops/.progress.md | 3 ++- specs/tier1-numeric-ops/tasks.md | 2 +- 3 files changed, 13 insertions(+), 2 deletions(-) diff --git a/emojiasm/gpu.py b/emojiasm/gpu.py index 60972d6..6c71785 100644 --- a/emojiasm/gpu.py +++ b/emojiasm/gpu.py @@ -66,6 +66,16 @@ # I/O "PRINT": 0x50, "PRINTLN": 0x51, + # Math + "POW": 0x15, + "SQRT": 0x16, + "SIN": 0x17, + "COS": 0x18, + "EXP": 0x19, + "LOG": 0x1A, + "ABS": 0x1B, + "MIN": 0x1C, + "MAX": 0x1D, # Random "RANDOM": 0x60, } diff --git a/specs/tier1-numeric-ops/.progress.md b/specs/tier1-numeric-ops/.progress.md index e4ff0b9..e0fa323 100644 --- a/specs/tier1-numeric-ops/.progress.md +++ b/specs/tier1-numeric-ops/.progress.md @@ -20,6 +20,7 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. - [x] 1.8 POC Checkpoint — verify all features work end-to-end on VM - [x] 2.1 Add bytecode encoding for 9 new opcodes - [x] 2.2 Add Metal kernel dispatch for 9 new opcodes +- [x] 2.3 Add GPU glue entries for 9 new opcodes ## Current Task Awaiting next task @@ -50,4 +51,4 @@ Awaiting next task - POC checkpoint: 749 tests pass (up from 448+ baseline), zero failures. Combined transpiler example (sqrt(2**10)=32.0, sin(pi/2)=1.0, chained cmp 1<2<3=1) works correctly end-to-end. 
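The "`validate_opcodes()` passes" done-when criterion above amounts to a byte-for-byte consistency check between `gpu.py`'s `GPU_OPCODES` and `bytecode.py`'s `OP_MAP`. A sketch of that check, with both dicts stubbed down to the math subset (the real check covers every opcode):

```python
GPU_OPCODES = {"POW": 0x15, "SQRT": 0x16, "SIN": 0x17, "COS": 0x18,
               "EXP": 0x19, "LOG": 0x1A, "ABS": 0x1B, "MIN": 0x1C,
               "MAX": 0x1D}
OP_MAP = {"POW": 0x15, "SQRT": 0x16, "SIN": 0x17, "COS": 0x18,
          "EXP": 0x19, "LOG": 0x1A, "ABS": 0x1B, "MIN": 0x1C,
          "MAX": 0x1D}

def validate(gpu, bytecode_map):
    """Fail loudly if any GPU byte value disagrees with the bytecode map."""
    mismatches = [name for name, code in gpu.items()
                  if bytecode_map.get(name) != code]
    if mismatches:
        raise ValueError(f"GPU/bytecode opcode mismatch: {mismatches}")

validate(GPU_OPCODES, OP_MAP)  # silent when every byte value matches
```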
## Next -Task 2.3: Add GPU glue entries for 9 new opcodes +Task 2.4: Add C compiler emission for 9 new opcodes diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md index 5b57168..1410561 100644 --- a/specs/tier1-numeric-ops/tasks.md +++ b/specs/tier1-numeric-ops/tasks.md @@ -101,7 +101,7 @@ Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. - _Requirements: FR-1 through FR-9_ - _Design: Component 4_ -- [ ] 2.3 Add GPU glue entries for 9 new opcodes +- [x] 2.3 Add GPU glue entries for 9 new opcodes - **Do**: In `gpu.py`: Add 9 entries to `GPU_OPCODES` dict matching bytecode OP_MAP values exactly. No `_GPU_NAME_TO_OP_NAME` changes needed since GPU names match Op enum names directly. - **Files**: `emojiasm/gpu.py` - **Done when**: `validate_opcodes()` passes with new opcodes From 158140e18b7ea5db56cf437ef24f9aa5b7e6fa77 Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 8 Mar 2026 11:36:15 +0800 Subject: [PATCH 13/17] feat(compiler): add C emission for math opcodes Co-Authored-By: Claude Opus 4.6 --- emojiasm/compiler.py | 56 ++++++++++++++++++++++++++++ specs/tier1-numeric-ops/.progress.md | 7 ++-- specs/tier1-numeric-ops/tasks.md | 2 +- 3 files changed, 61 insertions(+), 4 deletions(-) diff --git a/emojiasm/compiler.py b/emojiasm/compiler.py index 3455662..686bb8f 100644 --- a/emojiasm/compiler.py +++ b/emojiasm/compiler.py @@ -26,6 +26,7 @@ def _lbl(func_hex: str, label: str) -> str: #include #include #include +#include /* EmojiASM AOT compiled output (numeric-only fast path) */ @@ -48,6 +49,7 @@ def _lbl(func_hex: str, label: str) -> str: #include #include #include +#include /* EmojiASM AOT compiled output */ @@ -308,6 +310,60 @@ def _emit_inst(inst: Instruction, lines: list, fhex: str, mem: dict, numeric_onl elif op == Op.RANDOM: A(' PUSH_N((double)rand() / (double)RAND_MAX);') + elif op == Op.POW: + if numeric_only: + A(' { double b=POP(),a=POP(); PUSH_N(pow(a,b)); }') + else: + A(' { Val b=POP(),a=POP(); 
PUSH_N(pow(a.num,b.num)); }') + + elif op == Op.SQRT: + if numeric_only: + A(' { double a=POP(); PUSH_N(sqrt(a)); }') + else: + A(' { Val a=POP(); PUSH_N(sqrt(a.num)); }') + + elif op == Op.SIN: + if numeric_only: + A(' { double a=POP(); PUSH_N(sin(a)); }') + else: + A(' { Val a=POP(); PUSH_N(sin(a.num)); }') + + elif op == Op.COS: + if numeric_only: + A(' { double a=POP(); PUSH_N(cos(a)); }') + else: + A(' { Val a=POP(); PUSH_N(cos(a.num)); }') + + elif op == Op.EXP: + if numeric_only: + A(' { double a=POP(); PUSH_N(exp(a)); }') + else: + A(' { Val a=POP(); PUSH_N(exp(a.num)); }') + + elif op == Op.LOG: + if numeric_only: + A(' { double a=POP(); PUSH_N(log(a)); }') + else: + A(' { Val a=POP(); PUSH_N(log(a.num)); }') + + elif op == Op.ABS: + if numeric_only: + A(' { double a=POP(); PUSH_N(fabs(a)); }') + else: + A(' { Val a=POP(); PUSH_N(fabs(a.num)); }') + + elif op == Op.MIN: + if numeric_only: + A(' { double b=POP(),a=POP(); PUSH_N(fmin(a,b)); }') + else: + A(' { Val b=POP(),a=POP(); PUSH_N(fmin(a.num,b.num)); }') + + elif op == Op.MAX: + if numeric_only: + A(' { double b=POP(),a=POP(); PUSH_N(fmax(a,b)); }') + else: + A(' { Val b=POP(),a=POP(); PUSH_N(fmax(a.num,b.num)); }') + # ── Main compiler entry point ─────────────────────────────────────────────── diff --git a/specs/tier1-numeric-ops/.progress.md b/specs/tier1-numeric-ops/.progress.md index e0fa323..e7de090 100644 --- a/specs/tier1-numeric-ops/.progress.md +++ b/specs/tier1-numeric-ops/.progress.md @@ -21,12 +21,11 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. - [x] 2.1 Add bytecode encoding for 9 new opcodes - [x] 2.2 Add Metal kernel dispatch for 9 new opcodes - [x] 2.3 Add GPU glue entries for 9 new opcodes +- [x] 2.4 Add C compiler emission for 9 new opcodes ## Current Task Awaiting next task - - ## Learnings - Opcode pipeline is 7 files deep: opcodes.py -> parser (auto) -> vm.py -> bytecode.py -> vm.metal -> gpu.py -> compiler.py. Disasm is auto via reverse map. 
@@ -50,5 +49,7 @@ Awaiting next task - Updated test_chained_comparison test (was expecting TranspileError, now expects correct output). - POC checkpoint: 749 tests pass (up from 448+ baseline), zero failures. Combined transpiler example (sqrt(2**10)=32.0, sin(pi/2)=1.0, chained cmp 1<2<3=1) works correctly end-to-end. +- C compiler emission follows existing pattern exactly: binary ops (POW, MIN, MAX) use `double b=POP(),a=POP()` / `Val b=POP(),a=POP()`, unary ops use single POP. Uses fabs/fmin/fmax (not abs/min/max which are int-only in C). + ## Next -Task 2.4: Add C compiler emission for 9 new opcodes +Task 2.5: Add bytecode and GPU tests for new opcodes diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md index 1410561..2868ead 100644 --- a/specs/tier1-numeric-ops/tasks.md +++ b/specs/tier1-numeric-ops/tasks.md @@ -110,7 +110,7 @@ Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. - _Requirements: NFR-2_ - _Design: Component 5_ -- [ ] 2.4 Add C compiler emission for 9 new opcodes +- [x] 2.4 Add C compiler emission for 9 new opcodes - **Do**: In `compiler.py`: (1) Add `#include ` to both `_PREAMBLE_NUMERIC` and `_PREAMBLE_MIXED` after the existing `#include `. (2) Add 9 `elif op == Op.X:` blocks in `_emit_inst` after the `Op.RANDOM` block. Each block handles both numeric_only and mixed mode. Binary ops: `{ double b=POP(),a=POP(); PUSH_N(func(a,b)); }`. Unary ops: `{ double a=POP(); PUSH_N(func(a)); }`. C functions: `pow()`, `sqrt()`, `sin()`, `cos()`, `exp()`, `log()`, `fabs()` (not abs which is int-only in C), `fmin()`, `fmax()`. 
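The C-emission rule above (float variants `fabs`/`fmin`/`fmax`, since C's `abs` is integer-only and libc has no generic `min`/`max`) can be captured as a tiny generator sketch. Names are illustrative; the real code is the `elif op == Op.X:` chain in `_emit_inst`:

```python
BINARY_C = {"POW": "pow", "MIN": "fmin", "MAX": "fmax"}
UNARY_C = {"SQRT": "sqrt", "SIN": "sin", "COS": "cos",
           "EXP": "exp", "LOG": "log", "ABS": "fabs"}

def emit_math_op(op, numeric_only):
    """Return the C statement for one math opcode (sketch of _emit_inst)."""
    if op in BINARY_C:
        fn = BINARY_C[op]
        if numeric_only:
            return f'    {{ double b=POP(),a=POP(); PUSH_N({fn}(a,b)); }}'
        return f'    {{ Val b=POP(),a=POP(); PUSH_N({fn}(a.num,b.num)); }}'
    fn = UNARY_C[op]
    if numeric_only:
        return f'    {{ double a=POP(); PUSH_N({fn}(a)); }}'
    return f'    {{ Val a=POP(); PUSH_N({fn}(a.num)); }}'

assert "fabs" in emit_math_op("ABS", numeric_only=True)
assert "fmin" in emit_math_op("MIN", numeric_only=False)
```

Popping `b` before `a` keeps the operand order right for the non-commutative case: `pow(a,b)` with `a` pushed first.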
- **Files**: `emojiasm/compiler.py` - **Done when**: `compile_to_c(parse("📥 2 📥 10 🔋 🖨️ 🛑"))` generates valid C with `pow()` call From 794f9c0c4d298aa5e12642427179e4579b1c1dac Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 8 Mar 2026 11:38:55 +0800 Subject: [PATCH 14/17] test(bytecode,gpu): add tests for math opcode encoding Co-Authored-By: Claude Opus 4.6 --- specs/tier1-numeric-ops/.progress.md | 3 +- specs/tier1-numeric-ops/tasks.md | 2 +- tests/test_bytecode.py | 187 +++++++++++++++++++++++++++ tests/test_gpu_kernel.py | 117 +++++++++++++++++ 4 files changed, 307 insertions(+), 2 deletions(-) diff --git a/specs/tier1-numeric-ops/.progress.md b/specs/tier1-numeric-ops/.progress.md index e7de090..9c95d4e 100644 --- a/specs/tier1-numeric-ops/.progress.md +++ b/specs/tier1-numeric-ops/.progress.md @@ -22,6 +22,7 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. - [x] 2.2 Add Metal kernel dispatch for 9 new opcodes - [x] 2.3 Add GPU glue entries for 9 new opcodes - [x] 2.4 Add C compiler emission for 9 new opcodes +- [x] 2.5 Add bytecode and GPU tests for new opcodes ## Current Task Awaiting next task @@ -52,4 +53,4 @@ Awaiting next task - C compiler emission follows existing pattern exactly: binary ops (POW, MIN, MAX) use `double b=POP(),a=POP()` / `Val b=POP(),a=POP()`, unary ops use single POP. Uses fabs/fmin/fmax (not abs/min/max which are int-only in C). ## Next -Task 2.5: Add bytecode and GPU tests for new opcodes +Task 3.1: Update docs/REFERENCE.md with new opcodes diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md index 2868ead..02f711a 100644 --- a/specs/tier1-numeric-ops/tasks.md +++ b/specs/tier1-numeric-ops/tasks.md @@ -119,7 +119,7 @@ Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. 
- _Requirements: FR-19_ - _Design: Component 6_ -- [ ] 2.5 Add bytecode and GPU tests for new opcodes +- [x] 2.5 Add bytecode and GPU tests for new opcodes - **Do**: In `tests/test_bytecode.py`: Add tests verifying OP_MAP contains all 9 new ops, bytecode encoding roundtrips correctly, stack effects are defined for all new ops, gpu_tier classification is still correct for programs using new ops. In `tests/test_gpu_kernel.py`: Add tests verifying Metal kernel source contains all new opcode constants and switch cases. Test `validate_opcodes()` passes. - **Files**: `tests/test_bytecode.py`, `tests/test_gpu_kernel.py` - **Done when**: New tests pass diff --git a/tests/test_bytecode.py b/tests/test_bytecode.py index 8939dac..1499a4e 100644 --- a/tests/test_bytecode.py +++ b/tests/test_bytecode.py @@ -17,6 +17,7 @@ _flatten_functions, _analyze_max_stack_depth, _GPU_MAX_STACK, + _STACK_EFFECTS, ) @@ -667,6 +668,36 @@ def test_comparison_and_logic_ops(self): assert OP_MAP[Op.OR] in opcodes assert OP_MAP[Op.NOT] in opcodes + def test_math_ops(self): + """All 9 new math opcodes encode correctly.""" + src = ( + "📥 2\n📥 10\n🔋\n📤\n" # POW + "📥 16\n🌱\n📤\n" # SQRT + "📥 0\n📈\n📤\n" # SIN + "📥 0\n📉\n📤\n" # COS + "📥 0\n🚀\n📤\n" # EXP + "📥 1\n📓\n📤\n" # LOG + "📥 5\n💪\n📤\n" # ABS + "📥 3\n📥 7\n⬇️\n📤\n" # MIN + "📥 3\n📥 7\n⬆️\n📤\n" # MAX + "🛑" + ) + prog = _parse(src) + gpu = compile_to_bytecode(prog) + + decoded = _decode_bytecode(gpu.bytecode) + opcodes = [op for op, _ in decoded] + + assert OP_MAP[Op.POW] in opcodes + assert OP_MAP[Op.SQRT] in opcodes + assert OP_MAP[Op.SIN] in opcodes + assert OP_MAP[Op.COS] in opcodes + assert OP_MAP[Op.EXP] in opcodes + assert OP_MAP[Op.LOG] in opcodes + assert OP_MAP[Op.ABS] in opcodes + assert OP_MAP[Op.MIN] in opcodes + assert OP_MAP[Op.MAX] in opcodes + def test_stack_ops(self): """Stack manipulation opcodes encode correctly.""" src = ( @@ -687,3 +718,159 @@ def test_stack_ops(self): assert OP_MAP[Op.SWAP] in opcodes assert OP_MAP[Op.OVER] in 
opcodes assert OP_MAP[Op.ROT] in opcodes + + +# ── Math opcode bytecode tests ────────────────────────────────────────── + +_MATH_OPS = [Op.POW, Op.SQRT, Op.SIN, Op.COS, Op.EXP, Op.LOG, Op.ABS, Op.MIN, Op.MAX] + + +class TestMathOpcodesInOpMap: + """Verify OP_MAP contains all 9 new math opcodes.""" + + def test_all_math_ops_present(self): + for op in _MATH_OPS: + assert op in OP_MAP, f"Op.{op.name} missing from OP_MAP" + + def test_math_opcode_values_contiguous(self): + """Math opcodes should be 0x15 through 0x1D.""" + expected = { + Op.POW: 0x15, Op.SQRT: 0x16, Op.SIN: 0x17, Op.COS: 0x18, + Op.EXP: 0x19, Op.LOG: 0x1A, Op.ABS: 0x1B, Op.MIN: 0x1C, + Op.MAX: 0x1D, + } + for op, code in expected.items(): + assert OP_MAP[op] == code, ( + f"Op.{op.name}: expected 0x{code:02X}, got 0x{OP_MAP[op]:02X}" + ) + + def test_math_ops_in_reverse_map(self): + """All 9 math ops should appear in OPCODE_TO_OP.""" + for op in _MATH_OPS: + code = OP_MAP[op] + assert code in OPCODE_TO_OP + assert OPCODE_TO_OP[code] == op + + +class TestMathOpcodeStackEffects: + """Verify _STACK_EFFECTS are defined for all 9 math opcodes.""" + + def test_all_math_ops_have_stack_effects(self): + for op in _MATH_OPS: + assert op in _STACK_EFFECTS, f"Op.{op.name} missing from _STACK_EFFECTS" + + def test_binary_ops_effect_minus_one(self): + """Binary ops (POW, MIN, MAX) consume 2, push 1 => net -1.""" + for op in [Op.POW, Op.MIN, Op.MAX]: + assert _STACK_EFFECTS[op] == -1, ( + f"Op.{op.name}: expected -1, got {_STACK_EFFECTS[op]}" + ) + + def test_unary_ops_effect_zero(self): + """Unary ops (SQRT, SIN, COS, EXP, LOG, ABS) consume 1, push 1 => net 0.""" + for op in [Op.SQRT, Op.SIN, Op.COS, Op.EXP, Op.LOG, Op.ABS]: + assert _STACK_EFFECTS[op] == 0, ( + f"Op.{op.name}: expected 0, got {_STACK_EFFECTS[op]}" + ) + + +class TestMathOpcodeRoundTrip: + """Verify bytecode roundtrip for programs using math opcodes.""" + + def test_pow_roundtrip(self): + """PUSH 2, PUSH 10, POW, HALT encodes and decodes 
correctly.""" + src = "📥 2\n📥 10\n🔋\n🛑" + prog = _parse(src) + gpu = compile_to_bytecode(prog) + + decoded = _decode_bytecode(gpu.bytecode) + expected_ops = [Op.PUSH, Op.PUSH, Op.POW, Op.HALT] + for (opcode, _), expected_op in zip(decoded, expected_ops): + assert OPCODE_TO_OP[opcode] == expected_op + + def test_unary_roundtrip(self): + """Unary math ops encode and decode correctly.""" + src = "📥 16\n🌱\n📥 0\n📈\n📥 0\n📉\n📥 0\n🚀\n📥 1\n📓\n📥 5\n💪\n🛑" + prog = _parse(src) + gpu = compile_to_bytecode(prog) + + decoded = _decode_bytecode(gpu.bytecode) + expected_ops = [ + Op.PUSH, Op.SQRT, + Op.PUSH, Op.SIN, + Op.PUSH, Op.COS, + Op.PUSH, Op.EXP, + Op.PUSH, Op.LOG, + Op.PUSH, Op.ABS, + Op.HALT, + ] + assert len(decoded) == len(expected_ops) + for (opcode, _), expected_op in zip(decoded, expected_ops): + assert OPCODE_TO_OP[opcode] == expected_op + + def test_min_max_roundtrip(self): + """MIN and MAX encode and decode correctly.""" + src = "📥 3\n📥 7\n⬇️\n📥 3\n📥 7\n⬆️\n🛑" + prog = _parse(src) + gpu = compile_to_bytecode(prog) + + decoded = _decode_bytecode(gpu.bytecode) + expected_ops = [ + Op.PUSH, Op.PUSH, Op.MIN, + Op.PUSH, Op.PUSH, Op.MAX, + Op.HALT, + ] + assert len(decoded) == len(expected_ops) + for (opcode, _), expected_op in zip(decoded, expected_ops): + assert OPCODE_TO_OP[opcode] == expected_op + + def test_constant_pool_for_math_program(self): + """Constants used by math ops should be in the pool.""" + src = "📥 2\n📥 10\n🔋\n📥 16\n🌱\n🛑" + prog = _parse(src) + gpu = compile_to_bytecode(prog) + + assert 2.0 in gpu.constants + assert 10.0 in gpu.constants + assert 16.0 in gpu.constants + + +class TestMathOpcodeGpuTier: + """Verify gpu_tier classification for programs using math opcodes.""" + + def test_math_only_is_tier1(self): + """Programs using only math ops (no I/O) should be tier 1.""" + src = "📥 2\n📥 10\n🔋\n📥 16\n🌱\n🛑" + prog = _parse(src) + assert gpu_tier(prog) == 1 + + def test_math_with_println_is_tier2(self): + """Math ops + PRINTLN should be tier 2.""" + src 
= "📥 2\n📥 10\n🔋\n🖨️\n🛑" + prog = _parse(src) + assert gpu_tier(prog) == 2 + + def test_all_math_ops_tier1(self): + """A program using all 9 math ops without I/O is tier 1.""" + src = ( + "📥 2\n📥 10\n🔋\n📤\n" # POW + "📥 16\n🌱\n📤\n" # SQRT + "📥 0\n📈\n📤\n" # SIN + "📥 0\n📉\n📤\n" # COS + "📥 0\n🚀\n📤\n" # EXP + "📥 1\n📓\n📤\n" # LOG + "📥 5\n💪\n📤\n" # ABS + "📥 3\n📥 7\n⬇️\n📤\n" # MIN + "📥 3\n📥 7\n⬆️\n📤\n" # MAX + "🛑" + ) + prog = _parse(src) + assert gpu_tier(prog) == 1 + + def test_gpu_program_tier_matches(self): + """GpuProgram.gpu_tier should match gpu_tier() for math programs.""" + src = "📥 2\n📥 10\n🔋\n🛑" + prog = _parse(src) + gpu = compile_to_bytecode(prog) + assert gpu.gpu_tier == gpu_tier(prog) + assert gpu.gpu_tier == 1 diff --git a/tests/test_gpu_kernel.py b/tests/test_gpu_kernel.py index f6f1639..7041a4f 100644 --- a/tests/test_gpu_kernel.py +++ b/tests/test_gpu_kernel.py @@ -168,6 +168,15 @@ def test_all_opcodes_have_case(self): "MUL": "OP_MUL", "DIV": "OP_DIV", "MOD": "OP_MOD", + "POW": "OP_POW", + "SQRT": "OP_SQRT", + "SIN": "OP_SIN", + "COS": "OP_COS", + "EXP": "OP_EXP", + "LOG": "OP_LOG", + "ABS": "OP_ABS", + "MIN": "OP_MIN", + "MAX": "OP_MAX", "EQ": "OP_EQ", "LT": "OP_LT", "GT": "OP_GT", @@ -313,3 +322,111 @@ def test_result_written(self): """Kernel should write TOS to results buffer.""" src = get_kernel_source() assert "results[tid]" in src + + +# ── Math opcode GPU kernel tests ──────────────────────────────────────── + +_MATH_MSL_CONSTANTS = [ + ("OP_POW", 0x15), + ("OP_SQRT", 0x16), + ("OP_SIN", 0x17), + ("OP_COS", 0x18), + ("OP_EXP", 0x19), + ("OP_LOG", 0x1A), + ("OP_ABS", 0x1B), + ("OP_MIN", 0x1C), + ("OP_MAX", 0x1D), +] + + +class TestMathOpcodeKernelConstants: + """Verify Metal kernel source contains all 9 new math opcode constants.""" + + def test_all_math_constants_defined(self): + """Each math opcode constant should be defined in the kernel.""" + src = get_kernel_source() + for msl_name, _ in _MATH_MSL_CONSTANTS: + assert f"constant uint8_t 
{msl_name}" in src, ( + f"Missing MSL constant definition for {msl_name}" + ) + + def test_all_math_constants_have_correct_values(self): + """Each math opcode constant should have the correct hex value.""" + src = get_kernel_source() + for msl_name, code in _MATH_MSL_CONSTANTS: + pattern = rf"constant\s+uint8_t\s+{msl_name}\s*=\s*0x{code:02X}" + match = re.search(pattern, src, re.IGNORECASE) + assert match is not None, ( + f"Constant {msl_name} not defined with value 0x{code:02X}" + ) + + def test_all_math_cases_in_switch(self): + """Each math opcode should have a case in the switch dispatch.""" + src = get_kernel_source() + for msl_name, _ in _MATH_MSL_CONSTANTS: + assert f"case {msl_name}:" in src, ( + f"Missing case {msl_name}: in switch dispatch" + ) + + +class TestMathOpcodeKernelFunctions: + """Verify the kernel uses correct MSL math functions.""" + + def test_pow_uses_msl_pow(self): + src = get_kernel_source() + assert "pow(" in src + + def test_sqrt_uses_msl_sqrt(self): + src = get_kernel_source() + assert "sqrt(" in src + + def test_sin_uses_msl_sin(self): + src = get_kernel_source() + assert "sin(" in src + + def test_cos_uses_msl_cos(self): + src = get_kernel_source() + assert "cos(" in src + + def test_exp_uses_msl_exp(self): + src = get_kernel_source() + assert "exp(" in src + + def test_log_uses_msl_log(self): + src = get_kernel_source() + assert "log(" in src + + def test_min_uses_msl_min(self): + src = get_kernel_source() + assert "min(" in src + + def test_max_uses_msl_max(self): + src = get_kernel_source() + assert "max(" in src + + +class TestMathOpcodeValidation: + """Verify validate_opcodes() passes with new math opcodes.""" + + def test_validate_opcodes_still_passes(self): + """validate_opcodes() should not raise with math ops added.""" + validate_opcodes() + + def test_math_ops_in_gpu_opcodes(self): + """All 9 math ops should be in GPU_OPCODES.""" + math_gpu_names = ["POW", "SQRT", "SIN", "COS", "EXP", "LOG", "ABS", "MIN", "MAX"] + for name 
in math_gpu_names: + assert name in GPU_OPCODES, f"{name} missing from GPU_OPCODES" + + def test_math_gpu_opcodes_match_bytecode(self): + """GPU_OPCODES values for math ops should match OP_MAP.""" + math_ops = { + "POW": Op.POW, "SQRT": Op.SQRT, "SIN": Op.SIN, + "COS": Op.COS, "EXP": Op.EXP, "LOG": Op.LOG, + "ABS": Op.ABS, "MIN": Op.MIN, "MAX": Op.MAX, + } + for gpu_name, op in math_ops.items(): + assert GPU_OPCODES[gpu_name] == OP_MAP[op], ( + f"Mismatch for {gpu_name}: GPU=0x{GPU_OPCODES[gpu_name]:02X} " + f"vs bytecode=0x{OP_MAP[op]:02X}" + ) From 356a36665c2566b9a71cb225005c21cbdb4b107e Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 8 Mar 2026 11:40:17 +0800 Subject: [PATCH 15/17] docs: add math opcodes to language reference Co-Authored-By: Claude Opus 4.6 --- docs/REFERENCE.md | 25 +++++++++++++++++++++---- specs/tier1-numeric-ops/.progress.md | 5 ++++- specs/tier1-numeric-ops/tasks.md | 2 +- 3 files changed, 26 insertions(+), 6 deletions(-) diff --git a/docs/REFERENCE.md b/docs/REFERENCE.md index d41f982..5f3a38b 100644 --- a/docs/REFERENCE.md +++ b/docs/REFERENCE.md @@ -42,6 +42,20 @@ Structural — not instructions, not on the stack. | `🔢` | MOD | `( a b -- a%b )` | Integer remainder. Error on zero. | | `🎲` | RANDOM | `( -- float )` | Push random float in [0.0, 1.0). GPU: Philox-4x32-10 PRNG. | +### Math + +| Emoji | Name | Stack effect | Notes | +|:---:|---|:---:|---| +| `🔋` | POW | `( a b -- a^b )` | Power / exponentiation | +| `🌱` | SQRT | `( a -- √a )` | Square root. Error on negative input. | +| `📈` | SIN | `( a -- sin(a) )` | Sine (radians) | +| `📉` | COS | `( a -- cos(a) )` | Cosine (radians) | +| `🚀` | EXP | `( a -- e^a )` | Natural exponential | +| `📓` | LOG | `( a -- ln(a) )` | Natural logarithm. Error on non-positive input. | +| `💪` | ABS | `( a -- \|a\| )` | Absolute value. Preserves int type. | +| `⬇️` | MIN | `( a b -- min(a,b) )` | Minimum of two values. Also `⬇`. | +| `⬆️` | MAX | `( a b -- max(a,b) )` | Maximum of two values. Also `⬆`. 
| + ### Comparison & Logic All comparison ops consume both operands and push `1` (true) or `0` (false). @@ -308,15 +322,18 @@ result = tool.execute_python("print(42)", n=1000) **Supported Python subset:** - Literals: `int`, `float`, `True`, `False` -- Arithmetic: `+`, `-`, `*`, `/`, `//`, `%` -- Comparisons: `==`, `!=`, `<`, `>`, `<=`, `>=` +- Arithmetic: `+`, `-`, `*`, `/`, `//`, `%`, `**` +- Comparisons: `==`, `!=`, `<`, `>`, `<=`, `>=`, chained (`a < b < c`) - Boolean: `and`, `or`, `not` - Control flow: `if`/`elif`/`else`, `while`, `for x in range()`, `break`, `continue` - Functions: `def`/`return` (including recursion) - I/O: `print()` (single/multi-arg, `end=""`) -- Random: `import random` + `random.random()` +- Math: `math.sqrt()`, `math.sin()`, `math.cos()`, `math.exp()`, `math.log()` +- Constants: `math.pi`, `math.e` +- Builtins: `abs()`, `min()`, `max()` +- Random: `import random` + `random.random()`, `random.uniform(a, b)`, `random.gauss(mu, sigma)` -**Not supported:** strings, lists, dicts, classes, exceptions, generators, f-strings, `**`, `import` beyond random/math. +**Not supported:** strings, lists, dicts, classes, exceptions, generators, f-strings, `import` beyond random/math. --- diff --git a/specs/tier1-numeric-ops/.progress.md b/specs/tier1-numeric-ops/.progress.md index 9c95d4e..3fb3e9c 100644 --- a/specs/tier1-numeric-ops/.progress.md +++ b/specs/tier1-numeric-ops/.progress.md @@ -23,10 +23,13 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. - [x] 2.3 Add GPU glue entries for 9 new opcodes - [x] 2.4 Add C compiler emission for 9 new opcodes - [x] 2.5 Add bytecode and GPU tests for new opcodes +- [x] 3.1 Update docs/REFERENCE.md with new opcodes ## Current Task Awaiting next task + + ## Learnings - Opcode pipeline is 7 files deep: opcodes.py -> parser (auto) -> vm.py -> bytecode.py -> vm.metal -> gpu.py -> compiler.py. Disasm is auto via reverse map. 
@@ -53,4 +56,4 @@ Awaiting next task - C compiler emission follows existing pattern exactly: binary ops (POW, MIN, MAX) use `double b=POP(),a=POP()` / `Val b=POP(),a=POP()`, unary ops use single POP. Uses fabs/fmin/fmax (not abs/min/max which are int-only in C). ## Next -Task 3.1: Update docs/REFERENCE.md with new opcodes +Task 3.2: Add example program using new math ops diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md index 02f711a..b546da1 100644 --- a/specs/tier1-numeric-ops/tasks.md +++ b/specs/tier1-numeric-ops/tasks.md @@ -130,7 +130,7 @@ Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. ## Phase 3: Documentation and Polish -- [ ] 3.1 Update docs/REFERENCE.md with new opcodes +- [x] 3.1 Update docs/REFERENCE.md with new opcodes - **Do**: Add a new "Math" section to the Instruction Set in REFERENCE.md between Arithmetic and Comparison. Include all 9 opcodes with emoji, name, stack effect, and notes. Update the "Python Transpiler" section to list new supported features: `**`, `math.sqrt/sin/cos/exp/log`, `abs()`, `min()`, `max()`, `math.pi`, `math.e`, `random.uniform()`, `random.gauss()`, chained comparisons. Update the "Not supported" line to remove `**`. 
- **Files**: `docs/REFERENCE.md` - **Done when**: Reference doc accurately describes all new features From 809f5463525420a04bad8d3df1f065ac824f8bb8 Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 8 Mar 2026 11:41:42 +0800 Subject: [PATCH 16/17] docs: add math_functions.emoji example Co-Authored-By: Claude Opus 4.6 --- examples/math_functions.emoji | 71 ++++++++++++++++++++++++++++ specs/tier1-numeric-ops/.progress.md | 3 +- specs/tier1-numeric-ops/tasks.md | 2 +- 3 files changed, 74 insertions(+), 2 deletions(-) create mode 100644 examples/math_functions.emoji diff --git a/examples/math_functions.emoji b/examples/math_functions.emoji new file mode 100644 index 0000000..5143c34 --- /dev/null +++ b/examples/math_functions.emoji @@ -0,0 +1,71 @@ +💭 Math Functions Demo +💭 Demonstrates all 9 math opcodes: POW, SQRT, SIN, COS, EXP, LOG, ABS, MIN, MAX + +📜 🏠 + 💭 --- POW: 2^10 = 1024 --- + 💬 "POW: 2^10 = " + 📢 + 📥 2 + 📥 10 + 🔋 + 🖨️ + + 💭 --- SQRT: sqrt(16) = 4.0 --- + 💬 "SQRT: sqrt(16) = " + 📢 + 📥 16 + 🌱 + 🖨️ + + 💭 --- SIN: sin(0) = 0.0 --- + 💬 "SIN: sin(0) = " + 📢 + 📥 0 + 📈 + 🖨️ + + 💭 --- COS: cos(0) = 1.0 --- + 💬 "COS: cos(0) = " + 📢 + 📥 0 + 📉 + 🖨️ + + 💭 --- EXP: e^1 ≈ 2.718 --- + 💬 "EXP: e^1 = " + 📢 + 📥 1 + 🚀 + 🖨️ + + 💭 --- LOG: log(1) = 0.0 --- + 💬 "LOG: log(1) = " + 📢 + 📥 1 + 📓 + 🖨️ + + 💭 --- ABS: abs(-42) = 42 --- + 💬 "ABS: abs(-42) = " + 📢 + 📥 -42 + 💪 + 🖨️ + + 💭 --- MIN: min(17, 5) = 5 --- + 💬 "MIN: min(17, 5) = " + 📢 + 📥 17 + 📥 5 + ⬇️ + 🖨️ + + 💭 --- MAX: max(17, 5) = 17 --- + 💬 "MAX: max(17, 5) = " + 📢 + 📥 17 + 📥 5 + ⬆️ + 🖨️ + + 🛑 diff --git a/specs/tier1-numeric-ops/.progress.md b/specs/tier1-numeric-ops/.progress.md index 3fb3e9c..3368f74 100644 --- a/specs/tier1-numeric-ops/.progress.md +++ b/specs/tier1-numeric-ops/.progress.md @@ -24,6 +24,7 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. 
- [x] 2.4 Add C compiler emission for 9 new opcodes - [x] 2.5 Add bytecode and GPU tests for new opcodes - [x] 3.1 Update docs/REFERENCE.md with new opcodes +- [x] 3.2 Add example program using new math ops ## Current Task Awaiting next task @@ -56,4 +57,4 @@ Awaiting next task - C compiler emission follows existing pattern exactly: binary ops (POW, MIN, MAX) use `double b=POP(),a=POP()` / `Val b=POP(),a=POP()`, unary ops use single POP. Uses fabs/fmin/fmax (not abs/min/max which are int-only in C). ## Next -Task 3.2: Add example program using new math ops +Task 4.1: Full regression test suite diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md index b546da1..5cab335 100644 --- a/specs/tier1-numeric-ops/tasks.md +++ b/specs/tier1-numeric-ops/tasks.md @@ -139,7 +139,7 @@ Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. - _Requirements: FR-22_ - _Design: N/A_ -- [ ] 3.2 Add example program using new math ops +- [x] 3.2 Add example program using new math ops - **Do**: Create `examples/math_functions.emoji` demonstrating all 9 new opcodes. Include: power (2^10), sqrt(16), sin/cos of pi/4, exp(1), log(e), abs(-42), min/max of pairs. Print results with labels using PRINTS+ADD pattern. 
- **Files**: `examples/math_functions.emoji` - **Done when**: `emojiasm examples/math_functions.emoji` runs and produces correct output From 9c9f2da350182217c72559c3cdfca06fb698969d Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 8 Mar 2026 11:42:42 +0800 Subject: [PATCH 17/17] chore(tier1): pass full regression suite (778 tests, zero failures) Co-Authored-By: Claude Opus 4.6 --- specs/tier1-numeric-ops/.progress.md | 5 ++++- specs/tier1-numeric-ops/tasks.md | 2 +- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/specs/tier1-numeric-ops/.progress.md b/specs/tier1-numeric-ops/.progress.md index 3368f74..7c60ef2 100644 --- a/specs/tier1-numeric-ops/.progress.md +++ b/specs/tier1-numeric-ops/.progress.md @@ -29,6 +29,8 @@ Implement EmojiASM issue #27: Tier 1 core numeric operators and math functions. ## Current Task Awaiting next task +- [x] 4.1 Full regression test suite - (verification only, no fixes needed) + ## Learnings @@ -55,6 +57,7 @@ Awaiting next task - POC checkpoint: 749 tests pass (up from 448+ baseline), zero failures. Combined transpiler example (sqrt(2**10)=32.0, sin(pi/2)=1.0, chained cmp 1<2<3=1) works correctly end-to-end. - C compiler emission follows existing pattern exactly: binary ops (POW, MIN, MAX) use `double b=POP(),a=POP()` / `Val b=POP(),a=POP()`, unary ops use single POP. Uses fabs/fmin/fmax (not abs/min/max which are int-only in C). +- Full regression: 778 tests pass, zero failures. No fixes needed. ## Next -Task 4.1: Full regression test suite +Task 4.2: Create PR and verify CI diff --git a/specs/tier1-numeric-ops/tasks.md b/specs/tier1-numeric-ops/tasks.md index 5cab335..4a92170 100644 --- a/specs/tier1-numeric-ops/tasks.md +++ b/specs/tier1-numeric-ops/tasks.md @@ -149,7 +149,7 @@ Focus: Get all 9 opcodes working end-to-end through opcodes + VM + basic tests. 
## Phase 4: Quality Gates -- [ ] 4.1 Full regression test suite +- [x] 4.1 Full regression test suite - **Do**: Run complete test suite including all existing and new tests. Verify all 448+ existing tests still pass. Run type checking if available. - **Verify**: `pytest --tb=short -q` - **Done when**: All tests pass, zero failures