https://ammchallenge.com/prop-amm
Design a custom price function for an automated market maker. Your goal: maximize edge — the profit your AMM extracts from trading flow.
Your program runs inside a simulation against a benchmark AMM. Retail traders arrive, arbitrageurs keep prices efficient, and an order router splits flow between the two pools based on who offers better prices. The better your pricing, the more flow you attract and the more edge you earn.
- Copy programs/starter/src/lib.rs as your starting point
- Implement your pricing logic in compute_swap
- Submit your lib.rs source code to the web UI — the server compiles and runs it
For local development, use the CLI:
# Install the CLI once (from this repo)
cargo install --path crates/cli
# Copy the starter template
cp programs/starter/src/lib.rs my_amm.rs
# Edit your pricing logic
edit my_amm.rs
# Validate interface + shape + parity checks before benchmarking/submitting
prop-amm validate my_amm.rs
# Run 1000 simulations locally (~5s on Apple M3 Pro)
prop-amm run my_amm.rs
# Or run without installing the binary globally
cargo run -p prop-amm -- run my_amm.rs

The CLI compiles your source file and runs it natively — no toolchain setup required beyond Rust.
Each simulation runs 10,000 steps. At each step:
- Fair price moves via geometric Brownian motion
- Arbitrageurs trade — they push each AMM's spot price toward the fair price, extracting profit from stale quotes
- Retail orders arrive — random buy/sell orders, routed optimally across both AMMs
Your program competes against a normalizer AMM — a constant-product market maker whose fee and liquidity are sampled per simulation. Both pools start from the same base reserves (100 X, 10,000 Y at price 100), then the normalizer applies its sampled liquidity multiplier.
Without competition, setting 10% fees would appear profitable — huge spreads on the few trades that execute. The normalizer prevents this: if your pricing is too aggressive, retail routes away from your pool and you get little flow.
There's no free lunch from slightly undercutting either. The optimal strategy depends on market conditions, trade patterns, and how you manage the tradeoff between spread revenue and adverse selection.
Price process: S(t+1) = S(t) * exp(-sigma^2/2 + sigma*Z) where Z ~ N(0,1)
- No drift (mu = 0)
- Per-step volatility varies across simulations: sigma ~ U[0.01%, 0.70%]
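As a concrete sketch of the price process above (plain f64 arithmetic, not the engine's internals), one GBM step looks like:

```rust
/// One step of the documented price process:
/// S(t+1) = S(t) * exp(-sigma^2/2 + sigma * Z), with Z ~ N(0,1) and no drift.
/// The -sigma^2/2 correction keeps E[S(t+1)] = S(t).
fn gbm_step(price: f64, sigma: f64, z: f64) -> f64 {
    price * (-sigma * sigma / 2.0 + sigma * z).exp()
}

fn main() {
    // With sigma = 0.5% per step and a one-sigma up move:
    let s = gbm_step(100.0, 0.005, 1.0);
    println!("{s:.6}");
}
```

Note that even with Z = 0 the price ticks slightly down (the drift correction), which is why quoting a stale price is costly on both sides.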
Retail flow: Poisson arrival, log-normal sizes, 50/50 buy/sell
- Arrival rate: lambda ~ U[0.4, 1.2] per step
- Mean order size: ~ U[12, 28] in Y terms
Normalizer parameters:
- Fee varies per simulation: norm_fee_bps ~ U{30, 80} (integer bps)
- Liquidity varies per simulation: norm_liquidity_mult ~ U[0.4, 2.0]
Arbitrage: Golden-section search for the optimal trade size that maximizes arbitrage profit (then execute only if it clears a minimum profit floor). The search is early-stopped once the trade size is within ~1% (relative bracket width). Trades are skipped unless expected arb profit is at least 0.01 Y (1 cent).
Order routing: Golden-section search over split ratio alpha in [0, 1]. The router picks the split that maximizes total output, and early-stops once the submission trade amount is within ~1% (relative bracket width, with an additional 1% objective-gap stop). Small pricing differences can shift large fractions of volume.
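The engine's exact search implementation isn't published, but a minimal golden-section maximizer with the ~1% relative bracket-width stop described above can be sketched as follows (the function name and tolerance handling are illustrative assumptions):

```rust
/// Golden-section search for the maximum of a unimodal function on [a, b].
/// Stops when the bracket width falls below rel_tol relative to the bracket
/// position — a sketch of the ~1% early stop described in the text.
fn golden_section_max<F: Fn(f64) -> f64>(f: F, mut a: f64, mut b: f64, rel_tol: f64) -> f64 {
    let inv_phi = (5f64.sqrt() - 1.0) / 2.0; // ~0.618
    let mut c = b - inv_phi * (b - a);
    let mut d = a + inv_phi * (b - a);
    let (mut fc, mut fd) = (f(c), f(d));
    while (b - a) > rel_tol * b.abs().max(1.0) {
        if fc > fd {
            // Maximum lies in [a, d]; old c becomes the new d probe.
            b = d;
            d = c;
            fd = fc;
            c = b - inv_phi * (b - a);
            fc = f(c);
        } else {
            // Maximum lies in [c, b]; old d becomes the new c probe.
            a = c;
            c = d;
            fc = fd;
            d = a + inv_phi * (b - a);
            fd = f(d);
        }
    }
    (a + b) / 2.0
}
```

The practical implication for your strategy: because the router only resolves the split to ~1%, hairline price differences matter less than the overall shape of your curve over realistic trade sizes.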
Edge measures profitability using the fair price at trade time:
For each trade on your AMM:
Sell X (AMM receives X, pays Y): edge = amount_x * fair_price - amount_y
Buy X (AMM receives Y, pays X): edge = amount_y - amount_x * fair_price
Retail trades produce positive edge (you profit from the spread). Arbitrage trades produce negative edge (you lose to informed flow). Good strategies maximize the former while minimizing the latter.
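The edge formula above can be written out directly at the engine's 1e9 fixed-point scale (the function name and i128 return type here are illustrative choices, not the engine's API):

```rust
const SCALE: u128 = 1_000_000_000; // 1e9 fixed-point scale

/// Edge of a single trade from the AMM's perspective, per the formulas above.
/// side: 0 = buy X (AMM receives Y, pays X), 1 = sell X (AMM receives X, pays Y).
/// All amounts and fair_price_1e9 are at 1e9 scale; result is signed Y at 1e9 scale.
fn edge_1e9(side: u8, amount_x: u64, amount_y: u64, fair_price_1e9: u64) -> i128 {
    // Value of the X leg in Y terms, at fair price (u128 to avoid overflow).
    let x_value = (amount_x as u128 * fair_price_1e9 as u128 / SCALE) as i128;
    match side {
        1 => x_value - amount_y as i128,  // received X worth x_value, paid amount_y
        0 => amount_y as i128 - x_value,  // received amount_y, paid X worth x_value
        _ => 0,
    }
}
```

For example, at a fair price of 100, receiving 1 X and paying only 99 Y yields +1 Y of edge; paying out 1 X for 101 Y likewise yields +1 Y.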
Your program receives instruction data with reserves and a 1024-byte read-only storage buffer:
| Offset | Size | Field | Type | Description |
|---|---|---|---|---|
| 0 | 1 | side | u8 | 0=buy X (Y input), 1=sell X |
| 1 | 8 | input_amount | u64 | Input token amount (1e9 scale) |
| 9 | 8 | reserve_x | u64 | Current X reserve (1e9 scale) |
| 17 | 8 | reserve_y | u64 | Current Y reserve (1e9 scale) |
| 25 | 1024 | storage | [u8] | Read-only strategy storage |
Return the output_amount: u64 (1e9 scale) with prop_amm_submission_sdk::set_return_data_u64.
Guideline: decode instruction payloads with wincode rather than manual byte offsets. See wincode docs.
After each real trade (not during quoting), the engine calls your program with tag byte 2. This lets you update your 1024-byte storage and observe the current simulation step — useful for strategies that adapt over time (dynamic fees, volatility tracking, etc.).
| Offset | Size | Field | Type | Description |
|---|---|---|---|---|
| 0 | 1 | tag | u8 | Always 2 |
| 1 | 1 | side | u8 | 0=buy X, 1=sell X |
| 2 | 8 | input_amount | u64 | Input token amount (1e9 scale) |
| 10 | 8 | output_amount | u64 | Output token amount (1e9 scale) |
| 18 | 8 | reserve_x | u64 | Post-trade X reserve |
| 26 | 8 | reserve_y | u64 | Post-trade Y reserve |
| 34 | 8 | step | u64 | Current simulation step |
| 42 | 1024 | storage | [u8] | Current storage (read/write) |
To persist updated storage, call prop_amm_submission_sdk::set_storage with your modified buffer. If you don't call it, storage remains unchanged. The starter program's afterSwap is a no-op, so storage is entirely optional.
When afterSwap is called:
- After arbitrageur executes a trade
- After router executes routed trades
When it is NOT called:
- During router quoting (golden-section search for optimal split)
- During arbitrageur quoting (golden-section search for optimal size)
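A hedged sketch of an afterSwap handler that maintains a trade counter in the first 8 bytes of storage. The offsets follow the tag-2 table above; the pure helper below is a hypothetical shape for testability — in a real submission you would pass the updated buffer to prop_amm_submission_sdk::set_storage to persist it:

```rust
const STORAGE_OFFSET: usize = 42; // storage offset in the tag-2 payload (per the table)
const STORAGE_SIZE: usize = 1024;

/// Parse a tag-2 (afterSwap) payload and return the updated storage buffer,
/// or None if the payload is malformed. Here the update is just a trade
/// counter in storage[0..8] (little-endian u64); real strategies might track
/// realized volatility, inventory, or recent flow imbalance instead.
fn after_swap_update(data: &[u8]) -> Option<[u8; STORAGE_SIZE]> {
    if data.len() < STORAGE_OFFSET + STORAGE_SIZE || data[0] != 2 {
        return None;
    }
    let mut storage = [0u8; STORAGE_SIZE];
    storage.copy_from_slice(&data[STORAGE_OFFSET..STORAGE_OFFSET + STORAGE_SIZE]);
    let count = u64::from_le_bytes(storage[0..8].try_into().unwrap());
    storage[0..8].copy_from_slice(&(count + 1).to_le_bytes());
    Some(storage)
}
```

Because storage starts zeroed each simulation and the hook only fires on executed trades (never during quoting), counters like this stay consistent across the run.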
The runner may request strategy metadata via instruction tag:
- 3: return NAME bytes
- 4: return get_model_used() bytes
Use "None" for MODEL_USED when the submission is fully human-written.
| Requirement | Description |
|---|---|
| NAME | Must define const NAME: &str = "..."; — shown on the leaderboard. |
| MODEL_USED | Must define model metadata and expose get_model_used() -> &'static str. Use "None" if fully human-written. |
| Safe Rust | unsafe code is rejected. Keep your submission fully safe Rust. |
| Monotonic | Larger input must produce larger output. |
| Concave | Output must be concave in input (diminishing returns per unit). |
| < 100k CU | Must execute within the compute unit limit. |
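As a rough illustration of what the monotonic and concave requirements mean (prop-amm validate is the authoritative check — this hypothetical spot check just probes a grid of input sizes, using a constant-product curve as the stand-in for compute_swap):

```rust
/// Constant-product output, standing in for a submission's compute_swap.
fn cpmm_out(input: u64, reserve_in: u64, reserve_out: u64) -> u64 {
    let (x, rin, rout) = (input as u128, reserve_in as u128, reserve_out as u128);
    (rout * x / (rin + x)) as u64
}

/// Probe inputs of 1..100 units (1e9 scale) and check that output is
/// non-decreasing (monotonic) and that marginal output per unit is
/// non-increasing (concave).
fn spot_check(reserve_in: u64, reserve_out: u64) -> bool {
    let mut prev_out = 0u64;
    let mut prev_gain = u64::MAX;
    for step in 1..=100u64 {
        let out = cpmm_out(step * 1_000_000_000, reserve_in, reserve_out);
        if out < prev_out {
            return false; // monotonicity violated
        }
        let gain = out - prev_out;
        if gain > prev_gain {
            return false; // concavity violated: marginal output increased
        }
        prev_out = out;
        prev_gain = gain;
    }
    true
}
```

Integer rounding can add off-by-one jitter to the marginal differences, so a real checker needs tolerance handling that this sketch omits.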
Start with programs/starter/ — a constant-product AMM with 500 bps fees. The key pieces:
use pinocchio::{account_info::AccountInfo, entrypoint, pubkey::Pubkey, ProgramResult};
use prop_amm_submission_sdk::{set_return_data_bytes, set_return_data_u64};
/// Required: displayed on the leaderboard.
const NAME: &str = "My Strategy";
const MODEL_USED: &str = "GPT-5.3-Codex"; // Use "None" for human-written submissions.
const FEE_NUMERATOR: u128 = 950;
const FEE_DENOMINATOR: u128 = 1000;
const STORAGE_SIZE: usize = 1024;
#[derive(wincode::SchemaRead)]
struct ComputeSwapInstruction {
side: u8,
input_amount: u64,
reserve_x: u64,
reserve_y: u64,
_storage: [u8; STORAGE_SIZE],
}
#[cfg(not(feature = "no-entrypoint"))]
entrypoint!(process_instruction);
pub fn process_instruction(
_program_id: &Pubkey, _accounts: &[AccountInfo], instruction_data: &[u8],
) -> ProgramResult {
if instruction_data.is_empty() {
return Ok(());
}
match instruction_data[0] {
0 | 1 => { // compute_swap
let output = compute_swap(instruction_data);
set_return_data_u64(output);
}
2 => { // afterSwap — update storage here if needed
}
3 => set_return_data_bytes(NAME.as_bytes()),
4 => set_return_data_bytes(get_model_used().as_bytes()),
_ => {}
}
Ok(())
}
pub fn get_model_used() -> &'static str {
MODEL_USED
}
pub fn compute_swap(data: &[u8]) -> u64 {
let decoded: ComputeSwapInstruction = match wincode::deserialize(data) {
Ok(decoded) => decoded,
Err(_) => return 0,
};
let side = decoded.side;
let input_amount = decoded.input_amount as u128;
let reserve_x = decoded.reserve_x as u128;
let reserve_y = decoded.reserve_y as u128;
if reserve_x == 0 || reserve_y == 0 {
return 0;
}
let k = reserve_x * reserve_y;
match side {
0 => {
// Buy X: input is Y, output is X
let net_y = input_amount * FEE_NUMERATOR / FEE_DENOMINATOR;
let new_ry = reserve_y + net_y;
let k_div = (k + new_ry - 1) / new_ry;
reserve_x.saturating_sub(k_div) as u64
}
1 => {
// Sell X: input is X, output is Y
let net_x = input_amount * FEE_NUMERATOR / FEE_DENOMINATOR;
let new_rx = reserve_x + net_x;
let k_div = (k + new_rx - 1) / new_rx;
reserve_y.saturating_sub(k_div) as u64
}
_ => 0,
}
}
/// Optional native hook for local testing.
pub fn after_swap(_data: &[u8], _storage: &mut [u8]) {
// Update storage here if needed
}For local native runs, the CLI auto-generates adapter exports. You only need strategy logic (compute_swap and optionally after_swap) in your submission file.
- Use u128 intermediates to avoid overflow (reserves at 1e9 scale can multiply to ~1e24)
- Prefer typed decode with wincode::deserialize for swap/afterSwap payloads
- Test concavity with prop-amm validate before running simulations
- Think about how your marginal price schedule affects the routing split
- The arbitrageur is efficient — don't try to extract value from informed flow
- Storage is zero-initialized at the start of each simulation and persists across all trades within a simulation
The CLI compiles and runs your .rs source file directly — no manual build step needed.
# Run simulations (default: 1000 sims, 10k steps each)
prop-amm run my_amm.rs
# Run the same workload on a custom seed range
prop-amm run my_amm.rs --seed-start 100000 --seed-stride 1
# Fewer sims for quick iteration
prop-amm run my_amm.rs --simulations 10
# Build only (native + BPF artifacts)
prop-amm build my_amm.rs
# Validate monotonicity, concavity, and native/BPF parity
prop-amm validate my_amm.rs

Always run prop-amm validate before large benchmarks and before submission.
Normalizer performance varies materially across sampled fee/liquidity regimes, so benchmark edge distribution is wider than in a fixed-fee setting.
By default, prop-amm run compiles your program as a native shared library and runs it directly. This is fast enough for rapid iteration — 1,000 simulations complete in seconds.
BPF mode (--bpf) runs your program through the Solana BPF interpreter, which is ~100x slower. Use it only as a final check before submitting to verify your program compiles and behaves correctly under the BPF runtime. Don't use it for day-to-day development. Run just a few simulations (--simulations 5) to sanity-check — running 1,000 sims in BPF mode will take ~15 minutes and isn't worth it for validation.
# Fast iteration (native, default)
prop-amm run my_amm.rs
# Final validation before submission (BPF, slow)
prop-amm run my_amm.rs --bpf --simulations 10

The engine parallelizes across simulations using up to 8 worker threads (configurable with --workers).
- Local CLI runs are deterministic for a given config.
- By default, prop-amm run uses simulation seeds 0..n_sims-1.
- Use --seed-start and --seed-stride to run out-of-sample seed blocks locally.
- The server uses a different evaluation seed schedule, so local and server scores can differ slightly even for the same strategy.
| Workload | Time | Platform |
|---|---|---|
| 1,000 sims / 10k steps | ~5s | Apple M3 Pro, native |
| 1,000 sims / 10k steps | ~15 min | Apple M3 Pro, BPF |
Submit your lib.rs source code through the web UI. The server handles compilation, validation, and simulation — you don't need any toolchain beyond what's needed for local testing.
The server validates your program (monotonicity and concavity), then runs 1,000 simulations against the normalizer. Local results may diverge slightly from submission scores due to different RNG seeds and hyperparameter variance.
High-level evaluation invariants:
- The server evaluates each submission on 1,000 simulations.
- Evaluation uses a fixed checker configuration per server release.
- Exact holdout seeds are not published.
Your submitted source code must be a single lib.rs file. Allowed dependencies are pinocchio (for Solana BPF syscalls) and wincode (for instruction decoding). The following are blocked for security:
- include!(), include_str!(), include_bytes!() (compile-time file access)
- env!(), option_env!() (compile-time environment access)
- extern crate declarations
- External module files (mod foo;)