8 changes: 7 additions & 1 deletion .gitignore
@@ -24,6 +24,12 @@ node_modules/
site_front_2/flaskapp/static/dist/


#test runner
results/
.test_run/
test-runner/runner_config.json
test-runner/test_cases.json

# py
site_front_2/_playground/

@@ -42,4 +48,4 @@ docker-emlo/csv_import_files
docker-emlo/solr-conf/solr/home/**/core.properties

# ENV folder
site-front-2
site-front-2
22 changes: 22 additions & 0 deletions docker-compose.yml
@@ -1,5 +1,6 @@
volumes:
redis:
test_run_state: # persists .test_run/ results across restarts

services:
web:
@@ -15,6 +16,27 @@ services:
- 5000:5000
command: bash -c "/bin/docker-entrypoint.sh"

# ── Test Runner ───────────────────────────────────────────────────────────
# Self-contained service. All files live in ./test-runner/.
# Proxied via the web container — users access it at localhost:5000/test-run.
# Start independently: docker compose up test-runner
# Remove from stack: comment out this block + TEST_RUNNER_URL in web above
test-runner:
build:
context: ./test-runner
dockerfile: Dockerfile
restart: unless-stopped
expose:
- "8085" # internal only; reached through the web proxy, not published to the host
volumes:
- test_run_state:/app/.test_run
environment:
DEBUG: "false"
# ─────────────────────────────────────────────────────────────────────────


solr:
build: ./docker-emlo/solr-conf
restart: always
2 changes: 1 addition & 1 deletion requirements.txt
@@ -1,4 +1,4 @@
Flask==3.1.2
python-dotenv==1.1.1
requests==2.32.5
werkzeug==3.1.3
werkzeug==3.1.3
8 changes: 7 additions & 1 deletion run.py
@@ -12,9 +12,11 @@
from views.solr import solr_bp
from views.shortURL import shortURL_bp
from views.redirectUUID import redirect_uuid_bp
from views.test_runner import test_runner_bp
from config import Config
import os


def create_app():
app = Flask(__name__)
app.config.from_object(Config)
@@ -32,14 +34,18 @@ def create_app():
app.register_blueprint(solr_bp)
app.register_blueprint(shortURL_bp)
app.register_blueprint(redirect_uuid_bp)
app.register_blueprint(test_runner_bp)

return app


app = create_app()


def main():
debug_mode = os.getenv('DEBUG', 'false').lower() == 'true'
app.run(host='0.0.0.0', port=int(app.config['PORT']), debug=debug_mode)


if __name__ == '__main__':
main()
main()
19 changes: 19 additions & 0 deletions test-runner/Dockerfile
@@ -0,0 +1,19 @@
FROM python:3.11-slim

WORKDIR /app

RUN apt-get update && apt-get install -y \
gcc \
&& rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Install Chromium and ALL its system dependencies in one shot
RUN playwright install --with-deps chromium

COPY . .

EXPOSE 8085

CMD ["python", "app.py"]
103 changes: 103 additions & 0 deletions test-runner/Readme.md
@@ -0,0 +1,103 @@
# Test Runner

A visual regression testing tool that compares two versions of your site by loading the same pages on both environments, taking screenshots, and highlighting pixel-level differences. Great for catching unexpected UI changes before they reach users.

When you run a test, the tool opens both sites in a headless browser, captures screenshots at the same viewport size, and overlays them to produce a diff image. Each test gets a match percentage, and the test fails when more pixels differ than your settings allow.

Results stream to the UI in real time so you can watch tests pass or fail as they happen.
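The idea behind the comparison can be sketched in a few lines of plain Python. This is an illustration only, with invented function names; the real runner works on full screenshots via an image library:

```python
def compare_pixels(img_a, img_b, threshold=0.1):
    """Compare two images given as equal-length lists of (r, g, b) tuples.

    A pixel counts as changed when its normalised colour difference
    exceeds `threshold`. Returns (match_percentage, changed_count).
    """
    changed = 0
    for (r1, g1, b1), (r2, g2, b2) in zip(img_a, img_b):
        # Average per-channel difference, normalised to the 0..1 range.
        diff = (abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)) / (3 * 255)
        if diff > threshold:
            changed += 1
    match = 100.0 * (1 - changed / len(img_a))
    return match, changed

# Two 4-pixel "screenshots": one pixel differs strongly.
a = [(255, 255, 255)] * 4
b = [(255, 255, 255)] * 3 + [(0, 0, 0)]
match, changed = compare_pixels(a, b)
# One of four pixels changed, giving a 75% match.
```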

---

## Setup

**1. Rename the template config files**

```bash
cp runner_config.template.json runner_config.json
cp test_cases.template.json test_cases.json
```

These files tell the runner which sites to compare and which pages to test. Without them the runner won't start.
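A fail-fast check like the one below captures why this step matters. The helper is illustrative, not part of the runner:

```python
import json
import tempfile
from pathlib import Path

def load_runner_files(folder):
    """Raise a clear error when either config file is missing or holds
    invalid JSON. Illustrative only; the runner's own checks may differ."""
    configs = {}
    for name in ("runner_config.json", "test_cases.json"):
        path = Path(folder) / name
        if not path.exists():
            raise FileNotFoundError(
                f"{path} is missing: copy it from the matching .template.json")
        configs[name] = json.loads(path.read_text())
    return configs

# Demo against a throwaway folder holding minimal stand-in content.
with tempfile.TemporaryDirectory() as tmp:
    Path(tmp, "runner_config.json").write_text('{"sites": []}')
    Path(tmp, "test_cases.json").write_text('[]')
    loaded = load_runner_files(tmp)
```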

**2. Start the service**

```bash
docker compose up test-runner
```

Visit `http://<your-domain>/test-run` — you should see the test runner UI.

---

## Settings

Open the **Settings** tab before your first run. This is where you configure the two environments you want to compare.

**Sites**
Enter a name and base URL for each site. The base URL is the root of the site — page paths get appended to it when each test runs. For example if Site A is `https://staging.example.com` and you test the path `/about`, the runner will load `https://staging.example.com/about`.
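The joining rule from that example can be written as a one-line helper (a hypothetical sketch; the runner's own URL handling may differ, e.g. around trailing slashes):

```python
def page_url(base, path):
    # Append a page path to a site's base URL, tolerating a trailing
    # slash on the base so "/about" is not doubled into "//about".
    return base.rstrip("/") + path
```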

**Authentication**
If either site sits behind HTTP basic auth, enable it and enter the credentials. This is per-site so you can have auth on one and not the other.

**Runner behaviour**

- **Diff threshold** — how strict the pixel comparison is. `0.1` means a 10% colour difference per pixel is acceptable before it counts as a changed pixel. Lower = stricter.
- **Max diff pixels** — how many changed pixels are allowed before a test is marked as failed. `0` means any difference fails the test.
- **Page timeout** — how long to wait for a page to fully load before giving up. Default is 2 minutes — increase this for slow pages.
- **Viewport** — the browser window size used for screenshots. Both sites are screenshotted at the same size so the comparison is fair.
- **Capture all screenshots** — by default only diff images are saved. Turn this on to also save the individual screenshots for passing tests.
- **Fail fast** — stop the entire run as soon as one test fails instead of continuing through the rest.
- **Clean old results** — delete previous screenshots and diffs before each run so results don't accumulate. Leave this on unless you need to keep historical images.

Click **Save Settings** when done. Settings are written to `runner_config.json`.
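Read together, the diff threshold decides which pixels count as changed, and max diff pixels decides the verdict. A minimal sketch of the assumed pass/fail rule (the function name is invented):

```python
def verdict(changed_pixels, max_diff_pixels):
    """Pass while the count of changed pixels stays at or below the
    limit; a limit of 0 means any difference fails the test."""
    return "passed" if changed_pixels <= max_diff_pixels else "failed"

# With the strict default of 0, a single changed pixel fails the test.
```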

---

## Test Cases

Open the **Test Cases** tab to manage the pages you want to compare.

Each test case has:
- **ID** — a unique identifier, auto-generated when you add a new test
- **Test name** — a human-readable label shown in the results sidebar (e.g. `Homepage`, `Search Results`)
- **URI** — the page path to test, starting with `/` (e.g. `/en/`, `/collections/all`)
- **Description** — optional notes about what the page is or what to look out for

Click **+ Add Test** to add a row, fill in the name and URI, then click **Save All**. Test cases are written to `test_cases.json`.

> You can edit test cases between runs but not while a run is in progress.
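For orientation, a test case serialised from those four fields might look like the sketch below. The key names are assumptions based on the fields listed above, not the runner's actual schema:

```python
import json

# Hypothetical contents of test_cases.json (key names are assumptions).
test_cases = [
    {
        "id": "tc-001",
        "name": "Homepage",
        "uri": "/en/",
        "description": "Hero banner and navigation should match",
    },
    {
        "id": "tc-002",
        "name": "Search Results",
        "uri": "/collections/all",
        "description": "",
    },
]
serialized = json.dumps(test_cases, indent=2)
```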

---

## Running tests

Hit **Execute Tests** on the Run tab. The runner will work through your test cases one by one — you'll see each test move from queued → running → passed or failed in the sidebar as it completes.

Click any test in the sidebar to see:
- **Match percentage** — how similar the two screenshots are
- **Diff image** — a visual overlay showing exactly what changed (red pixels = differences)
- **Screenshots** — side by side images of both sites (only shown if Capture all screenshots is on, or if the test failed)
- **Load times** — how long each site took to load the page
- **Error details** — if a test couldn't run, the reason is shown here

Once all tests complete, a summary bar appears at the top showing totals for passed, failed, and errors.

---

## Troubleshooting

**Tests are stuck in queue after a run**
The run crashed before any tests could start. Check the Console Output panel at the bottom of the Run tab — the error message will tell you what went wrong. Most commonly it's a missing or invalid `runner_config.json`.

**A test shows execution error**
The runner couldn't load the page — usually a network issue, bad URL, or the page timing out. Check the error message in the test detail panel and make sure the URL is reachable from inside Docker.

**Settings or test cases aren't saving**
Make sure `runner_config.json` and `test_cases.json` exist in the `test-runner/` folder. If you skipped Step 1 of setup, go back and create them from the templates.

**The service won't start**
```bash
docker compose build test-runner
docker compose up test-runner
```
A rebuild is usually needed after the first install or if dependencies have changed.
30 changes: 30 additions & 0 deletions test-runner/app.py
@@ -0,0 +1,30 @@
"""
Standalone Flask app for the test-runner service.
Runs internally on port 8085. Users access it via the main app at:
http://localhost:5000/test-run
"""
from flask import Flask, send_from_directory
from test_run import test_run_bp
import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

def create_app():
app = Flask(__name__)
app.register_blueprint(test_run_bp)

@app.route("/")
def landing():
return send_from_directory(BASE_DIR, "landing.html")

@app.errorhandler(404)
def not_found(e):
return send_from_directory(BASE_DIR, "error.html"), 404

return app

app = create_app()

if __name__ == "__main__":
debug_mode = os.getenv("DEBUG", "false").lower() == "true"
app.run(host="0.0.0.0", port=8085, debug=debug_mode)
89 changes: 89 additions & 0 deletions test-runner/error.html
@@ -0,0 +1,89 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8"/>
<title>404 — Not Found</title>
<meta name="viewport" content="width=device-width,initial-scale=1"/>
<link rel="preconnect" href="https://fonts.googleapis.com"/>
<link href="https://fonts.googleapis.com/css2?family=DM+Sans:wght@300;400;500;600&family=DM+Mono:wght@400;500&display=swap" rel="stylesheet"/>
<style>
:root {
--bg: #f5f4f0; --surface: #ffffff; --surface2: #f0ede8;
--border: #e2ddd8; --border2: #d0cbc4;
--text: #1a1814; --text2: #6b6560; --text3: #9e9890;
--accent: #2563eb; --accent-bg: #eff6ff;
--fail: #dc2626; --fail-bg: #fef2f2;
--radius: 10px; --shadow: 0 1px 3px rgba(0,0,0,.08),0 1px 2px rgba(0,0,0,.04);
}
*{box-sizing:border-box;margin:0;padding:0}
body{font-family:'DM Sans',sans-serif;background:var(--bg);color:var(--text);font-size:14px;line-height:1.5;min-height:100vh;display:flex;flex-direction:column}

nav{background:var(--surface);border-bottom:1px solid var(--border);padding:0 32px;display:flex;align-items:center;height:56px;gap:32px;box-shadow:var(--shadow)}
.nav-brand{font-weight:600;font-size:15px;display:flex;align-items:center;gap:8px;text-decoration:none;color:var(--text)}
.nav-brand-dot{width:8px;height:8px;border-radius:50%;background:var(--accent)}

main{flex:1;display:flex;align-items:center;justify-content:center;padding:32px}
.error-wrap{text-align:center;max-width:480px}

.error-code{font-family:'DM Mono',monospace;font-size:96px;font-weight:500;color:var(--border2);line-height:1;margin-bottom:24px;letter-spacing:-4px}

.error-badge{display:inline-flex;align-items:center;gap:6px;font-size:11px;font-weight:600;text-transform:uppercase;letter-spacing:.8px;color:var(--fail);background:var(--fail-bg);border:1px solid rgba(220,38,38,.15);padding:5px 12px;border-radius:20px;margin-bottom:20px}
.error-badge-dot{width:6px;height:6px;border-radius:50%;background:var(--fail)}

h1{font-size:24px;font-weight:600;letter-spacing:-.5px;margin-bottom:10px}
p{font-size:14px;color:var(--text2);line-height:1.7;margin-bottom:32px}

.path-box{display:inline-flex;align-items:center;gap:8px;background:var(--surface);border:1px solid var(--border);border-radius:7px;padding:8px 14px;font-family:'DM Mono',monospace;font-size:12px;color:var(--text2);margin-bottom:32px}
.path-label{font-size:10px;text-transform:uppercase;letter-spacing:.5px;color:var(--text3);font-family:'DM Sans',sans-serif}

.actions{display:flex;align-items:center;justify-content:center;gap:10px}
.btn{font-family:'DM Sans',sans-serif;font-size:13px;font-weight:500;padding:8px 16px;border-radius:7px;border:1px solid var(--border);background:var(--surface);color:var(--text);cursor:pointer;transition:all .15s;display:inline-flex;align-items:center;gap:6px;text-decoration:none;white-space:nowrap}
.btn:hover{border-color:var(--border2);background:var(--surface2)}
.btn-primary{background:var(--accent);border-color:var(--accent);color:white}
.btn-primary:hover{background:#1d4ed8;border-color:#1d4ed8}

footer{border-top:1px solid var(--border);padding:16px 32px;text-align:center;font-size:12px;color:var(--text3)}
</style>
</head>
<body>

<nav>
<a href="/" class="nav-brand">
<div class="nav-brand-dot"></div>
Visual Regression
</a>
</nav>

<main>
<div class="error-wrap">
<div class="error-code">404</div>

<div class="error-badge">
<div class="error-badge-dot"></div>
Page not found
</div>

<h1>Nothing here</h1>
<p>The page you're looking for doesn't exist in this service. This runner only handles test-related routes.</p>

<div class="path-box">
<span class="path-label">Requested</span>
<span id="req-path">—</span>
</div>

<div class="actions">
<a href="/" class="btn">← Home</a>
<a href="/test-run" class="btn btn-primary">Open Test Runner</a>
</div>
</div>
</main>

<footer>
Visual Regression Test Runner
</footer>

<script>
document.getElementById('req-path').textContent = window.location.pathname;
</script>
</body>
</html>