
End-to-End Workflow

import { Steps, Aside } from '@astrojs/starlight/components';

This guide walks through a complete performance testing workflow using the full ecosystem. The scenario: load testing a REST API with user authentication.

Tools used: dummydatagenpro, perf-autocorrelator-pro, perf-lint, perf-containers, perf-reporting, perf-results-db, perf-compare.


Create a project directory and initialise perf-ecosystem.yml:

```bash
mkdir perf-test-project && cd perf-test-project
cp perf-ecosystem.example.yml perf-ecosystem.yml
```

Edit perf-ecosystem.yml with your environment:

```yaml
version: "1.0"
workspace:
  name: "my-api"
  environment: staging
services:
  perf_results_db:
    url: "http://localhost:4000"
    api_key: "${PERF_RESULTS_DB_API_KEY}"
    project_id: "${PERF_RESULTS_DB_PROJECT_ID}"
  perf_lint_api:
    url: "https://perflint.martkos-it.co.uk"
    api_key: "${PERF_LINT_API_KEY}"
thresholds:
  response_time:
    p95_max_ms: 1500
    p99_max_ms: 3000
  error_rate:
    max_percent: 1.0
  apdex:
    min_score: 0.85
integrations:
  push_to_results_db: true
```

Set environment variables (.env, not committed):

```bash
export PERF_RESULTS_DB_API_KEY=prdb_your_key
export PERF_RESULTS_DB_PROJECT_ID=your-project-uuid
export PERF_LINT_API_KEY=your-lint-api-key
```
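
A missing environment variable is a common cause of confusing failures later in the pipeline, so it can be worth a pre-flight check that every `${VAR}` placeholder in the config is actually set. A minimal sketch; the inline sample file stands in for your real perf-ecosystem.yml so the commands are self-contained:

```bash
# Inline sample standing in for the real perf-ecosystem.yml.
cat > perf-ecosystem.yml <<'EOF'
services:
  perf_results_db:
    api_key: "${PERF_RESULTS_DB_API_KEY}"
    project_id: "${PERF_RESULTS_DB_PROJECT_ID}"
EOF
export PERF_RESULTS_DB_API_KEY=prdb_demo_key
export PERF_RESULTS_DB_PROJECT_ID=demo-uuid

# List each ${VAR} placeholder and report whether it is set.
grep -o '\${[A-Z_]*}' perf-ecosystem.yml | tr -d '${}' | sort -u | \
while read -r var; do
  if [ -n "$(printenv "$var")" ]; then echo "ok: $var"; else echo "missing: $var"; fi
done
```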

  1. Go to dummydatagenpro.co.uk or use the API:

     ```bash
     curl -X POST https://dummydatagenpro.co.uk/api/generate \
       -H "X-API-Key: ${DUMMYDATA_API_KEY}" \
       -H "Content-Type: application/json" \
       -d '{
         "rows": 1000,
         "format": "csv",
         "columns": [
           {"name": "userId", "type": "uuid"},
           {"name": "email", "type": "email"},
           {"name": "password", "type": "password"}
         ]
       }' -o data/users.csv
     ```
  2. Verify the CSV:

     ```bash
     head -5 data/users.csv
     # userId,email,password
     # 550e8400-...,alice@example.com,Qm3#xR9!
     ```
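
Beyond eyeballing the header, a couple of quick checks catch the usual data problems before a test run: the row count matching what you requested, and no duplicate logins. A sketch, with a small inline sample standing in for data/users.csv so the commands are self-contained:

```bash
# Inline sample standing in for data/users.csv.
cat > users.csv <<'EOF'
userId,email,password
550e8400-e29b-41d4-a716-446655440000,alice@example.com,Qm3#xR9!
6fa459ea-ee8a-3ca4-894e-db77e160355e,bob@example.com,Tz7$wL2@
EOF

# Row count (minus the header) should match the "rows" value you requested.
tail -n +2 users.csv | wc -l

# Each virtual user needs a distinct login, so duplicate emails should be zero.
tail -n +2 users.csv | cut -d, -f2 | sort | uniq -d | wc -l
```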

  1. Open your application in Chrome, open DevTools → Network.

  2. Perform a complete user journey: login, browse, checkout (or whatever your target scenario is).

  3. Right-click → “Save all as HAR” → save as session.har.

  4. Generate a k6 script with auto-correlation:

     ```bash
     perf-autocorrelator-pro generate session.har \
       --tool k6 \
       -o tests/load-test.js
     ```
  5. Review the generated script. The tool will have:

     - Extracted dynamic tokens (session IDs, CSRF tokens, etc.)
     - Added extractor checks after each response that sets a token
     - Referenced extracted values in subsequent requests
     - Preserved think times from your browser session

Lint the generated script before running it:

```bash
perf-lint check tests/load-test.js
```

Fix any errors before proceeding. For rules that support it, apply fixes automatically:

```bash
perf-lint check tests/load-test.js --fix
```

A clean lint means no known anti-patterns, correct timeout configuration, and no hardcoded values that should be variables.
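
In CI, the lint step can gate the rest of the pipeline on its exit status. A minimal sketch of that flow; a placeholder function stands in for the real `perf-lint` call so the snippet is self-contained, assuming a non-zero exit status means errors were found:

```bash
# Placeholder for `perf-lint check tests/load-test.js`; pretend it found errors.
run_lint() { return 1; }

if run_lint; then
  echo "lint clean: proceeding to load test"
else
  echo "lint failed: fix errors before load testing"
fi
```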


Before the full load test, verify the script works with minimal load:

```bash
docker run --rm \
  -v $(pwd)/tests:/tests \
  -v $(pwd)/data:/data \
  -e TARGET_HOST=https://staging.api.example.com \
  ghcr.io/markslilley/perf-k6:latest \
  k6 run /tests/load-test.js \
  -e VUS=3 \
  -e DURATION=30s \
  --out json=/tests/smoke-results.json
```

Check for errors:

```bash
jq '.metrics.http_req_failed.values.rate' tests/smoke-results.json
# Should be 0 or close to 0
```
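
That check can be turned into a pass/fail gate against the `error_rate: max_percent: 1.0` threshold in perf-ecosystem.yml. A sketch, with an inline sample standing in for the real results file so the commands are self-contained:

```bash
# Inline sample standing in for tests/smoke-results.json.
cat > smoke-results.json <<'EOF'
{"metrics":{"http_req_failed":{"values":{"rate":0.002}}}}
EOF

rate=$(jq '.metrics.http_req_failed.values.rate' smoke-results.json)

# 0.01 mirrors error_rate.max_percent: 1.0 from perf-ecosystem.yml.
if awk -v r="$rate" 'BEGIN { exit !(r <= 0.01) }'; then
  echo "smoke OK (error rate $rate)"
else
  echo "smoke FAILED (error rate $rate)"
fi
```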

Run the full load test:

```bash
docker run --rm \
  -v $(pwd)/tests:/tests \
  -v $(pwd)/data:/data \
  -e TARGET_HOST=https://staging.api.example.com \
  ghcr.io/markslilley/perf-k6:latest \
  k6 run /tests/load-test.js \
  -e VUS=100 \
  -e DURATION=300s \
  --out json=/tests/results.json
```

Generate an HTML report from the results:

```bash
perf-reporting generate \
  --file tests/results.json \
  --tool k6 \
  --output reports/load-test-report.html
```

The report includes Apdex score, SLA pass/fail (from perf-ecosystem.yml thresholds), percentile breakdown, and timeline charts.
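
For reference, an Apdex score is computed as (satisfied + tolerating/2) / total, where satisfied responses complete within a target time T and tolerating ones within 4T. A quick illustration with made-up response times and an assumed T of 500 ms (the report's actual target may differ):

```bash
# Six sample response times in ms; T=500 is an assumed Apdex target.
printf '%s\n' 120 340 480 900 1600 2700 | awk '
  BEGIN { T = 500 }
  $1 <= T             { s++ }   # satisfied
  $1 > T && $1 <= 4*T { t++ }   # tolerating
  { n++ }
  END { printf "apdex=%.2f\n", (s + t/2) / n }'
# apdex=0.67  -- (3 satisfied + 2 tolerating / 2) / 6 requests
```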

Open reports/load-test-report.html in a browser.


Upload the run so it is tracked over time:

```bash
npx perf-results-db-cli upload \
  --url http://localhost:4000 \
  --api-key "${PERF_RESULTS_DB_API_KEY}" \
  --project-id "${PERF_RESULTS_DB_PROJECT_ID}" \
  --file tests/results.json \
  --tool k6 \
  --tags "branch=main,env=staging"
```

Open the perf-results-db dashboard at http://localhost:4000 to see the run added to the trend graph.


After several runs, use perf-compare to detect regressions statistically:

```bash
npx @martkos-it/perf-compare \
  --url http://localhost:4000 \
  --project "${PERF_RESULTS_DB_PROJECT_ID}" \
  --method statistical \
  --baseline 10 \
  --current 3
```

- Exit code 0: no regression
- Exit code 1: regression detected
- Exit code 2: tool error
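
In a pipeline, those exit codes can drive the build result. A sketch of the branching; a placeholder function stands in for the real `npx @martkos-it/perf-compare` invocation so the snippet is self-contained:

```bash
# Placeholder: pretend perf-compare exited with status 1 (regression found).
perf_compare() { return 1; }

rc=0
perf_compare || rc=$?

case $rc in
  0) echo "no regression: continue" ;;
  1) echo "regression detected: fail the build" ;;
  2) echo "tool error: investigate" ;;
esac
```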

When you’re satisfied with a run’s results, tag it as the reference baseline:

```bash
# Get the run ID from the dashboard or API (here: the latest run)
RUN_ID=$(curl -s http://localhost:4000/api/projects/${PERF_RESULTS_DB_PROJECT_ID}/runs \
  -H "X-API-Key: ${PERF_RESULTS_DB_API_KEY}" \
  | jq -r '.[0].id')

# Tag it as the baseline
curl -X POST http://localhost:4000/api/test-runs/${RUN_ID}/baseline \
  -H "X-API-Key: ${PERF_RESULTS_DB_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"name": "v1.2.0 post-optimisation"}'
```

Future runs are compared against this baseline in the dashboard.


- Automate this pipeline: see the CI/CD Integration guide
- Schedule as a monitor: see perf-monitor to run this script on a cron against production
- Migrate to another tool: see perf-migrator if you need a JMeter or Gatling version of the same script