# End-to-End Workflow

import { Steps, Aside } from '@astrojs/starlight/components';
This guide walks through a complete performance testing workflow using the full ecosystem. The scenario: load testing a REST API with user authentication.
Tools used: dummydatagenpro, perf-autocorrelator-pro, perf-lint, perf-containers, perf-reporting, perf-results-db, perf-compare.
## 1. Project Setup

Create a project directory and initialise `perf-ecosystem.yml`:

```shell
mkdir perf-test-project && cd perf-test-project
cp perf-ecosystem.example.yml perf-ecosystem.yml
```

Edit `perf-ecosystem.yml` with your environment:
```yaml
version: "1.0"

workspace:
  name: "my-api"
  environment: staging

services:
  perf_results_db:
    url: "http://localhost:4000"
    api_key: "${PERF_RESULTS_DB_API_KEY}"
    project_id: "${PERF_RESULTS_DB_PROJECT_ID}"
  perf_lint_api:
    url: "https://perflint.martkos-it.co.uk"
    api_key: "${PERF_LINT_API_KEY}"

thresholds:
  response_time:
    p95_max_ms: 1500
    p99_max_ms: 3000
  error_rate:
    max_percent: 1.0
  apdex:
    min_score: 0.85

integrations:
  push_to_results_db: true
```

Set environment variables (in `.env`, not committed):

```shell
export PERF_RESULTS_DB_API_KEY=prdb_your_key
export PERF_RESULTS_DB_PROJECT_ID=your-project-uuid
export PERF_LINT_API_KEY=your-lint-api-key
```

## 2. Generate Test Data
1. Go to dummydatagenpro.co.uk or use the API:

   ```shell
   curl -X POST https://dummydatagenpro.co.uk/api/generate \
     -H "X-API-Key: ${DUMMYDATA_API_KEY}" \
     -H "Content-Type: application/json" \
     -d '{
       "rows": 1000,
       "format": "csv",
       "columns": [
         {"name": "userId", "type": "uuid"},
         {"name": "email", "type": "email"},
         {"name": "password", "type": "password"}
       ]
     }' -o data/users.csv
   ```

2. Verify the CSV:

   ```shell
   head -5 data/users.csv
   # userId,email,password
   # 550e8400-...,alice@example.com,Qm3#xR9!
   ```
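Beyond eyeballing the first few rows, you can sanity-check the row count and email uniqueness. This is an illustrative sketch: the small inline sample below stands in for the real `data/users.csv`.

```shell
# Create a tiny sample standing in for data/users.csv (illustrative only)
cat > /tmp/users.csv <<'EOF'
userId,email,password
550e8400-e29b-41d4-a716-446655440000,alice@example.com,Qm3#xR9!
6fa459ea-ee8a-3ca4-894e-db77e160355e,bob@example.com,Zt7$wQ2@
EOF

# Count data rows (total lines minus the header)
rows=$(( $(wc -l < /tmp/users.csv) - 1 ))
# Count distinct emails in column 2; should equal the row count
unique=$(( $(tail -n +2 /tmp/users.csv | cut -d, -f2 | sort -u | wc -l) ))
echo "rows=${rows} unique_emails=${unique}"
```

If `unique_emails` is lower than `rows`, duplicate accounts would skew login behaviour under load, so it is worth checking before the test run.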
## 3. Capture a HAR and Generate a Script
1. Open your application in Chrome and open DevTools → Network.

2. Perform a complete user journey: login, browse, checkout (or whatever your target scenario is).

3. Right-click → “Save all as HAR” → save as `session.har`.

4. Generate a k6 script with auto-correlation:

   ```shell
   perf-autocorrelator-pro generate session.har \
     --tool k6 \
     -o tests/load-test.js
   ```

5. Review the generated script. The tool will have:

   - Extracted dynamic tokens (session IDs, CSRF tokens, etc.)
   - Added extractor checks after each response that sets a token
   - Referenced extracted values in subsequent requests
   - Preserved think times from your browser session
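To make the correlation step concrete, here is a toy shell version of what the tool automates inside the generated k6 script: extract a dynamic value from one response and reuse it in the next request. The HTML snippet and `csrf_token` field name are illustrative, not the tool's actual output.

```shell
# A server response containing a dynamic token (illustrative sample)
response='<input type="hidden" name="csrf_token" value="abc123xyz">'

# Extract the token value, as a correlation extractor would
token=$(printf '%s' "$response" | sed -n 's/.*name="csrf_token" value="\([^"]*\)".*/\1/p')

# Reuse the extracted value in the follow-up request
echo "POST /checkout with csrf_token=${token}"
```

A script that hardcodes yesterday's token instead of extracting it fresh is exactly the kind of defect the auto-correlation step exists to prevent.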
## 4. Lint the Script
```shell
perf-lint check tests/load-test.js
```

Fix any errors before proceeding. Rules that support it can be auto-fixed:

```shell
perf-lint check tests/load-test.js --fix
```

A clean lint means no known anti-patterns, correct timeout configuration, and no hardcoded values that should be variables.
## 5. Run a Smoke Test
Before the full load test, verify the script works with minimal load:

```shell
docker run --rm \
  -v $(pwd)/tests:/tests \
  -v $(pwd)/data:/data \
  -e TARGET_HOST=https://staging.api.example.com \
  ghcr.io/markslilley/perf-k6:latest \
  k6 run /tests/load-test.js \
  -e VUS=3 \
  -e DURATION=30s \
  --out json=/tests/smoke-results.json
```

Check for errors:

```shell
jq '.metrics.http_req_failed.values.rate' tests/smoke-results.json
# Should be 0 or close to 0
```
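If you want this check to gate a script or CI step rather than rely on reading the jq output, you can turn the error rate into an exit code. A minimal sketch, using an inline sample in place of the real `smoke-results.json` and assuming the flat `metrics.http_req_failed.values.rate` layout shown above:

```shell
# Inline sample standing in for tests/smoke-results.json (illustrative)
cat > /tmp/smoke-results.json <<'EOF'
{"metrics":{"http_req_failed":{"values":{"rate":0.004}}}}
EOF

# Pull out the failure rate (assumes the flat "rate" field shown above)
rate=$(sed -n 's/.*"rate":\([0-9.]*\).*/\1/p' /tmp/smoke-results.json)

# Gate: pass if the error rate is at or below 1%
if awk -v r="$rate" 'BEGIN { exit !(r <= 0.01) }'; then
  echo "smoke test PASS (error rate ${rate})"
else
  echo "smoke test FAIL (error rate ${rate})" >&2
  exit 1
fi
```

The 1% cutoff here mirrors the `error_rate.max_percent: 1.0` threshold from `perf-ecosystem.yml`; adjust it to match your own config.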
## 6. Run the Load Test

```shell
docker run --rm \
  -v $(pwd)/tests:/tests \
  -v $(pwd)/data:/data \
  -e TARGET_HOST=https://staging.api.example.com \
  ghcr.io/markslilley/perf-k6:latest \
  k6 run /tests/load-test.js \
  -e VUS=100 \
  -e DURATION=300s \
  --out json=/tests/results.json
```
## 7. Generate a Report

```shell
perf-reporting generate \
  --file tests/results.json \
  --tool k6 \
  --output reports/load-test-report.html
```

The report includes the Apdex score, SLA pass/fail (from `perf-ecosystem.yml` thresholds), percentile breakdown, and timeline charts.
Open `reports/load-test-report.html` in a browser.
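For reference, Apdex uses the standard formula: samples at or under the threshold T are "satisfied" and count fully, samples between T and 4T are "tolerating" and count half, and samples over 4T are "frustrated" and count zero. A small worked example with made-up latencies and T = 500 ms:

```shell
# apdex = (satisfied + tolerating/2) / total, with T = 500 ms
# The latency values (in ms) are invented for illustration.
printf '%s\n' 120 300 450 900 1600 2500 | awk -v T=500 '
  $1 <= T             { s++ }   # satisfied:  t <= T
  $1 > T && $1 <= 4*T { t++ }   # tolerating: T < t <= 4T
  END { printf "apdex = %.2f\n", (s + t/2) / NR }'
# apdex = 0.67  →  3 satisfied + 2 tolerating out of 6 samples: (3 + 1) / 6
```

An Apdex of 0.67 would fail the `min_score: 0.85` threshold configured earlier, which is exactly the kind of SLA verdict the report surfaces.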
## 8. Upload Results to perf-results-db
```shell
npx perf-results-db-cli upload \
  --url http://localhost:4000 \
  --api-key "${PERF_RESULTS_DB_API_KEY}" \
  --project-id "${PERF_RESULTS_DB_PROJECT_ID}" \
  --file tests/results.json \
  --tool k6 \
  --tags "branch=main,env=staging"
```

Open the perf-results-db dashboard at http://localhost:4000 to see the run added to the trend graph.
## 9. Check for Regressions
After several runs, use perf-compare to detect regressions statistically:

```shell
npx @martkos-it/perf-compare \
  --url http://localhost:4000 \
  --project "${PERF_RESULTS_DB_PROJECT_ID}" \
  --method statistical \
  --baseline 10 \
  --current 3
```

- Exit code 0: no regression
- Exit code 1: regression detected
- Exit code 2: tool error
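These exit codes are designed to drop straight into a CI gate. A minimal sketch of the branching: the `perf_compare` function below is a stub standing in for the real `npx @martkos-it/perf-compare` invocation, and here it pretends no regression was found.

```shell
# Stub standing in for the real `npx @martkos-it/perf-compare` call;
# swap in the actual command in a real pipeline.
perf_compare() { return 0; }

perf_compare
case $? in
  0) echo "no regression: safe to promote" ;;
  1) echo "regression detected: fail the build" ;;
  2) echo "tool error: do not trust this result" ;;
esac
```

Distinguishing exit code 2 matters: a tool error should block the pipeline for investigation rather than be reported as either a pass or a regression.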
## 10. Tag a Baseline
When you’re satisfied with a run’s results, tag it as the reference baseline:
```shell
# Get the run ID from the dashboard or API
curl -s http://localhost:4000/api/projects/${PERF_RESULTS_DB_PROJECT_ID}/runs \
  -H "X-API-Key: ${PERF_RESULTS_DB_API_KEY}" \
  | jq '.[0].id' # latest run

# Tag as baseline
curl -X POST http://localhost:4000/api/test-runs/${RUN_ID}/baseline \
  -H "X-API-Key: ${PERF_RESULTS_DB_API_KEY}" \
  -d '{"name": "v1.2.0 post-optimisation"}'
```

Future runs are compared against this baseline in the dashboard.
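In a script you would capture the run ID into `RUN_ID` before making the tagging call. A hedged sketch: the canned JSON below stands in for the real API response, and the array-of-runs shape with an `id` field is an assumption inferred from the `jq '.[0].id'` filter above.

```shell
# Canned response standing in for the real `curl .../runs` call;
# the response shape is an assumption, not documented API output.
runs_json='[{"id":"run-42","startedAt":"2024-05-01T10:00:00Z"}]'

# Extract the first run's id into RUN_ID for the tagging request
RUN_ID=$(printf '%s' "$runs_json" | sed -n 's/.*"id":"\([^"]*\)".*/\1/p')
echo "tagging baseline for RUN_ID=${RUN_ID}"
```

With `RUN_ID` set, the `curl -X POST .../test-runs/${RUN_ID}/baseline` call above resolves to a concrete URL.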
## Next Steps
- Automate this pipeline: see the CI/CD Integration guide
- Schedule as a monitor: see perf-monitor to run this script on a cron against production
- Migrate to another tool: see perf-migrator if you need a JMeter or Gatling version of the same script