
perf-monitor

import { Aside, Steps } from '@astrojs/starlight/components';

perf-monitor schedules your existing JMeter, k6, and Gatling scripts as production monitors — running them on a cron against live endpoints and alerting on threshold breaches.

It runs your scripts on your infrastructure. No new DSL to learn, no agents to install on production hosts.

| Tier | Monitors | Min Interval | Price |
| --- | --- | --- | --- |
| Community | 5 | 15 min | Free |
| Professional | 50 | 1 min | £299/year |
| Enterprise | Unlimited | 1 min | £799/year (includes SSO/RBAC) |
Prerequisites:

  • Docker and Docker Compose
  • Access to registry.martkos-it.co.uk for agent images (requires license key)
  • Outbound network access from the Docker host to your production endpoints
  1. Download

    martkos-it.co.uk/store/perf-monitor-download

  2. Extract and configure

    ```sh
    unzip perf-monitor.zip && cd perf-monitor
    cp .env.example .env
    ```

    Minimum required values in .env:

    ```sh
    APP_SECRET_KEY=generate-a-random-32-char-string
    DATABASE_URL=postgres://perf:changeme@postgres:5432/perfmonitor
    DB_USER=perf
    DB_PASSWORD=changeme
    DB_NAME=perfmonitor
    REDIS_URL=redis://redis:6379
    REDIS_PASSWORD=changeme
    ALERT_ENCRYPTION_KEY=another-32-char-random-string
    PERF_MONITOR_LICENSE_KEY=your-license-key
    ```
  3. Start

    ```sh
    docker compose up -d
    docker compose ps   # all services should reach 'healthy'
    ```
  4. Access the dashboard

    http://localhost:8000/app/

Create a monitor in the dashboard via Monitors → New Monitor:

| Field | Description |
| --- | --- |
| Name | Human-readable label |
| Script | Upload a `.jmx`, `.js`, or `.scala` file |
| Schedule | Cron expression (e.g., `*/15 * * * *`) |
| Tool | Auto-detected from file extension |
| Thresholds | Latency (ms), error rate (%), Apdex score |
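The Apdex threshold can be reasoned about offline before you pick a value. A minimal sketch of the conventional Apdex formula (satisfied at or under T, tolerating up to 4T), assuming perf-monitor follows the standard definition:

```python
def apdex(latencies_ms, t_ms):
    """Conventional Apdex: (satisfied + tolerating / 2) / total.

    satisfied: latency <= T; tolerating: T < latency <= 4T;
    anything slower counts as frustrated.
    """
    if not latencies_ms:
        return None  # no samples, no score
    satisfied = sum(1 for ms in latencies_ms if ms <= t_ms)
    tolerating = sum(1 for ms in latencies_ms if t_ms < ms <= 4 * t_ms)
    return (satisfied + tolerating / 2) / len(latencies_ms)
```

For example, with T = 500 ms, samples of 100, 200, 900, and 2500 ms score (2 + 0.5) / 4 = 0.625.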
Example schedules:

- `*/5 * * * *` — every 5 minutes
- `*/15 * * * *` — every 15 minutes (Community minimum)
- `0 * * * *` — hourly
- `0 6 * * *` — daily at 06:00
- `0 9 * * 1-5` — weekdays at 09:00
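To sanity-check an expression before saving a monitor, the matching rule behind the five fields (minute, hour, day-of-month, month, day-of-week) can be sketched in a few lines. This is a simplified matcher: it supports `*`, `*/N`, single values, ranges, and comma lists, and it ANDs all five fields, whereas real cron ORs day-of-month with day-of-week when both are restricted.

```python
from datetime import datetime

def _field_matches(field, value):
    # Supports '*', '*/N' steps, single values, 'a-b' ranges, comma lists.
    for part in field.split(","):
        if part == "*":
            return True
        if part.startswith("*/"):
            if value % int(part[2:]) == 0:
                return True
        elif "-" in part:
            lo, hi = map(int, part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(expr, dt):
    """True if datetime dt falls on the given 5-field cron expression."""
    minute, hour, dom, month, dow = expr.split()
    return (_field_matches(minute, dt.minute)
            and _field_matches(hour, dt.hour)
            and _field_matches(dom, dt.day)
            and _field_matches(month, dt.month)
            # cron counts Sunday as 0; isoweekday() has Sunday as 7
            and _field_matches(dow, dt.isoweekday() % 7))
```

For instance, `cron_matches("0 9 * * 1-5", datetime(2024, 1, 1, 9, 0))` is true because 2024-01-01 was a Monday.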

Configure alerting channels in .env or via Settings → Alerts in the dashboard.

```sh
# Slack
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/...
SLACK_CHANNEL=#perf-alerts
```

```sh
# Email (SMTP)
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USER=alerts@example.com
SMTP_PASSWORD=...
ALERT_FROM_EMAIL=alerts@example.com
ALERT_TO_EMAIL=oncall@example.com
```

```sh
# PagerDuty
PAGERDUTY_INTEGRATION_KEY=...
```

```sh
# Generic webhook
ALERT_WEBHOOK_URL=https://your-system.example.com/hooks/perf
```
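For the generic webhook, the exact payload shape is not documented here. As a sketch of a receiver, assuming a flat JSON body — the field names `monitor`, `metric`, `value`, and `threshold` are hypothetical, so inspect a real delivery from your instance before relying on them:

```python
import json

def format_alert(body: bytes) -> str:
    """Render a hypothetical perf-monitor webhook body as a one-line summary.

    All field names here are illustrative assumptions, not the
    documented payload schema.
    """
    alert = json.loads(body)
    return (f"{alert['monitor']}: {alert['metric']}="
            f"{alert['value']} breached threshold {alert['threshold']}")
```

A receiver would call this from its HTTP handler and forward the summary to whatever on-call tooling it fronts.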

Scripts run unmodified. The executor agents handle tool invocation. Requirements:

  • JMeter (`.jmx`): no GUI listeners; use samplers that run headless in non-GUI mode. Thread (VU) count can be overridden via command-line properties if needed.
  • k6 (`.js`): a standard k6 script. `__VU` and `__ITER` are available. Avoid `open()` inside VU functions.
  • Gatling (`.scala`): a standard simulation. The agent invokes `gatling.sh -s SimulationClass`.

Override script parameters at monitor level (avoids hardcoding in scripts):

```json
{
  "parameters": {
    "TARGET_HOST": "https://api.production.example.com",
    "VUS": "10",
    "DURATION": "60s"
  }
}
```

In JMeter: `${__P(TARGET_HOST)}`. In k6: `__ENV.TARGET_HOST`. In Gatling: `System.getProperty("TARGET_HOST")`.
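How parameters reach each tool can be pictured as a flag mapping: `-J` properties for JMeter, `-e` environment variables for k6, and `-D` JVM system properties for Gatling. A sketch of such a mapping, not perf-monitor's actual executor internals:

```python
def build_command(tool, script, params):
    """Sketch: map monitor-level parameters onto each tool's native flags.

    The real executor agent's invocation may differ (e.g. Gatling JVM
    properties are often passed via JAVA_OPTS rather than as arguments).
    """
    if tool == "jmeter":
        # -Jname=value defines a JMeter property, read via ${__P(name)}
        flags = [f"-J{k}={v}" for k, v in params.items()]
        return ["jmeter", "-n", "-t", script, *flags]
    if tool == "k6":
        # -e NAME=value populates __ENV.NAME inside the script
        flags = [x for k, v in params.items() for x in ("-e", f"{k}={v}")]
        return ["k6", "run", *flags, script]
    if tool == "gatling":
        # -Dname=value becomes a JVM system property
        flags = [f"-D{k}={v}" for k, v in params.items()]
        return ["gatling.sh", *flags, "-s", script]
    raise ValueError(f"unknown tool: {tool}")
```

For example, `build_command("k6", "load.js", {"VUS": "10"})` yields `["k6", "run", "-e", "VUS=10", "load.js"]`.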

To test from multiple geographic locations, deploy remote executor agents:

```sh
# On the remote host
docker run -d \
  -e AGENT_ID=eu-west-agent \
  -e API_URL=https://monitor.example.com \
  -e REGISTRATION_SECRET=${AGENTS_REGISTRATION_SECRET} \
  registry.martkos-it.co.uk/perf-monitor-agent:1.0.0
```

The agent registers with the perf-monitor server and receives script execution jobs.
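The registration handshake is handled by the agent itself, but its shape can be sketched from the environment variables above. The endpoint path and payload fields below are assumptions for illustration, not a documented API:

```python
import json
import urllib.request

def build_registration_request(api_url: str, agent_id: str, secret: str):
    """Sketch of the HTTP request a remote agent might send to register.

    '/api/agents/register' and the payload field are hypothetical; the
    real agent performs registration internally on startup.
    """
    body = json.dumps({"agent_id": agent_id}).encode()
    return urllib.request.Request(
        f"{api_url}/api/agents/register",  # hypothetical path
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {secret}",
        },
        method="POST",
    )
```

The point of the sketch is the trust model: the shared `REGISTRATION_SECRET` authenticates the agent to the server, after which the server pushes jobs to it rather than the agent exposing any inbound port.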

Push monitor results to perf-results-db for centralised trending:

```sh
PERF_RESULTS_DB_URL=http://perf-results-db:4000
PERF_RESULTS_DB_API_KEY=prdb_your_key
PERF_RESULTS_DB_PROJECT_ID=your-project-uuid
```

When configured, every monitor run is automatically uploaded after completion.

```sh
RETENTION_DAYS_DEFAULT=90   # default retention for monitor runs
```

Configure per-monitor in the dashboard → Monitor Settings → Data Retention.
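What a 90-day window means in practice can be sketched as a cutoff calculation (an illustration of the semantics, not perf-monitor's internal cleanup code):

```python
from datetime import datetime, timedelta, timezone

def retention_cutoff(retention_days, now=None):
    """Runs that finished before this instant fall outside the
    retention window and are eligible for deletion."""
    now = now or datetime.now(timezone.utc)
    return now - timedelta(days=retention_days)
```

With the default of 90 days, a run from early January is already outside the window by the start of April.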

To upgrade:

  1. Download the new release bundle
  2. Replace `docker-compose.yml`; do not overwrite `.env`
  3. Run `docker compose up -d`
  4. Verify migrations: `docker compose logs api | grep -i migrat`

perf-monitor stores execution metadata (timestamps, metrics, thresholds) and your script files. It does not store request/response bodies by default. You are the data controller. Configure retention to comply with your data minimisation obligations.