
k6: A Modern Load Testing Tool

k6 is an open-source load testing tool for measuring the performance of APIs, microservices, and websites. It is developer-centric, scriptable in JavaScript, and designed for automation and CI/CD pipelines.

Key Features

  • Scripting in JavaScript (ES6)
  • CLI and cloud execution
  • Integrates with CI/CD tools
  • Rich metrics and reporting

Basic Example

A simple k6 script (script.js):

import http from "k6/http";
import { check, sleep } from "k6";

export default function () {
  const res = http.get("https://test.k6.io");
  check(res, { "status was 200": (r) => r.status === 200 });
  sleep(1);
}

Run with:

k6 run script.js

Important Configuration Options

You can configure k6 using the CLI, environment variables, or in the script itself.

1. vus (Virtual Users) and duration

  • vus: Number of concurrent virtual users.
  • duration: How long the test runs.

k6 run --vus 50 --duration 30s script.js
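The same limits can also be declared in the script itself via the options object (CLI flags take precedence over script options):

export const options = {
  vus: 50,          // equivalent to --vus 50
  duration: "30s",  // equivalent to --duration 30s
};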

2. Stages (Ramp up/down traffic)

Define traffic patterns in the script:

export const options = {
  stages: [
    { duration: "1m", target: 20 }, // ramp up
    { duration: "3m", target: 20 }, // stay
    { duration: "1m", target: 0 },  // ramp down
  ],
};
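Note that a top-level stages option is effectively shorthand for a single ramping-vus scenario (see Advanced Scenario Configuration below). A roughly equivalent explicit form would be:

export const options = {
  scenarios: {
    default: {
      executor: "ramping-vus",
      startVUs: 0,
      stages: [
        { duration: "1m", target: 20 },
        { duration: "3m", target: 20 },
        { duration: "1m", target: 0 },
      ],
    },
  },
};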

3. Thresholds

Set performance goals:

export const options = {
  thresholds: {
    http_req_duration: ["p(95)<500"], // 95% of requests < 500ms
  },
};
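Thresholds can cover several metrics at once, and a threshold can optionally abort the test early when it is crossed. A small sketch (the exact limits here are illustrative):

export const options = {
  thresholds: {
    http_req_duration: ["p(95)<500", "p(99)<1500"],   // latency goals
    http_req_failed: [
      { threshold: "rate<0.01", abortOnFail: true },  // stop early if >1% of requests fail
    ],
    checks: ["rate>0.99"],                            // almost all checks must pass
  },
};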

4. Environments

Pass variables to your script from the command line:

k6 run -e BASE_URL=https://test.k6.io script.js

Inside the script, read them from the __ENV object:

const BASE_URL = __ENV.BASE_URL;
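A minimal sketch of how the variable might be consumed in a script; the fallback default here is just an assumption for illustration:

import http from "k6/http";

// Use the value passed with -e, or fall back to a default for local runs
const BASE_URL = __ENV.BASE_URL || "https://test.k6.io";

export default function () {
  http.get(BASE_URL);
}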

5. Summary and Output

  • Summary: k6 prints a summary at the end of the test.
  • Output: Export results to JSON, InfluxDB, etc.

k6 run --out json=results.json script.js
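If you need full control over the end-of-test report, k6 also lets you export a handleSummary() function that receives the aggregated results and decides where they go. A minimal sketch:

export function handleSummary(data) {
  return {
    "summary.json": JSON.stringify(data, null, 2), // write the full summary to a file
  };
}

Note that exporting handleSummary replaces the default console summary with whatever you return.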

Advanced Scenario Configuration

k6 supports advanced traffic patterns using the scenarios option. This allows you to simulate different types of load, such as constant arrival rate, ramping arrival rate, shared iterations, and more.

Example: Constant Arrival Rate

This scenario starts iterations at a constant rate (here, requests per second), scaling the number of virtual users (VUs) up or down as needed to sustain it.

export const options = {
  scenarios: {
    constant_rps: {
      executor: "constant-arrival-rate",
      rate: 800,            // 800 requests per second
      timeUnit: "1s",       // per second
      duration: "5m",       // total test duration
      preAllocatedVUs: 100, // initial pool of VUs to sustain the rate
      maxVUs: 1000,         // maximum VUs allowed to sustain the rate
    },
  },
};
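The scenario only controls how often iterations start; what each iteration does still comes from the exported function. A minimal pairing for the example above might be:

import http from "k6/http";

// Each iteration issues one request, so 800 iterations/s ≈ 800 RPS here.
// k6 adds VUs from the pre-allocated pool (up to maxVUs) to keep up the rate.
export default function () {
  http.get("https://test.k6.io");
}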

Common Scenario Executors

  • constant-vus: Fixed number of VUs for a set duration (default if you use vus and duration).
  • ramping-vus: VUs ramp up and down over time.
  • constant-arrival-rate: Maintain a constant request rate (RPS), regardless of VUs.
  • ramping-arrival-rate: Gradually increase/decrease request rate over time.
  • per-vu-iterations: Each VU executes a fixed number of iterations.
  • shared-iterations: A fixed number of iterations are shared among all VUs.

Choose the scenario that best matches your real-world traffic pattern:

  • Use constant-vus or ramping-vus for user-centric tests.
  • Use constant-arrival-rate or ramping-arrival-rate for API or RPS-centric tests.
  • Use per-vu-iterations or shared-iterations for batch jobs or fixed workloads.

See the k6 scenarios documentation for more details and advanced usage.
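As an illustration, a ramping-arrival-rate scenario that grows the request rate over time might look roughly like this (the scenario name and numbers are made up for the example):

export const options = {
  scenarios: {
    ramping_rps: {
      executor: "ramping-arrival-rate",
      startRate: 100,       // begin at 100 iterations per timeUnit
      timeUnit: "1s",
      preAllocatedVUs: 200, // VU pool reserved up front
      maxVUs: 2000,         // hard cap on VUs used to sustain the rate
      stages: [
        { duration: "5m", target: 1000 }, // ramp from 100 to 1000 iterations/s
        { duration: "5m", target: 1000 }, // hold
        { duration: "2m", target: 0 },    // ramp back down
      ],
    },
  },
};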

Conclusion: Choosing the Right Scenario

Selecting the right scenario in k6 depends on what you want to simulate:

  • constant-vus: Use when you want to simulate a fixed number of users performing actions over time. Example: Testing how your site handles 100 users browsing simultaneously for 10 minutes.

  • ramping-vus: Use to simulate gradual increases or decreases in user load. Example: Testing how your API scales as user traffic ramps up from 10 to 500 users and then back down.

  • constant-arrival-rate: Use when you need to generate a specific number of requests per second, regardless of how many users are needed. Example: Load testing an API endpoint to ensure it can handle 800 RPS (requests per second) for 5 minutes.

  • ramping-arrival-rate: Use to gradually increase or decrease the request rate. Example: Simulating a marketing campaign where traffic spikes from 100 to 1000 RPS over 10 minutes.

  • per-vu-iterations: Use when each user should perform a set number of actions. Example: Each user uploads 10 files, regardless of how long it takes.

  • shared-iterations: Use when you want a total number of actions shared among all users. Example: 1000 total logins, split among however many users are available.

Tip:

  • For user-centric tests (websites, apps), use VU-based scenarios.
  • For API or backend performance, use arrival-rate (RPS) scenarios.

Always choose the scenario that best matches your real-world usage pattern for the most accurate results.
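As a concrete sketch of the two iteration-based executors (per-vu-iterations and shared-iterations): the scenario names, VU counts, and iteration counts below are illustrative, and both scenarios would run the script's default function unless an exec option is set.

export const options = {
  scenarios: {
    uploads_per_user: {
      executor: "per-vu-iterations",
      vus: 50,
      iterations: 10,     // every VU runs exactly 10 iterations
      maxDuration: "30m",
    },
    total_logins: {
      executor: "shared-iterations",
      vus: 50,
      iterations: 1000,   // 1000 iterations split among the 50 VUs
      maxDuration: "30m",
      startTime: "30m",   // delay this scenario until the first one's window ends (illustrative)
    },
  },
};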

Bottlenecks and Limitations of k6

While k6 is a powerful and flexible load testing tool, there are some limitations and bottlenecks to be aware of:

  • Protocol Support: k6 natively supports HTTP/HTTPS, WebSockets, and gRPC (limited beta). It does not support protocols like FTP, SMTP, or raw TCP/UDP out of the box.
  • Single-Machine Resource Limits: Running high numbers of VUs or high RPS from a single machine can quickly exhaust CPU, memory, or network bandwidth. For very large tests, consider distributed/cloud execution (k6 Cloud or the k6 Operator for Kubernetes).
  • No Browser Automation: k6 is not a browser automation tool (like Selenium or Playwright). It cannot execute JavaScript in the browser or test front-end rendering and UI interactions.
  • JavaScript Engine: k6 uses a Go-based JS runtime (not Node.js), so some Node.js modules and advanced JS features may not be available.
  • Limited Built-in Reporting: While k6 provides a summary and can export to various backends, advanced reporting and visualization may require integration with external tools (Grafana, InfluxDB, etc.).
  • No Built-in Distributed Execution (OSS): The open-source version does not natively support distributed execution across multiple machines. This is available in k6 Cloud or via custom orchestration.
  • Test Script Complexity: Very complex test logic or large scripts can become hard to maintain. Modularize scripts and use environment variables/configs for flexibility.

Tip: Always monitor your load generator's resource usage during tests to ensure the bottleneck is not on the test machine itself.
