Go beyond manual testing into repeatable, automated API testing with contract validation, variable chaining, and scheduled runs.
The Test Suite feature lets you group HTTP requests into named suites, define assertions on each response, and run them all with a single click. It is available as a Pro feature.
To open the Test Suite window, click the Test Suite button in the document header bar (next to Mock Server and Export).

A Test Suite is a named group of test cases that run sequentially. You can create as many suites as you need: one per feature area, one for smoke tests, one for full regression, and so on.
To create a suite, click the + button in the suite list sidebar. To delete a suite, right-click it in the sidebar and choose Delete.
Each suite has its own variables, test cases, and configurable delay between requests (default: 0.5s).
There are three ways to add test cases to a suite:
Click Add Test in the test case list header. This creates a blank test case where you configure the method, path, base URL, headers, body, and authentication manually.
When a spec is loaded, the Batch Add menu lets you pick individual tags. All endpoints within that tag are added with pre-filled method, path, base URL, and a default status code assertion.
Choose Add all endpoints at the bottom of the Batch Add menu to add every endpoint in your spec at once. Each gets contract testing enabled by default.
Each test case can be individually enabled or disabled with the toggle switch. Disabled tests are skipped during execution.
Assertions define what you expect from the API response. If any enabled assertion fails, the test case is marked as failed.
| Type | Field | Expected | Example |
|---|---|---|---|
| Status Code | (not used) | HTTP status code | Expected: "200" |
| Header Contains | Header name | Substring to find | Field: "Content-Type", Expected: "json" |
| Body Contains | (not used) | Substring in body | Expected: "success" |
| JSON Path | Dot-path to value | Expected value | Field: "data.id", Expected: "42" |
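The four assertion types in the table can be sketched as a single check function. This is an illustrative model, not SuagerUI's actual implementation; the `response` shape and all names are assumptions.

```python
# Hypothetical sketch of the four assertion types; names are illustrative.

def check_assertion(kind, field, expected, response):
    """response: dict with 'status' (int), 'headers', 'body' (str), 'json' (parsed)."""
    if kind == "status_code":
        return str(response["status"]) == expected
    if kind == "header_contains":
        return expected in response["headers"].get(field, "")
    if kind == "body_contains":
        return expected in response["body"]
    if kind == "json_path":
        value = response["json"]
        for key in field.split("."):        # walk the dot-path, e.g. "data.id"
            value = value[key]
        return str(value) == expected
    raise ValueError(f"unknown assertion type: {kind}")
```

Note that expected values compare as strings, which is why the table shows `"200"` and `"42"` in quotes.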
When you add a test case from the spec, a status code assertion is automatically added based on the first response defined in the spec.
Variables let you parameterize your test cases. Use the {{variableName}} syntax in any field: path, base URL, headers, body, query parameters, and authentication values.
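The `{{variableName}}` substitution can be modeled with a small regex pass. A minimal sketch, assuming unresolved names are left untouched (SuagerUI's actual behavior for unknown variables is not documented here):

```python
import re

# Illustrative {{variableName}} substitution; not SuagerUI's actual code.
def substitute(template, variables):
    # Replace each {{name}} with its value; leave unknown names as-is.
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )
```

For example, `substitute("/users/{{userId}}", {"userId": "42"})` yields `/users/42`.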
Variables follow a hierarchical override system with three scopes:
| Scope | Where to define | Visibility |
|---|---|---|
| Global | Variables tab → Global Variables | All suites and all test cases |
| Suite | Variables tab → Suite Variables | All test cases within the suite |
| Request | On the test case itself | Only that specific test case |
When the same variable name exists at multiple scopes, the narrower scope wins: Request > Suite > Global.
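The Request > Suite > Global precedence maps directly onto Python's `ChainMap`, which looks up keys left to right. A sketch of the resolution order (variable names and values are made up):

```python
from collections import ChainMap

# Hypothetical variables at each scope.
global_vars  = {"baseUrl": "https://api.example.com", "token": "global-token"}
suite_vars   = {"token": "suite-token"}     # overrides the global token
request_vars = {"userId": "42"}

# Narrower scopes come first, so they win on name collisions.
resolved = ChainMap(request_vars, suite_vars, global_vars)
```

Here `resolved["token"]` is `"suite-token"`: the suite scope shadows the global one, while `baseUrl` still falls through to the global scope.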
```
Base URL: {{baseUrl}}
Path: /users/{{userId}}
Header: Authorization: Bearer {{token}}
```

Request chaining lets you extract values from one response and use them in subsequent test cases. This is essential for flows like:
- /users → extract id from the response
- /users/{{userId}} → use the extracted ID
- /users/{{userId}} → clean up

Each test case has an extractAfter configuration where you define:
- The variable name to store the value under (e.g. userId)
- The JSON path to extract it from (e.g. data.id)

Extracted values are available as {{variableName}} in all subsequent test cases within the same run.
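The extraction step can be sketched as a walk over the parsed response. This is an illustrative model of the extractAfter idea; the rule shape and function name are assumptions:

```python
# Hypothetical sketch of extractAfter: pull values out of a response and
# store them for later test cases. Not SuagerUI's actual implementation.

def extract_after(response_json, rules, variables):
    """rules: list of {"name": ..., "path": ...}; mutates and returns variables."""
    for rule in rules:
        value = response_json
        for key in rule["path"].split("."):   # dot-path, e.g. "data.id"
            value = value[key]
        variables[rule["name"]] = str(value)  # usable as {{name}} afterwards
    return variables
```

After running this against a create-user response, `{{userId}}` would resolve in every later test case of the same run.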
Contract testing validates that the live API response matches what your OpenAPI spec promises. When enabled on a test case, SuagerUI checks the response against the schema your spec defines for that endpoint.
Contract violations are shown separately from assertion failures, marked with a purple badge. A test case fails if it has any contract violation or assertion failure.
Test cases added from the spec have contract testing enabled by default. You can toggle it per test case.
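To make the idea concrete, here is a deliberately tiny contract check covering only required fields and primitive types. Real OpenAPI validation covers far more (nested schemas, formats, enums), and this is not SuagerUI's engine:

```python
# Tiny illustrative contract check: required fields plus primitive types.
# Real OpenAPI validation is much richer; names here are hypothetical.

def contract_violations(schema, payload):
    issues = []
    for field in schema.get("required", []):
        if field not in payload:
            issues.append(f"missing required field: {field}")
    types = {"integer": int, "string": str, "object": dict}
    for field, rule in schema.get("properties", {}).items():
        if field in payload and not isinstance(payload[field], types[rule["type"]]):
            issues.append(f"{field}: expected {rule['type']}")
    return issues
```

A response of `{"id": "42", "name": "Ada"}` against a schema that declares `id` as an integer would report one violation: the live API returned a string where the spec promises a number.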
Click the green Run All button in the suite detail header to start execution. Tests run sequentially with the configured delay between requests.
During execution you will see each test case's result as it completes.
Click Cancel to stop the run at any time. Results collected so far are preserved.
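The run loop described above (sequential execution, per-suite delay, skip disabled tests, cancel keeps partial results) can be sketched like this; `run_one` and the `cancelled` flag are illustrative stand-ins:

```python
import time

# Sketch of sequential suite execution; not SuagerUI's actual runner.
def run_suite(test_cases, run_one, delay=0.5, cancelled=lambda: False):
    results = []
    for case in test_cases:
        if cancelled():
            break                       # keep results collected so far
        if not case.get("enabled", True):
            continue                    # disabled tests are skipped
        results.append(run_one(case))
        time.sleep(delay)               # configurable delay between requests
    return results
```

The default 0.5 s delay between requests keeps a long suite from hammering the target API.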
After the run completes, switch to the Results tab for the full breakdown. Failed assertions show the expected vs. actual values. Contract violations list the specific schema mismatches.
Export your test results for CI integration or reporting. Click the export icon next to the Run All button and choose a format:
| Format | Use case | File |
|---|---|---|
| JUnit XML | CI/CD integration (Jenkins, GitHub Actions, GitLab CI) | test-results.xml |
| JSON | Custom dashboards, data analysis, programmatic access | test-results.json |
Both formats include the suite name, timestamps, duration per test, assertion details, and contract violations.
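A minimal sketch of producing JUnit-style XML from run results, using element and attribute names from the common JUnit XML convention rather than SuagerUI's exact output:

```python
import xml.etree.ElementTree as ET

# Illustrative JUnit-style export; the result shape is an assumption.
def to_junit(suite_name, results):
    """results: list of {"name": ..., "seconds": ..., "failure": str | None}."""
    suite = ET.Element("testsuite", name=suite_name, tests=str(len(results)))
    for r in results:
        case = ET.SubElement(suite, "testcase",
                             name=r["name"], time=str(r["seconds"]))
        if r["failure"]:
            ET.SubElement(case, "failure", message=r["failure"])
    return ET.tostring(suite, encoding="unicode")
```

Most CI systems (Jenkins, GitHub Actions, GitLab CI) can ingest a file in this shape directly as a test report.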
Set up recurring test runs to continuously monitor your API. Go to the Schedule tab in the suite detail view to configure them.
When a scheduled run completes with failures, SuagerUI sends a desktop notification with the suite name and pass/fail counts. All scheduled runs appear in the run history at the bottom of the Schedule tab.
Scheduled runs use the same variable context as manual runs. Make sure your global and suite variables are set up before enabling the schedule.