
Test Explorer

Test-level health and performance dashboard - flakiness, failure rate, duration.


The Test Explorer view shows performance and health metrics like:

  • flakiness rate

  • failure rate

  • duration

Use it to identify problematic tests and to explore trends and changes in test behaviour. Click a test title to open the Tests Performance view with the details of an individual test, or schedule Automated Reports that deliver the top items from the Test Explorer to your inbox.


Currents calculates the metrics by aggregating the reported test results. You can fine-tune the aggregations by applying various filters, for example:

  • what are the flakiest tests from the main branch over the last 30 days?

  • what are the most frequently failing tests tagged onboarding over the last 14 days?

  • what are the longest tests for the mobile viewport?

Test Explorer Metrics

The Test Explorer metrics help you evaluate the health and speed of your testing suite. Use the Explorer Settings to control which executions are included in the calculations.

Volume metrics (Duration Volume, Flakiness Volume and Failure Volume) measure the impact of a test on overall suite performance; see Value Metrics vs Volume Metrics below for an example.

Duration

The average execution time for fully completed tests, excluding tests that were canceled or skipped during execution.

Duration Volume

Duration Volume measures how much total time a test is contributing to the overall runtime of the test suite. It’s not just about how long a test takes per run, but also how often it runs.

Duration Volume = Avg. Duration × Executions

The raw number isn’t important on its own — it helps prioritize which tests are the biggest time sinks across all runs.
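
For example, a test that averages 30 seconds and ran 200 times in the selected period has a Duration Volume of 30 × 200 = 6,000, a bigger time sink than a 2-minute test that ran only 10 times (120 × 10 = 1,200).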

Failure Rate

Failure Rate measures the percentage of times a test fails when executed, providing insight into the test's reliability and stability. A higher failure rate may indicate issues or bugs in the test itself or in the system under test.

Failure Volume

Failure Volume measures how much a test contributes to the total number of failures in your test suite — combining how often it runs with how likely it is to fail. It’s calculated as:

Failure Volume = Failure Rate × Executions

This metric helps you spot which tests are the biggest contributors to failure noise, even if their individual failure rate isn’t especially high.
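
For example, a test with a 2% failure rate across 500 executions contributes 0.02 × 500 = 10 failures, more noise than a test with a 25% failure rate that ran only 8 times (0.25 × 8 = 2).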

Flakiness Rate

Flakiness Rate measures the percentage of times a test produces inconsistent pass/fail results. Analyzing this metric allows you to focus on improving the reliability and stability of flaky tests, reducing false positives and negatives, and enhancing the overall trustworthiness of the test suite.

Flakiness Volume

Flakiness Volume quantifies how much a test’s flakiness impacts the overall stability of your test suite. It combines how often a test runs with how flaky it is, giving a sense of how much the test is likely to cause inconsistent or unreliable results. A test that runs frequently with a low flakiness rate can cause more issues overall than a test that rarely runs but is highly flaky.

Flakiness Volume = Flakiness Rate × Executions

Executions

The number of test recordings included in the metric calculations, i.e. those that matched the selected period and filters.

Value Metrics vs Volume Metrics

Metrics like Duration Volume, Flakiness Volume and Failure Volume measure the impact of the associated test on overall suite performance. The scores are calculated by multiplying the corresponding metric by the number of samples. The actual number has no real meaning on its own; it is only useful for comparing tests.

For example, consider two tests:

  • Test A runs rarely, reported 10 samples, with a 15% flakiness rate.

  • Test B runs often, reported 40 samples, with a 5% flakiness rate.

Test A Flakiness Volume is 10 x 0.15 = 1.5

Test B Flakiness Volume is 40 x 0.05 = 2

Test B has a higher Flakiness Volume because it affects the overall test suite flakiness more, although its rate is lower.

In short, a test that’s a little flaky but runs a lot can be a bigger problem than a test that’s very flaky but rarely runs. The actual number doesn’t matter on its own — it’s just useful to compare tests and see which ones are dragging down reliability the most.
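
The comparison boils down to a one-line calculation. Here is a minimal TypeScript sketch that reproduces it; the type and field names are illustrative, not part of any Currents SDK:

```typescript
// Hypothetical shape of an aggregated test entry; the field names
// are illustrative, not the Currents data format.
interface TestMetrics {
  title: string;
  executions: number;    // samples matching the period and filters
  flakinessRate: number; // 0..1
}

// Volume = rate × number of samples; the absolute value is only
// meaningful relative to other tests.
const flakinessVolume = (t: TestMetrics) => t.flakinessRate * t.executions;

const tests: TestMetrics[] = [
  { title: "Test A", executions: 10, flakinessRate: 0.15 }, // rarely runs, very flaky
  { title: "Test B", executions: 40, flakinessRate: 0.05 }, // runs often, mildly flaky
];

// Rank descending: the biggest contributors to suite flakiness come first.
const ranked = [...tests].sort((a, b) => flakinessVolume(b) - flakinessVolume(a));
ranked.forEach((t) => console.log(t.title, flakinessVolume(t)));
// => Test B 2
// => Test A 1.5
```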

Filters

Only test recordings matching the filters are included for metric calculation.

  • Date Range - include items recorded within the specified period

  • Tag - include items with the matching Playwright Tags

  • Author - include items with the matching Git Author (see Commit Information)

  • Branch - include items with the matching Git Branch (see Commit Information)

  • Group - include items recorded for a particular group (e.g. Firefox or Chromium)

  • Search by spec name - narrow down the results by test spec name

  • Search by test title - narrow down the results by test title

Controls & Settings

  • Click on a column header to sort the tests by the corresponding column; click again to reverse the sorting order.

  • Click the Export icon to download the data in JSON format (see the sketch after this list).

  • Click the Settings icon to customize the view:

    • Include Failed Executions - include or exclude failed executions when calculating the average duration

    • Include Skipped Executions - include or exclude skipped tests from the metric calculations
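
The exported JSON can be post-processed offline, for example to rank tests by Duration Volume. A minimal sketch, assuming a hypothetical export shape; inspect the downloaded file for the actual field names:

```typescript
import { readFileSync } from "node:fs";

// Assumed shape of an exported entry; the field names are
// hypothetical, check the actual export before relying on them.
type ExportedTest = {
  title: string;
  avgDurationMs: number;
  executions: number;
};

const tests: ExportedTest[] = JSON.parse(
  readFileSync("test-explorer-export.json", "utf8")
);

// Duration Volume = Avg. Duration × Executions: biggest time sinks first.
const byDurationVolume = [...tests].sort(
  (a, b) => b.avgDurationMs * b.executions - a.avgDurationMs * a.executions
);

console.table(byDurationVolume.slice(0, 10));
```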

Use Cases

Here are a few examples of what information you can get from the Test Explorer.

  • The flakiest tests from the past months for a specific branch.

  • The top failing tests in the suite.

  • How the failure rate changed for specific branches over the past months.

  • The slowest tests and how their duration changed over time.

Clicking on an individual test will reveal a comprehensive drill-down of the specific test's performance, including a detailed history of execution, top errors, etc.

Next Steps

  • Explore individual test performance in the Tests Performance section.

  • Schedule Automated Reports with the top items from the Test Explorer view to be delivered to your inbox automatically for proactive monitoring of test suite health.
