Spec Files Performance

Advanced Analysis for Individual Spec Files


Individual Spec File View

Selecting a particular spec file in Spec Files Explorer opens a detailed view of that spec file's performance. You can analyze the key metrics, including their trends, and compare them to the previous period using six graphs.

These include:

  • Average Duration

  • Overall Executions

  • Suite Size

  • Failure Rate

  • Flakiness Rate

  • Timeout Rate

By analyzing these six metrics through the provided graphs, users can understand the spec file's performance, reliability, and characteristics over time.

This helps to identify trends, patterns, and areas for improvement, ultimately enhancing the quality and efficiency of testing processes.
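
For intuition, here is a minimal sketch of how such rates could be derived from a set of recordings. The recording shape, field names, and aggregation are illustrative assumptions, not the dashboard's actual implementation:

```ts
// Illustrative only: assumed recording shape and rate definitions.
type Outcome = "passed" | "failed" | "flaky" | "timedOut";

interface Recording {
  outcome: Outcome;
  durationMs: number;
}

function summarize(recordings: Recording[]) {
  const total = recordings.length;
  const share = (o: Outcome) =>
    recordings.filter((r) => r.outcome === o).length / total;

  return {
    overallExecutions: total,
    avgDurationMs:
      recordings.reduce((sum, r) => sum + r.durationMs, 0) / total,
    failureRate: share("failed"), // share of failed executions
    flakinessRate: share("flaky"), // share of flaky executions
    timeoutRate: share("timedOut"), // share of timed-out executions
  };
}
```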

Controls Overview

The Individual Spec File view allows users to sort and filter spec files by dates, tags, author, and branches. Only the spec file recordings matching the filters will be included in the graphs and the metrics calculations.

Reading Performance Charts

Upon selecting a date range, the charts show two lines:

  • The primary purple line shows the metric values for the selected period

  • The secondary gray line shows the metric values for the period preceding the selected one

For example, selecting a 14-day period reveals this type of chart:

  • The purple line shows the metric value for the selected date range (last 14 days)

  • The purple amounts show the average value for the selected period - i.e., in the example, the average duration of the spec file over the last 14 days of recordings was 2m 16s

  • The gray line shows the metric value for the previous date range (28 days ago to 14 days ago)

  • The gray amounts show the average value for the preceding period - i.e., the average duration of the spec file based on recordings from 28 days ago until 14 days ago was 2m 47s

  • The trend change value of -18.53% indicates an improvement in the metric value for the recent period compared to the preceding period (see the sketch below)
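
Under the hood, the trend change appears to be the relative difference between the two period averages. A minimal sketch in TypeScript, assuming that formula (the small gap to the displayed -18.53% presumably comes from averaging unrounded durations):

```ts
// Assumed formula: (selected avg - preceding avg) / preceding avg * 100.
// Displayed averages: 2m 16s = 136s (selected), 2m 47s = 167s (preceding).
const currentAvg = 136; // seconds, last 14 days
const previousAvg = 167; // seconds, 28 to 14 days ago

const trendChange = ((currentAvg - previousAvg) / previousAvg) * 100;
console.log(`${trendChange.toFixed(2)}%`); // -18.56%; negative means faster, i.e. an improvement
```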

Hovering over a particular day reveals the metric values and the difference between the selected day and the matching day in the preceding period:

For instance, consider the spec file analyzed in the video below. During the analysis, the peak of the average duration for this spec file occurred on the same day as the highest failure rate within the selected period.

This suggests a potential correlation between the duration of the spec file's executions and its failure rate. The fact that the highest average duration aligns with the highest failure rate indicates that longer execution times may be associated with an increased likelihood of failures for this particular spec file.

By identifying patterns or trends where these metrics coincide, users can find potential performance bottlenecks that could be causing longer execution times and leading to failures.

A similar analysis can be done for any other metric - the charts provide a visual indication of improvements or regressions in spec file performance and, together with the Spec Files history, help pinpoint the root cause of a change in the performance metrics.

Executions History

The Executions History shows the recordings matching the selected filters on a timeline - each bar is an execution, its color is determined by the spec file outcome, and its height represents the relative duration of the recording.

A brush tool lets users interact with these histograms, zooming in to concentrate on executions in a particular period of interest.

Overview Histogram (Brush)

The upper histogram (the Brush) shows all the executions for the selected period and allows quick navigation and focusing on specific executions. One can change the metric used for the histogram for more efficient navigation.

For example, the recording below shows the executions history for a particular spec file. The default settings show 18 pages (or 856 samples) of matching items - it would be cumbersome to go through all 18 pages.

By switching the histogram to "Failure Rate", we can visually identify the period with an increased failure rate and narrow the selection by dragging the histogram brush handles. In the example below, we reduced the number of samples to 147 (3 pages), which allowed us to quickly find the executions that caused the increase in failure rate.

Executions Timeline

Users can also arrange executions based on branches using the "Collapse branch" flag. This functionality enables a more structured and organized view of the data, grouping the executions according to their respective branches.

The Spec Files Execution History feature also organizes every spec file execution in a timeline format, where colors represent their respective execution status (Failed, Passed, and Flaky). Each spec file execution entry provides users with essential information such as the date, time, duration, branch, commit message, and author. Additionally, for failed tests, the feature presents an error preview.
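
To make the structure of each entry concrete, here is a hypothetical TypeScript shape for a single timeline entry. The field names are illustrative assumptions, not the actual Currents data format:

```ts
// Hypothetical shape of a timeline entry; for illustration only.
interface SpecFileExecution {
  recordedAt: string; // date and time of the execution
  durationMs: number; // drives the relative bar height
  status: "passed" | "failed" | "flaky"; // drives the bar color
  branch: string;
  commitMessage: string;
  author: string;
  errorPreview?: string; // present for failed executions
}
```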

Clicking on a recording (the bar) will open the selected recording and the associated run.
