Currents Documentation
Reporting Strategy

How to structure and organize the reporting of CI test results to Currents


This guide helps you structure and organize test result reporting in Currents. It covers key concepts such as the project, run, and group hierarchy, test result handling, and integration strategies.

By following this guide, users can optimize their test reporting workflow and gain clearer insights into their test runs.

Currents Reporting Terms

Project

The top-level entity representing a reporting destination. Each organization can have multiple projects; there is no limit on the number of projects.

You specify the project when running tests in your CI pipeline using the CURRENTS_PROJECT_ID environment variable or via the reporter configuration. Please refer to the reporter documentation for available options.
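
For example, with the @currents/playwright reporter the project can be set in the Playwright configuration. A minimal sketch (the projectId value below is a placeholder):

```typescript
// playwright.config.ts: sketch, assuming @currents/playwright is installed.
import { defineConfig } from "@playwright/test";
import { currentsReporter } from "@currents/playwright";

export default defineConfig({
  reporter: [
    currentsReporter({
      projectId: "Ab12Cd", // placeholder: your Currents project ID
      recordKey: process.env.CURRENTS_RECORD_KEY!, // secret record key
    }),
  ],
});
```

Alternatively, setting CURRENTS_PROJECT_ID in the CI environment achieves the same result.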

Each project maintains separate settings and data, with no crossover.

Test Results and History

  • Test results from different projects remain separate, even for identical tests.

  • Test history displays results only from the same project.

  • Performance metrics only include test recordings from the same project.

Integrations

Each project has its own integrations with 3rd parties (see Integrations). However, multiple projects can be connected to the same destination.

For example, enabling a GitHub integration allows selecting the same GitHub organization and repository for different projects. However, this may result in duplicate PR comments originating from separate projects connected to the same repository.


Run

A recording of test results from a CI run, uniquely identified by a CI Build ID, git commit information and CI provider execution details.

By default, each run represents a build (or execution of a CI pipeline); however, you can send results from different stages of a pipeline, or even from different pipelines, to the same run by using the same CI Build ID value. See CI Build ID.
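
As an illustration, a stable CI Build ID can be derived from values shared by all jobs of the same build. The snippet below is a hypothetical sketch based on GitHub Actions variables (hard-coded here only to keep the example self-contained):

```shell
# GitHub Actions injects these variables into every job of a workflow run;
# they are hard-coded here only to make the sketch self-contained.
GITHUB_REPOSITORY="acme/webshop"
GITHUB_RUN_ID="1234567890"
GITHUB_RUN_ATTEMPT="1"

# Every job that computes the same value reports to the same Currents run.
export CURRENTS_BUILD_ID="${GITHUB_REPOSITORY}-${GITHUB_RUN_ID}-${GITHUB_RUN_ATTEMPT}"
echo "$CURRENTS_BUILD_ID"
```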


Group

A collection of spec files and the corresponding test results. For example:

  • in Playwright, you can create two Playwright projects that run the same tests in Chrome and Firefox; Currents will create a separate group for each.

  • in Postman, each “collection” corresponds to a separate group.

Each group can contain multiple spec files and test results.


Spec File

A file containing one or more test cases, identified by its filesystem path as reported by the reporter. Currents collects certain performance metrics for spec files. A spec file can contain zero or more test recordings.


Test Recording

A single test execution result, including attempts, attachments (screenshots, videos, traces, arbitrary files), visual diff results, annotations, and error details. Each test recording can have multiple attempts.


Attempt

A single execution of a test case. Multiple attempts may occur depending on the retry strategy.
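
The terms above form a strict hierarchy. The sketch below models it with illustrative TypeScript types (the type and field names are ours, not the Currents API):

```typescript
// Illustrative model of the reporting hierarchy; not the Currents API.
type Attempt = { status: "passed" | "failed"; durationMs: number };
type TestRecording = { title: string; attempts: Attempt[] };
type SpecFile = { path: string; tests: TestRecording[] };
type Group = { name: string; specs: SpecFile[] };
type Run = { ciBuildId: string; groups: Group[] };
type Project = { projectId: string; runs: Run[] };

// One run, one group, one spec, one test that passed on its second attempt:
const run: Run = {
  ciBuildId: "build-001",
  groups: [
    {
      name: "UK",
      specs: [
        {
          path: "e2e/shared/checkout.spec.ts",
          tests: [
            {
              title: "checkout flow",
              attempts: [
                { status: "failed", durationMs: 4200 }, // first attempt failed
                { status: "passed", durationMs: 3900 }, // retry passed
              ],
            },
          ],
        },
      ],
    },
  ],
};

console.log(run.groups[0].specs[0].tests[0].attempts.length); // → 2
```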

Reporting Scenarios

You can implement various reporting scenarios using a combination of Projects, Runs and Groups. Consider the following popular scenarios.

Single product / repository

You have a single product (and repository) with a testing suite that runs as part of a CI pipeline. All tests run together, and multiple groups (e.g. chromium and firefox) are detected automatically.

It is recommended to have a single project and create a unique run on every invocation of the CI pipeline.

  • Create a single project

  • Create a unique run for each CI invocation

  • Create one or more groups within each run

Multiple standalone products / repositories

For bigger organizations, each product has its own repository and testing suite that runs as part of a CI pipeline.

  • Create a dedicated project for each product / repository

  • Create a single run for each CI invocation

  • Create one or more groups for each run

Mixed reporting, multiple environments

A more complicated scenario is when you have a testing suite that runs in different "environments" and you need to decide how to organize the test results. For example:

  • running different Playwright projects in separate CI steps

  • running the same set of tests in multiple environments, e.g. based on locale (en-us, en-ca) or a domain (.com, .co.uk)

  • running a subset of tests based on tags or glob pattern, e.g. playwright test --grep @desktop

  • running a subset of tests from different CI steps (or different pipelines) while separating the results

To implement the variety of possible scenarios, use a combination of Projects, CI Build IDs, Groups, and Tags, and become familiar with the limitations and implications of each setup.

Example

For example, consider testing an e-commerce web app with a slightly different set of Playwright tests for different domains (.com, .co.uk etc.).

You've defined the following configuration:

playwright.config.ts
export default {
  // ...
  projects: [
    {
      name: "UK",
      testMatch: ["./e2e/shared/*.spec.ts", "./e2e/uk/*.spec.ts"],
      use: {
        baseURL: "https://example.co.uk", // 🇬🇧
      },
    },
    {
      name: "US",
      testMatch: ["./e2e/shared/*.spec.ts", "./e2e/us/*.spec.ts"],
      use: {
        baseURL: "https://example.com", // 🇺🇸
      },
    },
  ],
};

Let's consider various setups and the implications.

Single project, single run, multiple groups


This is the default reporting model: it uses the same projectId (hence sends results to the same project) and the same ci-build-id (hence sends results to the same run) for each command, assuming the Currents reporter is already configured.

  • Running playwright test --project UK will create a new run with a group UK

  • Running playwright test --project US will update the previously created run by adding the new group US
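
As a CI pipeline fragment, this scenario could look as follows (a sketch that assumes the Currents reporter is configured and CURRENTS_PROJECT_ID / CURRENTS_RECORD_KEY are set in the environment; the build ID value is a placeholder):

```shell
# Both commands share one CI Build ID, so both Playwright projects
# report into the same Currents run as two separate groups.
export CURRENTS_BUILD_ID="build-001"   # placeholder; derive from CI variables

npx playwright test --project UK   # creates the run with group "UK"
npx playwright test --project US   # adds group "US" to the same run
```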

Item
Description

Project Settings: Timeout

Each run has its own timeout; both groups have to finish within the designated time to prevent a run timeout

Project Settings: Default branch

Same for all runs and groups

Project Settings: Run title source

Same for all runs and groups (same project)

Project Settings: Fail fast

Each run has its own fail-fast settings

Integration settings are set on the project level. Each run triggers its own notifications; depending on the integration settings, you may receive a separate notification for each group completion or only one for the whole run.

Actions are defined on the project level and apply to each run and both groups.

Run Results

Each run contains results for both groups and all the included tests.

Run Metrics

Run metrics are based on mixed results from the two groups: aggregated run duration includes both groups, and suite size includes tests from both groups.

Test Metrics

Test metrics are based on both groups. For example, if test A runs in groups UK and US, both samples of the test will be included in its metrics (duration, flakiness rate, failure rate). It is possible to include only samples from a particular group by using the group filter.

Coverage Reports

Coverage reports are collected on a group level

Scheduled Reports

Scheduled automated reports are defined on the project level and will contain results for both groups

Error Aggregations

Error aggregations include results from both groups

Test Explorer and History

Test results, including history, contain results from both groups. It is possible to filter the results from a particular group only.

Project Taxonomy (tags, git info, groups)

Project taxonomy includes items from all runs and groups

Single project, multiple runs, single group


Instead of sending the results to the same run, you can choose to create a separate run for each Playwright project. Run the following commands as part of your CI pipeline (assuming the Currents reporter is already configured).

  • Use the same projectId (send results to the same project)

  • Use a different ci-build-id for each group; setting a different value creates a new run:

    • CURRENTS_BUILD_ID=build-001-uk playwright test --project UK (creates a new run with a single group UK)

    • CURRENTS_BUILD_ID=build-001-us playwright test --project US (creates a new run with a single group US)

In this case, instead of a single run that contains two groups, you will create two separate runs with one group each.

Item
Description

Project Settings: Timeout

Each run has its own timeout

Project Settings: Default branch

Same for all runs

Project Settings: Run title source

Same for all runs

Project Settings: Fail fast

Each run has its own fail-fast settings

Integration settings are set on the project level. Each run triggers its own notifications.

Actions are defined on the project level and apply to each run.

Run Results

Each run contains results of a single group

Run Metrics

Aggregated run metrics are based on mixed results from the two types of runs; for example, suite size includes tests from both groups. It is possible to refine the aggregated metrics to only include runs with particular tags (see Playwright Tags).

Test Metrics

Test metrics are based on all runs; for example, if test A runs in both environments, both samples of the test are included in its metrics (duration, flakiness rate, failure rate). It is possible to include only samples from a particular run or group by using tags (see Playwright Tags).

Coverage Reports

Coverage reports are collected on a group level.

Scheduled Reports

Scheduled automated reports are defined on the project level and will contain results from both types of runs; the reports can be refined using Playwright Tags.

Error Aggregations

Error aggregations include results from both types of runs.

Test Explorer and History

Test results, including history, contain results from all runs. It is possible to filter the results from a particular run using Playwright Tags.

Project Taxonomy (tags, git info, groups)

Project taxonomy includes items from all runs

Multiple projects, single run, single group

As an edge case, it is possible to use completely different projects. Keep in mind that each project's data is completely separated; each project has its own set of settings, analytics, results, and integrations.

Use a different projectId for each command (assuming the projects already exist):

  • CURRENTS_PROJECT_ID=1cVv3a playwright test --project UK (creates a new run with a single group UK)

  • CURRENTS_PROJECT_ID=aXcR4sa playwright test --project US (creates a new run with a single group US)

Using Tags

Regardless of the reporting strategy, we recommend annotating your tests and executions with tags. Tags allow granular access to the data.

  • Read more about using Playwright Tags to dynamically add tags to runs, groups and tests

  • Consider using removeTitleTag in @currents/playwright to remove tags from the test title and keep test history consistent
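
To illustrate how title-based tags behave, here is a hypothetical sketch of the convention: tags are plain @-prefixed tokens inside the test title, and stripping them (in the spirit of removeTitleTag) keeps the recorded title stable. This is not the Currents implementation, only an illustration:

```typescript
// Hypothetical sketch; not the Currents implementation.
// Tags are "@word" tokens embedded in a test title.
function extractTags(title: string): { tags: string[]; cleanTitle: string } {
  const tags = title.match(/@[\w-]+/g) ?? [];
  // Stripping tags keeps the title, and therefore test history, stable.
  const cleanTitle = title.replace(/\s*@[\w-]+/g, "").trim();
  return { tags, cleanTitle };
}

console.log(extractTags("checkout flow @smoke @uk").cleanTitle); // → checkout flow
```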

Tags also make the aggregated views more useful: run metrics, test metrics, scheduled reports, and test history can each be refined to include only the runs or groups carrying particular tags. See Playwright Tags.

Figure captions:

  • Reporting entities and their relationship
  • Reporting scenario: single repository with (optionally) multiple groups
  • Reporting scenario: multiple standalone repositories report to different Currents Projects
  • Mixed reporting: CI steps create separate groups in a run
  • Mixed reporting: each CI step creates a new run