Error Explorer
Error Explorer helps you discover actionable insights about the errors impacting your CI executions, so you can spot patterns, uncover root causes, and prioritize fixes more efficiently.
Currents goes beyond just collecting error messages by analyzing each failure and extracting fields such as:
Target (e.g. CSS selector, URL)
Action (e.g. click, toBeVisible)
Category (e.g. Assertion, Timeout)
By combining these fields, you can uncover precise failure patterns across your CI runs — linking what happened (Action), where it happened (Target), and why it failed (Category).
Currents evaluates the impact of test failures across your CI pipeline by measuring:
number of failure occurrences
number of affected tests
number of affected branches

This allows you to distinguish between flaky UI selectors, network instability, and true logic errors; identify components or endpoints that frequently break; and visualize how different error types evolve over time.
Error Classification Fields
When a test fails, the raw error message and stack trace are parsed by Currents’ Error Classification Engine. It enriches every captured test error with structured fields that turn unstructured log text into searchable, comparable data.
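For example, a raw failure might be enriched into a record shaped roughly like this (an illustrative shape, not Currents' exact schema):

```typescript
// Illustrative only: the actual field names and schema used by Currents may differ.
interface ClassifiedError {
  message: string;  // raw error message from the test run
  category: string; // why it failed, e.g. "Async Assertion"
  action: string;   // what the test was doing, e.g. "toBeVisible"
  target: string;   // where it happened, e.g. a selector or URL
}

const example: ClassifiedError = {
  message: "Timed out 15000ms waiting for expect(locator).toBeVisible()",
  category: "Async Assertion",
  action: "toBeVisible",
  target: "getByTestId('order-confirmation-v2-header')",
};
```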

How to use Error Classification Fields?
When improving test suite stability, focus on finding the most unstable components across multiple tests and runs. One way to do that is to look at the most frequent failures. For example, consider this message:
Error: expect(locator).toBeVisible() failed
This message is too generic — several different CSS selectors could trigger it. That’s why additional context is needed to reason about the error, not just the message itself. By combining Target + Message, we can uncover which CSS selectors are generating this error.
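For instance, both of the assertions below can fail with that exact generic message; only the Target distinguishes them (the page and selectors are hypothetical):

```typescript
import { test, expect } from '@playwright/test';

test('same message, different targets', async ({ page }) => {
  await page.goto('https://example.com');
  // Both lines can fail with "expect(locator).toBeVisible() failed",
  // but each one produces a different Target field:
  await expect(page.getByTestId('checkout-button')).toBeVisible();
  await expect(page.locator('.order-summary')).toBeVisible();
});
```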

Think of it as an SQL GROUP BY statement: by changing the fields and their order, you gain a different perspective on your test suite's top failures. Let's take a look at the available fields.
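A minimal sketch of that idea in plain TypeScript (not a Currents API): changing the grouping key changes the story the data tells.

```typescript
// A minimal sketch, not a Currents API: group error records by a chosen key.
type ErrorRecord = { message: string; action: string; target: string };

function groupBy(records: ErrorRecord[], key: (r: ErrorRecord) => string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const r of records) {
    const k = key(r);
    counts.set(k, (counts.get(k) ?? 0) + 1);
  }
  return counts;
}

// "GROUP BY" message only: one big, generic bucket.
//   groupBy(records, (r) => r.message);
// "GROUP BY" message + target: shows which selectors drive that bucket.
//   groupBy(records, (r) => `${r.message} | ${r.target}`);
```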
Category
A high-level classification bucket that represents the type or nature of the failure. Categories are mutually exclusive and describe why the test failed, not the specific details.
Assertion
A condition asserted in the test did not match the expected result.
Example: expect(received).toBe(expected)
Indicates: Test logic failed — wrong value or state.
Async Assertion
A retried assertion timed out before the expected condition was met.
Example: Timed out 15000ms waiting for expect(locator).toBeVisible()
Indicates: Dynamic UI did not reach the expected state.
Action
A Playwright operation (click, navigation, wait) failed to complete.
Example: locator.waitFor: Timeout 30000ms exceeded
Indicates: Page element not ready or interaction blocked.
Timeout
The overall test, hook, or step exceeded its maximum allowed duration.
Example: Test timeout of 30000ms exceeded
Indicates: Infrastructure or setup delay; test never finished.
Infra / Misc
Unrelated to test logic — caused by environment or runtime errors.
Example: Error: browserType.launch: Executable doesn't exist at ...
Indicates: Environment misconfiguration or dependency issue.
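To make the categories concrete, here is a hypothetical Playwright test in which each line can fail in a different category (Infra / Misc errors, such as a missing browser executable, originate outside the test code entirely):

```typescript
import { test, expect } from '@playwright/test';

test('failure categories in practice', async ({ page }) => {
  test.setTimeout(30_000);                 // exceeding this budget  => Timeout
  await page.goto('https://example.com');  // a failed Playwright op => Action
  expect(1 + 1).toBe(2);                   // immediate check        => Assertion
  await expect(page.getByRole('heading'))  // retried check          => Async Assertion
    .toBeVisible({ timeout: 15_000 });
});
```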
Usage examples:
Group by Category → Compare the distribution of Assertion vs Action vs Timeout errors.
Filter by Category = Action → See all failed interactions with UI elements or APIs.
Action
The specific operation or command being executed at the time of failure. It describes what the test was trying to do when the error occurred. Actions are derived from Playwright’s APIs and assertion functions.
toBeVisible
Example: Timed out waiting for expect(locator).toBeVisible()
Meaning: Element did not become visible within timeout.
toEqual
Example: expect(received).toEqual(expected)
Meaning: Expected and received values differ.
click
Example: locator.click: Target closed
Meaning: Element disappeared before click executed.
waitForURL
Example: page.waitForURL: Timeout exceeded
Meaning: Navigation did not reach the expected URL.
waitForResponse
Example: page.waitForResponse: Timeout 60000ms exceeded
Meaning: Network request never returned a response.
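Each action corresponds to a Playwright call or matcher in test code. A hypothetical fragment (selectors and URLs invented for illustration):

```typescript
import { test, expect } from '@playwright/test';

test('actions in test code', async ({ page }) => {
  await expect(page.getByTestId('status')).toBeVisible();     // Action: toBeVisible
  expect({ ok: true }).toEqual({ ok: true });                 // Action: toEqual
  await page.getByRole('button', { name: 'Submit' }).click(); // Action: click
  await page.waitForURL('**/dashboard');                      // Action: waitForURL
  await page.waitForResponse('**/api/config');                // Action: waitForResponse
});
```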
Usage examples:
Group by Action → Identify which operations most often fail (e.g., toBeVisible or click).
Search for Action = waitForResponse → Filter network-related issues.
Target
The object of the action or assertion — typically a UI element, locator, or network endpoint affected by the failure. Targets give context to where the failure happened.
Locator / Selector
Example: getByTestId('order-confirmation-v2-header')
Description: Element in the DOM being tested or interacted with.
CSS Selector
Example: locator('[data-q^="activity-log-item"]')
Description: Selector path identifying a UI element.
Text / Role Target
Example: getByText('Thank you for your order')
Description: Element targeted by text content.
Network Endpoint
Example: https://api.labs.livechatinc.com/v3.5/configuration/action/list_bots
Description: Remote resource or API request tied to a failure.
Navigation URL
Example: /#/dashboard
Description: Page or route expected during a navigation action.
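In test code, each target type corresponds to a different way of addressing an element or resource (all selectors below are hypothetical):

```typescript
import { test } from '@playwright/test';

test('target types', async ({ page }) => {
  // Creating locators is lazy, so these lines only illustrate the target forms:
  page.getByTestId('order-confirmation-v2-header'); // Locator / test-id
  page.locator('[data-q^="activity-log-item"]');    // CSS selector
  page.getByText('Thank you for your order');       // Text / role target
  // Network endpoint and navigation URL targets come from wait calls, e.g.:
  //   page.waitForResponse('**/list_bots')
  //   page.waitForURL('**/#/dashboard')
});
```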
Usage examples:
Group by Target → Identify which selectors or endpoints produce the most errors.
Group by Action + Target → Correlate failing operations with their affected element or endpoint.
Search by Target = getByTestId('checkoutButton') → Isolate all failures tied to a single component.
How It Works
Currents has been trained on hundreds of thousands of CI errors to identify patterns and assign structured fields (tokens) to each one.
Extract raw data - Currents captures the error’s message, stack, and location from the test run.
Apply pattern recognition - the message is scanned for known Playwright formats, such as expect(locator).toBeVisible() or page.waitForURL(). Each match maps to a classification pattern (e.g., Assertion, Timeout, Action).
Aggregate - metrics and charts use these tokens to calculate frequency, impact, and correlations across runs, tests, and branches.
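A greatly simplified sketch of the pattern-recognition step, using illustrative regular expressions rather than Currents' actual trained engine:

```typescript
// Illustrative only: Currents' real engine is trained on far more patterns.
type Tokens = { category: string; action?: string };

const patterns: Array<{ re: RegExp; toTokens: (m: RegExpMatchArray) => Tokens }> = [
  // e.g. "Timed out 15000ms waiting for expect(locator).toBeVisible()"
  { re: /waiting for expect\(.*\)\.(\w+)\(\)/,
    toTokens: (m) => ({ category: 'Async Assertion', action: m[1] }) },
  // e.g. "page.waitForURL: Timeout 30000ms exceeded"
  { re: /page\.(\w+): Timeout \d+ms exceeded/,
    toTokens: (m) => ({ category: 'Action', action: m[1] }) },
  // e.g. "Test timeout of 30000ms exceeded"
  { re: /Test timeout of \d+ms exceeded/,
    toTokens: () => ({ category: 'Timeout' }) },
];

function classify(message: string): Tokens {
  for (const { re, toTokens } of patterns) {
    const m = message.match(re);
    if (m) return toTokens(m);
  }
  return { category: 'Infra / Misc' }; // fallback bucket
}

classify('page.waitForURL: Timeout 30000ms exceeded');
// => { category: 'Action', action: 'waitForURL' }
```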
Combining Error Fields
By combining the Error Message, Category, Action, and Target fields, you can perform detailed impact analysis to understand not just how often errors occur, but also which parts of your product or test suite contribute most to instability.
For example:
Grouping by Action + Target can expose a single flaky selector causing hundreds of failures across multiple branches.
Grouping by Category + Branch highlights whether timeouts or assertions dominate your failures in production versus feature branches.
Filtering by Target (such as a specific API endpoint or DOM element) helps quantify how many unique tests, branches, or runs are affected by that component — a direct measure of its impact on CI reliability.
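A rough sketch of such an impact analysis in plain TypeScript (again, not a Currents API): occurrences, unique tests, and unique branches per Action + Target pair, counted with Sets.

```typescript
// Illustrative impact analysis per Action + Target pair.
type Failure = { action: string; target: string; testId: string; branch: string };
type Impact = { occurrences: number; tests: Set<string>; branches: Set<string> };

function impactByActionTarget(failures: Failure[]): Map<string, Impact> {
  const groups = new Map<string, Impact>();
  for (const f of failures) {
    const key = `${f.action} | ${f.target}`;
    const g: Impact = groups.get(key) ??
      { occurrences: 0, tests: new Set<string>(), branches: new Set<string>() };
    g.occurrences += 1;
    g.tests.add(f.testId);
    g.branches.add(f.branch);
    groups.set(key, g);
  }
  return groups;
}

// A pair with many occurrences spread across many branches is a prime
// candidate for a flaky selector worth fixing first.
```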
Filtering by an Error Field
Use the search popup to filter items based on the selected fields' values. For example:
Target: "toast" → shows all items whose Target includes the "toast" value.
Action: "waitForURL" → isolates navigation issues.
Error Explorer Timeline
The Error Explorer displays a timeline chart showing the daily distribution of error messages over the selected period. You can switch the metric and adjust how many top errors to display. Top errors are ranked by their total value for the selected metric across the period.

Error Explorer Metrics
Currents shows the following metrics to help estimate the impact of an error item.
Occurrences
Shows how often an error has caused a failure or flaky behavior during the selected period, based on the active filters. This metric counts all occurrences — including repeated ones from the same test.
For example, if the error message TimeoutError: Navigation timeout of 30000 ms exceeded occurred 5 times in test A and 10 more times across other tests, the total count will be 15.
Affected Tests
Shows how many unique tests were impacted by this error during the selected period. Each test is counted once, even if the error occurred multiple times in it.
For example, if the same error appears 5 times in one test and 3 times in another, the Affected Tests count will be 2.
Affected Branches
Shows how many unique branches encountered this error during the selected period. Each branch is counted once, even if the error occurred multiple times on it.
For example, if the error shows up 10 times on main and 3 times on feature/login, the Affected Branches count will be 2.
Individual Error Details
Clicking an error item reveals more details about that specific error.

Affected Tests – A list of tests impacted by the error, sorted by occurrence. These are tests that failed or flaked due to the error. Click a test title to view its details in the Test Explorer.
Recent Executions – A chronologically sorted list of the most recent test runs affected by this error. Clicking a test title reveals its details in the Test Explorer; clicking the commit message opens the specific execution details.
Affected Branches – A list of branches where this error occurred, sorted by occurrence count.
Customization
Use filters to fine-tune the data used to calculate the metrics for the Error View:
Date Range - include items recorded within the specified period
Tag - include items with the matching Playwright Tags
Author - include items with the matching Git Author (see Commit Information)
Branch - include items with the matching Git Branch (see Commit Information)
Group - include items recorded for a particular group (e.g. Firefox or Chromium)
Search by error message - narrow down the results by Error Message
Additionally, use the Timeline Chart to focus on the time period of interest:
