Currents Reporting API - Data Format Reference
Currents can accept results from arbitrary testing frameworks. This document provides detailed instructions for creating test results data that is compatible with Currents.
To upload the results, use the currents upload command.
Refer to currents convert to see how results from various popular testing frameworks are converted to the "Currents Format" that conforms to the specification presented below.
Results Directory
To upload the results to Currents, create a "Results Directory" with all the necessary files. The directory is referenced via the --output-dir parameter, for example:
npx currents convert --output-dir path-to-results-directory
Within the Results Directory, the following structure of files and directories is expected.
```
results-dir/
├── instances/
│   ├── <instance-id-1>.json
│   ├── <instance-id-2>.json
│   └── <instance-id-n>.json
├── config.json
└── fullTestSuite.json
```
The output consists of three main components:

- fullTestSuite.json: the Full Test Suite JSON document that lists the tests expected to be reported. It does not contain test results.
- config.json: a Configuration File that contains metadata such as the test framework name, version, and more.
- instances/: a folder containing Instance Files, JSON documents that each represent a spec file (or logical collection) and its associated test results.
Full Test Suite
The Full Test Suite is a JSON-formatted file that contains a list of all tests expected to be reported to the Currents platform for the current build/run. Each element in the fullTestSuite.json array represents a group of tests, organized by the name property, which defines the group name.

The root of the fullTestSuite.json file is an array of elements of type Group.
Group

| Property | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | Yes | Represents the group ID that will be visualized in the dashboard. All the tests are organized by this group ID. |
| tests | SuiteTest[] | Yes | List of included tests, including the test title, spec file, test tags, and testId. The testId in the Full Test Suite file and in the Instance Files must match. |
SuiteTest

| Property | Type | Required | Description |
| --- | --- | --- | --- |
| title | string[] | Yes | Test description plus title |
| spec | string | Yes | The spec file where this test is defined |
| tags | string[] | No | A list of tags associated with the test for categorization or filtering |
| testId | string | Yes | A unique identifier for the test case, created from a hash of the spec file property and the title. See how to generate this property below. |
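For illustration, a minimal fullTestSuite.json that follows the Group and SuiteTest definitions above might look like the sketch below. The values are hypothetical, and the testId shown is a placeholder rather than a hash actually computed from these fields (see Generating testId below).

```json
[
  {
    "name": "tests/utils.spec.ts",
    "tests": [
      {
        "title": ["Utils", "formats a date"],
        "spec": "tests/utils.spec.ts",
        "tags": ["smoke"],
        "testId": "3b1f2ac4d5e6f708"
      }
    ]
  }
]
```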
Configuration File
The config.json file contains the metadata used by Currents to properly display results in the dashboard. Example:
```json
{
  "framework": "postman",
  "frameworkVersion": "1.12.12",
  "frameworkConfig": {
    "format": "junit"
  }
}
```
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| framework | string | Yes | Name of the framework used to execute the tests. The currently accepted values are postman, vitest, and wdio (WebDriverIO). |
| frameworkVersion | string | Yes | Testing framework version used to execute the tests |
| frameworkConfig | object | No | Contains information about the configuration of the framework. Currently, only the property format with the value junit is allowed. |
Instance Files
An Instance File is a JSON document that represents a spec file and the execution results of the tests it contains. An instance file has the following properties:
Root Object

| Property | Type | Required | Description |
| --- | --- | --- | --- |
| groupId | string | Yes | Identifier for the test group. It provides a reference for what the executed tests are about. |
| spec | string | Yes | The name of the spec file or logical collection that contains the executed tests. The spec property must be unique across all instance files. Example: tests/utils.spec.ts |
| startTime | string | Yes | The timestamp indicating when the execution of the spec file started, in ISO 8601 format |
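For orientation, the top level of an instance file might begin like the sketch below (hypothetical values). The nested test results, described by the TestResult, StatsObject, Test, and Attempt objects in the following sections, are omitted here.

```json
{
  "groupId": "tests/utils.spec.ts",
  "spec": "tests/utils.spec.ts",
  "startTime": "2024-01-01T12:00:00.000Z"
}
```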
TestResult
StatsObject

| Property | Type | Required | Description |
| --- | --- | --- | --- |
| suites | number | Yes | Number of logical groupings or collections of test results |
| tests | number | Yes | The total number of tests executed in the current instance |
| passes | number | Yes | The number of tests that passed in the current instance |
| pending | number | Yes | The number of tests that are pending execution and can be reported later |
| skipped | number | Yes | The number of tests that were not executed on purpose in the current instance |
| failures | number | Yes | The number of tests that failed in the current instance |
| flaky | number | Yes | The number of tests marked as flaky by the testing framework in the current instance |
| wallClockStartedAt | string | Yes | Time when the first test started its first attempt, in ISO 8601 format |
| wallClockEndedAt | string | Yes | Time when the last test finished its last attempt, in ISO 8601 format |
| wallClockDuration | number | Yes | Total duration of the spec file test execution, in milliseconds |
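For example, a StatsObject for an instance that ran three tests, one of which failed, might look like this (hypothetical values; the duration matches the start and end timestamps):

```json
{
  "suites": 1,
  "tests": 3,
  "passes": 2,
  "pending": 0,
  "skipped": 0,
  "failures": 1,
  "flaky": 0,
  "wallClockStartedAt": "2024-01-01T12:00:00.000Z",
  "wallClockEndedAt": "2024-01-01T12:00:05.250Z",
  "wallClockDuration": 5250
}
```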
Test

Each object in the tests array represents the execution result of a test, possibly with multiple attempts.

| Property | Type | Required | Description |
| --- | --- | --- | --- |
| _t | number | Yes | The timestamp indicating when the execution of the spec file started, in milliseconds |
| title | string[] | Yes | Array containing the specification and title of the test. Example: ["75119228-2d2d-4e59-b426-60a002b8cdce / Get Run", "Response status code is 200"] |
| state | string | Yes | Final state of the test |
| isFlaky | boolean | Yes | Indicates whether the test is flaky |
| expectedStatus | string | Yes | The expected status of the test |
| timeout | number | Yes | Time in milliseconds that the test execution lasted without a clear state result |
| retries | number | Yes | Number of retries attempted for the test |
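A single entry in the tests array might then look like the sketch below (hypothetical values, limited to the fields documented above; the Location and Attempt data described next are omitted):

```json
{
  "_t": 1704110400000,
  "title": ["75119228-2d2d-4e59-b426-60a002b8cdce / Get Run", "Response status code is 200"],
  "state": "passed",
  "isFlaky": false,
  "expectedStatus": "passed",
  "timeout": 0,
  "retries": 0
}
```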
Location

| Property | Type | Required | Description |
| --- | --- | --- | --- |
| column | number | Yes | Column number of the test in the source file |
| file | string | Yes | The file path where the test is defined |
| line | number | Yes | Line number of the test in the source file |
Attempt

Object that describes an individual attempt of a test.

| Property | Type | Required | Description |
| --- | --- | --- | --- |
| _s | string | Yes | Intermediate test attempt status (passed, failed) |
| attempt | number | Yes | Index of the attempt. Defines the order of attempt execution |
| startTime | string | Yes | Timestamp when the attempt started, in ISO 8601 format |
| duration | number | Yes | Duration of the attempt in milliseconds |
| status | string | Yes | Final test attempt status: passed, failed, timedOut, skipped, or interrupted |
| stdout | string[] | Yes | Standard output logs for the attempt |
| stderr | string[] | Yes | Standard error logs for the attempt |
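As a sketch, a single failed attempt could be reported roughly like this (hypothetical values, limited to the fields documented above):

```json
{
  "_s": "failed",
  "attempt": 0,
  "startTime": "2024-01-01T12:00:01.000Z",
  "duration": 631,
  "status": "failed",
  "stdout": ["GET https://api.example.com/run [200 OK, 631ms]"],
  "stderr": []
}
```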
Step

| Property | Type | Required | Description |
| --- | --- | --- | --- |
| title | string | Yes | The title or description of the step. Example: "Validate API response schema" |
| category | string | No | The category of the step, indicating its classification. Example: "API Test" |
| duration | number | Yes | The duration of the step in milliseconds. Example: 200 |
| startTime | string | Yes | Step start date and time, in ISO 8601 format |
Error

| Property | Type | Required | Description |
| --- | --- | --- | --- |
| message | string | Yes | A description of the error encountered during the test attempt. Example: "expected 631 to be below 200" |
| stack | string | No | The stack trace related to the error. Example: "AssertionError: expected 631 to be below 200\n at Object.eval (sandbox-script.js:2:1)" |
| value | string | No | The type or categorization of the error. Example: "AssertionFailure" |
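As a concrete sketch, an error object built from the example values above would be serialized in JSON as follows (note the escaped newline inside the stack string):

```json
{
  "message": "expected 631 to be below 200",
  "stack": "AssertionError: expected 631 to be below 200\n at Object.eval (sandbox-script.js:2:1)",
  "value": "AssertionFailure"
}
```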
Generating testId
The testId is a hash composed of the test title and the spec file name. Use this function to generate it.
```typescript
import crypto from 'node:crypto';

export function generateTestId(testTitle: string, specFileName: string): string {
  // Concatenate the title and spec file name, hash with SHA-256,
  // and keep the first 16 hex characters of the digest.
  const combinedString = `${testTitle}${specFileName}`;
  const fullHash = crypto
    .createHash('sha256')
    .update(combinedString)
    .digest('hex');
  return fullHash.substring(0, 16);
}
```