The panel contains a header with general information about the test and three tabs with its details:
The full unique name of the test.
The specific format of the full name depends on the Allure adapter. In general, it includes the name of the module and the function containing the test implementation. Click the “Copy” icon next to the full name to copy it to the clipboard.
The status of the test. See Test statuses for what each value means.
Various marks describing the test's behavior in this launch. A given test may have no marks at all.
The possible marks are:

- New failed — the test finished because of a product defect (the Failed status), but it had any other status in the previous launch.
- New passed — the test finished without any issues (the Passed status), but it had any other status in the previous launch.
- New broken — the test finished because of a test defect (the Broken status), but it had any other status in the previous launch.
- Flaky — the test is considered failed or broken based on the latest retry, but a previous attempt in this launch finished successfully.
- Retried — the test was run multiple times in this launch and finished with different results.
Allure can mark a test as New failed, New passed or New broken only if the information from previous reports is available when the current report is generated. See Tests history for more details.
Allure can automatically mark a test as Flaky or Retried only if the launch included two or more consecutive runs of this test. See Retries for more details.
Some Allure adapters provide a way to mark a test as Flaky manually. See your Allure adapter's reference for more details.
The title of the test, as specified via an Allure-specific function or annotation or via a framework-native way. See Title.
If the test finished with an error message and the Allure adapter was able to parse it from the standard error stream, the message will be shown at the top of the Overview tab.
Click on the message to see the full available traceback related to the message.
The Overview tab displays most of the metadata fields assigned to the test via functions and annotations.
The test duration is the time elapsed between the start and the end of the test, shown in seconds and milliseconds.
Note that while this information is useful, it can be affected by many factors, including factors outside the test framework's control. Because of this, it is usually better to interpret the duration not as an absolute value, but in comparison with the durations of other tests.
The section is structured as an expandable list of steps and sub-steps. The duration of each step's execution is shown on the right side.
Click on a step to see the list of files that were attached during its execution. For each file, the following actions are available:
- Show the contents below the list item.
- Show the contents in a separate browser tab.
- Show the contents in a popup dialog.
The History tab lists up to 20 results of the test from previous reports and displays the status that each one finished with.
The tab also displays the test's success rate, calculated as the percentage of runs that finished with the Passed status.
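The success rate calculation can be sketched as follows (a plain-Python illustration of the formula, not Allure's actual code):

```python
def success_rate(statuses):
    """Percentage of recorded runs that finished with the Passed status."""
    if not statuses:
        return 0.0
    passed = sum(1 for status in statuses if status == "passed")
    return 100.0 * passed / len(statuses)

# Example: 3 passed results out of 4 recorded runs.
history = ["passed", "failed", "passed", "passed"]
print(success_rate(history))  # 75.0
```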
Note that history is available only if the information from previous reports is available when the current report is generated. See Tests history for more details.
If there were multiple runs of the test, they will be listed on the Retries tab.
Retries can be caused by either manual or automatic re-runs. For example, a test framework plugin may be configured to run each test up to three times or until it passes. See Retries for more details.
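As an example of such a setup, the pytest-rerunfailures plugin (one possible choice among many) can re-run failing tests automatically; combined with allure-pytest, each attempt is then recorded in the results:

```shell
# Re-run each failing test up to 3 times, with a 1-second delay between
# attempts, writing Allure results to the allure-results directory.
pytest --alluredir=allure-results --reruns 3 --reruns-delay 1
```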
For each run, the tab displays its date, time, status, and the error message, if any. Click on the retry information to see more details, including the duration of that retry and the Execution section.