
Classify errors

Hi everybody!

I'm thinking about the best way to reuse a test to perform a smoke test that covers:

  • Functional
  • Accessibility
  • """Performance""" (with many quotes)

The scope of the smoke test would be simply to log in, navigate to each menu option, and verify that:

  1. All components load correctly: I would check this via toHaveScreenshot.
  2. No accessibility issues have been introduced: I would check this via AxeBuilder.analyze().
  3. Response times are still optimal: I would verify this through testStep.duration, asserting that the value is less than 3000 milliseconds.

By the quotes I meant that I'm not trying to do any kind of sophisticated performance testing; I simply want a quick signal that performance has not abruptly degraded. I am closely following the k6 project with the idea of implementing it in the future.

What worries me is that if any of the three assertions fails, the test is marked as "Failed". That makes sense, since an assertion did not pass, but I don't consider a failure at point 1 to be as important as one at point 3.

I thought of some solutions, but I don't know if they are currently possible:

  A. Add a tag after an assertion fails, so failures could be identified as bug-functional, bug-performance, or bug-accessibility.
  B. Create a new test-result state in addition to the existing ones (Passed / Failed / Flaky / Skipped): a "Warning" state.

Is it possible to do something like what I have in mind? Or what would be the best way to work in this scenario?

Thanks in advance!

This thread is trying to answer the question "Is it possible to add a tag after an assertion fails, or to create a new 'Warning' state as a test result? What would be the best way to work in this scenario?"


Answer

Handling Failed Assertions in Playwright

Playwright doesn't have a built-in "Warning" result, but you can approximate one by combining soft assertions with custom error messages.

Soft Assertions

Soft assertions don't stop the test execution when they fail. Instead, they compile a list of failed assertions to display at the end of the test. Here's how you can use them:

// Inside a test body, with `expect` imported from '@playwright/test':
await expect.soft(page.getByTestId('status')).toHaveText('Success');
await expect.soft(page.getByTestId('eta')).toHaveText('1 day');
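
Applied to the scenario from the question, a minimal sketch could look like the one below. It assumes AxeBuilder from @axe-core/playwright and a hypothetical '/dashboard' route, and it measures load time with Date.now(), since testStep.duration is only exposed to reporters, not inside the running test; the 3000 ms budget comes from the question.

import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('smoke: dashboard', async ({ page }) => {
  // "Performance": time the navigation ourselves (testStep.duration is
  // only available to reporters, not inside the test).
  const start = Date.now();
  await page.goto('/dashboard'); // hypothetical route
  const elapsed = Date.now() - start;

  // 1. Functional: all components render correctly.
  await expect.soft(page, 'bug-functional').toHaveScreenshot('dashboard.png');

  // 2. Accessibility: no axe violations have been introduced.
  const results = await new AxeBuilder({ page }).analyze();
  expect.soft(results.violations, 'bug-accessibility').toEqual([]);

  // 3. "Performance": the page loaded within the 3000 ms budget.
  expect.soft(elapsed, 'bug-performance').toBeLessThan(3000);
});

All three checks run even if an earlier one fails, so a single smoke test reports every degraded category at once.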

Custom Error Messages

You can provide additional context to your failed assertions by adding custom error messages. Here's an example:

await expect(page.getByText('Name'), 'should be logged in').toBeVisible();

If the assertion fails, the error message will include "should be logged in" along with other details about the failure.
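
Combining the two ideas gives you the tagging from option A: prefix each soft assertion's message with a category. Here is a tiny sketch of that convention; the 'bug-' prefixes and the tagged helper are our own invention, not a Playwright feature, and elapsed comes from the sketch above.

// Hypothetical helper encoding the question's tag convention.
const tagged = (tag: 'bug-functional' | 'bug-accessibility' | 'bug-performance', detail: string) =>
  `${tag}: ${detail}`;

expect.soft(elapsed, tagged('bug-performance', 'dashboard load time')).toBeLessThan(3000);

When the assertion fails, the tag appears at the top of the error message, so it can be grepped in logs or parsed later.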

Checking for Soft Assertion Failures

You can check if there were any soft assertion failures during the test execution by accessing test.info().errors. If there were failures, you can choose to skip further tests or perform additional actions based on your needs. Here's how:

if (test.info().errors.length > 0) {
  // At least one soft assertion has failed; classify the errors here
  // (e.g. by inspecting their messages) before the test ends as "Failed".
} else {
  // No soft-assertion failures; continue with further checks.
}
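
Building on the message-prefix convention above, one way to approximate option B's "Warning" state is to record non-critical failures as annotations, which Playwright shows in the HTML report. This is a sketch assuming the 'bug-' tags from earlier; the test still finishes as "Failed", and truly remapping the status would require a custom reporter.

import { test } from '@playwright/test';

test.afterEach(async () => {
  const info = test.info();
  for (const error of info.errors) {
    const message = error.message ?? '';
    // Our convention: functional and accessibility failures are downgraded
    // to visible "warning" annotations; performance failures stay critical.
    if (message.includes('bug-functional') || message.includes('bug-accessibility')) {
      info.annotations.push({ type: 'warning', description: message.split('\n')[0] });
    }
  }
});

The annotation appears next to the test in the report, giving you a visible classification even though the result itself remains "Failed".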

By using these techniques, you can effectively handle failed assertions and take appropriate actions based on your requirements.
