
Testing Backend Copy Changes

connnnnnnorl posted in #help-playwright

Hello all 👋. I'm resurfacing this question since I didn't get any replies the first time around, and I'm really curious to hear people's thoughts.

For as long as I can remember at my current workplace we've struggled with making test assertions against copy changes that are dictated by a Backend Team.

Since the Frontend and Backend teams are disjointed, it's very easy for the Backend team to make copy updates without notifying Frontend. This leads to rollbacks at work, which are pretty noisy and distract us from getting actual work done.

The reason I'm asking is that we've talked about moving towards making assertions against just test-ids instead of copy, but there are good arguments for both sides.

For example,

Arguments for making assertions against Backend copy: this is what the user actually sees, so asserting against copy gives us a lot more confidence. There are tricky copy changes, like error states, where a test-id assertion alone would not give us valuable confidence.

Arguments for not making assertions against Backend copy: here we assert on test-ids, which are more stable than backend copy, so Backend can make copy changes without hurting the Frontend team. Backend already tests the copy changes themselves, and there's a PR review process for confirming the Backend changes are correct. So asserting against the copy on the Frontend means we're just duplicating these tests across the company.
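For illustration, the two assertion styles might look like this in Playwright. The URL, test-id, and copy here are hypothetical stand-ins, not from the original post:

```typescript
import { test, expect } from '@playwright/test';

test('copy-based assertion (hypothetical error banner)', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // hypothetical URL
  // Asserts the exact backend-provided copy: breaks whenever Backend rewords it
  await expect(page.getByRole('alert')).toHaveText('Your card was declined.');
});

test('test-id-based assertion', async ({ page }) => {
  await page.goto('https://example.com/checkout');
  // Asserts only that the error state is rendered; survives copy changes
  await expect(page.getByTestId('payment-error')).toBeVisible();
});
```

A middle ground is to assert on the test-id for presence and on a loose pattern (e.g. a regex) for the copy, so only structural rewording breaks the test.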

So, I'm pretty curious what others think about this and what solutions y'all have come up with at your workplaces to mitigate these types of issues.

Thanks in advance for your time 🙏

This thread is trying to answer the question: "What are the best practices for making test assertions against copy changes that are dictated by a Backend Team, especially when Frontend and Backend teams are disjointed?"

2 replies

It all depends on what you're testing, exactly. You could mock the data if testing the actual data is not important to you. Or you could use test ids, but then you're missing out on naturally testing for accessibility: if someone changes the button to a link, your test with a test id will pass, but the user might have problems. That's why it's important to think about what and why you are testing in each part of your app. You could do a bit of mocking and test ids. There is no right or wrong way; it's more about what works best for the actual part of the app you are testing.
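The mocking approach mentioned above can be sketched with Playwright's network interception. The endpoint, status code, and response body below are hypothetical assumptions:

```typescript
import { test, expect } from '@playwright/test';

test('error copy with a mocked backend response', async ({ page }) => {
  // Intercept the (hypothetical) API call and return controlled copy,
  // so the assertion no longer depends on the live backend's wording
  await page.route('**/api/payment', (route) =>
    route.fulfill({
      status: 402,
      contentType: 'application/json',
      body: JSON.stringify({ error: 'Your card was declined.' }),
    }),
  );
  await page.goto('https://example.com/checkout');
  await expect(page.getByRole('alert')).toHaveText('Your card was declined.');
});
```

Because the test controls the response, a Backend copy change only fails the test when you deliberately update the mock, which makes the failure a conscious decision rather than a surprise.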


Mocking with HAR files is also a good option here. Then you can update the HAR when the backend changes and fix any tests, should you need to.
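A minimal sketch of the HAR approach using Playwright's `page.routeFromHAR`; the HAR path and URL pattern are assumptions for illustration:

```typescript
import { test, expect } from '@playwright/test';

test('replay recorded backend responses from a HAR file', async ({ page }) => {
  // Replay API responses recorded in the HAR file. Re-run with
  // `update: true` to re-record against the live backend after a copy change.
  await page.routeFromHAR('./hars/checkout.har', {
    url: '**/api/**',
    update: false,
  });
  await page.goto('https://example.com/checkout');
  await expect(page.getByRole('alert')).toBeVisible();
});
```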


Best Practices for Testing Copy Changes with @playwright/test

When testing copy changes made by a backend team, it's crucial to follow some best practices to ensure your tests are reliable and efficient.

Use Web First Assertions

Web-first assertions are your best friend. Because they are awaited, they automatically retry until the expected condition is met or the timeout elapses. For example, if you're testing an alert message, you can use toBeVisible() to ensure the alert message is visible.

Avoid manual assertions that don't await the expect; they check the state only once and can flake. Instead, use web-first assertions like toBeVisible().
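The contrast might look like this (the page and locator are hypothetical):

```typescript
import { test, expect } from '@playwright/test';

test('web-first assertion retries until it passes', async ({ page }) => {
  await page.goto('https://example.com/checkout');

  // Web-first: the awaited expect retries until the alert is visible
  // or the assertion timeout elapses
  await expect(page.getByRole('alert')).toBeVisible();

  // Avoid: this resolves immediately and does not retry, so it can
  // flake if the alert renders a moment later:
  // expect(await page.getByRole('alert').isVisible()).toBe(true);
});
```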

Debugging and CI/CD

For local debugging, use VS Code with the Playwright extension: you can pause at specific points in your code and inspect variables and state. For running tests on CI, set up a CI/CD pipeline and run your tests frequently. Playwright ships with a GitHub Actions workflow that runs tests on CI without any additional setup required.

Linting and Parallelism

Lint your tests to catch errors early. Tools like ESLint can help you ensure there are no missing await statements before asynchronous calls to the Playwright API.
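One way to catch missing awaits is the `missing-playwright-await` rule from `eslint-plugin-playwright`. A minimal flat-config sketch, assuming that plugin is installed (file name and glob are assumptions):

```javascript
// eslint.config.mjs — a minimal sketch, assuming eslint-plugin-playwright
import playwright from 'eslint-plugin-playwright';

export default [
  {
    files: ['tests/**/*.ts'],
    plugins: { playwright },
    rules: {
      // Flags Playwright expect() calls that are missing an await
      'playwright/missing-playwright-await': 'error',
    },
  },
];
```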

Playwright provides built-in support for running tests in parallel. You can configure Playwright to run independent tests in parallel using the test.describe.configure({ mode: 'parallel' }) syntax.

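In context, a file-level parallel configuration might look like this (the URLs and titles are hypothetical):

```typescript
import { test, expect } from '@playwright/test';

// Run the independent tests in this file in parallel workers
test.describe.configure({ mode: 'parallel' });

test('home page loads', async ({ page }) => {
  await page.goto('https://example.com');
  await expect(page).toHaveTitle(/Example/);
});

test('about page loads', async ({ page }) => {
  await page.goto('https://example.com/about');
  await expect(page).toHaveTitle(/About/);
});
```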

Soft Assertions

Consider using soft assertions for improved productivity. They collect failed assertions and report them all once the test ends, allowing you to make multiple checks without stopping the test at the first failure.
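A short sketch of soft assertions with `expect.soft` (the test-ids are hypothetical):

```typescript
import { test, expect } from '@playwright/test';

test('soft assertions collect every failure', async ({ page }) => {
  await page.goto('https://example.com/checkout');

  // Soft assertions record failures but let the test keep running,
  // so one report shows every piece of copy or layout that broke
  await expect.soft(page.getByTestId('subtotal')).toBeVisible();
  await expect.soft(page.getByTestId('tax')).toBeVisible();

  // Optionally bail out before steps that depend on the checks above
  expect(test.info().errors).toHaveLength(0);
});
```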

Remember, these best practices are specific to working with Playwright. Happy testing!



Rayrun is a community for QA engineers. I am constantly looking for new ways to add value to people learning Playwright and other browser automation frameworks. If you have feedback, email luc@ray.run.