This package contains end-to-end tests for Perses written using Playwright and visual tests using Happo.
## Folder structure

- `src`
  - `config` - Playwright configurations live here.
    - `base.playwright.config.ts` - Base configuration with common settings.
    - `ci.playwright.config.ts` - Configuration used when running in continuous integration.
    - `local.playwright.config.ts` - Configuration used when running in local development.
  - `fixtures` - Playwright test fixtures live here. These are useful for managing common setup and teardown patterns across many tests. See `pages` for managing common page interactions and selectors.
  - `pages` - Page object models live here. These are classes that wrap selectors, page interactions, and other common patterns associated with a page. This helps reduce code duplication and improve test maintenance. In addition to pages, this directory also includes classes for large, complex page elements (e.g. the panel editor) that benefit from their own wrappers.
  - `tests` - Playwright tests live here and are named following the pattern `testName.spec.ts`.
- `.happo.js` - Happo configuration.
## Running tests locally

Tests are run during local development using the configuration in `local.playwright.config.ts`. The tests depend on the local development servers (backend and UI) to test against. By default, locally run tests will not take screenshots for visual testing.

- Start the backend server from the project root: `./scripts/api_backend_dev.sh`
- Change to the `ui` directory.
- Start the UI server: `npm start`. This is important to ensure libraries are built because the e2e tests import from other Perses packages.
- Run the end-to-end tests from the command line: `npm run e2e`.
- (Optional) Run the end-to-end tests in debug mode to walk through a test step by step and debug issues: `npm run e2e:debug`.
- (Optional) Install the Playwright VS Code extension. This extension has a lot of helpful tools for running tests, debugging, and creating selectors. Select `local.playwright.config.ts` as the profile to use when running locally.
## Generating Happo screenshots locally

This option is limited to maintainers because it requires access to secrets in the Happo account. Occasionally, you may want to generate screenshots locally to debug an issue with visual tests.

- Start the backend server from the project root: `./scripts/api_backend_dev.sh`
- Change to the `ui` directory.
- Start the UI server: `npm start`
- Generate a Happo API token for local testing and give it a name that communicates that use case (e.g. `YOURNAME-testing-locally`).
- Run the tests in "ci" mode: `HAPPO_API_KEY=*** HAPPO_API_SECRET=*** npm run e2e:ci`, using the API values from Happo for the environment variables.
- Follow the link Happo prints in the command line at the end of your test run to see the results.
## Continuous integration

Tests are automatically run in CI using the workflow configured in `e2e.yml` with the configuration in `ci.playwright.config.ts`. In this case, Playwright automatically starts up and waits for the development servers. You can test the CI configuration locally by running `npm run e2e:ci`.
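The automatic server startup in CI comes from Playwright's built-in `webServer` option. As a rough sketch of how such a configuration can be written (this is not the actual contents of `ci.playwright.config.ts`; the commands and ports are assumptions for illustration):

```typescript
import { defineConfig } from '@playwright/test';

// Hypothetical sketch: tell Playwright to start each dev server and wait
// for its URL to respond before running any tests.
export default defineConfig({
  webServer: [
    {
      command: './scripts/api_backend_dev.sh', // backend dev server (assumed port)
      url: 'http://localhost:8080',
      reuseExistingServer: false,
    },
    {
      command: 'npm start', // UI dev server (assumed port)
      url: 'http://localhost:3000',
      reuseExistingServer: false,
    },
  ],
});
```

Playwright polls each `url` until it responds, so the test run only begins once both servers are up.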
## Writing tests

Check out Playwright's documentation for general guidance on writing tests.

- The `testing` project in `dev/data/project.json` and the associated dashboards in `dev/data/dashboard.json` should be used for end-to-end tests.
  - Give dashboards names that match the tests they are associated with for ease of debugging and maintenance.
- Set `modifiesDashboard: true` in the test fixture configuration for tests that mutate dashboards to ensure these tests can be run in parallel. When this option is enabled, the fixture will automatically generate a duplicate dashboard for the test and clean it up when the test is finished running.
- The project does not currently have a data source that can be used to test consistent rendering in plugins (e.g. a line chart with time series data). You can work around this by mocking network requests. See `mockQueryRangeRequests` in `DashboardPage` for an example.
- Tests live in `ui/e2e/src/tests` and follow the `testName.spec.ts` naming scheme.
- Tests should be able to run in parallel. Do not write tests that depend on a specific order.
- Tests should not be flaky! Flaky tests are frustrating, waste time, and lead to decreased trust in the entire test suite. Ask for help if you are having trouble writing a non-flaky test for specific functionality.
- Use page object models to reduce code duplication and improve test maintenance.
- Use the recommended locators (Playwright's term for element selectors) when possible. These patterns are very similar to React Testing Library, which is used for the project's Jest tests.
- Use unique names for panel groups and panels for ease of writing tests.
- Use `toBeCloseTo` when asserting on inexact pixel values to allow a margin of error associated with padding, margins, etc., which helps avoid flaky tests. Note that the allowed difference for a given precision (the second argument) is calculated as `Math.pow(10, -precision) / 2` (e.g. precision 1 allows a difference of 0.05).
## Visual testing

This project uses a free open source account from Happo for our visual testing. Visual tests generated through the Playwright end-to-end test set use `happo-playwright` and are listed under the `perses-ui` project in Happo. See the `storybook` package for information about visual tests generated using that tooling.
- Use visual tests for use cases where a different type of test will not provide adequate coverage (e.g. canvas-based visualizations, styling).
- Only create visual tests that can be reliably reproduced. Flaky tests are often worse than no tests at all because they lead to toil and reduce trust in the overall test set. Some examples of things that can lead to unreliable tests are:
  - Inconsistent data sources. Consider using consistent mock data to avoid this. See `mockQueryRangeRequests` in `DashboardPage` for an example. Make sure to reset any mocked routes using `unroute` when the test is finished.
  - Time zones. Playwright is configured to run tests in `America/Los_Angeles` to avoid this issue. Be careful when overriding this value.
  - Current time. Consider mocking `Date.now` or other relevant timing functions to avoid differences when the test is run. Tests using the `dashboardTest` fixture can do this by setting `mockNow` to a specific time in milliseconds.
  - Dynamic content. Wait for everything to load before taking a snapshot. This may involve a mix of things like: waiting for network requests to complete, waiting for CSS-based animations to complete (try using the `waitForAnimations` util), and waiting for canvas changes to complete (try using the `waitForStableCanvas` util).
- If individual elements are known to cause inconsistencies, consider adding the `data-happo-hide` attribute. This will render the element invisible in the screenshot.
- In most cases, visual tests should be generated for both light and dark themes.
## Debugging failures in CI

- Go to the failing action in GitHub.
- Follow the Playwright instructions for viewing test logs and viewing the HTML report.

Tests that use the `dashboardTest` fixture check for console errors and fail when any are found, because console errors are often a sign of a subtle bug we have not accounted for. When debugging these issues, it can be helpful to run the e2e tests with a headed browser (using headed mode or the VS Code extension with "show browser" checked) and the browser's developer console open. This will often provide a more detailed error message and a stack trace.
fixture check for console errors and fail tests when they are found because they are often a sign of a subtle bug we have not accounted for. When debugging these issues, it can be helpful to run the e2e tests with a headed browser (using headed mode or using the vcode extension with "show browser" checked) with the debugger console open. This will often provide you with a more detailed error message and a stack trace.