Compare screens with AI vision
The AI Visual Comparison command compares two device screens the way a human tester would. The assistant detects and describes visual differences in test result screenshots and fails the test if the differences fall into selected categories. This AI command is currently available only for mobile testing, not for tablet or web testing.
You define a screen shown in a test as the baseline. Later test runs are called checkpoints. Perfecto compares each checkpoint to the baseline and highlights differences.

To customize the command, select the specific Failure Criteria categories that should fail the test.
For examples and limitations, see Best practices for working with AI visual comparisons.
To perform a visual comparison, first initialize the baseline.
To initialize the visual comparison:
- Open a device.
- In the left sidebar, select the AI tab.
- Click and drag or double-click the AI Visual Comparison command to add it to the test editor.
- Navigate to the Baseline parameter and click Edit. Give the baseline a name.
  Tip: Choose a unique name. Name the baseline after the screen you are comparing, for example, Login or Shopping Cart.
- Save the test.
- Run the test now to initialize the baseline.
After this first test run, the AI Visual Comparison pane in the results is disabled because there is nothing to compare yet.
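Conceptually, the initialize-then-compare lifecycle described above can be sketched in a few lines. This is an illustrative in-memory model only; the class, method names, and the trivial equality check stand in for Perfecto's actual baseline storage and AI comparison, which are not exposed this way in the source.

```python
# Illustrative sketch of the baseline/checkpoint flow (hypothetical names,
# not a Perfecto API). The first run for a baseline name initializes it;
# later runs are checkpoints that are compared against it.

class VisualComparisonStore:
    """Holds one stored baseline screenshot per baseline name."""

    def __init__(self):
        self._baselines = {}  # baseline name -> screenshot (opaque value)

    def compare(self, baseline_name, checkpoint):
        """Initialize on first run; otherwise compare checkpoint to baseline."""
        if baseline_name not in self._baselines:
            # Nothing to compare yet: store this screenshot as the baseline.
            self._baselines[baseline_name] = checkpoint
            return {"initialized": True, "differences": []}
        baseline = self._baselines[baseline_name]
        # Placeholder for the AI comparison; here only exact change is detected.
        differences = [] if checkpoint == baseline else ["Uncategorized"]
        return {"initialized": False, "differences": differences}

store = VisualComparisonStore()
first = store.compare("Login", "screenshot-run-1")   # initializes the baseline
second = store.compare("Login", "screenshot-run-2")  # compared to the baseline
```

The first call returns no differences because, as noted above, there is nothing to compare yet; only the second run produces a comparison result.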
Next, define failure criteria.
To define failure criteria:
- Double-click the AI Visual Comparison command in the test editor.
- Navigate to the Fail Criteria parameter and click Edit.
- Select one or more visual differences that will fail the test:
  - Device - Common, expected differences that typically do not fail tests. For example, the device shows a different signal strength, a different battery level, a different time of day, or an additional open browser tab. You typically do not select this as a failure criterion, but you can if needed.
  - Style - Text style, font, text size, color, alignment, borders, shading, or similar have changed in the checkpoint. The visual context and values stayed the same as in the baseline.
  - Value - Text content or numeric values have changed in the checkpoint. The style and context stayed the same as in the baseline.
  - Missing - An element (text, icon, button, tooltip, header, footer) that appears in the baseline is absent in the checkpoint.
  - Moved - An element that appears in the baseline is in a different position in the checkpoint.
  - Addition - An element that is absent in the baseline has appeared in the checkpoint.
  - Error - An error message has appeared, GUI elements are cut off, GUI elements overlap each other, or similar.
  - Uncategorized - Other significant detected differences that do not fit any previous category.
  - Pixel difference - An algorithmic pixel-by-pixel comparison has detected differences, but the AI Assistant does not consider them significant as defined by the above categories. These are usually artifacts of normal anti-aliasing, image compression, or image scaling. You typically do not select this as a failure criterion because it triggers on almost every run, but you can in special cases.
- Click Apply and save the test.
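The categories above form a fixed vocabulary. Purely as an illustration (this article configures failure criteria only through the UI, so the helper below is a hypothetical sketch, not a Perfecto API), a scripted test could validate a chosen set of criteria against that vocabulary:

```python
# The failure-criteria categories documented in this article.
FAILURE_CATEGORIES = {
    "Device", "Style", "Value", "Missing", "Moved",
    "Addition", "Error", "Uncategorized", "Pixel difference",
}

def validate_fail_criteria(selected):
    """Return the selected criteria as a set, rejecting unknown names."""
    selected = set(selected)
    unknown = selected - FAILURE_CATEGORIES
    if unknown:
        raise ValueError(f"Unknown failure criteria: {sorted(unknown)}")
    return selected

# A typical selection: fail on missing elements, errors, and changed values.
criteria = validate_fail_criteria(["Missing", "Error", "Value"])
```

Validating early keeps a typo in a category name from silently configuring a criterion that never matches.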
After initializing the baseline and configuring failure criteria, run the test a second time to get results.
To get results:
- Run the test and view the results on the STR screen.
- Click AI Visual Comparison to view the results.
- In the AI Visual Comparison pane, review the differences highlighted in the baseline and the checkpoint screenshots.
The Assistant highlights all detected categories.
View results:
- Hover the pointer over a highlight to read the detailed report for the detected difference.
- Hover the pointer over the baseline header to view the baseline name.
- Use the buttons to zoom in and out or view the screenshot in fullscreen size.
- If any detected categories are failure criteria, the test fails.
To set a new baseline:
- Open the STR screen and view the results showing the baseline and the current checkpoint.
- Click the button next to the checkpoint header to make the current result the new baseline.
- Confirm the dialog.

The new baseline is applied starting with the next test run.
