Many automated tests against user interfaces (UIs), such as web and mobile apps, are currently written in a way that identifies the elements under test using static locators with no backup strategy. As a result, they stop working whenever a change affects how those visual elements are represented. The outcome: tests that constantly break, failures that are ambiguous compared to tests failing for good reason, a loss of trust and confidence in the testing resources, and ultimately a slower overall software delivery process.
A ‘locator’ is a testing function (or platform capability) that directs an automated test to interact with or observe the state of one or more specific UI elements. It can use attributes of that element, even ones not visible on-screen, to consistently locate the element(s) in question. Traditional examples of a locator are:

- a unique element ID or `name` attribute
- a CSS selector (tag, class, or attribute based)
- an XPath expression
- visible link text or partial link text
Because many traditional and open-source testing technologies include basic recording functions, the result is many tests with poor element locator strategies: recorder-default XPaths, or attributes that are supposed to be stable but that the team isn’t actually keeping stable. Even locators like these that work the first few times may fail the next day due to inconsistent loading times caused by network latency or differing browser implementations, or, more often, because development teams change the UI and don’t realize they have broken a batch of tests that rely on particular elements and locator attributes.
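To make the failure mode concrete, here is a minimal sketch of why a recorder-generated absolute path breaks. The `resolve` helper and the nested-dict ‘DOM’ are purely illustrative (real XPath engines are far richer), but the brittleness is the same:

```python
# Hedged sketch: a recorder captures an absolute path to an element,
# then a harmless layout change invalidates it.

def resolve(node, path):
    """Follow an absolute tag path like ['body', 'div', 'button']."""
    for tag in path:
        children = [c for c in node.get("children", []) if c["tag"] == tag]
        if not children:
            return None  # path no longer matches the markup
        node = children[0]
    return node

dom = {"tag": "html", "children": [
    {"tag": "body", "children": [
        {"tag": "div", "children": [
            {"tag": "button", "id": "save", "children": []}
        ]}
    ]}
]}

# The recorded path works against today's markup...
assert resolve(dom, ["body", "div", "button"])["id"] == "save"

# ...then a designer wraps the button in one more <div>:
dom["children"][0]["children"][0]["children"] = [
    {"tag": "div", "children": [
        {"tag": "button", "id": "save", "children": []}
    ]}
]

# The same static path now finds nothing, and the test fails,
# even though the button (and the feature) still exists.
assert resolve(dom, ["body", "div", "button"]) is None
```

Nothing about the button changed; only its position did, which is exactly what positional locators depend on.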
Adding to these difficulties, many tests are written with no fallback or multi-attribute identification strategy. Some testing platforms that include ‘self-healing’ address this by running a delta between a snapshot of the now-failing (but unchanged) test and a snapshot from the last run in which it passed, identifying which attributes or characteristics of the elements have changed, and recommending updates accordingly.
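The delta idea can be sketched in a few lines. The scoring and the `suggest_healed_locator` helper below are hypothetical simplifications (real platforms also weigh DOM position, visual features, and history), but they show the core mechanism: score attribute overlap against the last passing snapshot and recommend the closest surviving element.

```python
# Hedged sketch of 'self-healing' via attribute deltas between a
# passing snapshot and the current (failing) run.

def score(old, candidate):
    """Count how many attribute key/value pairs still match."""
    return sum(1 for k, v in old.items() if candidate.get(k) == v)

def suggest_healed_locator(last_passing, current_elements):
    """Return the closest current element, or None if nothing overlaps."""
    best = max(current_elements, key=lambda el: score(last_passing, el))
    return best if score(last_passing, best) > 0 else None

# Attributes captured when the test last passed:
last_passing = {"id": "checkout", "class": "btn primary", "text": "Buy now"}

# Elements present on the failing run (the id was renamed):
current = [
    {"id": "cart", "class": "btn", "text": "Cart"},
    {"id": "checkout-btn", "class": "btn primary", "text": "Buy now"},
]

healed = suggest_healed_locator(last_passing, current)
# Two of three attributes still match the second element, so a
# platform would recommend updating the locator to id="checkout-btn".
assert healed["id"] == "checkout-btn"
```

The recommendation still needs human review: the delta only shows what changed, not whether the change was intentional.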
Poor craftsmen blame their tools. We know we need reliable automation, and testing especially needs to remain a trustworthy source of fast, complete feedback about the quality of our apps. But is it really the fault of a particular approach or paradigm that there are so many brittle tests out there?
At the core of the problem are two key dynamics:

1. a lack of agreed, enforced standards for how UI elements are identified in the code and in the tests; and
2. assumptions about element stability that are created or perpetuated without being backed by guarantees or other reinforcing processes.
Addressing the first of these problems requires cross-team and cross-function collaboration to find the right set of standards for the project, the languages and frameworks involved, and the existing landscape of test tooling in place. Once found, developer documentation alone is not enough to ensure that mistakes and non-compliance don’t slip into the code (not just the tests). There are often rules that can be put in place, such as in SonarQube or other static code analysis (SCA) tools, to catch known-bad patterns: a lack of uniquely identifying attributes on elements, use of known-bad strategies for locating elements, and so on, backed by code review checklists of what to look out for.
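As a rough illustration of what such a rule might catch, here is a toy lint pass over test source. The patterns and the `lint_locators` helper are assumptions for illustration; a real SCA tool (SonarQube custom rules, for instance) hooks into a parser rather than scanning with regexes.

```python
# Hedged sketch: flag known-bad locator patterns in test code.
import re

# Illustrative patterns a team might agree to ban:
BAD_PATTERNS = [
    (re.compile(r'["\']/html/body'), "absolute XPath from document root"),
    (re.compile(r'div\[\d+\]'), "positional index in XPath"),
    (re.compile(r'\.css-[0-9a-z]+'), "generated (unstable) CSS class"),
]

def lint_locators(source):
    """Return (line_number, reason) for each banned pattern found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, reason in BAD_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, reason))
    return findings

test_code = '''
driver.find_element("xpath", "/html/body/div[3]/button")
driver.find_element("css selector", ".css-1x2y3z")
driver.find_element("id", "checkout")  # fine: stable unique id
'''

findings = lint_locators(test_code)
# Flags the absolute XPath, its positional index, and the generated
# CSS class; the stable id on the last line passes clean.
assert len(findings) == 3
```

Wiring a check like this into CI turns the team’s locator standard from documentation into an enforced gate.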
The second of these issues, creating or perpetuating assumptions that are not properly backed by guarantees or other reinforcing processes, is trickier because it requires engineers to maintain a constant spirit of vigilance about what their locators actually rely on.
Sometimes it’s an uphill battle to change how much software engineers have to think when building and testing their apps. With recent advances in machine learning, visual testing, and large language models, modern software testing technology can take some of this brain-work off our plates: automatically identifying bad patterns, flagging UI changes not reflected in test locators, and even performing intent-based analysis of better paths and models to test against.
Just be wary of ‘buzzword bingo’ around AI as it relates to testing. The goal of testing isn’t to make tests pass; it’s to provide useful feedback about the state and quality of the system under test (SUT). Though AI can surely reduce some of the toil related to testing, it cannot (at least yet) define the intent of a user better than the designers, developers, testers, and end-users can. So don’t expect “AI-powered” or “AI-driven” solutions to be a magic fix for flaky tests; let your teams collaboratively discuss their testing issues and work them out together before simply adding more technology to the table.
The biggest drawbacks to using static or single locators are:

- tests break whenever the UI representation changes, even when the feature itself still works;
- failures are ambiguous, making locator rot hard to distinguish from genuine regressions;
- trust and confidence in the test suite erode over time; and
- the team’s overall delivery velocity slows.
You also never know when static, single locators are going to bite you. Will it be just before a big release? Is it already a death by a thousand little cuts that’s hard to quantify? Is it yet another reason the team avoids refactoring like it should? There’s an excuse for every delay, but test locator strategy shouldn’t be one of them.
Your testing tools or platform might not even support anything but static, single locators. Check that, and look at what the alternatives are.
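One common alternative is a fallback chain keyed on multiple attributes. The `find_with_fallbacks` helper and the element model below are hypothetical, not a real framework API, but they show how a chain survives a rename that would kill a single locator:

```python
# Hedged sketch: a multi-attribute fallback chain vs. a single
# static locator, against a toy page model (a list of dicts).

def find_by(page, attr, value):
    """Return the first element whose attribute matches, else None."""
    return next((el for el in page if el.get(attr) == value), None)

def find_with_fallbacks(page, locators):
    """Try each (attr, value) pair in order; first hit wins."""
    for attr, value in locators:
        el = find_by(page, attr, value)
        if el is not None:
            return el
    return None

# The pay button was renamed since the test was written:
page = [{"data-testid": "pay", "id": "pay-button-v2", "text": "Pay"}]

# The single locator written against last sprint's id fails outright:
assert find_by(page, "id", "pay-button") is None

# A chain keyed on several attributes survives the rename:
chain = [("id", "pay-button"), ("data-testid", "pay"), ("text", "Pay")]
assert find_with_fallbacks(page, chain) is not None
```

Dedicated test hooks such as `data-testid` attributes, agreed with the development team, make the strongest first link in a chain like this.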
Static and single locators don’t cause much trouble immediately, but over time they bleed the team of velocity and of confidence in the feedback from continuous testing. Some patterns in this space are definitely worse than others, but modern approaches that use dynamic, multi-attribute identification of UI elements allow self-healing tests to remain a trustworthy source of truth about the readiness of your UI changes before they reach production users. Look for ways to discuss existing and new AI-assisted locator strategies with your quality-minded co-workers and teams.