Artificial intelligence within functional test automation, in the context of this article, can be defined simply as the ability to read a screen the way a human does: identifying objects and, in some cases, even letting testers write steps in plain English. But what does that mean for functional testers who want to do more than create easy-to-read code? One thing to keep in mind is that, just as a human child quickly learns to distinguish small animals from large ones, the AI within UFT can be taught to distinguish the elements humans most often interact with on screen. Below we'll cover three capabilities the AI delivers to make life easier.
- Spatial Relationship Between Objects
Just like a human, the AI can identify the position of buttons, sliders, or search boxes relative to other elements. This capability is especially useful for testers whose app keeps the same spatial relationships between items throughout. The tester can instruct the AI which objects will always be to the left, to the right, above, or below a given anchor. For example, if you search for a product and each result has a cart icon to add that product to the cart, you could make the product description or product number your anchor text and click the shopping cart icon next to that anchor. That way you can be sure you are clicking the correct cart icon and adding the product you want, rather than the cart icon at the top of the screen, which would open the shopping cart instead of adding anything to it.
This allows testers to create automation scripts faster and makes the overall testing process more efficient.
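To make the anchor idea concrete, here is a minimal, illustrative Python sketch; it is not UFT syntax, and the element structure and helper names are hypothetical. It assumes a visual recognition step has already produced on-screen elements with bounding boxes, and it picks the cart icon that sits on the same row as, and nearest to the right of, the anchor text, rather than the cart icon in the page header.

```python
# Illustrative sketch only: a hypothetical anchor-relative lookup, not UFT's actual API.
# Assumes a visual-recognition step has already produced elements with bounding boxes.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class UIElement:
    kind: str          # e.g. "text", "cart_icon"
    label: str         # recognized text, if any
    x: float           # left edge of bounding box
    y: float           # top edge of bounding box
    width: float
    height: float

    @property
    def center_y(self) -> float:
        return self.y + self.height / 2

def cart_icon_right_of(anchor_text: str, elements: List[UIElement]) -> Optional[UIElement]:
    """Find the cart icon on roughly the same row as, and to the right of, the anchor text."""
    anchor = next((e for e in elements if e.kind == "text" and anchor_text in e.label), None)
    if anchor is None:
        return None
    candidates = [
        e for e in elements
        if e.kind == "cart_icon"
        and e.x > anchor.x + anchor.width                      # strictly to the right
        and abs(e.center_y - anchor.center_y) < anchor.height  # roughly the same row
    ]
    # Choose the nearest icon so we don't grab the cart button in the page header.
    return min(candidates, key=lambda e: e.x - (anchor.x + anchor.width), default=None)

# Example: the header cart sits at the top of the page, far from the product row,
# so the anchor rule selects the icon next to the product text instead.
elements = [
    UIElement("cart_icon", "", x=980, y=10, width=24, height=24),                   # header cart
    UIElement("text", "Wireless Mouse WM-200", x=40, y=300, width=220, height=20),  # anchor text
    UIElement("cart_icon", "", x=900, y=298, width=24, height=24),                  # row cart
]
print(cart_icon_right_of("WM-200", elements))  # the cart icon at x=900, on the product's row
```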
- Analogous Words
In software testing, testers need to identify equivalent actions or similar functionality across different applications. Previously, testers had to write long scripts for specific actions, and the words used to describe those actions had to match exactly or the test would fail. Thanks to machine learning, the words humans use interchangeably can now be recognized. For example, the action 'SIGN IN' is equivalent to 'LOG IN', 'LOGIN', 'Login', 'Sign In', and so on. Previously each text string would require its own code, but the AI recognizes these the same way a human would: they are effectively the same action. This saves testers a lot of time and tedious work, and it makes a single test resilient enough to be reused across multiple applications.
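As a rough illustration of the idea (UFT's AI uses a trained model, so this is only a sketch of the concept, not how the feature works internally), here is a small Python example that treats the common 'sign in' variants as one logical action; the synonym set and function name are made up for this example.

```python
# Illustrative sketch only: matching label variants against a synonym group.
# The real feature relies on a trained model; this just shows the idea of treating
# "SIGN IN", "Log In", "LOGIN", etc. as one logical action.

SIGN_IN_SYNONYMS = {"sign in", "signin", "log in", "login", "log on", "logon"}

def is_sign_in_label(label: str) -> bool:
    """Return True if the on-screen label is an accepted variant of 'sign in'."""
    normalized = " ".join(label.strip().lower().split())  # collapse case and extra whitespace
    return (normalized in SIGN_IN_SYNONYMS
            or normalized.replace(" ", "") in {s.replace(" ", "") for s in SIGN_IN_SYNONYMS})

for label in ["SIGN IN", "Log In", "LOGIN", "Sign  In", "Register"]:
    print(label, "->", is_sign_in_label(label))
# SIGN IN -> True, Log In -> True, LOGIN -> True, Sign  In -> True, Register -> False
```

A single check like this is what lets one test step match many applications, instead of hard-coding a separate step for each wording of the same button.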
- Recognizing Objects and Text
Finally, test automation has historically allowed most UX defects to leak into production, so manual tests still had to be executed before release to cover "look and feel" test cases. Previously, the only way to handle this with test automation was bitmap comparison, which was fragile and machine-bound.
Lucky for us, the AI can recognize and report more and more user experience issues to the tester: for example, an arrow pointing up when it should point down on a drop-down menu or combo box, or text too light to read on the screen. Normally, objects or text that are out of position would not cause a test to fail, but this AI capability helps the tester deliver a better user experience.
This screenshot shows an e-mail address that was present but not readable: the text is white on a light grey background (it's next to the "?" if you can't see it). The AI was able to flag this user experience defect because, just like you and me, the computer vision component of the AI could not see the e-mail address.
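One concrete way a check like this can be expressed is a WCAG-style contrast-ratio calculation, which would flag white text on a light grey background as unreadable. The sketch below uses the standard WCAG relative-luminance formula; the specific colors are assumed for illustration and are not taken from the screenshot.

```python
# Illustrative sketch only: a WCAG-style contrast check that flags
# white text on a light grey background as unreadable.

def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to its linear value (WCAG definition)."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    lighter = max(relative_luminance(fg), relative_luminance(bg))
    darker = min(relative_luminance(fg), relative_luminance(bg))
    return (lighter + 0.05) / (darker + 0.05)

text_color = (255, 255, 255)   # white e-mail text (assumed values)
background = (230, 230, 230)   # light grey panel (assumed values)
ratio = contrast_ratio(text_color, background)
print(f"Contrast ratio: {ratio:.2f}:1")   # roughly 1.25:1
print("Readable?", ratio >= 4.5)          # WCAG AA minimum for normal text -> False
```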
Summary
In conclusion, AI is transforming automated software testing by enabling analogous-word recognition, spatial relationship detection, and user experience defect detection. Together, these capabilities reduce the time spent testing software manually while increasing the efficiency and cost-effectiveness of the overall testing process.