Chris Trimper, Enterprise QA Architect, Independent Health
At Independent Health, we’re committed to taking as much stress and worry out of healthcare as possible. After all, when one of our members is unwell or facing life-changing surgery, the last thing they want to be preoccupied with is the admin associated with accessing healthcare services.
To make life easier for our members, we recently developed the MyIH mobile application, which enables people to manage their healthcare, from tracking deductibles and reviewing benefits to viewing claims. It's been a big hit with our customers across the board.
Building a seamless user experience
There’s no doubt that the user-friendliness of MyIH is the reason it has been so widely adopted by our members. So, ensuring that we keep the application as easy to use as possible while delivering brand new features is a top priority. And that makes it crucial to have robust and reliable software testing processes.
While the testing processes we relied on for many years supported the development of innovative services like MyIH, they struggled to keep pace with the evolving requirements of rapid mobile app development. For instance, when we initially created our mobile app, we needed to test it on a wide range of devices to ensure there were no bugs or issues, which greatly increased our testing workload.
Whether it’s rounding the edge of a design element, updating fonts, or changing the color of a button, every slight change requires significant alterations to the underlying code. These subtle but frequent changes required us to develop new testing regimes to ensure that such modifications didn’t lead to issues when the app went live.
To spot these kinds of errors, we relied on a team of user experience testers to manually review each new version, which increased the time it took us to test and roll out new features and functionality.
Harnessing the power of AI automation
We knew there had to be a better way to spot potential issues in our mobile app and streamline our development workflows. As a long-term partner of Micro Focus, we were excited to learn about the new artificial intelligence (AI) test automation features in Micro Focus UFT One, and we started exploring how we could use the solution to accelerate testing and improve test coverage.
When we applied UFT One’s AI capabilities, we were impressed with how the solution helped us focus on testing user flows without being hampered by having to update the tests every time a minor change was introduced. Because the AI in the Micro Focus solution identifies and interacts with our applications the same way a human user would, it can accommodate routine updates to the application and continue testing flows without interruption.
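To illustrate the difference in approach, here is a minimal sketch in Python. It is illustrative only and not UFT One’s actual scripting API; the element ID and the `ai` helper are made up. A traditional step pins the test to an implementation detail, while an intent-based step describes the control the way a person would, so cosmetic changes don’t invalidate it.

```python
# Hypothetical sketch (not UFT One's actual scripting API) contrasting an
# implementation-coupled test step with an intent-based one.

# Locator-based step: tied to an internal element ID, so it breaks whenever
# a developer renames the ID, restyles the control, or swaps frameworks.
def tap_login_by_locator(driver):
    driver.find_element("id", "btn_login_v2").click()  # "btn_login_v2" is made up

# Intent-based step: "tap the button labelled 'Log in'". The engine
# re-identifies the control from the rendered screen on every run, so
# changes to fonts, colors, or layout don't invalidate the step.
def tap_login_by_intent(ai):
    ai.find("button", text="Log in").click()  # "ai" is an illustrative stand-in
```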
Gaining computer vision
We’ve now fully integrated UFT One into our testing processes, and we think of the solution as an extra pair of human eyes. The magic comes from computer vision: the ability to read the application’s user interface and then interpret and analyze its contents using an AI decision engine.
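As a simplified sketch of the idea (and only the idea; this is not how UFT One is implemented), a vision-driven check works from what a model reads off the rendered screen rather than from the application’s internal object tree:

```python
from dataclasses import dataclass

# Simplified sketch of vision-driven identification (not UFT One internals):
# controls are matched by what a user would see, not by internal IDs.

@dataclass
class Control:
    kind: str                          # e.g. "button", "text_box"
    label: str                         # text the model read on or near it
    bounds: tuple[int, int, int, int]  # (x, y, width, height) on screen

def find_control(detected: list[Control], kind: str, label: str) -> Control:
    """Match a control by its type and visible label."""
    for control in detected:
        if control.kind == kind and control.label == label:
            return control
    raise AssertionError(f"No {kind} labelled '{label}' is visible on screen")

# Example: controls a vision model might report for the current screen.
screen = [
    Control("text_box", "Member ID", (40, 200, 560, 80)),
    Control("button", "Log in", (40, 320, 560, 96)),
]
find_control(screen, "button", "Log in")  # found by appearance, not by an ID
```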
However, UFT One is much more than an extra human reviewer. It is also able to highlight user experience (UX) issues. As a result, we can see very quickly when features of our application aren’t readable or haven’t rendered properly, and we can do this even after changing coding frameworks.
Changing frameworks
To enhance design work and optimize development, we recently decided to change the code framework that our app runs on. Initially, we anticipated that changing frameworks would lead to a series of downstream issues from a testing perspective.
Specifically, we were concerned that changes to the framework would break the automated testing capabilities that we had developed with Micro Focus UFT One’s AI. This would effectively stop our existing tests from working and require us to build a brand-new testing regime with updated user flows and journeys from the ground up.
We engaged Micro Focus to get a better understanding of the impact of changing our code framework. Much to our surprise, Micro Focus explained that because the AI-based tests we had written focus on the user journey rather than on the application’s underlying implementation, it doesn’t matter which coding framework we adopt: the tests continue to run regardless.
Enhancing error detection and remediation
Once we had changed our development framework and started running UFT One, we were pleasantly surprised when the solution found unexpected issues in our app. In fact, it brought a whole new class of defects to our attention.
During one pre-production test, UFT One’s AI engine spotted that there was no prompt for a user to scroll down a page in order to access a button, which meant that the button was effectively hidden. Traditional test automation would have identified that there was a button in the code, but it wouldn’t have noticed that it had rendered incorrectly. With UFT One we were able to see that the button didn’t render properly and then fix the issue by adding a scroll bar.
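As a rough analogy for the kind of check involved, the sketch below uses Selenium on a web page (with a placeholder URL and element ID) to show the gap between “the button exists” and “a user can actually see the button”:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Illustrative web analogy only (placeholder URL and element ID): the gap
# between "the button exists in the code" and "a user can actually see it".
driver = webdriver.Chrome()
driver.get("https://example.com/checkout")

button = driver.find_element(By.ID, "confirm-button")

# A traditional object-based check stops here: the element was found.
assert button is not None

# A rendering-aware check also asks whether the button sits inside the
# visible viewport, i.e. whether a user could reach it without an
# unprompted scroll.
viewport_height = driver.execute_script("return window.innerHeight")
assert button.is_displayed() and button.location["y"] < viewport_height, \
    "Button is present in the page but rendered below the fold"

driver.quit()
```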
In another test, UFT One’s AI spotted that a text string had wrapped incorrectly, leaving the text it was supposed to display illegible. Using our previous test automation tools and processes, we would have been able to see the string error, but we may not have noticed that the text was nearly impossible to read, at least not until a human reviewer took a closer look.
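One way to approximate that kind of legibility check with open-source tools is to OCR a screenshot and compare what was actually rendered against the expected copy. The sketch below uses Pillow and pytesseract with a hypothetical screenshot and string; it is not how UFT One works internally.

```python
from PIL import Image
import pytesseract  # needs the Tesseract OCR engine installed locally

# Toy legibility check: OCR what was actually rendered and compare it with
# the copy the screen was supposed to show. Badly wrapped or clipped text
# comes back garbled or truncated and fails the comparison.
EXPECTED = "Your deductible resets on January 1"   # hypothetical copy
screenshot = Image.open("claims_screen.png")       # hypothetical screenshot

rendered_text = pytesseract.image_to_string(screenshot)
assert EXPECTED in rendered_text, (
    "Expected copy is not readable on screen; OCR saw: " + rendered_text.strip()
)
```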
As well as spotting broken objects and UX issues, the Micro Focus solution helps us ensure a consistent look and feel throughout our application. For instance, during pre-production testing, UFT One’s AI could see that the background of a page was the wrong color. Similarly, the solution noticed that text that was supposed to appear in a shaded call-out box had rendered as plain text. In both instances, the styles had simply broken.
While a user might not notice off-brand or inconsistent formatting right away, these types of errors can make an app look unprofessional, which can put people off using it. With the added insight from UFT One’s AI engine, we can make sure that defects, bugs, and style breakages like these are spotted and fixed early in the development cycle. Ultimately, this helps us deliver new features to our members faster and with the confidence that they will work as expected.
Take the next step
Read the case study to learn more about Independent Health’s test automation journey with Micro Focus UFT One and its AI-based capabilities.
To learn more about how AI-powered automation testing can enhance your development cycles, sign up for a free trial at microfocus.com/en-us/products/uft-one/free-trial