Can AI and Machine Learning Replace Human Involvement in Testing?

Micro Focus Expert

My colleague Eran Bachar, Senior Product Manager for AI in Micro Focus ADM’s Functional Testing portfolio, recently appeared on a podcast with our partners at Capgemini, Can AI and ML replace human involvement in testing?, a follow-up to the findings of the World Quality Report 2019-20. The podcast discusses whether Artificial Intelligence (AI) and Machine Learning (ML) can replace humans in testing. Here’s the transcript of the podcast (lightly edited for clarity):

 

Introductions

Mary-Ellen Harn (Head of Marketing Services, Capgemini): Welcome to the second podcast in the series that we’re doing with our colleagues at Micro Focus. This started out as a discussion on the findings of the most recent Capgemini World Quality Report, and it’s turned into this podcast series which will highlight key areas of the report with further insights from testing experts. If you find this series of podcasts informative, please like it on SoundCloud, and share it with your network of colleagues.

Today, we’re going to talk about a topic that is on everyone’s mind: Can Artificial Intelligence and Machine Learning replace human involvement in testing?

Joining me today are Eran Bachar, Senior Product Manager for Functional Testing at Micro Focus, and Nick Utley, Transformation Leader, AI and Analytics, at Capgemini. Eran, let’s start with you. Can you tell us a little bit about yourself and Micro Focus?

Eran Bachar: Thank you for hosting me, Mary-Ellen, for this great podcast on such an interesting topic. As you might know, Micro Focus is a leading software provider with around 40,000 customers worldwide that focuses on enterprise software needs. To do so, we have four main product groups; one of them is Enterprise DevOps, which I’m part of. The group provides end-to-end solutions for the entire application lifecycle, from the planning phases, through the build-and-test phases, and down the road to deploy, operate, and monitor. As for myself, for the last six years I’ve been with the Functional Testing product team as a Senior Product Manager, responsible for all of our testing tool solutions in the market. Before joining the product team, I served more than 15 years in various engineering positions, from junior roles up to executive ones. I had the privilege to work both in engineering and in quality assurance teams, and that experience gave me everything I needed to do product management as expected. For the last two years, I’ve been leading the Artificial Intelligence and Machine Learning topic within our portfolio: how, with this amazing and cutting-edge technology, we can solve some of the problems we all face in test automation, which I guess we’ll talk about later on in this podcast.

Mary-Ellen: Thank you, Eran. You are certainly well-qualified to talk about this topic, and we look forward to your insights!

Nick, can you also tell us about yourself, and what you do at Capgemini?

Nick Utley: Sure! So, I’m in the business of transformations. This involves maturing testing practices, automating metrics, and applying machine learning and artificial intelligence-based analytics to the optimization of quality assurance. That gives consultants a pretty interesting, cross-sectional view of the testing world and of where these technologies can lead us.

 

The biggest challenges of test automation today

Mary-Ellen: Thanks Eran and Nick, and let’s jump right into the questions. So Nick, we’ll start with you first. What do you think are the biggest challenges of test automation today?

Nick: Despite huge investments in automation engineering, and even bigger returns on investment, the ratio of automated to manual test cases tends to be relatively low, especially in system integration and service-level testing. And even though that ratio is low, it usually involves a relatively high maintenance cost. By maintenance I mean the management of the scripts that are written, the machines on which they’re executed, and even their design. The cost of this remains high because that skill set is still a niche one, rather than the default for quality engineers. There is a plethora of reasons why maintenance costs are so high and automation rates are relatively low, but primarily: asset fragility, the skill-set barriers I mentioned, and the exponential growth of platforms to cover. Everything is going mobile; everything is going cloud-based; there are more and more things to test. You can think of it as a race whose finish line keeps moving, faster than those who are running it.

Mary-Ellen: Well, thank you Nick. Eran, do you have anything to add?

Eran: Yes, as Nick stated, we see this phenomenon of low automation rates and high maintenance costs in almost every team we talk to. And one common thing is that they are all, at some point, on their digital transformation journey, so they say. One reason for this is, for example, the daily and frequent check-ins that are common practice in today’s Agile, fast-paced releases; a side effect of those check-ins is that they break the automated tests. That’s mainly because test automation assets are highly sensitive to changes, due to the way they are created. Usually, when creating automation assets, we rely on underlying identifiers, such as the different object properties that uniquely identify the objects, and we interact with those: for example, clicking a button, editing an input field, or performing any other action on the application.

The end result is that we hear, as Nick said, more and more customers questioning the ROI of automation, because the time invested in maintaining those assets and keeping them relevant is sometimes greater than the time it would take to run the tests manually. One more major cause is the exponential growth, exactly as Nick mentioned, of devices, endpoints, and platforms, which can sometimes double the time required for script creation as well as script maintenance.

Now, around two years ago, we started looking at major artificial intelligence and machine learning research areas to understand how, and whether, we could leverage this technology to solve those challenges. After extensive research, we are happy to say: yes, we can definitely leverage this technology.

We’ve built an artificial neural network; think of it as the brain that can understand and interact with different objects. We’ve also leveraged computer vision and OCR techniques; think of them as the eyes that can see and read the screens and the objects exactly like a human does. And last but not least, we’ve also leveraged a Natural Language Processing (NLP) service; think of it as the translator, the one that can translate plain English sentences into automated scripts.
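To make the "translator" idea concrete, here is a minimal, hypothetical sketch of the kind of mapping an NLP layer performs: turning a plain-English test step into a structured automation action. This is an illustration only, not Micro Focus’s actual engine; the step phrasings and action fields are invented for the example.

```python
import re

# Hypothetical patterns mapping plain-English test steps to structured actions.
PATTERNS = [
    (re.compile(r'click (?:on )?the "?(?P<target>[\w ]+?)"? button', re.I),
     lambda m: {"action": "click", "role": "button", "name": m["target"]}),
    (re.compile(r'type "(?P<text>[^"]+)" into the "?(?P<target>[\w ]+?)"? field', re.I),
     lambda m: {"action": "type", "role": "textbox",
                "name": m["target"], "text": m["text"]}),
]

def parse_step(step: str) -> dict:
    """Translate one plain-English step into a structured automation action."""
    for pattern, build in PATTERNS:
        match = pattern.search(step)
        if match:
            return build(match)
    raise ValueError(f"Unrecognized step: {step!r}")

print(parse_step('Click the "Login" button'))
print(parse_step('Type "admin" into the "Username" field'))
```

A real NLP service uses statistical language models rather than fixed patterns, but the contract is the same: English in, executable action out.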

Why is AI so effective for test automation?

Mary-Ellen: Thank you Eran and Nick. There are definitely a lot of challenges of test automation. So why then is AI so effective for test automation, and let’s start with you, Eran.

Eran: Yes. In the last two decades, the technology has evolved a lot and we’ve leveraged it as best we can, but at the end of the day, as I said previously, it hasn’t significantly increased overall automation rates or reduced the maintenance costs associated with them. With AI-infused test automation as we see it, it’s a different ball game. The test creation phase will be completely different, and much faster. Instead of building test scripts for each and every platform, users can create one script and let automation intelligence overcome app changes. Just think about having a single script that runs against both iOS and Android, so you don’t need to maintain two different scripts; and upon changes, the automation intelligence will overcome them.

The second thing is that test maintenance time will also be shortened, mainly because automation assets created with AI are much more resilient to changes. If any maintenance is still required, you don’t need to maintain multiple scripts; you maintain a single script, and that applies to all platforms at the same time.
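A toy sketch of why such assets are more resilient: instead of identifying an element by one brittle property (say, an element id), score candidates against several attributes and accept the best match. The scoring scheme and threshold here are invented for illustration; real engines combine many more signals, including visual ones.

```python
def find_element(elements, descriptor, threshold=0.5):
    """Return the element best matching the descriptor, or None if too weak."""
    def score(el):
        # Fraction of descriptor attributes the candidate element matches.
        hits = sum(1 for k in descriptor if el.get(k) == descriptor[k])
        return hits / len(descriptor)
    best = max(elements, key=score, default=None)
    return best if best is not None and score(best) >= threshold else None

page = [
    {"id": "btn-42", "text": "Login", "role": "button"},  # id changed in new build
    {"id": "lnk-7",  "text": "Help",  "role": "link"},
]
# Descriptor recorded against the old build: the id no longer matches,
# but text and role still do, so the element is found anyway.
target = {"id": "btn-login", "text": "Login", "role": "button"}
print(find_element(page, target))
```

A script keyed only on `id` would break here; multi-attribute matching survives the change.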

And last but not least, as was mentioned earlier in the podcast, by leveraging AI and natural language processing, business users, and certainly QA engineers, can now create automation assets using plain English. With that, we can augment automation coverage rates and increase them dramatically. So the bottom line is: yes, using AI can definitely help testing teams become much more efficient in their day-to-day work.

Mary-Ellen: Thank you Eran. Nick, would you like to add anything?

Nick: Sure! Eran did a great job covering the technical side and the hands-on creation of these testing assets, especially the actual creation of the scripts. Something I would add, beyond the technical side, is that the decision-making and the pursuit of test-suite optimization can themselves be automated. Eran mentioned neural networks; machine learning solutions such as classifier models and support vector machines can also be leveraged in ways that cut down on administrative time. This can be particularly impactful in the emergent realm of DevOps, where regression and targeted test suites are run multiple times a day in a pipeline. It compensates for some of the human element that contributes to these challenges, and provides a pretty lucrative space in which to apply machine-learning decisions.
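The "automated decision-making" Nick describes can be sketched very simply. The example below is a hypothetical risk-based test selector, not any specific product feature: it ranks tests by how much their coverage overlaps the files changed in a commit, weighted by historical failure rate, and picks the top candidates for a pipeline run. All names and numbers are invented.

```python
def prioritize(tests, changed_files, budget=2):
    """Pick up to `budget` tests most relevant to the changed files."""
    def risk(test):
        # Overlap with the change set, boosted by past flakiness/failures.
        overlap = len(set(test["covers"]) & set(changed_files))
        return overlap * (1 + test["failure_rate"])
    ranked = sorted(tests, key=risk, reverse=True)
    return [t["name"] for t in ranked[:budget] if risk(t) > 0]

tests = [
    {"name": "test_checkout", "covers": ["cart.py", "pay.py"], "failure_rate": 0.3},
    {"name": "test_login",    "covers": ["auth.py"],           "failure_rate": 0.1},
    {"name": "test_search",   "covers": ["search.py"],         "failure_rate": 0.0},
]
print(prioritize(tests, changed_files=["pay.py", "auth.py"]))
```

A trained classifier would learn the risk function from history instead of hard-coding it, but the pipeline-facing decision, "which tests earn a slot in this run", is the same.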

Do testers need to learn new skills and tools to leverage AI?

Mary-Ellen: Thank you, Nick and Eran. AI is definitely a very complex field. What do testers need to know in order to leverage AI? And does it involve a new toolset? Let’s start with Eran on this.

Eran: AI is a complex field, and there’s another effect: it keeps evolving every day, even as we talk. If I look back two years, at the journey we went through in Micro Focus to understand what changes are required to develop an AI solution, it’s mainly around three important ingredients: the first is computing power, the second is the engineering force, and the last, but definitely not least, is the data sets for the AI engine itself. I’ll walk you briefly through those three aspects.

First, computing power. AI projects require both high computing power and big-data solutions. So the first thing we had to do was equip our engineering force with those capabilities: we built our own GPU computing lab, and leveraged the in-house big-data solutions we have in Micro Focus.

The second was a super-interesting one: the engineering force, because significant changes had to be made there. For example, we hired subject matter experts and formed a new data science team for the project. We also trained our engineers in the programming languages required for AI, and formed new DevOps processes to support the entire project’s needs.

Last, but definitely not least, as I mentioned previously: the data. At the end of the day, a successful AI project is highly dependent on the data it will be fed.

But the good news is that none of the above is something our customers and testers need to do. We made sure the AI technology is embedded into our existing offerings, and can be easily accessed and used with the same tools they’ve been using up until today. We did that by adding a simple yet smart API to the existing solution for automation engineers. But as I said previously, we also want to leverage the knowledge of business users and QA engineers, so we created a new layout that provides codeless capabilities, which later on will be facilitated by the natural language processing capabilities. The idea is to leverage the AI, but keep it simple for users to consume and enjoy.

Mary-Ellen: Thank you, Eran. Let’s now turn to Nick. And can you share your perspective as well as Capgemini’s perspective on this?

Nick: Sure. When it comes to the inner workings of a machine-learning algorithm, or data science as a whole, you’re talking about something involving an extension of statistics, higher-level mathematics, and computer engineering. That all sounds scary, and like a lot to hold on to. However, it doesn’t require those who utilize these technologies to have a working knowledge of their inner workings. Just as a programmer doesn’t have to invent his own language, a lot of this work has already been done: ours is a world in which libraries handling these functionalities already exist, which Eran touched on. Testers need to be empowered to apply AI and ML where they can save time and money, and that means changing the way testers see them. AI and machine learning (you can tell just from mentioning their names how repetitive it gets) are currently buzzwords, often used very broadly, when the mechanics of the logic being applied are pretty straightforward. The gap to be bridged is between the intellectual exercise and a practical tool with real-world ROI. Where this has been done, it usually empowers an existing money-making IP, kind of subtly: whether it’s model-based testing, test-case deduplication, priority and risk-based test decision making and effort reduction, or even predictive modelling, as Eran mentioned, it tends to be embedded for user simplicity. Once the utility of machine learning is demystified, and its use is better understood by stakeholders and management, the small-scale applications will increase exponentially as well.

The key ingredients for success in AI in test automation

Mary-Ellen: Thank you, Nick. So, with our last question of today, can you tell us, starting with Eran, what are the key ingredients of success in AI in test automation?

Eran: I think we’ve touched on that. If there is one significant success factor for AI projects, it is the data. And this data is not a one-time event; it keeps evolving. There’s a nice saying I keep hearing about AI engines: they have an endless appetite for data sets. With that in mind, as part of our AI capabilities, we’ve made sure to provide a simple way for our customers to easily share their data with us. Now, some listeners will think, “Wait a second, I can’t share this test data!” But no worries: we aren’t talking about their test data or any private data that is sensitive to share. If you remember, earlier in this podcast I mentioned that we are leveraging computer vision techniques. For computer-vision-based engines, the main data used is pictures and snapshots, and the same goes for us: when we talk about data sharing, we’re mainly referring to snapshots of the application under test. Once we get them, they’re easily processed and improve the AI engine, which can eventually be consumed on demand, so customers can increase their overall efficiency as we move forward. And as I started with, it won’t be a one-time event: AI is a journey, the same way that Digital Transformation is a journey. As Nick said, I think some of AI still holds more unknowns than knowns for us, and as we move forward, we will definitely see how we can improve and get more out of this amazing technology to ease the overall testing experience.

Mary-Ellen: Thank you, Eran. And Nick, would you have any additional comments?

Nick: Sure. I don’t think there’s anyone in this industry, particularly in machine learning, whom you could ask this question of who wouldn’t come back immediately with a response having to do with data. Data, data, data! It’s really what drives these kinds of solutions. Think of it as a ‘garbage in, garbage out’ kind of solution: the only power a machine-learning algorithm has is to predict from data it has already seen, and that tends to be the issue with the accuracy of some of these things. It comes down to having clean data versus unclean data. If you train an algorithm to identify a picture of a dog as a horse, it’ll do that continuously. So, something I would add is that the evolution of these kinds of predictions or solutions tends to be limited by the data-recording practices a given stakeholder has instilled. The metrics involved, likewise, tend to have the same accuracy as what you can expect from your machine-learning algorithms.
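Nick’s dog-labeled-as-horse point can be demonstrated with a toy model. This hypothetical 1-nearest-neighbor classifier simply reproduces the labels it was trained on, so a single mislabeled training example is repeated forever; the feature vectors here are made up purely for illustration.

```python
def nearest_label(training, features):
    """Predict by copying the label of the nearest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda ex: distance(ex[0], features))[1]

# (features, label) pairs; in `dirty`, the dog picture is mislabeled "horse".
clean = [((0.9, 0.1), "dog"),   ((0.1, 0.9), "horse")]
dirty = [((0.9, 0.1), "horse"), ((0.1, 0.9), "horse")]

query = (0.85, 0.15)  # clearly dog-like features
print(nearest_label(clean, query))  # clean data: correctly predicts "dog"
print(nearest_label(dirty, query))  # dirty data: repeats the labeling error
```

No amount of extra computation fixes this; only better data-recording practices do, which is exactly Nick’s point.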

Can the human element be replaced?

Mary-Ellen: Thank you. So I’m going to sneak in one more question here. And the question is the title of this podcast: “Can the human element be replaced?” Let’s start with you, Eran.

Eran: I don’t think the human factor can be replaced, definitely not in the foreseeable future. But I’ll definitely say that within the next three to five years, leveraging AI can help us achieve more autonomous testing, whether we’re talking about automation coverage and creation rates or about decision making. For example, if we have an application under test and we know what the changes are, an AI engine placed on top of it might make an autonomous decision about exactly which tests need to be executed, and if those tests are built with AI capabilities, then we’re getting into more autonomous testing. But think about it for a second: even autonomous cars, which are the closest analogy, still require a driver to be in the driving seat and take extra caution in case the machine eventually makes the wrong decision. So no, it won’t replace the human, but we can definitely think of it as the tester’s assistant.

Mary-Ellen: OK, and Nick, do you have any comments on this?

Nick: I would largely agree with Eran that these are enhancements to the human element, tools to be used. As smart as machines can be, they’re still not, as Eran put it, entirely autonomous when it comes to the parallel processing of real-world situations. The closest I’ve seen involves something called Bayesian inference, which is the quantification of uncertainty; that’s being leveraged in self-driving cars, just as Eran mentioned, but we’re a long way off even that being broadly applicable. So no, in a word, I don’t believe that humans can be replaced by machine learning or AI any time in our near future.
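For readers unfamiliar with the Bayesian inference Nick mentions, here is a minimal sketch of what "quantifying uncertainty" means, transplanted into a testing setting. The scenario and all probabilities are invented for illustration: a belief that the build is broken is revised by Bayes’ rule as each test failure is observed.

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """One step of Bayes' rule: return P(hypothesis | evidence)."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

belief = 0.10  # prior: 10% chance the build is genuinely broken
for _ in range(3):  # three independent test failures observed
    # A failure is likely (90%) if the build is broken,
    # rare (5%, from flakiness) otherwise.
    belief = update(belief, 0.9, 0.05)

print(round(belief, 3))  # -> 0.998: near-certainty, but never exactly 1
```

The point of the technique is that the machine outputs a degree of belief rather than a verdict, which is why a human still has to decide what to do when that belief is not close to 0 or 1.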

Wrap-up

Mary-Ellen: Well, thank you! I do agree that the analogy to autonomous cars is really the best way to understand what is happening in software testing today with regard to AI and ML. This wraps up our podcast; thank you, Eran and Nick, for joining me in this second podcast in our series on software testing.

To our listeners: you can go on SoundCloud and all major podcast apps to search for, listen to, and subscribe to Capgemini’s World Quality Report podcast with Micro Focus. We’ll be back soon with a new podcast on Intelligent Automation and how to create a fully automated testing process, from design to execution and reporting. In the meantime, please connect with us on LinkedIn and Twitter, visit Capgemini.com to download the World Quality Report, and visit microfocus.com to learn more about Micro Focus. Thank you!

_______________

Learn more

 

Feel free to join us on the community, and hop over to the ADM Idea Exchange to let us know what other features you’d like to see in the Micro Focus products to help you test smarter and achieve better outcomes!

About the Author
Malcolm is a researcher in the Application Delivery Management group at Micro Focus. You can find him on Twitter as @MalcolmIsaacs
The opinions expressed above are the personal opinions of the authors, not of Micro Focus. By using this site, you accept the Terms of Use and Rules of Participation. Certain versions of content ("Material") accessible here may contain branding from Hewlett-Packard Company (now HP Inc.) and Hewlett Packard Enterprise Company. As of September 1, 2017, the Material is now offered by Micro Focus, a separately owned and operated company. Any reference to the HP and Hewlett Packard Enterprise/HPE marks is historical in nature, and the HP and Hewlett Packard Enterprise/HPE marks are the property of their respective owners.