UFT One AI Expectations

We are considering piloting AI on an upcoming mobile application test project involving a customized/configured off-the-shelf solution running on iOS and Android. What sort of turnaround time is Micro Focus averaging when it comes to incorporating elements/objects that its AI solution doesn't recognize? When it doesn't recognize an object and one needs to resort to more classical approaches for interacting with such elements, I'm presuming the script is no longer platform-agnostic at that point?

1 Reply
Micro Focus Contributor

Re: UFT One AI Expectations

Updates to the AI model are currently tied to the release cadence (once every three months): each release ships a new model that supports new classes, updates existing ones, and improves accuracy.

If an element is not identified by the AI model, you need to fall back to traditional object identification, which does mean maintaining separate scripts per platform. You can also submit feedback directly from the tool for elements that are not identified.
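To illustrate the difference, here is a minimal sketch of an AI-based step versus a traditional fallback, assuming UFT One's AIUtil object and the Mobile add-in's object model (the device, app, and control names are hypothetical):

```vbscript
' AI-based identification: the same step can run on both iOS and Android,
' because the control is recognized visually rather than by native properties.
AIUtil.SetContext Device("MyDevice")
AIUtil("button", "Sign In").Click

' Traditional fallback for an element the AI model does not recognize:
' identification properties differ between platforms, so the object
' (here "btn_sign_in" vs. "signInButton", illustrative names) typically
' has to be maintained separately per OS in the object repository.
' Android repository entry:
Device("MyDevice").App("MyApp").MobileButton("btn_sign_in").Tap
' iOS repository entry:
' Device("MyDevice").App("MyApp").MobileButton("signInButton").Tap
```

In practice this usually means either duplicated scripts or conditional branching on the target OS for the non-AI steps, while the AI-identified steps stay shared.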

As an initial step, we recommend using the AI-awareness tool, which provides a concrete analysis of which elements in your application can be identified by AI.
