UFT One AI Expectations
We are considering piloting AI on an upcoming mobile application test project that involves a customized/configured off-the-shelf solution running on iOS and Android. What sort of turnaround time is Micro Focus averaging when it comes to incorporating elements/objects that its AI solution doesn't recognize? And when it doesn't recognize an object and one needs to fall back to more classical approaches to interacting with such elements, I'm presuming the script is no longer platform agnostic at that point?
Re: UFT One AI Expectations
For now, updates to the AI model follow the release cadence (once every three months): each release ships a new model that supports new classes, updates existing ones, and improves accuracy.
If an element is not identified by the AI model, you need to fall back to traditional object identification for that step, which means maintaining separate platform-specific scripts. There is also a way to submit feedback directly from the tool for elements that are not identified.
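The difference between the two approaches can be sketched in a UFT One test script. This is an illustrative example only: the app identifier, device name, and object descriptions are placeholders, and the exact object model depends on your UFT Mobile/Digital Lab setup.

```vbscript
' AI-based step: the same line is intended to work on both iOS and Android,
' because AIUtil identifies the control visually rather than by platform properties.
AIUtil.SetContext Device("AnyDevice").App("com.example.myapp")  ' placeholder app id
AIUtil("button", "Log In").Click

' Fallback step: traditional, technology-based identification.
' The identification properties below are Android-style (placeholders),
' so an equivalent iOS-specific step would have to be maintained separately.
Device("AnyDevice").App("MyApp").MobileButton("resourceid:=com.example:id/submit").Tap
```

The AI-based step stays platform agnostic; the fallback step is where the script forks per platform, which is why a mix of AI and traditional identification implies multiple scripts for those elements.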
As an initial step, we recommend running the AI awareness tool, which provides a concrete analysis of which elements in your application can be identified by AI.