UFT 12 - Insight Object

I haven't seen many here talk about the Insight feature in either 11.5 or 12 of UFT.  Is that mainly due to Insight not being reliable? 


Curious how people use it — and if you don't, why not take advantage of it?


Lastly, did version 12 add any improvements to the Insight tool?




  • Hi Johnwseitz,


    Hope you are doing well.


    These are some best practices to take into account when using Insight:
    About what's shown on the screen
    • Try to anchor the capture to borders, templates, shades, icons, or shapes of any kind rather than capturing plain text, as OCR is not yet supported by Insight. Refine your learned/recorded objects in the Object Repository to include a distinct border or shape that the human eye would identify at a glance.
    • Small-scale images, and/or images containing text, tend to be identified less reliably at run time.
    • Take special care, with respect to the above, when dealing with high-contrast user interfaces (bright white on black and vice versa).
    • Insight identification is highly susceptible to changes in zoom level (at the operating-system and/or browser display level). Tests will not pass if the original and current zoom configurations differ.
    • Menus may occasionally lose context. You can overcome this by automating the application under test through a VM console or an RDP connection.
    • Large “chunks” of UI are identified robustly only when the UI is static, and they work best when used with a reusable approach.
    • When automating a dynamic UI, it’s advised to use the Area Exclusion feature in conjunction with the similarity-cutoff property set to a lower or higher value, depending on the scenario.
    • Area Exclusion is generally recommended when several affected UI widgets fall within a medium- or large-sized image area.
    • The only exception to the above might be a small image that has a strongly bordered UI layout and was learned statically (automatic/manual mode); there it is mainly useful for deliberately ignoring textual content within it.
    Technology specific tips
    • As a rule of thumb, whenever using Insight on Web-rendered content that may vary slightly from one rendering engine to another, it’s advised to use an Insight object whose default similarity cutoff has been reduced by 10–15%, in conjunction with an “Index:=0” ordinal identifier, for enhanced robustness.
    • For Descriptive Programming, use only lossless image formats such as BMP, PNG, and TIF.
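    As an illustration, a Descriptive Programming call against an Insight object might look like the sketch below. The object hierarchy, image path, and the `ImgSrc` property name are assumptions for illustration only — verify the exact Insight property names against the UFT Object Identification reference for your version.

    ```vbscript
    ' Hypothetical sketch: operate an Insight object described inline from a
    ' lossless image file instead of an Object Repository entry.
    ' "ImgSrc" is assumed to be the Insight image property; path is a placeholder.
    Browser("MyApp").Page("MyPage").InsightObject("ImgSrc:=C:\Images\SubmitButton.png").Click
    ```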

    Things to take into account
    • Capturing and recording should be performed, and tests replayed, only on the primary monitor.
    • The algorithm is color-blind: a virtue in most situations, but a drawback if you wish to perform RGB sampling. You can overcome this limitation by using the DotNetFactory object with .NET code that samples a given point on the screen.
    • Use Indexing or Visual Relation Identifiers (VRI): they work as with any other test object, and we encourage you to use them as helpful tools.
    • Insight automation works only if the actual session/screen is on and visible (not even minimized), because it takes the input for comparison directly from the screen.
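    The DotNetFactory workaround for RGB sampling mentioned above can be sketched roughly as follows: capture the screen with UFT's Desktop utility object, then read a pixel with .NET's System.Drawing classes. The coordinates and temp-file path are placeholders.

    ```vbscript
    ' Hypothetical sketch: sample the RGB color of a screen point via DotNetFactory.
    ' Capture the full screen to a temporary bitmap with UFT's Desktop utility object.
    Desktop.CaptureBitmap "C:\Temp\screen.bmp", True

    ' Load the capture as a .NET Bitmap and read the pixel at placeholder coordinates.
    Set oBmp = DotNetFactory.CreateInstance("System.Drawing.Bitmap", "System.Drawing", "C:\Temp\screen.bmp")
    Set oColor = oBmp.GetPixel(100, 200)
    MsgBox "R=" & oColor.R & " G=" & oColor.G & " B=" & oColor.B
    oBmp.Dispose
    ```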

    Fine tune your script
    •  Use the Tools > Options settings for capture size (in pixels) and the maximum number of captured images to overcome performance issues while recording/replaying tests.
    •  Once you have finalized your script, you can remove any unnecessary images with Tools > Delete Insight snapshots (in the Object Repository menu). This saves space on the hard drive.
    •  To enumerate similar UI elements in the script, it’s advised to use a low similarity cutoff in conjunction with the Index ordinal-identifier property on a single UI-widget image.
    •  Use the VRI preview feature to get visibility of multiple identifications at design time. While doing so, make sure neither the UFT application nor any additional dialogs are in the background of the AUT you’re checking.
    •  You can use standalone Insight objects in the script (with no parent objects at all) when you want a free “whatever is on the screen” approach. It’s highly recommended to do so only when needed, to avoid overhead during run time.
    •  The algorithm is color-blind to some extent. To avoid color-related false-positive identifications, it’s advised to use a near-100% similarity cutoff as well as a specific Index ordinal-identifier value (not necessarily 0).
    •  To avoid identification failures caused by hue differences in the surrounding area, it’s considered good practice to set the similarity cutoff 5–10% lower than the default. 
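    For example, combining a lowered similarity cutoff with the Index ordinal identifier to pick one of several similar widgets might look like this sketch. The `similarity` and `Index` property names and their inline Descriptive Programming syntax are assumptions based on the tips above; verify them against your version's documentation.

    ```vbscript
    ' Hypothetical sketch: click the third of several visually similar widgets
    ' by lowering the similarity cutoff (assumed default ~80%) and setting the
    ' Index ordinal identifier (0-based, so 2 = third match).
    Browser("MyApp").Page("MyPage").InsightObject("ImgSrc:=C:\Images\RowIcon.png", _
        "similarity:=70", "Index:=2").Click
    ```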



    Please mark this answer as accepted solution or correct answer if this answered your question.


    Best regards,

  • The Insight object-recognition feature doesn't recognize objects outside the visible screen. For example, the footer of most pages is visible only after scrolling down; while recording, the footer image can be captured by scrolling down, but during playback the recognition fails because the tool tries to find the image on the visible screen without scrolling. Scrolling can be handled via scripting, but I just wanted to know whether there is already an option available to recognize images anywhere on the page.
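
  The scripting workaround mentioned above can be sketched as a scroll-and-retry loop, assuming a Web AUT. The object names, scroll step, and retry count are placeholders.

  ```vbscript
  ' Hypothetical sketch: scroll the page in steps until the Insight object
  ' (e.g. a footer image) becomes visible on screen, then click it.
  found = False
  For i = 1 To 10
      If Browser("MyApp").Page("MyPage").InsightObject("Footer").Exist(1) Then
          found = True
          Exit For
      End If
      ' Scroll down 400 pixels using the page's own JavaScript engine
      Browser("MyApp").Page("MyPage").RunScript "window.scrollBy(0, 400);"
  Next
  If found Then Browser("MyApp").Page("MyPage").InsightObject("Footer").Click
  ```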