The more test cases fail due to synchronization faults, the less confidence you will have in your test automation!
Synchronization is a critical issue with any test automation script. You may think that synchronization of test script actions is a built-in ability of today's functional testing tools. Reality shows that many unexpected test script failures are related to synchronization issues that generate false negative results. These false negatives make it hard to detect real application problems, as each test script failure may be caused by a test script synchronization issue. Synchronization errors are timing issues and therefore non-deterministic, depending heavily on the hardware and software used, the network, and their utilization. Synchronization faults often show up in the test environment and cannot be reproduced in the development environment. Weak synchronization is often the major reason why automation projects become unmanageable as the set of automated tests grows. Silk Test helps to build test scripts with strong synchronization, as it provides automatic synchronization even for the most demanding Ajax applications.
What is script synchronization?
Test scripts need to be synchronized in such a way that the script waits to drive the application until the AUT is ready to accept the next user input, and waits to verify the application state until the application has completed processing the previous user interaction.
Here are some examples where synchronization is needed:
- The creation of a window (or, more generally, a control) must be completed before it can receive messages or commands
- A page must be completely loaded before you can click on a link of the page
- A button must be activated before you can click on it
- A data grid must have loaded a row before you can verify that row
- A data grid must have loaded completely before you can verify the row count of the grid
- A tree must be expanded before you can select one of its children
- When selecting a tree node, the details pane of the node must be completely loaded before you can verify text in it
Completion of a classic page load is indicated by a browser event for Internet Explorer, by a corresponding event for Firefox, and programmatically by an event that is fired when processing is completed. For Ajax updates made through XMLHttpRequest, however, the browsers fire no such event.
This means that there is no easy way to decide when the application is ready to process the next UI action. Usually this is not a problem for humans, because we have multiple cognitive techniques to detect whether an application is ready to proceed. Humans are also not that fast when working with an application, compared to "computer programs" like test automation tools that drive the application. So many of the synchronization problems do not appear when a human is accessing the application. But asynchronous behavior, as seen in Ajax applications, is a real nightmare for a testing tool.
Common approaches to avoid synchronization problems
How do conventional test automation tools cope with the problems of synchronization? There are four common techniques conventional test automation tools use to avoid synchronization problems:
- Built-in waits: the test automation tool has built-in delays for certain interactions with the AUT (Application Under Test). These can be delays between keystrokes, delays for mouse moves, delays between mouse clicks, delays between script actions, and so on. Often these delays were introduced over the lifetime of the test automation tool to improve the "out-of-the-box experience", avoiding synchronization problems for common use cases when evaluating the product.
- Configurable waits: the test automation tool allows the user to configure delays for certain interactions with the AUT.
- Recorded waits: the test automation tool records the delays between user interactions while recording against the AUT and replays those delays during playback.
- Manual synchronization functions: the test automation tool offers various functions to wait for certain events in the AUT. Examples are wait functions for the appearance or disappearance of UI elements and their properties, or wait functions for certain events the AUT exposes.
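To make the last technique concrete, a manual synchronization function of this kind typically boils down to a polling loop. The following is a minimal, language-neutral sketch in Python, not Silk Test code; the `condition` callback, timeout, and polling interval are illustrative assumptions:

```python
import time

def wait_for(condition, timeout=10.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    `condition` stands in for any check against the AUT, e.g. "does this
    UI element exist?" or "has this property reached the expected value?".
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1f seconds" % timeout)
        time.sleep(interval)

# Usage: wait until a simulated data grid has finished loading its rows.
calls = {"n": 0}
def grid_has_rows():
    calls["n"] += 1
    return calls["n"] >= 3  # becomes true on the third poll

assert wait_for(grid_has_rows, timeout=5.0, interval=0.01)
```

The key design point, in contrast to fixed waits, is that the loop returns as soon as the condition holds and only fails after an explicit timeout, so fast runs are not penalized and slow runs are given time to catch up.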
All these techniques are error-prone and in the end lead to unmanageable test automation efforts.
So what is the problem with the first three techniques, the wait-based approaches?
You never know how long you should wait! The time span for waiting until an application can proceed with the next action can vary significantly, even for the same action! The duration depends heavily on the speed of the hardware and software the AUT runs on, the network speed, and the utilization of the machine, the network, and the AUT itself. Additional variation in the timing behavior of an application is introduced when the AUT runs in a virtualized environment. So wait times for synchronization are completely non-deterministic! To illustrate the problem, consider the following example:
Assume your automation project contains 2500 test cases with 20 actions per test case. On average each action has a 0.3-second response time (the minimal time you need to wait before processing the next action). To ensure that timing conditions are met, we specify a three-second wait time per action.
Due to the non-deterministic behavior of response times in our virtualized test environment, on average 1% of the response times are longer than three seconds. This results in ~500 synchronization faults, causing ~500 test cases to fail. This means that 20% of the test cases fail due to synchronization faults! Even worse, with each run different test cases will fail, because response time is non-deterministic. In terms of execution time we are adding a 41-hour burden (3 seconds * 50,000 actions). The raw execution time would have been only 4.2 hours (0.3 seconds * 50,000 actions).
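The arithmetic behind this example can be checked directly. A small Python calculation, using only the figures from the text, confirms the numbers; note that the text's "~500 failing test cases" approximates the slightly lower expected value one gets when several faults can land in the same test case:

```python
test_cases = 2500
actions_per_test = 20
actions = test_cases * actions_per_test      # 50,000 actions in total
avg_response = 0.3                           # seconds of real response time per action
fixed_wait = 3.0                             # chosen fixed wait per action
p_slow = 0.01                                # share of responses slower than 3 seconds

faults = actions * p_slow                    # expected synchronization faults
assert faults == 500

# A test case fails if any of its 20 actions exceeds the fixed wait.
p_test_fails = 1 - (1 - p_slow) ** actions_per_test
failing_tests = test_cases * p_test_fails    # ~455 expected failures, i.e. roughly 20%

wait_hours = fixed_wait * actions / 3600     # ~41.7 hours spent in fixed waits
raw_hours = avg_response * actions / 3600    # ~4.2 hours of actual response time
```

So the fixed-wait strategy multiplies execution time roughly tenfold and still leaves about one in five test cases failing for reasons unrelated to the application under test.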
How can we fix the problem? – By increasing wait times...
Waits in automation scripts, or waits in the test automation tool itself, are the ultimate poison for every automation project. Waits may sound adequate to fix some timing issues where synchronization is difficult, but they are not!
What‘s the problem with manual synchronization?
First of all, it's manual! It is a manual effort to augment your test scripts with synchronization points. It also pollutes the context of your test scripts: synchronization functions have nothing to do with the test logic of a test. Conventional test automation tools provide some out-of-the-box synchronization (e.g. for plain HTML pages most test automation tools do not need manual synchronization functions). Even worse, you need to decide for which actions you need manual synchronization. This is a tough task, as it is not obvious which UI interactions are handled through Ajax requests that require synchronization and which are handled through normal HTML/HTTP processing. Many synchronization points you will only find through trial and error, after your test automation runs have failed. Another drawback of manual synchronization is that you need to maintain it, and the synchronization functions are likely to be more brittle and more work to maintain than the test script actions themselves!
What is the Silk Test approach to synchronization?
Synchronization is done automatically: Silk Test itself knows when it can proceed with the next action and waits until the application is ready to proceed. There is no need to manually insert synchronization statements in your script, which is a complicated and tedious task, especially for Ajax applications. No other test automation tool currently provides similar capabilities. Let's take a look at how the synchronization in Silk Test compares to the classical synchronization for Web-based applications offered by other test automation tools:
Figure 1: Ajax Processing Flow
There is a simple way to prove the automatic synchronization Silk Test provides and to see the advantages of automatic script synchronization compared to manual synchronization. We will use the grid sample page of ExtJS, a very popular Ajax toolkit, for demonstration purposes:
Figure 2: ExtJS Paging Grid Sample
When XMLHttpRequest is used for data retrieval, IE and Firefox do not fire an event that indicates that the browser content has changed. As mentioned, there is no easy way for a test automation tool to decide when the data reloading is complete. We recorded a simple test case with the Silk Test Recorder that brings up the ExtJS paging grid, pages through the first five pages of the grid, and finally verifies that the page number equals 5 (see figure 3).
Figure 3: Silk Test Script with Automatic Synchronization
When you run the script you will notice that Silk Test, after selecting the "Next Page" button, always waits until the new data is completely loaded into the data grid. To demonstrate the effect of Ajax synchronization, we can turn off automatic Ajax synchronization (see figure 4).
Figure 4: Script Option - Turn Off Ajax Synchronization Mode
When you now run the script you will get a verification error. The script execution did not wait long enough before pressing the "Next Page" button and therefore pressed it before the application was ready for the next command. In this case the application simply ignored the command and did not page to the next page (see figure 5).
Figure 5: Verification Error Verifying the Actual Page Number
Without Ajax synchronization you need to synchronize your script manually. To create a manual synchronization point in your script, you have to find an appropriate event that indicates that loading of a page is completed. A simple method for synchronizing the paging through a data grid is to wait for new data to appear in the data grid. Unfortunately this is a bad synchronization strategy, because it depends on the content of the data to work correctly: when the content of the data grid changes, the synchronization may no longer work. So we are looking for a more "generic" event that helps us synchronize the paging. When paging through the grid we see that a "Loading…" message is displayed while the new page is being loaded. The message disappears when page loading is completed (see figure 6). Waiting until the "Loading…" message disappears seems to be an appropriate synchronization strategy.
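Conceptually, a disappearance wait of the kind WaitOnDisappearance performs is just a polling loop that runs until the element can no longer be found. The following Python sketch is an illustration only, not Silk Test code; the `element_exists` callback is a hypothetical stand-in for a locator-based lookup of the "Loading…" message:

```python
import time

def wait_for_disappearance(element_exists, timeout=30.0, interval=0.1):
    """Poll until `element_exists()` returns False, i.e. the element is gone.

    `element_exists` is a hypothetical stand-in for a locator-based check
    against the AUT, such as looking up the "Loading..." message; it is
    not a real Silk Test API.
    """
    deadline = time.monotonic() + timeout
    while element_exists():
        if time.monotonic() >= deadline:
            raise TimeoutError("element still present after %.1f seconds" % timeout)
        time.sleep(interval)

# Simulated "Loading..." message that disappears after a few polls.
polls = {"n": 0}
def loading_message_exists():
    polls["n"] += 1
    return polls["n"] < 4   # present for the first three polls, then gone

wait_for_disappearance(loading_message_exists, timeout=5.0, interval=0.01)
```

This mirrors the strategy described above: the script proceeds as soon as the "Loading…" indicator is gone, and fails with a timeout if the page never finishes loading.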
Figure 6: Loading… Message Used for Manual Synchronization
By using the synchronization method WaitOnDisappearance with the locator of the "Loading…" message we are able to synchronize the script. But there is another hurdle to overcome. Because the message only stays on the screen for a few seconds before it disappears, it is tricky to capture its locator: you have to capture the loading message with the locator spy at exactly the moment it is displayed in the browser. You can then manually insert WaitOnDisappearance statements into your test script (see figure 7).
Figure 7: Silk Test Script with Manual Synchronization
I think you get an idea of the tedious work needed to manually synchronize a test script. None of this is needed when your test automation tool offers automatic synchronization, as Silk Test does!