State-Driven Testing: An Innovative Approach to UI Test Automation
In an IT world of constant change, with shorter release cycles, iterative development processes, and increasingly complex enterprise applications, testing is a business-critical step in the Software Development Lifecycle (SDLC). Automation of testing processes is a must-have for organizations that plan to remain competitive, as these changes will continue to accelerate and further influence software development.
Building a successful test automation practice is a challenging task for many development organizations. Mature development organizations realize that simple UI test automation techniques like record/playback do not provide the ROI required for modern development projects.
Keyword-driven Testing (KDT) is a widely accepted test automation technique that many mature development organizations rely on to overcome the disadvantages of simple record/playback test automation. However, beyond the advantages that KDT frameworks deliver, KDT has major inherent disadvantages in manageability and complexity. Many applications require thousands of automation keywords to make use of KDT, and navigating and constructing test cases based on these keywords can become cumbersome and impractical.
Acceptance Testing Frameworks (ATF) such as FitNesse use a similar approach to structuring test cases (acceptance tests) using keywords that are implemented using coded fixtures. These frameworks do not support users in navigating all available actions (keywords). As with KDT frameworks, ATFs do not support developers in structuring fixtures (the implementation of the actions) so that they can be easily reused and maintained.
State-driven Testing (SDT) addresses the maintenance and complexity issues of KDT by providing a UI-state transition model. By defining state transitions of the user interface, the set of allowable UI actions (keywords) at any given point in a test case is reduced from thousands to a manageable list of tens. SDT uses a domain-specific language (DSL) to define the test automation framework, which provides a highly maintainable and simple approach to structuring a test framework.
Manual Testing is Not an Option
The effort required for manual testing is too high to accommodate repeating test cases for existing code (manual regression testing) with each test cycle. Unit testing alone cannot ensure the quality of an application because it does not address application functionality, integration, or system dependencies. For these reasons, automation of functional (UI) testing processes is a necessity.
Test automation benefits (compared to manual testing):
Faster execution of tests with fewer human resources (cheaper)
Execute more often (finding problems earlier)
Execute with more data and on more configurations (increased coverage, reduced risk)
Consistent and repeatable test results (trustworthy results, reduced risk)
However, building a successful test automation practice is challenging for many development organizations. There are numerous reasons why many businesses still struggle with test automation projects:
Lack of required skills (test automation requires technical staff)
Maintenance effort (test automation is sensitive to application change and volatility)
Inconsistency of execution (test automation tools can deliver inconsistent results due to unresolved synchronization points between test scripts and the application under test, or AUT)
Lack of relevance/test documentation (without associating tests with specific requirements and/or code and without clearly documented test cases, it is not clear what actually has been tested)
Return on investment gained by adopting test automation is the balance between:
The cost and time required to develop automated scripts
The cost of maintaining scripts
The benefit of running scripts with a computer vs. a human
The benefits of being able to run scripts continuously and consistently
Evolution of Software Testing Techniques
Numerous software testing techniques have evolved over time, each providing different benefits and suitability for the varying maturity levels of development organizations:
Structured/modular (conditional scripting, sub-routines, functions)
Data-driven (parameters, variables)
Keyword-driven (abstraction based on actions)
Hybrid (combination of data-driven and keyword-driven models)
Mature development organizations realize that simple UI test automation techniques such as record/playback and structured/modular do not provide the return on investment (ROI) demanded by successful development projects. These organizations are pursuing keyword-driven and hybrid approaches that address the challenges of automated testing.
KDT is a software testing technique that separates much of the programming work of test automation from the actual test design.
Generally speaking, KDT uses keywords to define the actions that non-technical personnel can use to create tests for an application. Keywords have parameters that provide input data for each action. A keyword-driven test consists of a list of actions in a tabular format. KDT distinguishes between low-level keywords and high-level keywords.
Low-level keywords describe generalized UI operations such as "Click", "Enter Text", or "Select" on UI controls such as buttons, text fields, links, and list boxes (see Figure 1).
Figure 1: Low-level keywords
High-level keywords (sometimes referred to as business templates) combine low-level actions into meaningful units, for example, combining all actions required for application login into a single high-level keyword, such as Login (see Figures 2 and 3).
Figure 2: High-level keywords
The high-level keyword Login consists of the following low-level actions:
Figure 3: A sequence of low-level keywords forming a high-level keyword
To automate the execution of a keyword-driven test, the actual keywords must be implemented in a test automation tool. The implementation task uses the keyword definitions as the interface between the logical test action and the actual implementation in the test automation tool.
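As a minimal sketch of this idea, the keyword definitions can act as a lookup table between the logical test action and its implementation in the automation tool. All names and the log-string output below are hypothetical illustrations, not part of any specific product:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Minimal sketch of a KDT runner: keyword names map to implementations.
// The real implementation would drive UI controls via an automation tool;
// here each keyword simply returns a log string so the sketch is self-contained.
public class KeywordRunner {
    private final Map<String, Function<List<String>, String>> keywords = Map.of(
        "Enter Text", args -> "typing '" + args.get(1) + "' into " + args.get(0),
        "Click",      args -> "clicking " + args.get(0)
    );

    public String run(String keyword, List<String> args) {
        Function<List<String>, String> impl = keywords.get(keyword);
        if (impl == null) throw new IllegalArgumentException("Unknown keyword: " + keyword);
        return impl.apply(args);
    }

    public static void main(String[] args) {
        KeywordRunner runner = new KeywordRunner();
        // A keyword-driven test is a tabular list of actions:
        System.out.println(runner.run("Enter Text", List.of("User", "james")));
        System.out.println(runner.run("Enter Text", List.of("Password", "secret")));
        System.out.println(runner.run("Click", List.of("Button(Ok)")));
    }
}
```

The keyword table itself stays declarative; only the values in the map touch the automation tool, which is what keeps test design and implementation separable.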
Advantages of Keyword-Driven Testing
Tests can be developed earlier, before implementation of the keywords.
Tests can be developed without programming knowledge by non-technical subject-matter experts (business analysts).
Separation of test design and test implementation allows for better division of labor and collaboration between technical staff (test engineers implementing keywords) and non-technical staff (subject matter experts designing test cases).
Lower maintenance compared to record/playback techniques: When using high-level keywords that are re-used, changes only need to be made in one location (the keyword) rather than in multiple locations (references of the keyword).
Usage for manual and automated testing: As keyword-driven tests are self-documenting, they can be used as templates for manual test execution.
Independence from UI driver: Keyword-driven testing frameworks can provide a common interface for test design across varying test-automation tool vendors (true for automation tools that provide an open interface, such as a Java or .NET based API).
Disadvantages of Keyword-Driven Testing
Additional effort required for initial setup of test design/implementation code.
Lack of standards. Many smart testers develop customized KDT frameworks based on specific test automation tools and do not document the frameworks properly. KDT frameworks can become unusable after their original developer leaves the company.
Low-level keywords are too generic. Tests based on low-level keywords resemble record/playback-based automation scripts and carry the same disadvantages.
Low-level keywords do not provide context. For example, Button(Ok), Click, One left click does not provide detail regarding the context in which the controls are used (Login dialog box or Save dialog box). Because low-level keywords do not provide context, they are not well suited to document test cases. Test cases based on low-level keywords can only be understood by reading through the entire sequence of the test case and having the application accessible to provide the context.
Inflation in the number of high-level keywords. It is often not easy to tell if a high-level keyword already exists or if a new one needs to be introduced. How do you deal with small variations between similar high-level keywords? It is also not easy to determine in which context each high-level keyword makes sense. For example, it does not make sense to use the high-level keyword Login once you have logged into the application, but how can you know for sure?
Applications can easily require thousands of high-level keywords. At some point, navigating and constructing test cases based on these keywords becomes cumbersome and impractical. To illustrate, see the following statement taken from a test automation blog:
“While keyword-driven sounds wonderful, it is not a magical methodology that will solve all automation problems and cure world hunger. I worked on a keyword-driven project while I was an employee of a big corporation. We had an elaborate in-house tool that could compose the keywords into larger blocks of actions, which were also reusable in tests. The project was a failure. The library of keywords became so huge that no one could figure out which keyword should be used in which context.”
KDT provides a simple model for the abstraction of software applications for testing purposes (low-level keywords are comparable to generic commands and high-level keywords are comparable to functions or sub-routines). Providing layering, abstraction, and re-use through a simple command and sub-routine concept is a software engineering technique introduced in the 1960s.
Acceptance Testing Frameworks
The increasing adoption of agile development methodologies has made the Acceptance Testing Framework (ATF) approach popular. The basic idea of acceptance testing frameworks is to describe a requirement (user story) through a test in tabular format. Acceptance testing frameworks like FitNesse structure test cases (acceptance tests) using keywords that are implemented through coded fixtures. These frameworks do not provide support for navigating the available keywords that can be applied in the actual context of the test script, leading to duplication and inflation of keywords. As with KDT frameworks, ATFs do not provide support for developers to structure fixtures (the implementation of actions) in a way that makes them easily reusable and maintainable.
State-Driven Testing: The Next Generation of Test Automation
State-Driven Testing (SDT) takes KDT to its next stage of evolution. It retains the ease of use of KDT, enabling non-technical subject matter experts to write tests, while eliminating the complexities and maintenance issues inherent in KDT. SDT provides an extended abstraction model for the application under test (AUT) by adding an object model and a behavioral model to describe the AUT.
State-driven Testing (SDT) = Keyword Driven Testing + (UI) Object Model + (UI) Behavioral Model
The object model refines keywords by associating them to test objects.
Test objects represent UI objects in an AUT. For example, test objects can represent dialog boxes, menu structures, toolbars, Web pages, windows, panes, and tabs. A test object defines the actions (test methods) that can be applied to the represented UI object. Actions (test methods) represent basic actions on UI objects, such as entering text in one or more text fields on a dialog box, selecting a node in a menu tree, or clicking a link on a Web page.
SDT actions (test methods) are typically not generic keywords like the low-level keywords in KDT; they are specific keywords that represent the actions available on the represented UI object (for example, selectLogout compared to Click Button “Logout”).
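The contrast can be sketched in a few lines of Java. The names (MainWindow, selectLogout, selectA) follow the sample application described later in this paper; the log strings are our own illustration:

```java
// Sketch: a generic low-level keyword vs. a specific test-object method.
public class TestObjectSketch {
    // KDT style: one generic keyword applied to any control by name.
    public static String lowLevel(String keyword, String control) {
        return keyword + " " + control;   // e.g. "Click Button(Logout)"
    }

    // SDT style: the Main test object exposes only the actions that
    // actually exist on the Main window, so a test design editor can
    // list exactly these instead of every keyword in the framework.
    public interface MainWindow {
        String selectLogout();
        String selectA();
    }

    public static final MainWindow MAIN = new MainWindow() {
        public String selectLogout() { return lowLevel("Click", "Button(Logout)"); }
        public String selectA()      { return lowLevel("Click", "Button(A)"); }
    };

    public static void main(String[] args) {
        // The specific method has the same effect as the generic keyword,
        // but its name carries the context of the owning UI object.
        System.out.println(MAIN.selectLogout());
    }
}
```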
The behavioral model is used to describe UI state transitions of an application.
State transitions define which UI objects are accessible at a specific point in a test case. Each action (test method) can specify which test objects will be accessible after calling the action (test method). By defining the state transitions of UI objects, the set of accessible actions (keywords) at a specific point in a test case is reduced to the minimum (from thousands of options down to tens of options). In SDT the behavioral model is expressed through state transition of the UI of the AUT.
Simple SDT Sample
Figure 4 shows a simple application with its basic state transitions. The sample application can be expressed with the following test objects representing UI objects of the application:
Test Objects of the Sample Application
A SampleApplication test object represents the application before it’s actually started. The SampleApplication test object provides a simple start method. Calling the start method brings up the application’s Login dialog box. SampleApplication is a special test object called Start Object, because it is the only test object available in the initial state of the application.
A Login test object provides the methods setUsernameAndPassword for entering text into the User and Password text fields, selectOk for clicking the Ok button, and selectCancel for clicking the Cancel button.
A Main test object provides the methods selectLogout for clicking the Logout button, selectA for clicking the A button, selectB for clicking the B button, and so on.
Figure 4: Basic UI State Transitions
State Transitions of the Sample Application
Initially the application is in the Not Started state. So the first and only thing that is allowed in this state is to start the application. Through a start action the application changes its state by opening the Login dialog box. In this application state you can enter your user name and password in the Login dialog box and click OK or Cancel. Entering user name and password does not change the state (only actions within the Login dialog box are allowable and accessible at this point). Clicking the OK button will change the application state, close the Login dialog box, and open the Main window of the application. At this point only actions on the Main window are accessible. Clicking the Logout button closes the Main window and reopens the Login dialog box. Again the state of the application changed and only the actions of the Login dialog are accessible. Clicking the Cancel button on the Login dialog box closes the application and changes the state to Not Started.
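The transitions described above can be captured in two small tables: which test objects are accessible in each state, and which actions change the state. The table structure below is our own illustration of the behavioral model, not the SDT DSL itself:

```java
import java.util.Map;
import java.util.Set;

// Sketch of the sample application's behavioral model as plain data.
public class SampleStates {
    // state -> test objects accessible in that state
    static final Map<String, Set<String>> ACCESSIBLE = Map.of(
        "NotStarted", Set.of("SampleApplication"),
        "Login",      Set.of("Login"),
        "Main",       Set.of("Main")
    );

    // action -> resulting state (only actions that change the state)
    static final Map<String, String> TRANSITIONS = Map.of(
        "SampleApplication.start", "Login",
        "Login.selectOk",          "Main",
        "Login.selectCancel",      "NotStarted",
        "Main.selectLogout",       "Login"
    );

    // Apply an action; actions without a transition keep the current state.
    public static String apply(String state, String action) {
        String testObject = action.substring(0, action.indexOf('.'));
        if (!ACCESSIBLE.get(state).contains(testObject))
            throw new IllegalStateException(action + " not accessible in state " + state);
        return TRANSITIONS.getOrDefault(action, state);
    }

    public static void main(String[] args) {
        String s = "NotStarted";
        s = apply(s, "SampleApplication.start");        // -> Login
        s = apply(s, "Login.setUsernameAndPassword");   // state unchanged
        s = apply(s, "Login.selectOk");                 // -> Main
        s = apply(s, "Main.selectLogout");              // -> Login
        s = apply(s, "Login.selectCancel");             // -> NotStarted
        System.out.println("final state: " + s);
    }
}
```

This is exactly the information a test design editor needs to reduce the choice of next actions at each step from every keyword in the framework down to the handful that are accessible in the current state.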
SDT Test Cases
When writing a SDT test case, a test design editor provides information about the state of the application expressed through a list of available test objects for the next step in the test case based on state transitions defined in the behavioral model of the application:
Application state for Step 1: SampleApplication (the only test object you can use at the beginning is SampleApplication)
Step 1: SampleApplication.start
Application state for Step 2: Login
Step 2: Login.setUsernameAndPassword “james”, “secret”
Application state for Step 3: Login
Step 3: Login.selectOk
Application state for Step 4: Main
Step 4: Main.selectLogout
Application state for Step 5: Login
Step 5: Login.selectCancel
Application state after Step 5: SampleApplication
The SDT test design editor provides a simple UI to write SDT test cases. The editor calculates the application state (list of accessible test objects) and only displays test objects that are accessible at the selected position in the test case. Figure 5 shows the SDT test design editor displaying the above sample test case.
Figure 5: SDT Editor View
Defining the SDT Model (SDT Test Framework) for the Application
For the definition of the SDT model of the application, SDT uses a simple domain-specific language (DSL). The DSL defines both the object model (test objects and test methods) and the behavioral model (state transitions). The object model uses a simple notation for defining test objects, their associated actions (test methods), and, if needed, parameters for the test methods. The object model for the above sample application is shown in Figure 6.
Figure 6: SDT Object model defined in a DSL
The behavioral model is defined as part of the object model, specifying state transitions for every action (test method) that changes the application state. A state transition defines which test objects are accessible in the next step of a test case. Figure 7 shows the state transitions added to the object model for the above sample application.
Figure 7: SDT behavioral (state transition) model as part of the object model using a DSL
Through the start action of the start object SampleApplication, the application changes its state by opening the Login dialog. This state transition is expressed through StateTransition NewAppState(Login) in the DSL. At that point only test methods/actions of the test object Login can be used in the next step of the test case. SDT provides multiple state transition methods (for example, NewAppState, RestoreAppState, and SetAppState) that are applicable for different behavioral patterns of the application.
Modal dialogs represent a common behavioral pattern for applications. Opening a modal dialog disables the functionality of the parent window and restricts interaction to the modal dialog itself until the dialog is closed. Once the modal dialog is closed, the parent window is reactivated. This can easily be expressed through the state transition NewAppState (Modal Dialog) when opening the dialog and RestoreAppState, which restores the application state to the state it had before the modal dialog was opened. Other common patterns, such as the page pattern used by Web applications, are also easily expressible using the provided state transitions.
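Using the StateTransition NewAppState(Login) notation quoted above, the modal-dialog pattern might be sketched as follows. The test objects and actions here (MainWindow, OptionsDialog, selectOptions) are hypothetical, and the exact DSL syntax is only illustrative:

```
MainWindow
    selectOptions    StateTransition NewAppState(OptionsDialog)   // parent window disabled

OptionsDialog
    selectOk         StateTransition RestoreAppState              // back to previous state
    selectCancel     StateTransition RestoreAppState
```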
Describing all possible state transitions in detail and how they can be applied to behavioral patterns of applications is beyond the scope of this paper.
Linking the SDT Model to an Implementation in a Test Automation Tool (Silk4J)
Test design requires only the availability of the SDT model, not the implementation of the model. Therefore, test cases can be designed before the interfaces are implemented.
SDT tightly couples the definition of the test framework (SDT model) with the implementation of the framework. SDT automatically creates Java interfaces for each test object in the model. The test automation engineer only needs to implement, using Silk4J, the test methods specified in the generated Java interfaces. They do not need to be concerned with the overall structure of the framework, as it is defined by the SDT model and its DSL. Thus each new test framework defined with SDT looks familiar to testers who have worked with other SDT frameworks in the past. To implement the framework in Java using Silk4J, only minimal Java knowledge is required, as the implementation structure is generated automatically from the SDT model.
Figure 8: Generating the framework interfaces
Figure 9: Implementation of the SDT framework
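A rough sketch of the division of labor might look like this. The interface mirrors the Login test object of the sample application; the actual Silk4J calls are represented by placeholder comments (the concrete API calls depend on the AUT), and an in-memory log stands in for the real UI interaction so the sketch is self-contained:

```java
// Hypothetical sketch: SDT generates one Java interface per test object;
// the automation engineer only fills in the method bodies.
public class FrameworkSketch {

    // Generated from the SDT model -- the engineer never writes this part.
    public interface Login {
        void setUsernameAndPassword(String username, String password);
        void selectOk();
    }

    // Hand-written implementation of the generated interface.
    public static class LoginImpl implements Login {
        public final StringBuilder log = new StringBuilder();

        public void setUsernameAndPassword(String username, String password) {
            // Silk4J would locate the User and Password text fields and type into them
            log.append("typed ").append(username).append("/*****; ");
        }

        public void selectOk() {
            // Silk4J would locate the Ok button and click it
            log.append("clicked Ok");
        }
    }

    public static void main(String[] args) {
        LoginImpl login = new LoginImpl();
        login.setUsernameAndPassword("james", "secret");
        login.selectOk();
        System.out.println(login.log);
    }
}
```

Because the interface is generated, the overall framework structure is fixed by the model; the engineer's work is confined to the method bodies.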
SDT Process and Personas
SDT distinguishes between three main activities, separating test design and test implementation and allowing different stakeholders (non-technical subject matter experts and technical automation engineers or developers) to participate and collaborate in the test automation process. These activities are:
Modeling the application under test (AUT): defining the object model of the AUT in terms of test objects and test methods/actions, based on the UI objects and the actions that can be taken on them, using a DSL (comparable to creating keywords in the KDT approach), and creating the behavioral model of the AUT in terms of state transitions for actions that change the state of the AUT. Test analysts, design-savvy subject matter experts, or even UI designers/developers can create the DSL-based model of the AUT without any programming knowledge.
Implementing how the test methods/actions defined in the object model interact with the specific controls in the AUT using Silk4J. Test automation engineers or developers with basic Java knowledge can implement the framework classes.
Designing test cases based on the model of the AUT using a test design editor. By utilizing the state transition model of the AUT, the SDT test design editor limits the set of accessible test objects and actions (keywords) at a specific point in a test case to a few possible keywords instead of all available keywords. This makes it possible to easily construct test cases based on large frameworks that incorporate thousands of test methods/actions. The test design editor allows non-technical subject matter experts to easily create test cases in a tabular format. Tests are stored in a Java JUnit format and therefore can be easily integrated into continuous integration systems or test management systems like SilkCentral Test Manager.
Figure 10: SDT components and interactions
Advantages of State-Driven Testing
A state-driven testing approach to test automation frameworks offers several advantages over traditional approaches:
The separation of test design (Figure 10: 1 and 3) and test implementation (Figure 10: 2) allows easy and efficient collaboration in a cross-functional team consisting of subject matter experts, developers, and testers.
Separating test design and implementation also supports test-driven development approaches where tests are developed before the functionality (specifically the UI) of the tested application has been developed.
Because of the self-documenting nature of SDT-based tests, tests can also be used for manual and semi-automated (when only parts of the underlying test framework are implemented) test execution.
Self-documenting, clearly readable test cases also lead to up-to-date test case documentation that can be understood by all stakeholders, and they clearly document what has been automated.
Due to the DSL-based pre-defined structure of the test framework, SDT provides a consistent approach for test framework design and implementation. This leads to standardized test framework structures and reduces the risk and effort of maintaining these frameworks, especially compared to home-grown approaches, which depend strongly on the capabilities and availability of the framework's original developer.
The DSL-based structure of the test framework and the automatic creation of the implementation interfaces reduce the complexity of defining and implementing the test framework and let the team focus on testing rather than building a re-usable test framework. SDT's DSL provides a simple means of defining the framework and does not require programming skills. Automatically generated Java interfaces shield the test automation engineer from the complexities of the implementation language (Java using Silk4J).
As the SDT model is based on an object-oriented paradigm, it fosters implementation reuse and dramatically reduces maintenance costs.
SDT is adaptable to different software development processes, including Agile, test-driven development, V-Model, and pure waterfall. It can be used to automate existing applications as well as applications that are in development or only in the design stage. SDT can be used for automated user-acceptance tests as well as integration and system tests.
SDT provides a simple tabular editor for test case design and test case execution that can be used by non-technical subject matter experts and technical staff alike to write test cases. Minimal training is required to assemble test cases; only knowledge of the AUT is needed.
Although SDT may not be the “holy grail” of test automation, it does provide answers to many test automation challenges that remain unanswered by other approaches. It is an approach that suits organizations that want to mature their test automation practices without the pain and effort of developing their own practice and without the limitations inherent in relying on less capable commercial and open source tools.