
What are the suggested standards to put in place when considering automated testing?

Developing Standards

Capitalization:

  1. Data types are always all uppercase. Examples: STRING, INTEGER.
  2. Underscores are used in data type names that consist of multiple words.
    Examples: MY_ENUM, MY_RECORD.
    

  3. Hungarian notation is used for all variable names: each variable name begins with a lowercase prefix derived from its type.
    Examples: sUserName, iLoopCounter, lsProperties.
    

  4. Method and function names begin with an uppercase letter, and each significant word is capitalized (see the sketch after this list).
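
The following is a minimal sketch illustrating these capitalization conventions; the type, function, and variable names are invented for this example.

    type MY_RECORD is record
        STRING  sUserName      // s = STRING
        INTEGER iLoginCount    // i = INTEGER

    // function names begin with an uppercase letter; each significant word is capitalized
    STRING GetUserName (INTEGER iUserId)
        STRING sUserName
        sUserName = "user{iUserId}"
        return sUserName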


Naming Conventions:

  1. Use the const SCRIPT_DIR to reference the path to 4Test scripts. If multiple directories are used, make these references relative to SCRIPT_DIR.
  2. The method used to invoke a window is called "Invoke".
  3. The method used to close a window is called "Close".
  4. The method used to accept a dialog is called "Accept" (see the sketch after this list).
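
As a sketch of how these method names might be applied, consider the window declaration below; the window name, tag, and menu path are invented for this example, and a "Close" method would follow the same pattern.

    window DialogBox FindDialog
        tag "Find"
        parent MyApp

        VOID Invoke ()
            MyApp.SearchMenu.Find.Pick ()    // menu path assumed for illustration
            return

        VOID Accept ()
            this.OK.Click ()
            return

        PushButton OK
            tag "OK"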


Coding Conventions:

  1. Functions and methods that do not return a value should use the return type VOID.
  2. All methods and functions should end with a return statement, even if the return type is VOID.
  3. All while loops that wait for a GUI event must include a second expression (a counter) to prevent infinite loops.

    Example:

    iLoops = 1
    while (!MyWindow.Exists () && iLoops < 10)    // the limit of 10 iterations is illustrative
        sleep (1)
        iLoops = iLoops + 1
    

  4. All switch statements have a default case, which may raise an error if no case is matched (see the sketch after this list).
  5. Place reusable functions in a file called myapp_funcs.inc.
  6. Pathnames are never hard-coded. Instead, use a const in a general.inc file that is assigned either explicitly or via an environment variable.

    Examples:

    const SCRIPT_DIR = "{HOST_GetEnv ("SCRIPT_DIR")}"
    
    const DATA_DIR = "{SCRIPT_DIR}\data"
    
    

  7. All "included files" are listed in a file called usefiles.inc. The main frame file for the application under test includes the statement use "usefiles.inc".
  8. Include a single space between a method name and its argument list. There is no space between the parentheses and the first and last arguments.

    Example: VOID Invoke (STRING sPath)
    

  9. The optional message parameter in the Verify statement is always included to better explain the error condition being generated.
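
As a sketch of the default-case rule in item 4, the switch below raises a user-defined error when no case matches; the variable and the VerifyOpenOrder/VerifyClosedOrder functions are invented for this example.

    switch (sOrderStatus)
        case "Open"
            VerifyOpenOrder ()
        case "Closed"
            VerifyClosedOrder ()
        default
            raise 1, "Unexpected order status: {sOrderStatus}"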


Window Declaration Standards:

  1. Use Multi-tags only where necessary.
  2. Objects are named as they are rendered in the application under test unless the name is ambiguous or long. In those cases a clear, concise name can be substituted. Cute names are anathema.
  3. Move declarations for controls that are not expected to be used to the bottom of the parent's window declaration, e.g. StaticText.
  4. Member variables are placed at the top of a window declaration, after the tag and parent statements. Methods are placed next, followed by declarations for child windows (see the sketch after this list).
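
A skeleton declaration following this ordering is sketched below; the window, control names, and tags are invented for this example.

    window DialogBox OptionsDialog
        tag "Options"
        parent MyApp

        // member variables first, after the tag and parent statements
        BOOLEAN bSettingsChanged

        // methods next
        VOID Invoke ()
            MyApp.ToolsMenu.Options.Pick ()
            return

        // child windows last; controls not expected to be used, such as StaticText, at the bottom
        PushButton OK
            tag "OK"
        StaticText GeneralLabel
            tag "General"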


Machine Independence

Tests should not be designed to run on a specific machine. A common mistake is made when tests assume a constant directory structure and refer to components using an absolute path. If tests are moved to another machine, errors are generated because files cannot be found. Refer to the standard regarding the use of SCRIPT_DIR above for a simple resolution to this problem.

Tests must also be independent of screen resolution when using bitmaps. Users are often surprised when every test fails due to bitmap errors after tests are moved to another machine. This weakness in the bitmap approach can be overcome by creating a set of bitmaps for each screen resolution to be tested and checking the resolution at runtime via the registry (see the SilkTest help entry for SYS_GetRegistryValue).
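
One way to organize this, sketched below, is to keep one bitmap directory per supported resolution under SCRIPT_DIR and pick the directory at runtime; the resolution value would come from the registry lookup mentioned above, and the directory layout and window name are invented for this example.

    // in practice sResolution is determined at runtime (see SYS_GetRegistryValue)
    STRING sResolution = "1024x768"
    STRING sBitmapDir  = "{SCRIPT_DIR}\bitmaps\{sResolution}"

    MyWindow.VerifyBitmap ("{sBitmapDir}\main_window.bmp")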

Commenting

Every test should include comments describing its intentions, test method and expected results. Any references to a script component located in a remote file should include its relative path and file name for easy location. Comments should also identify the author in case further explanation is needed. All non-trivial test code should be commented. Each method or function written should include the following information block:

// function:  sSegment = SQ_GetFieldReverse (sString, sDelim, iField)
// returns:   sSegment: The returned segment. STRING.
// parameter: sString: The string to return a segment from. STRING.
// parameter: sDelim: The character(s) to use to separate fields in sString. STRING.
// parameter: iField: The occurrence of the field to return. INTEGER.
// notes:     Returns a segment (field) of a string, working backwards from the end of the string.


Note that this block can then be used to add the function to the library browser by removing the comment delimiters and putting all function blocks in a help file.

Configuration

The need to set runtime parameters to configure tests should be avoided where possible, but when they must be used, there are two simple rules that help prevent confusion:

  1. Place all configuration parameters in a single file even if they are not logically related. The more files that exist, the greater the chance that they will be forgotten.
  2. Provide detailed configuration instructions in the form of comments to explain the requirements for each setting (see the sketch after this list).
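
A configuration include file following these two rules might look like the sketch below; the file name and settings are invented for this example.

    // myapp_config.inc - every runtime configuration parameter lives in this one file

    // Mail server used by the notification tests.
    // Point this at a server that the test machine can reach.
    const MAIL_SERVER = "mail.example.com"

    // Seconds to wait for slow dialogs before reporting a failure.
    // Increase this value on heavily loaded test machines.
    const DIALOG_TIMEOUT = 30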


Source Control

Like the application code that they are designed to test, test scripts require source code control. It's no more difficult to accidentally overwrite a team member's test code than application code. Furthermore, it's important to save (and label) application and test code together so that if there is a need to fall back to an earlier version, the test code required to test it can be recreated as part of the same process. Any of the popular source code control systems, including PVCS, Visual SourceSafe, ClearCase, or StarTeam, is suitable.

Analysing Results

It's always easy to interpret a result when a test passes. But when tests fail, it can be difficult to locate the actual software malfunction. When a regression run includes hundreds or even thousands of tests, a 10% failure rate can mean several hours to a couple of days spent researching errors. Yet this step can be accomplished in a fraction of that time if results analysis is considered when tests are designed.
In fact, when developing a test, the engineer should consider how an error will be represented if a failure occurs. The best way to ensure that results will be analysed quickly is to provide specific error messages that include both the actual and the expected results. For example, an error message such as "the balance sheet total is incorrect - expected "750.11", got "750.10"" suggests a rounding error. Compare this to "balance sheet totals do not match"; in that case, the test will have to be re-run by hand to determine why it failed.
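
For example, passing the optional message parameter to Verify produces exactly that kind of specific failure, because the results file then shows the message together with the expected and actual values; the variable name below is invented for this example.

    Verify (sActualTotal, "750.11", "balance sheet total is incorrect")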

Old KB# 21756
