UPDATE! The community will go into read-only on April 19, 8am Pacific in preparation for migration on April 21. Read more.
Lieutenant Commander
984 views

Mechanism for Noting Severity of Failures

I know I can create status reasons to set the pass/fail state of individual test steps.  Is there a way to use these values, or any other mechanism, to set the overall impact or severity of a test failure?  For instance, suppose one product has several low-impact failures that do not prevent overall functionality, while another product has only one failure, but it is a severe failure that impacts functionality.  It would be nice to be able to put some kind of measure on each failure so that when we report the pass/fail status of the tests we get a better picture of product quality.

I could get creative with the way I break up tests and use Categories, translating them into quality goals, but that doesn't really get me completely where I want to be.  For instance, if step 2 below failed it would be a "Minor" failure: missing instructions would be a serious inconvenience to the operators, but the system could still be used.  If step 3 failed it would be a "Major" failure, because the system cannot be used if it is not calibrated.  Right now in Silk Central the overall test status is either Pass or Fail.

Is there a way to report severity in addition to just Pass/Fail status?  Ideally, if a test fails, the overall test status would be the most severe failure status of any failed test step.  Maybe I can achieve this to some degree through some creative reporting techniques?

 

Example Test Case:

Description: This test covers the calibration process for a steering controller for a tractor.

Steps:

1.  Start the Calibration Process

2.  Verify that the "Start Calibration Screen" is presented to the operator and it displays the calibration instructions in English and Spanish.

3.  Verify that the system can be calibrated.

0 Likes
3 Replies
Absent Member.

Hi Ellen,

In Silk Central there is a concept of risk-based testing similar to what you describe, although it is not applied at the step level. At the very top level of a test case you can set the Importance of the test case.

Based on the Importance, you can then set a goal for test cases of a particular importance (for example, that all Critical test cases must pass).

You can then report on this information in the quality goals section for an overview of the project and how it meets the quality goals.

It will show how close your test assets currently are to meeting this goal, ensuring that all critical test cases have passed at this high level.

You will find this section under Tracking -> Quality Goals.

Any questions on this just let me know.

Thanks,

Matt

0 Likes
Lieutenant Commander

This process is somewhat helpful, but we want to avoid breaking our tests down into even smaller, more focused tests; that is undesirable because of the time it takes to run the very complicated, large-scale system testing we are doing. It would be nice to be able to attach some sort of severity to the failure of individual steps. I'll look into whether I can do this using status reasons, and whether I can create a report that pulls out the per-test information we are looking for.
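If the reporting route works out, the roll-up described above (overall result = most severe failed step) is simple to express once step results can be exported with their status reasons. A minimal sketch, assuming the status-reason names and severity ranking shown here are ones you would define yourself in Silk Central (they are hypothetical, not built-in):

```python
# Sketch: roll step-level status reasons up to an overall test severity.
# The reason names and their severity ranks are hypothetical examples;
# substitute whatever status reasons you actually define in Silk Central.
SEVERITY_RANK = {"Passed": 0, "Minor": 1, "Major": 2, "Critical": 3}

def overall_severity(step_reasons):
    """Return the most severe status reason among a test's steps."""
    return max(step_reasons, key=lambda r: SEVERITY_RANK.get(r, 0))

# Example test case from the original post: step 1 passes, step 2 is a
# Minor failure (missing instructions), step 3 is a Major failure
# (system cannot be calibrated). The overall result is "Major".
print(overall_severity(["Passed", "Minor", "Major"]))
```

The same max-by-rank logic could live in a custom report or a small post-processing script over exported results; the point is only that one dictionary mapping reasons to ranks is enough to compute the "worst failure wins" status.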
0 Likes
Absent Member.

One potential solution is to use the fail status only for the critical elements of the test case. In this example, it seems that the test case should fail only if step 3, "Verify that the system can be calibrated," cannot be completed.

A quick solution would therefore be to create a status reason that falls under Not Executed (e.g. "failed but not blocking") and use it for the non-critical steps.

Silk Central will then mark the overall test as failed only if the calibration step failed; since that is a critical test case, the overall quality goal will not be met.

This approach would let you combine status reasons with quality goals and the risk-based approach.

If you do go down the reporting route though let us know, and if there is any assistance we can provide at the time on this, we will be happy to help.

Thanks,
Matt
0 Likes
The opinions expressed above are the personal opinions of the authors, not of Micro Focus. By using this site, you accept the Terms of Use and Rules of Participation. Certain versions of content ("Material") accessible here may contain branding from Hewlett-Packard Company (now HP Inc.) and Hewlett Packard Enterprise Company. As of September 1, 2017, the Material is now offered by Micro Focus, a separately owned and operated company. Any reference to the HP and Hewlett Packard Enterprise/HPE marks is historical in nature, and the HP and Hewlett Packard Enterprise/HPE marks are the property of their respective owners.