Mechanism for Noting Severity of Failures
I know I can create status reasons to set the pass/fail state of individual test steps. Is there a way to use these values, or any other mechanism, to set the overall impact or severity of a test failure? For instance, one product might have several low-impact failures that do not prevent overall functionality, while another product has only one failure, but a severe one that does impact functionality. It would be nice to put some kind of measure on each failure so that when we report the pass/fail status of the tests we get a better picture of product quality.
I could get creative with the way I break up tests and use Categories, translating them into quality goals, but that doesn't completely get me where I want to be. For instance, if step 2 below failed it would be a "Minor" failure: missing instructions would be a serious inconvenience to the operators, but the system could still be used. If step 3 failed it would be a "Major" failure, because the system cannot be used if it is not calibrated. Right now in Silk the overall test status is either Pass or Fail.
Is there a way to record not just Pass/Fail status but also severity? Ideally, if a test fails, the overall test status would be the most severe failure status of any failed test step. Maybe I can achieve this to some degree through some creative reporting techniques?
Example Test Case:
Description: This test covers the calibration process for a steering controller for a tractor.
1. Start the Calibration Process
2. Verify that the "Start Calibration Screen" is presented to the operator and it displays the calibration instructions in English and Spanish.
3. Verify that the system can be calibrated.
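The aggregation I have in mind could be sketched roughly like this (a minimal illustration only, not existing Silk Central functionality; the severity levels and step results are hypothetical):

```python
# Hypothetical severity ranking: a higher number means a more severe failure.
SEVERITY_RANK = {"Pass": 0, "Minor": 1, "Major": 2}

def overall_status(step_results):
    """Overall test status = the most severe status among all test steps."""
    return max(step_results, key=lambda status: SEVERITY_RANK[status])

# Step 2 fails (instructions missing): a Minor failure, test is Minor overall.
print(overall_status(["Pass", "Minor", "Pass"]))   # Minor
# Step 3 fails (cannot calibrate): a Major failure dominates the Minor one.
print(overall_status(["Pass", "Minor", "Major"]))  # Major
```

So instead of the overall result collapsing to a bare Fail, the report would carry the worst severity seen in any step.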
In Silk Central there is a risk-based testing approach along the lines you describe, although it is not applied at the step level. At the very top level of a test case you can set the Importance of the test case.
Based on this, you can then set a goal for a particular test case importance, for example:
You can then report on this information in the Quality Goals section for an overview of the project and how well it meets the quality goals.
It will show what percentage of the goal your test assets currently meet, ensuring at this high level that all critical test cases have passed.
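For illustration, the goal evaluation described above amounts to something like the following (a sketch with made-up data and field names, not Silk Central's actual implementation):

```python
def goal_progress(test_cases, importance="Critical", required_pass_pct=100.0):
    """Return (pass percentage, goal met?) for test cases of one importance."""
    relevant = [t for t in test_cases if t["importance"] == importance]
    passed = sum(1 for t in relevant if t["status"] == "Passed")
    pct = 100.0 * passed / len(relevant) if relevant else 100.0
    return pct, pct >= required_pass_pct

tests = [
    {"name": "Calibration",  "importance": "Critical", "status": "Failed"},
    {"name": "Instructions", "importance": "Medium",   "status": "Passed"},
]
pct, met = goal_progress(tests)
# The failed critical calibration test means the critical-tests goal is not met.
print(pct, met)  # 0.0 False
```

The point is that a single failed critical test case pulls the goal down, regardless of how many lower-importance tests pass.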
You will find this section under Tracking -> Quality Goals.
If you have any questions on this, just let me know.
A quick solution would therefore be to create a status reason under Not Executed, for example "failed but not blocking".
Silk Central will then evaluate this so that the overall test fails if the calibration step failed; since that is a critical test case, the overall quality goal will not be met.
This approach lets you combine status reasons with quality goals and the risk-based approach.
If you do go down the reporting route, let us know; we will be happy to provide whatever assistance we can.