Managing tests for embedded real-time software
I am new to Silk Central and could use a little help on how to organise tests for embedded real-time software with SCTM.
We have to test our software on different hardware systems. Each hardware system consists of a CPU module and one or more plug-in boards for I/O operations. The CPU modules may differ in hardware architecture (processor platform, multiprocessing, chipset etc.). Furthermore, even CPU modules with the same architecture may vary in many respects, e.g. processor speed, amount of RAM, flash memory…
Our testing requirements are:
- Our tests must ensure that each requirement is tested at least once on a reference-architecture CPU module.
- We must undertake smoke and regression testing with all other CPU module architectures.
- We must record all tested variations in processor speed, RAM…
My proposed solution:
- Using SCTM’s platform test attribute to represent the different hardware architectures (CPU modules), and configuration suites to represent variations within the same architecture.
- Each CPU module architecture gets its own test container. Test definitions are specified only for the reference architecture; the other architectures’ test containers “inherit” their test definitions by being linked to the reference architecture’s container via SCTM’s child-link feature for test containers.
- Requirements are linked only to the tests for the reference architecture. For the other CPU architectures, the child-link relationship is evaluated to trace from a requirement to its test definitions.
- Using test attributes to mark smoke and regression tests.
Advantages:
- Standard reporting (and the requirements view) can be used to track tests on reference-architecture CPU modules.
- No duplication of test definitions.
- The many predefined test reports per test container can be used as-is.
- Clear, easy-to-follow execution planning through test containers and test attributes.
I think this solution is feasible, but I am not sure whether it is the best one. Overall test reporting and requirement tracking across all platforms may become quite complex. What is your opinion?
With best regards,
As the tests are always the same, I would use configurations only. That is, instead of using the platform attribute and linking test containers, create a configuration for each hardware architecture and each of its variations.
The benefit would be that you can display requirement test coverage in the execution context: you would see, for example, that the tests of requirementX fail on ConfigurationZ.
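As a sanity check of this suggestion, the resulting execution planning amounts to running one shared set of tests against every configuration. A minimal sketch, assuming invented test and configuration names (not SCTM objects):

```python
import itertools

# One shared set of test definitions, since the tests are always the same.
tests = ["T1", "T2", "T3"]

# One configuration per hardware architecture and per variation within it.
configurations = [
    "RefArch-400MHz-64MB",
    "RefArch-800MHz-128MB",
    "ArchB-400MHz-64MB",
]

# Every test is planned against every configuration; no platform attribute
# or linked containers are needed.
planned_runs = list(itertools.product(tests, configurations))
print(len(planned_runs))  # 3 tests x 3 configurations = 9 planned executions
```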
Do you see any problems with this?
Thank you very much for your reply. You are right; displaying requirement test coverage in the execution context is a benefit. But there is also one drawback: the Requirements Document View would become useless. Currently, many of our tests are semi-automated (manual tests in SilkCentral) and take a considerable amount of time to execute. I assume that, using configurations to represent the hardware architectures, I would mostly see "not executed" in the Requirements Document View. Or is there a trick to organise tests, requirements and executions in a way that circumvents this problem?
With the platform solution, you would need a kind of "multi-dimensional" report (i.e. a matrix) to display each requirement's test coverage and current execution results, but we have to provide such a report to our software release board anyway. The advantage of the platform solution is that the Requirements Document View remains usable, at least for the reference platform.
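For illustration, the matrix report I have in mind could be assembled from exported execution results roughly like this. The result tuples and field names are invented for the example; this is not an SCTM export format:

```python
# Hypothetical exported execution results: (requirement, platform, status).
results = [
    ("REQ-1", "RefArch", "Passed"),
    ("REQ-1", "ArchB",   "Failed"),
    ("REQ-2", "RefArch", "Passed"),
    ("REQ-2", "ArchB",   "Not Executed"),
]

# Pivot the flat result list into a requirement x platform matrix.
platforms = sorted({platform for _, platform, _ in results})
matrix = {}
for req, platform, status in results:
    matrix.setdefault(req, {})[platform] = status

# Render the matrix for the release board; "-" marks untested combinations.
print("Requirement | " + " | ".join(platforms))
for req in sorted(matrix):
    row = [matrix[req].get(p, "-") for p in platforms]
    print(req + " | " + " | ".join(row))
```

One row per requirement and one column per platform would show at a glance both the coverage on the reference architecture and the smoke/regression status on all others.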
With best regards,