Tracking data-driven tests in QC/ALM - Best Practice
I work for a large corporation where we use UFT for automation and HP QC/ALM for QA execution and tracking. In some cases we have to run a single script against 100, 1,000, or even many thousands of data combinations. Creating a QC test case for every data combination seemed tedious and inefficient, so instead we added some custom fields and record in them the number of "autoTests" (rows in the Excel data table) covered by the one QC test case, to show how many tests are actually represented and how many have been executed. On first execution, logic in the script counts the rows that contain data and updates the "autoTest Total" field; as the run proceeds, it also updates "autoTest Executed" and "autoTest Passed". Our reporting then has logic built in to look for and aggregate those fields, which gives us accurate coverage stats.

This approach is fraught with problems, because it relies heavily on the automation engineer and the tester being diligent and compliant with both the scripting and the process.

So my question is: since data-driven tests are the big attraction of automation, what other ways have people used or developed to let one (or a few) scripts run many data scenarios while still getting accurate reporting in QC/ALM?
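For context, here is roughly how the field updates work from the UFT side, using the QCUtil object to reach the current test instance. This is a minimal sketch, not our production code: the sheet name "Action1" and the custom field names TC_USER_01 through TC_USER_03 are placeholders; the actual field names depend on how your QC admin defined the custom fields.

```
' Sketch: count data rows and push counts into custom fields on the
' current test instance in QC/ALM. Only works when the test is run
' from QC so that QCUtil has a live connection.
If QCUtil.IsConnected Then
    Dim totalRows, executedRows, passedRows, i, tsTest

    totalRows = DataTable.GetSheet("Action1").GetRowCount  ' rows in the data sheet
    executedRows = 0
    passedRows = 0

    Set tsTest = QCUtil.CurrentTestSetTest
    tsTest.Field("TC_USER_01") = totalRows   ' "autoTest Total" (placeholder field name)
    tsTest.Post

    For i = 1 To totalRows
        DataTable.GetSheet("Action1").SetCurrentRow i
        ' ... run the scenario for this data row, set a rowPassed flag ...
        executedRows = executedRows + 1
        ' If rowPassed Then passedRows = passedRows + 1
        tsTest.Field("TC_USER_02") = executedRows   ' "autoTest Executed"
        tsTest.Field("TC_USER_03") = passedRows     ' "autoTest Passed"
        tsTest.Post
    Next
End If
```

The fragility is visible right in the sketch: every script has to carry this boilerplate and every field name has to match, which is exactly the compliance burden I'd like to eliminate.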