Idea ID: 2749357

Test status burn-down/up report

Status : Accepted
over 1 year ago

Would like to be able to create burn-down/up reports for test execution

Showing tests planned for a certain release and whether they have been run

e.g. not run vs. planned vs. run



Dashboard Usability
  • Thanks, not sure we will be able to provide the full flexibility in the first phase, but we will take this into account

  • I would prefer it if users could choose the date interval themselves:

    • Started after (date picker)
    • Started before (date picker)

    This way all options are covered, assuming Release/Sprint/Milestone are available in the filter anyway.


    Thanks for your quick feedback, this is very helpful.

    * Release is not mandatory; you will be able to choose a custom time period, like in the run progress widget we have today.

    * The widget will show all test runs; if you like, you will be able to include runs from specific suite runs, or even the suite runs themselves. We were thinking of doing a very naïve calculation for the "Expected planned" line: divide the total number of planned runs across the working days in the selected time period.

    Please let me know if you have any additional comments.

    Thanks a lot!
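The naïve "Expected planned" calculation described above (total planned runs spread evenly across the working days in the selected period) could be sketched as follows. This is only an illustration of the idea, not Octane's actual implementation; the function name and the Monday–Friday definition of "working day" are assumptions:

```python
from datetime import date, timedelta

def expected_planned_line(total_planned: int, start: date, end: date) -> list[tuple[date, float]]:
    """Cumulative expected-run counts: total_planned spread evenly
    across the working days (Mon-Fri) between start and end, inclusive."""
    # Collect the working days in the period (weekday() < 5 means Mon..Fri).
    days = []
    d = start
    while d <= end:
        if d.weekday() < 5:
            days.append(d)
        d += timedelta(days=1)
    if not days:
        return []
    per_day = total_planned / len(days)
    # The running sum gives the diagonal "expected" line for the burn-up chart.
    return [(day, per_day * (i + 1)) for i, day in enumerate(days)]

# Example: 50 planned runs across a two-week period (10 working days).
line = expected_planned_line(50, date(2024, 6, 3), date(2024, 6, 14))
```

A burn-down variant would simply plot `total_planned` minus the cumulative value for each day.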

  •     ,

    This definitely looks promising!

    I want to second what Jared is saying in his first point. The terms "Release" and "Milestone" make me feel this graph would require us to use the full agile methodology features. This is the reason we are reaching out with this request.

    1. Can we have this widget created so that we can set the configuration without releases, milestones, or sprints?

    2. You have a configuration option of "Run started in". I think this works perfectly for what we are trying to do, but it does not give us the ability to use the line or burn-down graph in the quality section.

  • Looks promising! 

    A few questions...

    1. Would this require us to use "releases" in Octane? We're not using them now for a number of reasons, so we might need some coaching on how best to start using them to power this graph. Or would it be mandatory to specify a release in the graph? Could we have an option to specify the test execution end date, and then auto-populate the expected count based on the test run start and end dates?

    2. Would we also pull data into the graph through suite runs? How do you envision setting the expected number of tests completed per day?