Generating awesome graphs for ALM Octane using REST API with Dash & Plotly

In my last article, I provided some examples of how to access the ALM Octane REST API using Python. Before continuing with this article, check out: How to Access ALM Octane REST API using Python.

Prerequisites – Set Up Access

Getting started!

Once you meet all the prerequisites, follow the general integration flow of the ALM Octane REST API:

[Image: flowsmall.png – general integration flow of the ALM Octane REST API]

Import Required Modules

In your preferred IDE, create a new Python script and import the following libraries/modules:

import json
import requests
import dash
import dash_core_components as dcc
import dash_html_components as html
import plotly.figure_factory as ff
import plotly.graph_objects as go
import plotly.express as px
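Note: the standalone dash_core_components and dash_html_components packages reflect the Dash version that was current when this article was written. In Dash 2.0 and later, both modules ship inside the dash package itself, so the equivalent imports would be:

from dash import dcc, html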

Access the ALM Octane REST API as described in the previous article – once logged in, access and prepare the data to build the context-specific graphs.
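For reference, here is a minimal sketch of the sign-in and of the variables used by the snippets below. The server URL, shared space ID, workspace ID and API access keys are placeholders – adapt them to your environment; the full login flow is covered in the previous article.

#################################################
# S I G N    I N    ( S K E T C H )
#################################################
url = 'https://myoctane.example.com'   # placeholder server URL
shared_space = '1001'                  # placeholder shared space ID
workspace = '1002'                     # placeholder workspace ID

# Sign in with API access keys; the returned cookies authenticate all later calls
auth = requests.post(url + '/authentication/sign_in',
                     json={'client_id': 'my_client_id',
                           'client_secret': 'my_client_secret'})
cookie = auth.cookies
ContentType = {'Content-Type': 'application/json'}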

Define Dash App in your Python Script

#################################################
# D E F I N E    D A S H    A P P
#################################################
app = dash.Dash(__name__)

Define the Dash app using dash.Dash(__name__). This must happen before attaching any Plotly graph (as shown below) to the dashboard.

Example Graph 1 – Release Timelines

#################################################
# R E A D    R E L E A S E S
#################################################
resource = 'releases'
releases = requests.get(url + '/api/shared_spaces/' + shared_space + '/workspaces/' + workspace + '/' + resource,
                        headers=ContentType,
                        cookies=cookie)

releases_data = releases.json()
releases_total_count = releases_data['total_count']
releases_list = releases_data['data']

df = []

# iterate through all releases and collect name, start date and end date
for release in releases_list:
    df.append(dict(Task=release['name'], Start=release['start_date'], Finish=release['end_date']))

relFigure = ff.create_gantt(df, title='Release Timelines')


In this example, we are working with plotly.figure_factory, which we imported earlier. The script above reads each release's name, start date and end date to build the Gantt chart using the figure_factory module, imported as ff.
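As a side note, ff.create_gantt has since been deprecated in favor of plotly.express.timeline. A minimal equivalent sketch, reusing the same dict keys from above, would be:

relFigure = px.timeline(df, x_start='Start', x_end='Finish', y='Task', title='Release Timelines')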

Now this graph needs to be embedded into Dash as follows.

app.layout = html.Div(children=[
    html.H1(children='Micro Focus ALM Octane Dashboard'),

    html.Div(children='This dashboard was built with Python'),
    dcc.Graph(figure=relFigure, id='releasegantt')

])


if __name__ == '__main__':
    app.run_server(debug=True)

The output of this Python script is the following release timelines graph.

[Image: Releasechart.PNG – release timelines Gantt chart]

All releases displayed on that graph are extracted from the ALM Octane release management module.

[Image: Releasegrid.PNG – releases grid in ALM Octane]

Example Graph 2 – Test Execution Metrics (two examples)

###################################################
# R E A D    E X E C U T I O N    R U N S
###################################################
resource = 'runs'
runs = requests.get(url + '/api/shared_spaces/' + shared_space + '/workspaces/' + workspace + '/' + resource,
                    headers=ContentType,
                    cookies=cookie)

runs_data = runs.json()
runs_total_count = runs_data['total_count']
runs_list = runs_data['data']

# counters per run subtype (M = manual, A = automated, S = suite)
RunMPassed = 0
RunMFailed = 0
RunMNotCompleted = 0
RunMPlanned = 0
RunAPassed = 0
RunAFailed = 0
RunANotCompleted = 0
RunAPlanned = 0
RunSPassed = 0
RunSFailed = 0
RunSNotCompleted = 0
RunSPlanned = 0

# iterate through all runs and tally them by subtype and native status
for run in runs_list:
    if run['subtype'] == 'run_manual':
        if run['native_status']['id'] == 'list_node.run_native_status.planned':
            RunMPlanned = RunMPlanned + 1
        elif run['native_status']['id'] == 'list_node.run_native_status.failed':
            RunMFailed = RunMFailed + 1
        elif run['native_status']['id'] == 'list_node.run_native_status.passed':
            RunMPassed = RunMPassed + 1
        elif run['native_status']['id'] == 'list_node.run_native_status.not_completed':
            RunMNotCompleted = RunMNotCompleted + 1

    if run['subtype'] == 'run_automated':
        if run['native_status']['id'] == 'list_node.run_native_status.planned':
            RunAPlanned = RunAPlanned + 1
        elif run['native_status']['id'] == 'list_node.run_native_status.failed':
            RunAFailed = RunAFailed + 1
        elif run['native_status']['id'] == 'list_node.run_native_status.passed':
            RunAPassed = RunAPassed + 1
        elif run['native_status']['id'] == 'list_node.run_native_status.skipped':
            RunANotCompleted = RunANotCompleted + 1

    if run['subtype'] == 'run_suite':
        if run['native_status']['id'] == 'list_node.run_native_status.planned':
            RunSPlanned = RunSPlanned + 1
        elif run['native_status']['id'] == 'list_node.run_native_status.failed':
            RunSFailed = RunSFailed + 1
        elif run['native_status']['id'] == 'list_node.run_native_status.passed':
            RunSPassed = RunSPassed + 1
        elif run['native_status']['id'] == 'list_node.run_native_status.not_completed':
            RunSNotCompleted = RunSNotCompleted + 1

# build the summary table; test progress = (passed + failed) / total runs per subtype
table_data = [['Run Type', 'Test Progress (%)', 'Planned', 'Passed', 'Failed', 'Not Completed / Skipped', 'Total Runs'],
              ['Automated', str(round((((RunAPassed + RunAFailed) / (RunAFailed + RunAPassed + RunAPlanned + RunANotCompleted)) * 100), 2)) + "%", RunAPlanned, RunAPassed, RunAFailed, RunANotCompleted,
               RunAFailed + RunAPassed + RunAPlanned + RunANotCompleted],
              ['Manual', str(round((((RunMPassed + RunMFailed) / (RunMPlanned + RunMPassed + RunMFailed + RunMNotCompleted)) * 100), 2)) + "%", RunMPlanned, RunMPassed, RunMFailed, RunMNotCompleted,
               RunMPlanned + RunMPassed + RunMFailed + RunMNotCompleted],
              ['Test Suite', str(round((((RunSPassed + RunSFailed) / (RunSPlanned + RunSPassed + RunSFailed + RunSNotCompleted)) * 100), 2)) + "%", RunSPlanned, RunSPassed, RunSFailed, RunSNotCompleted,
               RunSPlanned + RunSPassed + RunSFailed + RunSNotCompleted],
              ['Total Tests', str(round((((RunSPassed + RunSFailed + RunMPassed + RunMFailed + RunAPassed + RunAFailed) /
               (RunMPlanned + RunMPassed + RunMFailed + RunMNotCompleted + RunAFailed + RunAPassed + RunAPlanned + RunANotCompleted + RunSPlanned + RunSPassed + RunSFailed + RunSNotCompleted)) * 100), 2)) + "%",
               RunAPlanned + RunMPlanned + RunSPlanned, RunAPassed + RunMPassed + RunSPassed, RunAFailed + RunMFailed + RunSFailed, RunANotCompleted + RunMNotCompleted + RunSNotCompleted,
               RunAFailed + RunAPassed + RunAPlanned + RunANotCompleted +
               RunMPlanned + RunMPassed + RunMFailed + RunMNotCompleted +
               RunSPlanned + RunSPassed + RunSFailed + RunSNotCompleted]]

ExecutionRunFigur = ff.create_table(table_data)

ExecutionRunTimeFigur = px.scatter(runs_list, y='steps_num', x='creation_time', color='subtype', hover_data=['name'], title='Test Execution over Time')

In this example, we are working with plotly.figure_factory and plotly.express, which we imported earlier. The script above reads the test execution data to build the table and the history graph, using plotly.express (imported as px) as well as figure_factory (imported as ff).
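As a side note, the twelve individual counters above could also be collected more compactly. A minimal sketch using collections.Counter (the name status_counts is made up for illustration):

from collections import Counter

# tally runs per (subtype, short native status) pair, e.g. ('run_manual', 'passed')
status_counts = Counter(
    (run['subtype'], run['native_status']['id'].rsplit('.', 1)[-1])
    for run in runs_list
)
# status_counts[('run_manual', 'passed')] corresponds to RunMPassed above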

Now these graphs need to be embedded into Dash as follows.

app.layout = html.Div(children=[
    html.H1(children='Micro Focus ALM Octane Dashboard'),


    html.H1('Test Progress Summary'),
    dcc.Graph(figure=ExecutionRunFigur, id='ExecutionSummary'),


    html.H1('Test Execution by Runs'),
    dcc.Graph(figure=ExecutionRunTimeFigur, id='TimeExecution'),


])


if __name__ == '__main__':
    app.run_server(debug=True)

In this script, two metrics are generated:

  • Test Progress by Test Type – Report, which shows the test progress divided by test type.

[Image: testprogresstable.PNG – test progress summary table]

  • Test Execution over Time – Graph, which shows the execution of runs on a timeline.

[Image: runovertime.PNG – test execution over time scatter plot]

Example Graph 3 – Sunburst Graph for CI Server by Pipeline Size

###################################################
# G E T     C I - S E R V E R
###################################################
#Get CI Server and Pipelines
resource = 'ci_servers'
ci_servers = requests.get(url + '/api/shared_spaces/' + shared_space + '/workspaces/' + workspace + '/' + resource,
                          headers=ContentType,
                          cookies=cookie)

ci_servers_data = ci_servers.json()
ci_servers_total_count = ci_servers_data['total_count']
ci_servers_list = ci_servers_data['data']

labels=[]
parents = []
values=[]
table_data = []
table_data.append(['CI Server', 'Pipeline Name', 'Number of Nodes'])

# iterate through all CI servers
for ci_server in ci_servers_list:
    #get pipelines by ci_servers
    resource = 'pipelines?query="ci_server={ id EQ '+ ci_server['id'] + '}"'
    pipelines = requests.get(url + '/api/shared_spaces/' + shared_space + '/workspaces/' + workspace + '/' + resource,
                              headers=ContentType,
                              cookies=cookie)
    pipelines_data = pipelines.json()
    pipelines_total_count = pipelines_data['total_count']
    labels.append(ci_server['name'])
    parents.append("")
    values.append(pipelines_total_count)
    pipelines_list = pipelines_data['data']

    for pipeline in pipelines_list:
        resource = 'pipeline_nodes?query="pipeline={id EQ ' + pipeline['id'] + '}"'
        pipeline_nodes = requests.get(
            url + '/api/shared_spaces/' + shared_space + '/workspaces/' + workspace + '/' + resource,
            headers=ContentType,
            cookies=cookie)
        pipeline_nodes_data = pipeline_nodes.json()
        pipeline_nodes_total_count = pipeline_nodes_data['total_count']
        labels.append(pipeline['name'])
        parents.append(ci_server['name'])
        values.append(pipeline_nodes_total_count)
        table_data.append([ci_server['name'], pipeline['name'], str(pipeline_nodes_total_count)])


CIServerFigure = go.Figure(go.Sunburst(
    labels=labels,
    parents=parents,
    values=values,
))
CIServerFigure.update_layout(
    width=800,
    height=800)

CIServerTable = ff.create_table(table_data)

 

In this example, we are working with plotly.figure_factory and plotly.graph_objects, which we imported earlier. The script above reads the CI server and pipeline data to build the table and the sunburst graph, using plotly.graph_objects (imported as go) as well as figure_factory (imported as ff).
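The sunburst figure is driven entirely by the three parallel lists: each entry in parents is either "" (a root segment, here a CI server) or the label of its parent segment (here the CI server a pipeline belongs to), while values sizes each segment. A small illustrative example with made-up names – one CI server with two pipelines of 5 and 3 nodes:

labels  = ['Jenkins',  'Build',   'Deploy']
parents = ['',         'Jenkins', 'Jenkins']
values  = [2,          5,         3]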

Now these graphs need to be embedded into Dash as follows.

app.layout = html.Div(children=[
    html.H1(children='Micro Focus ALM Octane Dashboard'),


    html.H1('CI Servers and Pipelines by Size'),
    dcc.Graph(figure=CIServerFigure, id='cisummary'),


    html.H1('Table of CI Servers and Pipelines'),
    dcc.Graph(figure=CIServerTable, id='citablesummary')


])


if __name__ == '__main__':
    app.run_server(debug=True)

The output of this Dash app is two metrics:

  • Sunburst Graph for CI Server by Pipeline Size

[Image: sunburst.PNG – sunburst graph of CI servers by pipeline size]

The CI information is extracted from the ALM Octane Pipelines module.

 
  • Table Grid for CI Server by Pipeline Node Information

[Image: summarypipelines.PNG – table of CI servers and pipeline node counts]

Conclusion

For all users who enjoyed the simplicity of Visual Basic with the ALM/QC API (OTA library), Python is a great option. Python offers the json and requests modules to allow easy interaction with the ALM Octane REST API. With Plotly and Dash, it is very easy and convenient to visualize the data extracted from the ALM Octane REST API.
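Putting it all together: each example above redefines app.layout for brevity, but in one combined script you would register all figures in a single layout. A minimal sketch, assuming the figures from all three examples were built earlier in the same script:

app.layout = html.Div(children=[
    html.H1(children='Micro Focus ALM Octane Dashboard'),
    dcc.Graph(figure=relFigure, id='releasegantt'),
    dcc.Graph(figure=ExecutionRunFigur, id='ExecutionSummary'),
    dcc.Graph(figure=ExecutionRunTimeFigur, id='TimeExecution'),
    dcc.Graph(figure=CIServerFigure, id='cisummary'),
    dcc.Graph(figure=CIServerTable, id='citablesummary')
])

if __name__ == '__main__':
    app.run_server(debug=True)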

Start your ALM Octane Trial: https://www.microfocus.com/en-us/products/alm-octane/free-trial

