Running Silk4J Tests with Docker and Jenkins



In this blog post we want to explore how your tests can benefit from running in a dockerized environment. Docker is a popular tool that allows you to run applications in containers: Small units of deployment that can operate independently from each other. From a software engineering point of view, this allows for large applications to be modularized in a smart way, enabling multiple teams that work together on the same product to build, deploy and operate their code independent from each other.

Even if you don't employ full-featured continuous delivery processes, containerization can be applied to automated functional testing as well. For example, containerized testing helps you to reduce the risk that a side effect of one test influences other tests. Traditionally, if a single test had damaged your test environment in a fundamental way, say by deleting essential user accounts or other test data, all other tests that depend on that same environment would fail. Identifying the root cause of such failures is then often a nightmare. Containerized testing allows you to mitigate such risks by reducing the potential influences that test runs can have on each other. Additionally, reducing test dependencies allows you to run more tests in parallel which helps to reduce cycle time and increase throughput of your entire testing and delivery process.


We will focus on a specific scenario in this blog post: Running existing Silk4J browser tests in a dockerized Jenkins environment as part of an automated build pipeline. To make things even more interesting, we want to run everything (including the Silk4J tests and the virtualized browsers) on Linux.

Silk4J Project

Let's start with Silk4J: If you don't have an existing project yet, create a new one and record a test for our Demo Application, or any web app of your choice. Whether you use Keyword-Driven Testing or plain JUnit is up to you; both methodologies work fine with Docker.

For the remainder of this post we'll assume that you use our demo application with Keyword-Driven Testing, and that the test you want to run looks like this:


To make this project ready for use in Docker, we have to make a few slight changes to the original project. The details are described in the Silk4J Documentation.


First, we need to make sure that our tests can be executed via Apache Ant. To do so, change the existing build.xml (which Silk4J created automatically for you when creating the project) and add a runTests task that looks like this:

<target name="runTests" depends="compile">
  <mkdir dir="./reports"/>

  <junit printsummary="true" showoutput="true" fork="true">
    <sysproperty key="agentRmiHost" value="${agentRmiHost}" />
    <sysproperty key="silktest.configurationName" value="${silktest.configurationName}" />

    <classpath>
      <fileset dir="${output}">
        <include name="**/*.jar" />
      </fileset>
      <fileset dir="${buildlib}">
        <include name="**/*.jar" />
      </fileset>
    </classpath>

    <formatter type="xml" />
    <test name="MyTestSuite" todir="./reports"/>
  </junit>

  <junitreport todir="./reports">
    <fileset dir="./reports">
      <include name="TEST-*.xml" />
    </fileset>
    <report format="noframes" todir="./report/html" />
  </junitreport>
</target>

Note that this task refers to a test called MyTestSuite which doesn't exist yet. Depending on what exactly you want to run, this can be either a specific test (for example, a single JUnit test class), or a test suite. Given that we use Keyword-Driven Testing in our example, we want this to be a test suite which in turn references all KDTs we want to run.

For details on this, see also the Silk4J Documentation.

We want to run a single KDT called Simple Test, so here's what our test suite should look like:

@KeywordTests({ "Simple Test" })
public class MyTestSuite {
}

Note that we used the Java default package, as this is how the suite is referenced from the build.xml. Of course you can move it to whichever package you like, as long as you adapt the build.xml file accordingly.
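For example, if you moved the suite into a package called com.example (a hypothetical name, just for illustration), the test entry in the build.xml would become:

```xml
<test name="com.example.MyTestSuite" todir="./reports"/>
```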


If you want to run Keyword-Driven Tests in Docker, you'll have to add two additional JAR files to the buildlib folder of your project. Both files can be found in the Silk Test installation directory, in the ng\KeywordDrivenTesting subfolder.


To actually run the tests in Docker, we have two options: Either we can use the standard Docker command line, or we can go one step further and use docker-compose. Both approaches are detailed in the Silk4J Documentation, which is why we'll skip ahead to the docker-compose setup for this post.
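To give a rough idea of the first option, here's a sketch of how the same setup could be wired up with the plain Docker command line. The network and container names are our own choices, and any license-related environment variables the agent needs (see the documentation) are omitted here:

```shell
# Create a network so the containers can reach each other by name
docker network create silktest

# Headless Chrome browser
docker run -d --name chrome --network silktest selenium/standalone-chrome:latest

# Silk Test Open Agent, writing its log to ./logs on the host
docker run -d --name agent --network silktest \
  -e SILK_LOG_FILE_PATH=/logs/log.txt \
  -v "$PWD/logs:/logs" \
  functionaltesting/silktest:latest

# Run the tests via Ant; clean up afterwards with
# "docker rm -f chrome agent" and "docker network rm silktest"
docker run --rm --network silktest \
  -v "$PWD:/tmp/project" \
  webratio/ant:1.10.1 \
  ant -DagentRmiHost=agent:22902 \
      "-Dsilktest.configurationName=host=http://chrome:4444/wd/hub - GoogleChrome" \
      -buildfile /tmp/project/build.xml runTests
```

As you can see, docker-compose saves us from repeating all of this wiring on every run, which is why we'll use it from here on.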

So let's create a file called docker-compose.yml in the same directory as our project, and adapt it from the template which can be found in the documentation. That should then look like this:

version: '3'
services:
  chrome:
    image: selenium/standalone-chrome:latest
    environment:
      - JAVA_OPTS=-Dselenium.LOGGER.level=WARNING
  agent:
    image: functionaltesting/silktest:latest
    environment:
      - SILK_LOG_FILE_PATH=/logs/log.txt
    depends_on:
      - chrome
    links:
      - chrome
    volumes:
      - ./logs:/logs
  test-runner:
    image: webratio/ant:1.10.1
    volumes:
      - ./:/tmp/project
    command: ["ant", "-DagentRmiHost=agent:22902", "-Dsilktest.configurationName=host=http://chrome:4444/wd/hub - GoogleChrome", "-buildfile", "/tmp/project/build.xml", "runTests"]
    depends_on:
      - agent
    links:
      - agent

Let's dissect this a little bit - what is actually going on there?

We're advising Docker to run three different containers:

  • One called chrome, which uses the selenium/standalone-chrome image. This image is provided by the Selenium team, and is available for free on Dockerhub. It provides a virtualized, headless Chrome browser which we can use for our testing.
  • One called agent, which uses the functionaltesting/silktest image. This image is the Silk Test Open Agent which actually drives the browser.
  • One called test-runner which uses the webratio/ant image. In this container we'll run the JUnit tests via Ant.

The rest of the file is just boiler-plate code to set up communication between those three containers: For example, we advise the Ant container to not try to run the tests "locally", but remotely on the "agent" container (-DagentRmiHost=agent:22902). We also tell it not to try to run a local browser on the agent container, but rather use a "remote" browser running in the chrome container (-Dsilktest.configurationName=host=http://chrome:4444/wd/hub - GoogleChrome). The agent container itself is configured using the environment variables described in the documentation, so that it knows which license server to contact and where to write its log file.


Enough theory, now let's get the whole thing up and running! As we outlined in the introduction, our aim is to run the project on non-Windows platforms, so if you have a Mac or a Linux machine, copy the project over there. Make sure that you have an up-to-date version of Docker installed on that machine, and run the command docker-compose up in the project directory. That will power up all three of the aforementioned containers and run the tests.

Here's the output you should see on Linux:

And here's the output on macOS:

Note how the agent and chrome containers keep running after the test run is complete. This may not be what you want in an automation scenario, in which case you can use the --abort-on-container-exit flag of docker-compose up to indicate that docker-compose should shut down all other containers as soon as the first one exits. In our case, where the test-runner container terminates automatically after running the tests, this leads to the other two containers being torn down automatically as well.
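In scripted runs you'll typically also want the test result reflected in the shell's exit code. docker-compose supports this via the --exit-code-from flag, which implies --abort-on-container-exit:

```shell
# Shut everything down once test-runner exits, and propagate
# its exit code (0 = tests passed) to the calling shell
docker-compose up --exit-code-from test-runner
```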

We'll make use of this feature later, when integrating with Jenkins.


Now let's take a brief look at the test results that we get from this run. There are three different results that aim at different levels of debugging:

  • The JUnit HTML result: Shows an overview about the test cases that were executed, which passed/failed, and how long they took.
    Location: InsuranceWeb/report/html/junit-noframes.html

  • The text-based TrueLog plus screenshots: Shows a deep-dive into the actual tests: Which Silk Test actions were executed and when, which objects the test interacted with, and the screenshots that were taken (if any).
    Location: InsuranceWeb/logs/MyTestSuite….txt



  • The Open Agent logfile: Shows an in-depth summary of the actions that were executed against the AUT. Useful mainly for debugging very specific issues.
    Location: InsuranceWeb/logs/log.txt


Now that we have the basic setup up and running, let's take it one step further and integrate it into a Jenkins CI/CD pipeline! To do so, you'll need the following additional components:

  1. A Jenkins server with at least one agent running on Linux or macOS which is able to start Docker containers
  2. A version control system (e.g. Git or SVN)

For the remainder of this post we'll assume that you have those components already up and running. Also, we'll assume that you want to use Jenkins just for running the tests, not for building and deploying the application itself. Of course this would be possible as well, and you could just as easily integrate your Silk4J tests there.

So let's start by creating a new file called Jenkinsfile (without any file extension) in the folder where the project is located. This file contains the pipeline script that will tell Jenkins exactly what to execute, and on which "agent".

The details of how a pipeline script should look and what else you can do with it can be found in the Jenkins Documentation.

pipeline {
  agent {
    label "host"
  }
  stages {
    stage('Check out') {
      steps {
        git url: ""
      }
    }
    stage('Run tests') {
      steps {
        script {
          sh "sudo rm -rf logs"
          sh "sudo rm -rf report"
          sh "sudo rm -rf reports"

          sh "docker-compose pull && docker-compose up --abort-on-container-exit"

          archiveArtifacts artifacts: 'logs/*'
          archiveArtifacts artifacts: 'reports/*'

          junit 'reports/*.xml'
        }
      }
    }
  }
}

Note that the Pipeline script itself doesn't contain any specific information about how to run the tests. All it does is kick-off the docker-compose command we saw before, which will in turn use the configuration from the docker-compose.yml file to set up and configure the tests. Additionally, the Pipeline script takes care of archiving the logs and reports, and processes the JUnit results so that Jenkins can then visualize and keep track of them.

We'll commit the entire project to Git. I used GitLab CE in this example, so in the web view the project now looks like this:

Now switch over to the Jenkins web UI and create a new Pipeline job. Configure it to run the Pipeline script that we just checked in to Git by specifying the Git server and the location of the Jenkinsfile within Git:


Run the pipeline, and if everything is configured correctly you should immediately see the unit test results in the Jenkins UI:

If you use the Jenkins Blue Ocean plug-in, you can even visualize the pipeline steps:

Summary and outlook

In this post we saw how you can take an existing set of Silk4J browser tests and run them in a completely virtualized environment using Docker. We ran the tests locally and in a CI/CD pipeline from Jenkins.

As a next step, consider integrating the tests as part of your automatic CI build of the application itself, so development gets faster feedback from QA. Or, if you have a fully autonomous deployment pipeline you can integrate the tests as part of the criteria used to determine whether or not to publish to a production system. Due to the great level of flexibility that Docker gives you, you are not limited to running the tests locally, or even one at a time: You can run the same pipeline massively in parallel if you want to, and even take the entire execution to the Cloud if you like!
