ZENworks Quality

As many of you are aware, early versions of ZENworks Configuration Management could be challenging. Over the last eight years we’ve put a lot of work into refining our development processes, talking with customers and the support team, and improving quality. So far our efforts appear to be paying off: with 11SP3 we’ve seen fewer defects reported, and the ones that are reported tend to be less severe. I still get asked from time to time what we’ve done, and what we’re doing, to ensure the quality of ZENworks, so I thought I’d share a few of the key things we’ve done and are doing to deliver a quality product.

  • Agile development model. After ZENworks 10 released, it became apparent that the traditional model we were using to develop and deliver software had a number of challenges, so the ZENworks team at Novell moved to Scrum as our development methodology. This model means our test and development resources work much more closely together to ensure quality is baked into the product. It also gives us many opportunities to gather feedback on a feature before it is released to you.

  • Done criteria. If you are familiar with Scrum and Agile development methodologies, you are probably familiar with the term “done”. For those of you who aren’t, “done” defines how the team knows when a story is actually complete. Over the last several years we’ve spent quite a bit of time refining our definition of done, ensuring it includes activities such as code reviews, static code analysis, unit testing, and more. While this has in some cases meant a slightly longer time to market for a release, we are confident it has had a significant impact on quality.

  • Test automation. In the 11SP3 timeframe we invested significant resources in automating many of our manual test cases. We focused on end-to-end test cases that exercise the product capabilities we see used most often. This gives our teams confidence that the main capabilities of the product continue to function when changes are made to add features or fix bugs. We’ve also invested in building automation for most new features as they are developed, so that going forward those features are automatically tested with every build.
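To make the idea concrete, here is a minimal, hypothetical sketch of what an automated end-to-end style check can look like (this is illustrative only, not ZENworks code; the `Zone` class and its methods are invented stand-ins for a real server and agent):

```python
# Hypothetical sketch of an automated end-to-end check (not ZENworks code).
# A tiny in-memory "zone" stands in for a real server; the test exercises a
# common path: register a device, assign a bundle, verify the agent sees it.

class Zone:
    def __init__(self):
        self.devices = {}       # device_id -> agent-side state
        self.assignments = {}   # device_id -> server-side assignments

    def register(self, device_id):
        self.devices[device_id] = {"bundles": []}

    def assign_bundle(self, device_id, bundle):
        self.assignments.setdefault(device_id, []).append(bundle)

    def refresh(self, device_id):
        # Simulates an agent refresh pulling down its assignments.
        self.devices[device_id]["bundles"] = list(self.assignments.get(device_id, []))

def test_bundle_delivery():
    zone = Zone()
    zone.register("laptop-01")
    zone.assign_bundle("laptop-01", "office-suite")
    zone.refresh("laptop-01")
    assert zone.devices["laptop-01"]["bundles"] == ["office-suite"]

test_bundle_delivery()
```

Because a check like this runs in seconds, it can be attached to every build, which is what makes it practical to re-verify the main paths on each change.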

  • Test-Driven Development (TDD). The most recent change to our process is moving to a test-first approach. Essentially this means that our developers, testers, and product owners work more closely than ever to define the right test cases even before coding begins. We expect to see the fruits of this effort when we release ZENworks 11SP4 in the first half of next year.
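The test-first rhythm can be sketched in a few lines. This is a generic, hypothetical example (the `newer` version-comparison function is invented for illustration, not part of any ZENworks API): the test is written first to capture the expected behavior, then the implementation is written to satisfy it.

```python
# Hypothetical test-first sketch (not ZENworks code): the test comes first
# and defines the contract; the implementation follows until the test passes.

def test_version_compare():
    # Written before the implementation, capturing the expected behavior.
    assert newer("11.3.0", "11.2.4") is True
    assert newer("11.2.4", "11.3.0") is False
    assert newer("11.3.0", "11.3.0") is False

def newer(a, b):
    # Minimal implementation written to satisfy the test above:
    # compare dotted versions numerically, component by component.
    return tuple(int(x) for x in a.split(".")) > tuple(int(x) for x in b.split("."))

test_version_compare()
```

The payoff is that ambiguities surface while the test is being written, before any production code exists, which is exactly the cross-role conversation described above.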

  • Scale and Performance testing. Over the last eight years we’ve evolved a daily use-case simulation that we are confident represents roughly 100,000 devices/users accessing the system. This allows us to provide much better tuning guidance and, in some cases, self-tuning of the system, as with the thread pools in ZENworks 11SP3. We continue to look for ways to push this scale and performance testing framework to new heights.
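As a rough, hypothetical illustration of the shape of such a harness (not the actual ZENworks framework; the `refresh` function and the device counts here are invented), a load simulation drives many simulated agents through a bounded worker pool so that pool sizing can be observed and tuned:

```python
# Hypothetical load-simulation sketch (not the ZENworks framework): N simulated
# agents each issue a refresh against a bounded worker pool -- the kind of
# harness one can use to observe throughput while varying thread-pool sizes.

from concurrent.futures import ThreadPoolExecutor

def refresh(device_id):
    # Stand-in for a real agent refresh round-trip against the server.
    return f"{device_id}:ok"

def simulate(num_devices, pool_size):
    # Bounded pool: at most pool_size refreshes are in flight at once.
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        return list(pool.map(refresh, range(num_devices)))

results = simulate(num_devices=1000, pool_size=32)
assert len(results) == 1000 and all(r.endswith(":ok") for r in results)
```

Running the same simulation across a range of `pool_size` values is what turns raw scale testing into concrete tuning guidance.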

  • Customer Scenario Testing. We’ve also invested significant effort in gathering customer databases, Primary Server VMs, and usage patterns from our customers. This allows us to use real-world data to test key capabilities such as upgrade paths and reporting. Of course, any time a defect is reported by a customer, we also analyze the reason for the escape and put test cases and/or process changes in place so that similar problems don’t slip through in the future.

  • Customer Impact Teams. Each of the ZENworks PMs is tasked with maintaining a core set of customers they can reach out to for feedback on quality, features, or ideas to vet. This helps us make sure we have the right requirements as we build new software, and that the features we build are useful. We’re currently striving to move to a model where we can show customers new value after each sprint, or at least every other sprint, so that we have shorter feedback loops. If you are interested in being part of our customer impact teams, feel free to drop me a line with your name, contact info, and the products you are most interested in.

  • CustomerOne and Lighthouse. The CustomerOne program means Novell deploys ZENworks in-house before releasing it to our customers. Lighthouse is a program in which we work with select customers who are willing to deploy pre-release software in their production environments once Novell believes the software is ready for production. These programs give us feedback that confirms our lab test results hold up in real-life, enterprise-class environments.

  • Features. While much of quality is about existing features working the way you expect, there’s also a set of capabilities we’ve heard need to be added to enhance quality. ZENworks 11SP4 will add new capabilities that simplify management of SSL certificates in the zone, improve the remote management capabilities by migrating to the latest version of the TightVNC code base, and decouple the agent update capability from the agent itself, so that even if the agent gets into a bad state, the updater can still run to deploy updates or repair your existing agent. It’s also worth noting that we’re hoping to get a new Enhancement Request system up and running soon, giving the community an easier way to help us understand what’s most important to the broader ZENworks community. I’d guess that will happen early next year.

In the end, while we’ll continue to tweak the process, write more tests, do more code reviews, and all the rest, the ultimate measure is your feedback. So if you have feedback you’d like to share about positive or negative experiences with the latest releases of the ZENworks products, please send it my way (jblackett@novell.com) and I’ll make sure it finds the right people on the Product Management or Engineering teams.

