DevPartner Blog
kgroneman, Community Manager

Micro Focus is delighted to announce the release of DevPartner 11.04, a minor product release generally available on November 14, 2017. DevPartner enables developers to build and deploy applications quickly by reducing coding errors and potential performance problems. Integrated with Visual Studio, DevPartner automatically detects code quality issues and enables development teams to take quick corrective action.

DevPartner 11.04 highlights include: 

  • Support for Microsoft's latest development environment, Visual Studio 2017
  • A new Jenkins integration via the DPS Runner, enabling unit test launches from Jenkins and HTML report generation for both error detection and coverage analysis
  • Improvements in memory management, consumption, and performance
  • Support for additional Windows APIs
  • A roll-up of prior bug fixes and customer-requested enhancements from version 11.03

For more information on the latest version of DevPartner, see the product datasheet and release notes.

mlevis

After several months of sprinting along with Microsoft's agile cadence for Visual Studio 2015, today DevPartner offers the market DevPartner Studio 11.3. DevPartner Studio packs together a set of code-troubleshooting tools for the professional developer. The 11.3 release tackles the latest Microsoft development platform, Visual Studio 2015, and the latest operating system, Windows 10, while keeping true to its BoundsChecker roots in native development.


  • Full support for Microsoft Visual Studio 2015. Great for anyone evaluating or migrating to the latest Microsoft IDE. DevPartner Studio still handles VS2005, VS2008, VS2010, VS2012, and VS2013, for both 64-bit and 32-bit applications, making it easy for developers to migrate and troubleshoot multiple versions of their code.
  • Full support for Microsoft Windows 10, x86 and x64.
  • .NET 4.6 support across C#, VB, and web. Great for anyone evaluating or migrating to the latest Microsoft .NET applications, especially for mixed .NET-plus-native architectures.

Now why is this important for the developers out there? After a long week in the office we all start to feel sluggish, right? But how do you feel when your apps are sluggish in responding? Do you know exactly where in the code the performance hits are happening? DevPartner Performance Analysis (TrueTime) can pinpoint the exact lines of code that are causing the sluggish responses from your application.


Does your application suffer from memory leaks or, worse, potential buffer overruns? DevPartner Error Detection (BoundsChecker) can detect these errors as they occur and show you in the source code where the memory was allocated and leaked. BoundsChecker can catch a multitude of errors, stop execution at the point of failure, show the source code, and help explain each error.


Are your apps moving to the mobile platform while maintaining a traditional PC/tablet platform as well? Test your application's common logic (Universal app) under DevPartner to ensure the mobile application is performant and memory efficient.


How much of your application is actually being tested by your automated test suite? Or by your QA team's testing? Has the developer tested all of the code changes being submitted? These are important questions that should be answered to understand the risks associated with a code change or release. Luckily, DevPartner Code Coverage (TrueCoverage) can answer them for you. Code Coverage can also be run in combination with BoundsChecker.

There are many other benefits to be had by testing your code with DevPartner. There are also plenty of command line options that allow using DevPartner in a continuous integration testing (CIT) scenario, alerting a dev team to a regression in a new build. This can save a team plenty of time determining when a change in application stability or performance occurred. It is one of the things we do ourselves when we change the instrumentation layer or subsystems in BoundsChecker: we maintain a suite of unit tests, compiling small test programs with each version of Visual Studio from VS6 through 2015, run BoundsChecker against each application, extract the session file output, and compare it against a benchmark. This approach is not limited to BoundsChecker; it is a general way to know whether any errors have been introduced in a new build of your application.


Want to know more? Mosey on over to the DevPartner web site. http://www.borland.com/Products/Software-Testing/Automated-Testing/Devpartner-Studio


Heard enough? Want to download a trial? http://www.borland.com/Products/Software-Testing/Automated-Testing/Devpartner-Studio/Product-Trial

Matt Schuetze

We identified in June a bug of the worst kind: the bug introduced during requirements gathering and elicitation, which led ultimately to building the wrong thing. If you go back to my December post, you'll see how the lab busted their butts getting Transaction Tracking working inside DevPartner Studio. We knew it was risky work, so review again the backdrop for the effort we were about to undertake:

"This feature work, which we are calling Transaction Analysis when it goes generally available in a DevPartner Studio release later this year, represented a steep schedule and technical risk. The request mirrored the existing Entry Point Transaction Tracking already shipping in DevPartner Java Edition, yet DevPartner Studio had none of the basis for entry points or transactions readily available, so we couldn’t just steal code. Instead, due to the risks, we followed a formal methodology including requirements capture, requirements elicitation, preparation of primary and alternative implementation strategies, and writing a functional specification detailing implementation work items, testable checkpoint plateaus, and potential future enhancements."

So, to minimize risk and ensure just enough of the right capability was built out, we went through a fast but thorough design and analysis phase with a business analyst, two or three tight iterations of agile implementation and verification, and delivered the desired capability on time, under duress. Not only that, we thought we had exceeded expectations. The DevPartner Java implementation of transaction tracking always suffered from the limitation that you couldn't just click on a package to set up a filter. You actually had to type out the "entry point" (i.e., the fully qualified class and method) without any typos, and usually iron it out through trial and error, rerunning the target app. While it has a handy wildcarding feature, the setup was always perceived as quite spartan and put a high burden on the end user to configure it properly. The original DPJ developer's design notes from his initial implementation of transaction tracking show he wanted to add point-and-click method selection in a later development phase, but alas, such a requirement never surfaced again in DevPartner Java.

In the DevPartner Studio implementation this time around, we saw we had the opportunity to add point-and-click logic for setting up the transaction. We leveraged the fact that DevPartner Studio's session control rules already let you discover the DLLs that make up your application, look into their public interfaces for methods and functions, and pick just what you want. The implementation appeared very clean, leveraged existing UI and infrastructure, and met the goals of transaction filtering and lighter-weight overhead, since only the transaction and its call graph ran any extra profiling instructions. All other code would run essentially at full speed. The implementation went so smoothly that not only could we do the required .NET methods, we could even do native C++ with compile-time instrumentation. We demonstrated and tested the heck out of the new capability on test applications in C#, VB, and C++, on 32-bit and 64-bit Windows, and even saw decent performance-overhead reduction per the design basis. The lab congratulated each other for pulling off a miraculous coding challenge, we shipped a build to the sponsoring customer, our regional sales engineers demonstrated it to the customer, who accepted it, and we shipped the new feature live to all customers at the end of March in our DevPartner Studio 10.6 GA release.

But hold on. We missed two critical aspects that fell out of the initial requirement statement. If you look closely, you'll see the request was to mirror the existing Entry Point Transaction Tracking already shipping in DevPartner Java Edition. We missed two things about mirroring the existing capability that either did not get raised with sufficient priority during elicitation or got steamrolled in our zeal once we locked in laser focus on our final design. The first miss was that in the handoffs from customer to business analyst to designer to coder, we dropped the fact that the customer "liked typing in the transaction." They didn't want to point and click. In fact, typing in the transaction entry point, and using the limited wildcarding, was exactly what they liked most.

The first miss could be dismissed because, indeed, we were trying to shoehorn a new feature into existing code that did not have the notion of entry points to begin with. The other miss, however, is much more significant and more sinister. When the decision whether to handle .NET or native C++ came up, we stretched and handled both. Well done, chaps, right? While it's true that the greater DevPartner user base will appreciate transaction tracking across managed, native, and mixed runtime targets, the sponsoring customer uses a very specific subset of .NET, namely ASP.NET, and even more tightly, a set of solutions based on the Web Site project type in Visual Studio. This specific omission led to the crux of the current bug: you cannot actually select an ASP.NET assembly as your DLL and pick out a specific public method as the head of the transaction. Why not? The ASP compiler rewrites the internal assembly GUID and randomizes the DLL file name between builds. It does this quite intentionally, to allow hot deployment of new versions of compiled objects within the running web site. The fact that picking a specific method in a fully qualified module filename becomes wrong on the very next compile and run meant that most of the new transaction tracking feature was rendered approximately useless! A wildcard might have helped a bit, but truly the concept of entry points was needed, and we really should be exposing namespaces, akin to Java packages, to pull off the wildcard filter, rather than DLL names.

The fact that DevPartner Java was used by the customer on Java-based web site applications, and that DevPartner Studio would be used on ASP web site applications, was overlooked when creating a proper user acceptance test case. We did think about it at one point, because looking back I found this unanswered question in our test plan: "Does the technique work with IIS-based ASP.NET web apps?" Answering that question would have caught this early on and raised its relative priority dramatically back in the elicitation and design phase. In truth, all is not lost, even though the sponsoring customer is a bit peeved with us. The transaction tracking feature is still very slick with its "start disabled" mode, and it works like a champ across other .NET and native application architectures. It just might take another crash development effort to get web sites working just as smoothly. Sticking with our skiing analogy, the project was easily a double black diamond: we made it down over mogul fields, through the trees, over some ice, and back to base, only to have a wayward helicopter clip the tow cable with a rotor blade on the gondola ride back up (the team watched http://www.youtube.com/watch?v=v5aMT9MBfZI this week, hence the analogy). Hopefully the customer won't rescind our lift tickets before we sharpen and wax up our boards and take aim at another run.


Matt Schuetze

The May bug story gives me one more chance to ski before the dog days of summer hit Detroit. A very close customer, let's call him Chris, hit an apparent hang running BoundsChecker under his rather large 3D app, an app so large it's really the Max of 3D apps. Anyway, Rick recognized that Chris's app was falling over in a way that he could reproduce, so he took the case on as his highest-priority queued work item.

The crux of this story is that a very small snowball was tripping an avalanche. The snowball turned out to be a small oversight in ASCII-versus-wide function validations. This conflict, however, triggered exception handling code inside Chris's application to take its own mini dump in an attempt to capture the fault. The mini dump process naturally asked the rest of the application to suspend all its threads, including the BoundsChecker threads in its backend core modules, while the mini dump thread collected stack details for all other threads. This tenacious API therefore also suspended the BoundsChecker communication thread, thereby stalling the shared-memory interprocess communication chain back to the BoundsChecker front end inside Visual Studio, freezing up that process too. AVALANCHE!

In all, this feels like skiing down a bunny hill, only to find a howling wind-swept corridor leading straight down the north face of Mount Everest. Rick of course brings out a pair of trick skis for cases like this. After cruising the bunny hill, he pulled out a parachute, night vision goggles, and a laser range finder to descend Everest in style. The trick moves he pulled from the seat of his snow pants included: intercepting SuspendThread() properly, collecting all of BoundsChecker's back-side critical sections, letting SuspendThread do its thing, and then releasing all the critical sections; except, of course, for the communication thread, which should never be stopped, and which now tells SuspendThread() to buzz off and aborts the suspend API call.
Oh, and he corrected the triggering ASCII/wide oversight snowball too: a difference between 32 and 48 bytes in one parameter, the output buffer size. But I cannot do written justice to the actual techniques. Listen to Rick in his own words and consider whether this was a triple black diamond or just a green run.

If you want the audio for a podcast version, then download attached file RickSuspendThread.wma.

Matt Schuetze

I got swallowed up in March Madness, so my February bug story is a bit late.

Many software companies like to use their own products internally. Many places call this dogfooding. Micro Focus uses the much more sophisticated term of “self-residency.” This means that the COBOL compiler team writes their compiler in COBOL. It means that the SilkCentral team hosts their own test suites in their own SilkCentral repository. It also means DevPartner engineers use features of DevPartner to check and profile other features.

There are a few restrictions on using a profiler to profile a profiler. The same would be true of using a debugger to debug a debugger. For instance, I can use DevPartner Studio's performance analyzer to profile DevPartner Java's coverage or memory cores, but if I tried to profile its own performance analyzer, I would have two processes fighting to capture the singleton Quantum kernel driver, and one or both would lose out. I can also run Code Review against any other component written in supported languages, because it's not a runtime tool. If you are not careful with runtime analyses, though, you can wind up in horribly wrong recursion states, if you manage to get the components to self-monitor at all.

That brings us to the February case: we had a regression bug report that our memory analysis tool had sprung its own memory leak. Is that even possible? Certainly: the memory analysis tool needs to run its own code, allocate its own memory, and perform all sorts of operations while injected into the process being analyzed, any of which could certainly lead to leaks. As it is, the memory analyzer needs to elide all of its own memory impact anyway; otherwise the session file would show all sorts of internal processing and cloud the end user's results.

The creative step taken this month was not magic, but it was figuring out how to get around two architectural realities: for BoundsChecker, the DevPartner injection target cannot be itself, and for any process running under BoundsChecker, the core injection must come first, before other DLLs come up and start making allocations. Historically, even attempting this was labeled "impossible" by many who were intimately involved with this code tree. The stroke of genius was realizing that the whole shebang of BoundsChecker didn't need to run as a fully operational product. Rather, the idea was to essentially statically link the BC core into a unit test mule process that contained enough of BC's internals to drive the suspected locations of the perceived leak. Once this linking and enough test code and scaffolding were in place to run the mule, within 15 seconds --BOOM!-- the leak was caught and nailed down, found lurking in BoundsChecker's symbol engine. A review of how this regression occurred showed that a latent defect, possibly dormant for a decade or longer, had been exposed by other changes above the offending leak. It wasn't caught in standard unit tests because the symbol engine gets invoked only in certain modes of operation, and whereas the load on it is small and unremarkable for our quick functional unit test bank, the leak becomes very problematic for fully scaled-up processes with dozens or hundreds of DLLs.

It felt really rewarding to use our own logic to find our own leak. It also shows that something is impossible only until it is not.

Matt Schuetze

This month's bug takes us down to some bare metal. Sometimes PDBs and Microsoft's debug interface simply pass the wrong information to us, and wrong information can lead to perceived errors on BoundsChecker's part, especially when using guard bytes, poison on free, or fill on allocation. The incorrect program behavior reported in a recent customer case was that BC was overwriting variables past the end of the structure it should have stayed within.


// Demonstrate RPI
    size_t memsize;
    LV_ITEM lvi;
    memsize = sizeof(lvi);    // this will return 40
    int j = _WIN32_WINNT;
    char *p = (char *)(&lvi);
    *(p + memsize + 2) = 0;   // write just past the reported size
// End demo code


LV_ITEM is defined as:

typedef struct {
  UINT   mask;
  int    iItem;
  int    iSubItem;
  UINT   state;
  UINT   stateMask;
  LPTSTR pszText;
  int    cchTextMax;
  int    iImage;
  LPARAM lParam;
#if (_WIN32_IE >= 0x0300)
  int    iIndent;
#if (_WIN32_WINNT >= 0x0501)
  int    iGroupId;
  UINT   cColumns;
  UINT   puColumns;
#if (_WIN32_WINNT >= 0x0600)
  int    piColFmt;
  int    iGroup;
#endif
#endif
#endif
} LVITEM;

As we can see, there are many conditional members of the structure. Looking at the debug windows, we can see what the defines were set to and what fields the debugger reports for the LV_ITEM structure.
Debug window:

      memsize        40                                                   unsigned int    // sizeof(lvi)
      j              0x00000400                                           int             // _WIN32_WINNT
      lvi            {mask=0xcccccccc iItem=0xcccccccc iSubItem=0xcccccccc ...}  tagLVITEMW
        mask         0xcccccccc             unsigned int
        iItem        0xcccccccc             int
        iSubItem     0xcccccccc             int
        state        0xcccccccc             unsigned int
        stateMask    0xcccccccc             unsigned int
      + pszText      0xcccccccc <Bad Ptr>   wchar_t *
        cchTextMax   0xcccccccc             int
        iImage       0xcccccccc             int
        lParam       0xcccccccc             long
        iIndent      0xcccccccc             int
        iGroupId     0xcccccccc             int
        cColumns     0xcccccccc             unsigned int
      + puColumns    0x00000000             unsigned int *

From the definition above, we see that the last three items listed in the debug window should not be there, given the value of _WIN32_WINNT (0x0400):
#if (_WIN32_WINNT >= 0x0501)
  int    iGroupId;
  UINT   cColumns;
  UINT   puColumns;
#endif
Note that the debug output above is from a run of the application that is neither instrumented nor running under BoundsChecker. When we query the PDB information for the structure and its elements, and use those items to place our poison or allocation fillers, we are using the base address of the variable and the offsets given to us. The same type of operation/overwrite would occur if the user manipulated the values in the debug windows.
Sometimes a “bug” in your application is the result of erroneous information passed along by the compiler. Thanks to Mark L. for contributing this article!

The opinions expressed above are the personal opinions of the authors, not of Micro Focus. By using this site, you accept the Terms of Use and Rules of Participation. Certain versions of content ("Material") accessible here may contain branding from Hewlett-Packard Company (now HP Inc.) and Hewlett Packard Enterprise Company. As of September 1, 2017, the Material is now offered by Micro Focus, a separately owned and operated company. Any reference to the HP and Hewlett Packard Enterprise/HPE marks is historical in nature, and the HP and Hewlett Packard Enterprise/HPE marks are the property of their respective owners.