Cadet 3rd Class

Fortify scan report showing duplicates

The Fortify scan report is showing duplicate issues, with a few internal Fortify identifiers differing between them (e.g. Vulnerability Details URL and VulnId). Are these actually unique issues, or just duplicates that can be merged? If they are truly duplicates, do you know why Fortify reports them separately, or how to force it to list each one only once?

4 Replies
Micro Focus Expert

Which specific report(s) are showing this issue? Is it occurring for all Projects/Applications, or only for a specific one or a few? Do these separate listings have separate Issue IDs? Have you submitted the reports to Fortify Support (support.fortify.com) for review?

-- Habeas Data
Micro Focus Fortify Customers-Only Forums – https://community.softwaregrp.com/t5/Fortify/ct-p/fortify
0 Likes
Cadet 3rd Class

I was looking into the same problem when I noticed something that makes sense... and doesn't, at the same time.

I saw that I had duplicates, each marking the same line of the exact same function as vulnerable. I noticed that the issue IDs were different, and the Instance ID (in the Metadata section) was also different. After carefully analyzing the results, I spotted that the scanner had taken a different code path each time (see the Analysis Trace), with each path ending at the exact same vulnerable function (well, at least according to Fortify). Because of this, even though there was only one instance of the issue, the two code paths visited (calls to the same function from two different locations) created two issues in the report.

From a UX point of view it would probably make much more sense to open only one issue; then, when a user opens the issue details in the UI, multiple Analysis Traces could be displayed.
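To illustrate the pattern described above, here is a minimal, hypothetical Java sketch (all class and method names are made up for illustration; this is not Fortify API code). Two call sites feed the same sink on the same line, so a path-sensitive scanner sees two distinct analysis traces even though there is only one flagged location:

```java
// Hypothetical example: two taint paths ending at the same sink line.
// A dataflow scanner that reports per-trace would list this as two issues.
public class TwoPaths {

    // Taint source 1 (stands in for an HTTP parameter)
    static String readFromRequest() {
        return "id = 1 OR 1=1";
    }

    // Taint source 2 (stands in for a config-file value)
    static String readFromConfig() {
        return "id = 2";
    }

    // Single sink: both paths end on the concatenation line below,
    // yet each distinct trace (source -> sink) yields its own finding.
    static String runQuery(String where) {
        return "SELECT * FROM users WHERE " + where; // the one flagged line
    }

    public static void main(String[] args) {
        System.out.println(runQuery(readFromRequest())); // trace 1: request -> sink
        System.out.println(runQuery(readFromConfig()));  // trace 2: config  -> sink
    }
}
```

Merging these into one issue with two traces, as suggested above, would be a presentation choice; the underlying dataflow facts are genuinely two paths.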

Commodore

Hi @tomks,

I used Fortify SCA for some years, and so far I know of two reasons why SCA reports issues that seem to be duplicates.

The first is the reason raised by @keyman: the app has some function that is called from more than one location, using data from different sources (e.g. HttpRequest, a config file, a database, different user entry points, etc.). Each one has to be reported, even if there is only a single exploitation point (the sink), because you have the choice of fixing the issue at the source or at the sink. If you fix near the source (the entry point), the fix applies only to that one path, leaving the other issues alive. There is a variation of this picture: when the data source and the sink are the same but the data may follow a few different paths between those points, it will be reported as only one issue with different paths, because a fix at the source or at the sink gives the same result either way.
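The source-versus-sink fix-point choice above can be sketched as follows (a hypothetical example; `fromRequest`, `fromConfig`, and `runQuery` are illustrative names, not real APIs). Validating at the sink covers every path into it, whereas validating at one source would leave the other path unprotected:

```java
// Hypothetical example: fixing at the sink closes all paths at once.
public class FixPoints {

    // Two taint sources feeding the same sink
    static String fromRequest() { return "1 OR 1=1"; } // hostile input
    static String fromConfig()  { return "42"; }       // benign input

    // Fix applied at the sink: every caller, present or future,
    // passes through this validation, so all paths are covered.
    static String runQuery(String id) {
        if (!id.matches("\\d+")) {
            throw new IllegalArgumentException("non-numeric id rejected");
        }
        return "SELECT * FROM t WHERE id = " + id;
    }
}
```

Fixing at a single source instead (say, sanitizing only `fromRequest`) would clear one reported issue while the trace from `fromConfig`, or from any new caller added later, would remain live.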

The other reason to have "duplicate issues" is the use of HTTP filters, for example in Java applications. An HTTP filter acts as an extra layer between the HttpRequest/ServletRequest (user-controlled input) and the application itself. The filter may implement, for example, the getParameter method, through which the JRE retrieves data from the user. A call to http(Servlet)Request.getParameter then invokes this method in a chain: first on the request object and then on the filter. Each of these points is a possible fix point, but the filter is ultimately optional in the web application. So SCA says: "I have to report both possible entry points, getParameter in the filter and getParameter directly on the request." This results in two issues, one of which looks like an extension of the other. In other words, one issue's call stack may look like:

b()->c()->d()->e() 

and the other issue will look:

a()->b()->c()->d()->e()

In this case, if you fix the b() function, both issues are fixed; but if you fix a() (the filter function), the first issue, in which the filter does not exist, stays alive and will be reported again. If you trust that the filter containing the fix will always be deployed and never removed, you can mark the first issue as "Not an Issue". But then, if the filter is later deleted from the source code or a() is changed so that it is no longer safe, SCA will not tell you about the first issue again, because you said it was "Not an Issue".
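The two call stacks above can be reproduced in a minimal, hypothetical sketch (all names are illustrative; a() plays the role of the optional filter layer, e() the sink). A fix inside b() covers both stacks, while a fix only in a() leaves the unfiltered b() entry exposed:

```java
// Hypothetical example mirroring the two reported call stacks:
//   issue 1:        b() -> c() -> d() -> e()
//   issue 2:  a() -> b() -> c() -> d() -> e()
public class FilterChains {

    // The sink at the end of both stacks
    static String e(String s) { return "SINK(" + s + ")"; }
    static String d(String s) { return e(s); }
    static String c(String s) { return d(s); }

    // Direct entry point: the unfiltered request path (issue 1).
    // A sanitizing fix placed HERE would cover both stacks.
    static String b(String s) { return c(s); }

    // Optional filter layer (issue 2): sanitizes, then delegates to b().
    // A fix placed only here leaves the direct b() entry unprotected.
    static String a(String s) { return b(sanitize(s)); }

    static String sanitize(String s) { return s.replace("'", ""); }
}
```

Running the two entry points shows the asymmetry: a("x'") reaches the sink sanitized, while b("x'") reaches it untouched, which is exactly why SCA keeps reporting the shorter stack when only the filter is fixed.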

Hope this is useful.

Best Regards.

Cadet 1st Class

Did you ever end up figuring out a solution to this? I'm running into a similar problem, but this time with request wrappers. The only solution I can think of is a post-processing script rule that would remove the duplicate findings. Looking in the SCA properties, there are some wrapper-related properties; however, I have not looked into how they work yet.

The opinions expressed above are the personal opinions of the authors, not of Micro Focus. By using this site, you accept the Terms of Use and Rules of Participation. Certain versions of content ("Material") accessible here may contain branding from Hewlett-Packard Company (now HP Inc.) and Hewlett Packard Enterprise Company. As of September 1, 2017, the Material is now offered by Micro Focus, a separately owned and operated company. Any reference to the HP and Hewlett Packard Enterprise/HPE marks is historical in nature, and the HP and Hewlett Packard Enterprise/HPE marks are the property of their respective owners.