Handling and storing suppressions in code


We are using Fortify SCA in our GitLab CI/CD pipeline and we are having issues with suppressions. Currently we create a branch, work in that branch, and scans are run; the devs suppress known findings, but when we merge back to main, the suppressions do not get stored. Are we missing a tool? I read somewhere that the SSC tool is what stores and manages that data.

Please forgive my limited knowledge; I am just trying to figure out why Fortify is not integrating well with GitLab when it's supposed to be fully supported.

The end result we want is to suppress known findings in the branch, fix whatever other findings are there, and merge back to main. When we are done, main retains those suppressions and we don't have to do all that again to pass scans.

Thanks in advance,

Derek

  • Suggested Answer


    SCA is the client

    SSC is the server

    If you really fix an issue in a build - and by that I mean a fix that your devs and Fortify SCA accept - then the issue is removed - gone.

    But if you audit an issue (suppression is not the same thing as auditing), then, given there are NO code changes, the issue will be found again in the next SCA client scan.

    That's where the SSC (server) comes into the equation. This allows a common place to view the current status of the project-version (now application, applicationVersion).

    Every scan still finds the audited issue - but on upload the server can remove the finding provided it correlates with existing audits.

    So if you are not using an SSC, you would need a different mechanism to reaudit.

    The SCA can compare a previous run with the current - but that's about it.
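If you go without SSC, one hand-rolled re-audit mechanism is to keep the suppressed instance IDs in a file committed to the repo and re-apply them to each scan's findings in the pipeline. This is only a sketch of that idea; the file name, the findings dicts, and both function names are illustrative, not a Fortify API:

```python
# Hypothetical sketch: persist suppressed Fortify instance IDs in the repo
# and re-apply them to each new scan's findings. File name and data
# shapes are illustrative, not part of any Fortify tool.

def load_suppressions(path):
    """Read one instance ID per line; '#' starts a comment."""
    ids = set()
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()
            if line:
                ids.add(line)
    return ids

def apply_suppressions(findings, suppressed_ids):
    """Split scan findings into (active, suppressed) by instance ID."""
    active, suppressed = [], []
    for finding in findings:
        bucket = suppressed if finding["instance_id"] in suppressed_ids else active
        bucket.append(finding)
    return active, suppressed
```

The pipeline would then fail the job only on the `active` list, so suppressions survive the merge to main because the ID file merges with the code.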

    You could export your findings into SARIF (Static Analysis Results Interchange Format) and try using the open-source DefectDojo to get some functionality similar to SSC. But SSC is the official way to achieve this.
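A minimal sketch of that SARIF route (assuming SARIF 2.1.0, where a result's `suppressions` property is part of the spec; the rule IDs and suppression list here are illustrative):

```python
# Sketch only: mark known findings as suppressed in a SARIF 2.1.0 document
# before handing it to a tracker such as DefectDojo. The "suppressions"
# property on a result is defined by the SARIF 2.1.0 spec; the rule IDs
# used here are made up for illustration.
import json

def suppress_sarif(sarif, suppressed_rule_ids):
    """Attach an 'external' suppression to every result whose ruleId is
    in suppressed_rule_ids; return the number of results touched."""
    touched = 0
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            if result.get("ruleId") in suppressed_rule_ids:
                result["suppressions"] = [{"kind": "external"}]
                touched += 1
    return touched

# Typical use in a pipeline step (paths are illustrative):
#   with open("scan.sarif") as f: doc = json.load(f)
#   suppress_sarif(doc, {"known-fp-rule"})
#   with open("scan.sarif", "w") as f: json.dump(doc, f)
```

SARIF-aware consumers can then honor those suppression objects instead of re-raising the same findings on every scan.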

    Let me know if I misunderstood your workflow - maybe I can give a better answer

  • In reply to shooking_sap

    We have a similar but slightly different problem. Our Cyber team used AWB for a long time and relied on its custom filter and suppression capabilities to eliminate noise, including false positives. Now we are building CI/CD pipelines using Jenkins/Ansible to perform the scans with SCA. Although SCA can produce scan reports based on standard templates like "FISMA Compliance" and "DISA STIG", it seems impossible to reproduce AWB's dynamic filter-and-suppress capabilities in a CI/CD implementation of SCA.

    What is the best way to carry the same filter and suppression configuration metadata over to SCA and produce scan reports that match the AWB reports?

    If that is not feasible, what is the best path to meet the Cyber team's requirement of having the same AWB functionality available in CI/CD pipelines? That may include some additional tools.

    Thanks 

