scancentral sensor sca error

Hi, in my ScanCentral platform the SCA sensor throws an error when it analyzes applications larger than 40 MB:

Sensor log:
com.fortify.cloud.cli.command.WorkerCommand - Job processing failed with exception: The filename, directory name, or volume label syntax is incorrect

The source to be analyzed is present in the \Fortify\SC\jobs folder, but the scan does not start.

Controller log:
2024-05-08 12:02:22,027 [INFO] [10.230.35.42 GET /scancentral-ctrl/rest/v2/worker/request/3edaf81c-5caf-4dfe-bf1d-5af66d3b2d06] com.fortify.cloud.ctrl.web.controller.rest.CoreController - WORKER_ACTIVITY:10.230.35.42:workartifact
2024-05-08 12:02:22,464 [INFO] [10.230.35.42 GET /scancentral-ctrl/rest/v2/worker/request/3edaf81c-5caf-4dfe-bf1d-5af66d3bXXX] com.fortify.cloud.ctrl.service.JobManagerServiceImpl - State changed for Job 3edaf81c-5caf-4dfe-bf1d-5af66d3b2d06 from QUEUED to RUNNING
2024-05-08 12:02:58,021 [INFO] [10.230.35.42 POST /scancentral-ctrl/rest/v2/worker/status/3edaf81c-5caf-4dfe-bf1d-5af66d3b2XXX] com.fortify.cloud.ctrl.web.controller.rest.CoreController - Controller entered WORK STATUS update handler
2024-05-08 12:02:58,021 [INFO] [10.230.35.42 POST /scancentral-ctrl/rest/v2/worker/status/3edaf81c-5caf-4dfe-bf1d-5af66d3b2XXX] com.fortify.cloud.ctrl.web.controller.rest.CoreController - WORKER_ACTIVITY:10.230.35.42:update_status
2024-05-08 12:02:58,036 [INFO] [10.230.35.42 POST /scancentral-ctrl/rest/v2/worker/status/3edaf81c-5caf-4dfe-bf1d-5af66d3b2XXX] com.fortify.cloud.ctrl.service.JobManagerServiceImpl - State changed for Job 3edaf81c-5caf-4dfe-bf1d-5af66d3b2d06 from RUNNING to FAULTED
2024-05-08 12:02:58,036 [INFO] com.fortify.cloud.ctrl.service.JobManagerServiceImpl - SSC upload state changed for Job 3edaf81c-5caf-4dfe-bf1d-5af66d3b2XXX from QUEUED to CANCELED


    Are you running SC-SAST in a Kubernetes environment? If so, are the applications that are failing primarily JavaScript-based? We've seen these same failures when scanning JavaScript-based applications with the sensor running in Kubernetes.

    We resolved these issues by setting the environment variable in our Helm charts:

    worker:
      additionalEnvironment:
        - name: SCA_VM_OPTS
          value: "-Xmx48000M -Xss24M"

    We also removed the resource limits in the Helm charts for the workers; we have no limits set for either CPU or memory.

    FYI, our SAST sensors have 16 cores and 64 GB RAM.
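
    For what it's worth, on a standalone (non-Kubernetes) sensor the same JVM options can be passed by exporting SCA_VM_OPTS in the environment before starting the sensor process. A minimal sketch, assuming a Linux host and reusing the heap/stack values from the Helm snippet above (size -Xmx to your own sensor's RAM):

    ```shell
    # Illustrative values only: 48 GB max heap, 24 MB thread stack
    export SCA_VM_OPTS="-Xmx48000M -Xss24M"

    # Confirm the variable is set in the shell that will launch the sensor
    echo "$SCA_VM_OPTS"
    ```

    On Windows the equivalent would be setting SCA_VM_OPTS as a system environment variable so the sensor service picks it up on restart.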