Connect-It (CIT) currently uses a 32-bit architecture, which places a hard limit on the memory that can be allocated to a CIT process.
Because CIT is a 32-bit program, the memory a CIT process can use is limited to approximately 2 GB. To protect a CIT process from crashing, CIT checks every memory allocation request. If a CIT process uses up the available memory, it cannot process the data and writes an out-of-memory error to the CIT log. A common issue is that when CIT works with large files, the scenario stops with an out-of-memory error because memory is exhausted. For example, this issue was investigated in defects QCCR1E141645 and QCCR1E144507, where most of the memory was consumed by large attachments (> 50 MB). The bottom line is that the memory footprint of an attachment is much larger than its file size, and the footprint depends on how attachments are handled in a CIT scenario. Therefore, a custom CIT scenario currently has to be designed with the memory footprint of large attachments in mind.
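The following back-of-the-envelope sketch illustrates why an attachment's footprint can be several times its file size. The multipliers are assumptions for illustration (UTF-16 doubling the byte size, and three live copies of the attachment existing at the peak), not measured CIT internals:

```java
public class AttachmentFootprint {
    public static void main(String[] args) {
        // Illustrative numbers: a 50 MB attachment, as in the defects above.
        long fileSizeMb = 50;
        // Assumption: converting the raw bytes to Unicode (UTF-16 stores
        // two bytes per character) roughly doubles the in-memory size.
        long unicodeMb = fileSizeMb * 2;
        // Assumption: three live copies at the peak (source connector,
        // mapping, destination connector).
        long copies = 3;
        long peakMb = unicodeMb * copies;
        System.out.println("peak footprint: " + peakMb + " MB");
        // prints "peak footprint: 300 MB"
    }
}
```

Even with conservative multipliers, a handful of such attachments processed concurrently can approach the usable limit of a 32-bit process.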
In addition, of the 2 GB available to the program, around 1 GB is OS-protected memory on Windows, so a 32-bit application can effectively use only about 1 GB. On the CIT side, an attachment first has to be converted to Unicode encoding, which already increases its size. Then, as the attachment passes from the source connector to the mapping and on to the destination connector, it goes through several rounds of copy operations, so duplicated Unicode objects exist in the application and increase memory consumption further. The Java connector (and most CIT connectors are written in Java) already consumes a lot of memory, so once all of these copies accumulate, CIT can no longer get the OS to allocate more memory. At that point, the CIT memory protection mechanism is triggered: it throws error popup windows and suspends the CIT process. Although there is room to optimize memory usage in the current CIT engine, the more robust solution is to redesign CIT on a 64-bit architecture.
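The protection mechanism described above, checking each allocation request against the remaining headroom instead of letting the process crash, can be sketched as follows. This is a hypothetical Java illustration (`MemoryGuard` and `checkedAllocate` are invented names), not CIT's actual implementation:

```java
public class MemoryGuard {
    // Hypothetical sketch: estimate the memory still available to the
    // process and refuse any request that exceeds it, rather than letting
    // the allocation take the whole process down.
    static byte[] checkedAllocate(long bytes) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        long available = rt.maxMemory() - used;
        if (bytes > available) {
            // In CIT, this is the point where the out-of-memory error is
            // written to the log and the process is suspended.
            throw new OutOfMemoryError(
                "request of " + bytes + " bytes exceeds ~" + available + " available");
        }
        return new byte[(int) bytes];
    }

    public static void main(String[] args) {
        byte[] small = checkedAllocate(1024);  // small request: succeeds
        System.out.println("allocated " + small.length + " bytes");
        try {
            checkedAllocate(Long.MAX_VALUE);   // absurd request: rejected up front
        } catch (OutOfMemoryError e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

A 64-bit redesign would not remove the need for such a guard, but it would push the ceiling far above what any realistic attachment workload could reach.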