What is the Cloud Support option for a dedupe device?

Does anyone know what the Cloud Support option, shown when creating a store in the dedupe device, actually does?

I've posted about this before, and neither Micro Focus support nor the documentation has anything on it.

Several things are not documented here, including the purpose of the cache and how to size a dedupe store installed locally versus one that is cloud backed.

Picture below.

Any ideas appreciated; having an undocumented feature in your product is never a good look.

  • By checking the Cloud Support box, you can configure cloud storage for your deduplication device. The purpose of the cache is actually documented in the Admin guide under "Data Protector Deduplication store":

    "Each deduplication store with cloud storage has a local cache that enables faster data transfer."

    The sizing of the store clearly depends on the amount of data backed up and the deduplication ratio. As for the size of the local cache, I cannot find any recommendations; I would say it depends on the amount of data being backed up at a time. I will come back here if I can find any additional info.
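Since there is no official cache-sizing recommendation, here is a rough rule of thumb as a sketch. The multipliers and the assumption that the cache must hold all concurrently running sessions' data are guesses based on this thread, not Micro Focus guidance:

```python
def suggested_cache_gb(largest_session_gb, concurrent_sessions, headroom=1.5):
    """Estimate the local cache size for a cloud-backed dedupe store.

    Assumption (not documented behavior): the cache must hold the data
    of all sessions running at once, plus headroom for slow uploads
    to the cloud target.
    """
    return largest_session_gb * concurrent_sessions * headroom

# Example: four concurrent 100 GB VM backups -> 600 GB cache suggested.
print(suggested_cache_gb(100, 4))  # 600.0
```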


    Koen Verbelen
    Micro Focus (now OpenText) Customer Care Specialist
    If this answered your question, please mark it as "Suggest as Answer" or "Verify as Answer".
    If you found this post useful, please give it a "Like".

  • Thanks for the info.

    I don't think it's actually a cache. Generally a cache is a temporary area used to improve performance, but I have noticed that this cache is integral to the backup and easily runs out of space, so again an undocumented feature. If you run more than a few large VM backups (100 GB+), the session just comes back with: "Not enough disk space available on store partition." Clearly not a space issue in Azure Blob.
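One way to catch this condition before a backup run is to check the free space on the volume holding the cache. A minimal sketch using Python's standard library; the path and the 100 GB threshold are examples from this thread, not a fixed Data Protector layout:

```python
import shutil

def cache_partition_status(path, min_free_gb=100):
    """Report free space (in GB) on the volume holding the dedupe cache
    and whether it meets a minimum threshold. The 100 GB default mirrors
    the cache size mentioned in this thread; adjust for your setup."""
    usage = shutil.disk_usage(path)
    free_gb = usage.free / 1024**3
    return free_gb, free_gb >= min_free_gb

# Example: point this at the cache volume, e.g. r"D:\Dedupe_cache"
free_gb, ok = cache_partition_status(".")
print(f"free: {free_gb:.1f} GB, sufficient: {ok}")
```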

  • OK, I will definitely come back with more info. Please allow me some time to find it.


    Koen Verbelen

  • I believe this option is used to create the store on cloud storage instead of on local storage:

    https://docs.microfocus.com/doc/Data_Protector/11.02/SelectStore

    • Cloud Support: Select this option to create the deduplication store on a cloud storage target:
      • For Amazon S3, Azure, and Google, use the Data Protector GUI to directly create the container used for creating the Deduplication Store.
      • For Amazon S3 compatible supported cloud targets, create the container by using the respective cloud interface and then specify the container name in the Data Protector GUI to create the deduplication store on it.
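For S3-compatible targets, where you must create the container yourself before entering its name in the Data Protector GUI, it can help to validate the name against the standard S3 bucket-naming rules first. A sketch under the assumption that your S3-compatible vendor follows those rules (verify against the vendor's documentation); the endpoint URL in the comment is a placeholder:

```python
import re

def valid_s3_container_name(name):
    """Check a container name against common S3 bucket naming rules:
    3-63 characters, lowercase letters, digits, and hyphens only,
    starting and ending with a letter or digit."""
    if not 3 <= len(name) <= 63:
        return False
    return re.fullmatch(r"[a-z0-9][a-z0-9-]*[a-z0-9]", name) is not None

# Create the container with the vendor's own tooling, for example the
# AWS CLI against a custom endpoint (placeholder URL):
#   aws s3 mb s3://dp-dedupe-store --endpoint-url https://s3.example.com
print(valid_s3_container_name("dp-dedupe-store"))  # True
print(valid_s3_container_name("Bad_Name"))         # False
```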
  • Still trying to find details, but so far I have not found anything suggesting this would NOT be a cache. Could it possibly be that your Azure upload is extremely slow and that is the reason for the cache being congested? How many sessions are you typically running, and how much data (per day/week)?

    Will still come back with more asap!


    Koen Verbelen

  • I talked to R&D and can confirm: the cache really is a cache! Why you are seeing these messages is not clear. Would you be able to have a support case elevated and provide debug logs for this problem? It definitely needs a closer look. Feel free to share the case ID if you have one, so we can monitor it.


    Koen Verbelen

  • OK, I have found your case, and I confirm again: yes, it is a cache and nothing but a cache. No, the size does not have to be as big as your target store in the cloud, definitely not. Something must be wrong here, and we have to find out what it is. Debug logs would help.


    Koen Verbelen

  • I will re-run the jobs and do debugs this time and upload to my case.

    Many thanks for your assistance in this matter - I will report when the debugs have been done.

  • Debug logs were collected, but unfortunately the same problem occurs, i.e. "no space on store". The dedupe_cache on the dedupe Windows server just runs out of space and the backup freezes. Hopefully the partial logs will at least help engineering to advise.

  • Thanks! But I can see the volume is completely full now!? That should never happen unless the available space on the volume was less than the 100 GB you configured.

    • What's the disk usage of the "Dedupe_store" directory now?
    • You should find a file Program Files\OmniBack\sdfs\etc\<StoreName>-volume-cfg.xml
      In there you can find the configuration details, and there should also be something like a "maximum-percentage-full" setting. What is it set to?
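To pull that value out without opening the file by hand, the XML can be parsed with Python's standard library. The element and attribute layout in the sample below is a guess based on this thread; inspect your actual Program Files\OmniBack\sdfs\etc\&lt;StoreName&gt;-volume-cfg.xml for the real structure and names:

```python
import xml.etree.ElementTree as ET

# Hypothetical sample layout; your <StoreName>-volume-cfg.xml may differ.
sample = """<subsystem-config>
  <volume capacity="100GB" maximum-percentage-full="0.95"/>
</subsystem-config>"""

def max_percentage_full(xml_text):
    """Return the maximum-percentage-full attribute of the <volume>
    element, or None if it is absent."""
    root = ET.fromstring(xml_text)
    vol = root.find("volume")
    return None if vol is None else vol.get("maximum-percentage-full")

print(max_percentage_full(sample))  # 0.95
```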

    Koen Verbelen