
What is Cloud Support for the dedupe device?

Does anyone know what the Cloud Support option for creating a store in the dedupe device actually does?

I've posted this before, and neither MF support nor the documentation has anything about it.

There are several things not documented here, including the purpose of the cache and the sizing of a dedupe store installed locally versus one that is cloud backed.

Picture below.

Any ideas appreciated, as having an undocumented feature in your product is never a good look.

  •

    By checking the Cloud Support box, you can configure the cloud storage for your deduplication device. The purpose of the cache is actually documented in the Admin guide: Data Protector Deduplication store

    "Each deduplication store with cloud storage has a local cache that enables faster data transfer."

    The sizing of the store clearly depends on the amount of data backed up and the deduplication ratio. As for the size of the local cache, I cannot find any recommendations; I would say it depends on the amount of data being backed up at a time. I will come back here if I can find any additional info.
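
    As a rough illustration of that dependency (example numbers of my own, not official sizing guidance), the arithmetic might look like this:

        # Back-of-the-envelope sizing for a dedupe store (illustrative
        # numbers only, not official Data Protector guidance).
        daily_backup_gb = 500    # data written per day (assumed)
        retention_days = 30      # how long the backups are kept (assumed)
        dedupe_ratio = 10        # achieved ratios vary widely per workload

        logical_gb = daily_backup_gb * retention_days  # 15000 GB of logical data
        store_gb = logical_gb / dedupe_ratio           # ~1500 GB on disk
        print(f"Logical data: {logical_gb} GB, estimated store: {store_gb:.0f} GB")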


    Koen Verbelen

    Although I am an OpenText employee, I am speaking for myself and not for OpenText.
    You may also be interested in my Data Protector Support Tips listed per category

  • in reply to

    What I have seen is that the drive is completely full. What you forgot to answer is:

    • What's the disk usage of the "Dedupe_store" directory now?
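
    If it helps, here is a minimal sketch for measuring that (the path is an assumption, adjust it to your installation):

        # Compare the on-disk size of the Dedupe_store directory with the
        # overall usage of the volume. The path below is hypothetical.
        import os
        import shutil

        store_path = r"D:\Dedupe_store"  # hypothetical location

        total = 0
        for root, _dirs, files in os.walk(store_path):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass  # files can disappear while the store is active

        usage = shutil.disk_usage(store_path)
        gb = 1024 ** 3
        print(f"Dedupe_store content: {total / gb:.1f} GB")
        print(f"Volume: {usage.used / gb:.1f} GB used of {usage.total / gb:.1f} GB")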

    Koen Verbelen


  • in reply to

    I see the case is not really moving on yet. Therefore I want to make sure we have captured the info correctly, so the support engineer can move on with it. As far as I have understood, the cache limit has been configured to 100 GB, but the directory content is going above this level. That really means the content of the directory and not just the disk level, right? The other thing I wanted to check with you: on the Azure side, can you see all the data being uploaded there?
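
    If you want to double-check the Azure side programmatically, here is a minimal sketch using the azure-storage-blob Python package (the connection string and container name are placeholders; take them from your cloud store configuration):

        # Count and size the blobs in the store's container to confirm the
        # data is really landing in Azure. Requires the azure-storage-blob
        # package; both values below are placeholders.
        from azure.storage.blob import ContainerClient

        client = ContainerClient.from_connection_string(
            conn_str="<storage-account-connection-string>",  # placeholder
            container_name="<dedupe-store-container>",       # placeholder
        )

        count = 0
        total_bytes = 0
        for blob in client.list_blobs():
            count += 1
            total_bytes += blob.size

        print(f"{count} blobs, {total_bytes / 1024 ** 3:.1f} GB in the container")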


    Koen Verbelen


  • in reply to

    No problem,

    I can see data uploaded to Azure, so clearly that is working until the on-prem device runs out of space. The issue seems to be purely on the designated dedupe server.

    The cache limit has been configured to 100 GB, and the disk is 200 GB. The cache fills to 179 GB, at which point the disk runs out of space; the remaining 21 GB is the OS and other files.
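
    If it is useful, this is roughly how I could log the growth so the moment the limit is breached gets captured (the volume letter is an assumption; the 200 GB and 21 GB figures are from above):

        # Log the used space on the cache volume every few minutes so the
        # moment the 100 GB cache limit is exceeded gets captured for support.
        import shutil
        import time
        from datetime import datetime

        volume = "D:\\"        # hypothetical volume hosting the dedupe store
        os_overhead_gb = 21    # OS and other files, per the figures above
        limit_gb = 100         # configured cache limit

        while True:
            used_gb = shutil.disk_usage(volume).used / 1024 ** 3
            cache_gb = used_gb - os_overhead_gb  # rough cache footprint
            flag = " OVER LIMIT" if cache_gb > limit_gb else ""
            print(f"{datetime.now():%Y-%m-%d %H:%M} cache ~ {cache_gb:.1f} GB{flag}")
            time.sleep(300)  # sample every 5 minutes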

    This has been happening since it was set up. I'm sure it's just a config problem, as stated, but there is no accompanying documentation on the feature regarding the local cache/dedupe store for Azure.

    Many thanks.

  • in reply to

    Frankly and unfortunately, I would say it does not really look like just a config issue. I admit the documentation is limited, but there is not so much to configure. With a default cache of 30 GB it should just work fine in most cases. You have 100 GB configured, and for one or another reason this limit is not respected. I'll have a chat with the support case owner and suggest he get help from our labs on this.


    Koen Verbelen


  • in reply to

    OK, I know the case has been escalated to our labs, as I just had a chat with the lab support qualifier. I provided him with all the background information I have. One of the details we don't know yet (and to be clear: it actually shouldn't matter, but we are curious): how much data are you actually backing up per session, per day, ...? Can we get a rough estimate?


    Koen Verbelen


  • in reply to

    No problem,

    Currently we back up between 100 GB and 1 TB a day, but that can vary, e.g. more at the end of the week or month.

    We are also looking to use this primarily for long-term retention to Azure Blob initially, before moving other backup workloads across.

    During my testing I tried various data sets up to 1 TB: the smaller ones under 100 GB obviously worked, and the larger ones did not.