Davide Depaoli_ (Absent Member)

Problem using Advanced Backup to Disk on DP 6.21

Hello,

I have set up a file library using Advanced Backup to Disk (with the DFMF option).

The file repository is 1.6 TB.

Full backups are about 600 GB.

A full backup runs once a week.

Full backup retention is 2 weeks.

Incremental retention is 5 days.

 

The problem is that after two weeks of backups completing successfully, the system issues a mount request, as if there were no media available for use, and the disk fills up.

 

I'm going crazy trying to track this down, but I cannot understand what is happening.

 

Can anyone help me?

 

thanks in advance

Regards

Davide

9 Replies

Knowledge Partner
Re: Problem using Advanced Backup to Disk on DP 6.21

This makes me think that your incrementals are simply too large for the available disk capacity. You need to check how big an actual incremental backup becomes and increase the disk capacity if there is no way to reduce the incremental size. You could also check how the file library is configured; there are some watermarks you could adjust. Keep an eye on the disk capacity itself. Is only the file library located on that disk? Have you excluded the file library from this backup?
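To put rough numbers on this, here is a minimal sketch using the figures from the original post; the incremental size is only a placeholder, so substitute the size your sessions actually report.

# Rough capacity check for the retention scheme described in this thread.
# Full size and retention are from the original post; the incremental
# size is a placeholder - substitute the size your sessions report.

FULL_SIZE_GB = 600          # weekly full backup
FULLS_RETAINED = 2          # 2-week retention of weekly fulls
INCR_PER_DAY_GB = 50        # assumption - measure your real incrementals
INCRS_RETAINED = 5          # 5-day incremental retention
RESERVED_FREE_GB = 100      # "amount of disk space which should stay free"
LIBRARY_SIZE_GB = 1600      # 1.6 TB file repository

needed_gb = FULL_SIZE_GB * FULLS_RETAINED + INCR_PER_DAY_GB * INCRS_RETAINED
usable_gb = LIBRARY_SIZE_GB - RESERVED_FREE_GB

print(f"space required by policy : {needed_gb} GB")
print(f"usable library space     : {usable_gb} GB")
print(f"headroom                 : {usable_gb - needed_gb} GB")

# Note: depending on scheduling, a new full can start before the oldest
# full has expired, so the peak can briefly need roughly one extra full
# (another 600 GB), which would not fit here.

With the assumed numbers the headroom is only about 50 GB, so even modest incremental growth or an overlap in full-backup protection can exhaust the library.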

Regards,
Sebastian
---
Please use the Like button below, if you find this post useful.
Davide Depaoli_ (Absent Member)

Re: Problem using Advanced Backup to Disk on DP 6.21

Hi Sebastian,

The incrementals are not that big; only a few files are modified between two full backups.

The problem is not that the repository fills up, but that no new file depots are automatically created for the library, so the media pool has no media available for new backups.

 

Yes, I have excluded the file library from backup.

 

The device is configured as follows:

Repository properties:

Max size of file depot = 100 GB

Min free disk space to create a new file depot = 1 GB

Amount of disk space which should stay free = 100 GB

 

The media pool is configured as Appendable / Loose.
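For reference, here is a rough sketch of how these settings bound the number of media the library can create; all figures are the ones quoted above.

# How many 100 GB file depots can exist before the reserved free space
# stops new depot creation. All figures are taken from the settings above.

DISK_GB = 1600            # 1.6 TB repository volume
MAX_DEPOT_GB = 100        # "max size of file depot"
RESERVED_FREE_GB = 100    # "amount of disk space which should stay free"

max_depots = (DISK_GB - RESERVED_FREE_GB) // MAX_DEPOT_GB
print(f"at most {max_depots} file depots of {MAX_DEPOT_GB} GB each")

# Once all of them exist and none are free or recyclable, the pool cannot
# supply a medium and a mount request is issued - the symptom described
# in the first post.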

 

For now, I have recycled the oldest media and I'm monitoring the situation.

 

Do you have any suggestions for a better way to configure the file library?

 

Thank you

Davide

Micro Focus Expert

Re: Problem using Advanced Backup to Disk on DP 6.21

Hi Davide.

The default policy for a file library is non-appendable, and it is better to leave that untouched. With your configuration, even if only one protected session of 2-3 GB remains in a file depot at the end, that depot can still have more than 90 GB of space claimed by expired sessions. If you have many such depots, you can end up with a lot of wasted space.
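To illustrate with rough numbers (the depot size is from this thread; the number of affected depots is just an assumption):

# Space pinned in an appendable 100 GB depot by one small protected session.

DEPOT_GB = 100          # max file depot size configured in this thread
PROTECTED_GB = 3        # one small still-protected session in the depot
AFFECTED_DEPOTS = 5     # assumed number of depots in this state

wasted_per_depot = DEPOT_GB - PROTECTED_GB
print(f"wasted per depot : {wasted_per_depot} GB")
print(f"total wasted     : {wasted_per_depot * AFFECTED_DEPOTS} GB")
# -> 97 GB per depot, 485 GB in total that cannot be reclaimed until the
#    last protected session in each depot expires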

Regards,

Shishir

Davide Depaoli_ (Absent Member)

Re: Problem using Advanced Backup to Disk on DP 6.21

Hi Shishir,

I have been working with DP for many years, but I probably didn't understand how file libraries work.
If I leave the default repository settings, I get a lot of small files. I thought that by increasing the size of the file depot I would have fewer files, each containing more backup data (like a real tape), giving me more control over the media pool.

 

So do you advise me to leave the default values?

 

thank you

Davide

André Beck
Visitor.

Re: Problem using Advanced Backup to Disk on DP 6.21

Hi Davide,

 

Do you really need DFMF on the FileLibrary? It makes sense only when you are doing consolidation into virtual full backups (synthetic fulls read from and written to one and the same file library). DFMF uses internal references (similar to hard links in a Unix file system, just internal to the application) to make them a little sparser, but that is only so efficient. Specifically, DFMF creates lots of small files, at the writer block size, which can waste an enormous amount of space due to block size alignment overhead. I've also seen a DFMF FL using up space that couldn't really be attributed to anything, just growing beyond the expected space limits.
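To put a very rough number on that alignment overhead (both the block size and the file count below are pure assumptions, not measurements):

# Estimated space lost to block-size padding when a DFMF file library
# stores very many small files; on average about half a block is wasted
# per file. Both numbers below are assumptions, not measurements.

BLOCK_SIZE_KB = 64            # assumed writer/file-system block size
NUM_SMALL_FILES = 2_000_000   # assumed number of small DFMF files

avg_waste_kb = BLOCK_SIZE_KB / 2
total_waste_gb = NUM_SMALL_FILES * avg_waste_kb / (1024 * 1024)
print(f"estimated alignment overhead: about {total_waste_gb:.0f} GB")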

 

If you have any chance (a 64-bit OS under the hood), just wipe that FL and create a StoreOnce B2D device instead. It performs so much better that it cannot even remotely be compared. As an example, the FL mentioned above had 4.7 TB of space and was used for daily incremental-forever virtual full backups. Initially it managed to store 14 days, but after some time it deteriorated; I had to reduce protection to 6 days (in the final phase) and still monitor operations, as it was running on about 150 GB free and would easily spill over. After replacing it with StoreOnce, I'm still amazed that it allocates only some 700 GB or so yet already stores 10 TB of user data at a dedup ratio of 14:1. I have repeatedly extended the data protection, which is now 30 days by default (with some GFS runs protected for 24 weeks and 104 weeks), and it shows no signs of growing out of bounds like the DFMF FL did. If you don't require incremental forever for backup window reasons (which means you still have a problem when you must run a full to repair a broken chain) and used virtual full just as poor man's dedup, go get the real thing. Yesterday.
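For what it's worth, a quick sanity check of those figures:

# Dedup ratio implied by ~10 TB of user data stored in ~700 GB on disk.

USER_DATA_TB = 10
ALLOCATED_GB = 700

ratio = USER_DATA_TB * 1024 / ALLOCATED_GB
print(f"effective dedup ratio: about {ratio:.1f}:1")  # ~14.6:1, in line with the 14:1 quoted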

 

If you don't need DFMF, don't use it. Classic FLs perform quite well; they just don't dedup. But DFMF is a PITA. You'll learn that at the latest when you try to get rid of such an FL...

 

HTH,

Andre.

Davide Depaoli_ (Absent Member)

Re: Problem using Advanced Backup to Disk on DP 6.21

Hi Andre,

No, I don't really need DFMF; I thought that using it would optimize the backup.

Do you think it would be better to use a normal File Jukebox in this small and simple environment?

A jukebox is not as flexible as a File Library, but it is simpler to manage.

 

thank you

Regards,

Davide

André Beck
Visitor.

Re: Problem using Advanced Backup to Disk on DP 6.21

Re Davide,

 

DFMF doesn't optimize your backup. It allows for some additional features you probably don't need, and you pay a price for them (worse throughput and space efficiency unless you have a use case that matches those features well). If you really want to stay with a FileLibrary, you can do so; just don't check the selection for DFM Format. This is what I call a classic FileLibrary: it simply creates one large file per virtual medium. Having said that, I cannot imagine why one would stay with a FileLibrary (for non-NAS storage) when the DP version one is running already contains StoreOnce B2D support. While that may still have some initial glitches, patches are likely to come, and the DFMF FL was by no means glitch-free either (dead blue-question-mark media, space lost somewhere unknown, completely messed up pool usage statistics, and being hard to get rid of come to mind immediately). In fact, you are seeing some of these glitches. Just try it out; B2D is easier to set up than an FL (no more manually creating lots of writers, they auto-instantiate from a single gateway device template) and likely to perform better.

 

HTH,

Andre.

Davide Depaoli_ (Absent Member)

Re: Problem using Advanced Backup to Disk on DP 6.21

Andre,

 

My customer cannot use the StoreOnce feature of DP because he does not have, and cannot install, a server acting as a gateway for StoreOnce.

 

He has only one NAS with two disk partitions:

One partition where DP is installed and the File Library is configured.

A second partition containing the user data.

 

DP backs up only this second partition.

 

thank you

Davide

 

André Beck
Visitor.

Re: Problem using Advanced Backup to Disk on DP 6.21


@Davide Depaoli_3 wrote:

He has only one NAS with two disk partitions:


Now there it was, the word that changes things: NAS (you didn't mention the storage attachment or platform/OS before). Yes, StoreOnce is not a (supported) option for NAS storage (yet). So in this case, a File Library is the best option. Just get rid of the DFMF format, configure non-appending pools as discussed before, and all should be well. I've never seen an FL behave as you describe; they always instantiate new media as needed from the given destination pool (both classic and DFMF), unless there is no space left. They also recycle expired media as soon as a new medium is about to be created and expired media exist at that time, so they tend to clean themselves up early enough - unless they fill up in the middle of writing a medium. Your parameters looked good for preventing this, though.

 

The files created by a classic FL are large enough to perform well, even when you are using non-appending pools. DFMF will perform significantly worse in this regard - it creates individual files that correspond more closely to the individual files in the source file system. That should really be avoided unless it is needed (for virtual full consolidations).

 

HTH,

Andre.
