Absent Member.

Poor Backup performance

We are experiencing issues with backup performance on some of our SLES+OES servers.
The affected machines are running virtualized on VMware with 1 CPU and 2 GB RAM.
One of the servers is SLES 10 SP3/OES 2 SP2a and the others are SLES 10 SP3/OES 2 SP3.
The backup software we are currently using is Commvault's Simpana 9.
Here are some of the results:

a) Server 1 (SLES 10 SP3/OES2 SP3)
Linux partition (ext3) => 7.97 GB with a throughput of 43 GB/h
OES partition (NSS) => 110.68 GB with a throughput of 3.45 GB/h => at this speed the full backup needs a backup window of 31 hours...
The tsafs.conf looks like:
cluster=enable
cachememorythreshold=15
readaheadthrottle=2
readbuffersize=65536
readthreadallocation=100
readthreadsperjob=10
tsamode=Dual
cachingmode=enable

b) Server 2 (SLES 10 SP3/OES2 SP3)
Linux partition (ext3) => 5.75 GB with a throughput of 48 GB/h
OES partition (NSS) => 120.69 GB with a throughput of 9.34 GB/h => at this speed the full backup needs a backup window of 13 hours...
The tsafs.conf looks like:
cluster=enable
cachememorythreshold=1
readaheadthrottle=2
readbuffersize=65536
readthreadallocation=100
readthreadsperjob=4
tsamode=Dual
cachingmode=disable


c) Server 3 (SLES 10 SP3/OES2 SP2a)
Linux partition (ext3) => 5.27 GB with a throughput of 5.63 GB/h
OES partition (NSS) => 98.27 GB with a throughput of 3.22 GB/h => at this speed the full backup needs a backup window of 30.5 hours...
The tsafs.conf looks like:
cluster=enable
cachememorythreshold=1
readaheadthrottle=2
readbuffersize=65536
readthreadallocation=100
readthreadsperjob=4
tsamode=Dual
cachingmode=disable


The strangest thing of all is that I have another two servers where the backup performance is acceptable; both are SLES 10 SP3/OES 2 SP2a.
d) Server 4 (SLES 10 SP3/OES2 SP2a)
Linux partition (ext3) => 7.88 GB with a throughput of 72.95 GB/h
OES partition (NSS) => 72.82 GB with a throughput of 21.47 GB/h => it is not really fast, but at least it is acceptable in this environment (3.3 h)
The tsafs.conf looks like:
cluster=enable
cachememorythreshold=1
readaheadthrottle=2
readbuffersize=65536
readthreadallocation=100
readthreadsperjob=4
tsamode=Dual
cachingmode=disable

e) Server 5 (SLES 10 SP3/OES2 SP2a)
Linux partition (ext3) => 8.96 GB with a throughput of 82.72 GB/h
OES partition (NSS) => 150 GB with a throughput of 27.13 GB/h => it is not really fast, but at least it is acceptable in this environment (5.5 h)
The tsafs.conf looks like:
cluster=enable
cachememorythreshold=10
readaheadthrottle=2
readbuffersize=65536
readthreadallocation=100
readthreadsperjob=4
tsamode=Linux
cachingmode=enable
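
In case it matters, this is roughly how we apply and reload these settings. The paths and commands below are from the standard OES 2 SMS setup as I remember it, so treat this as a sketch and adjust for your own install:

# tsafs.conf should live here on a default OES 2 install:
#   /etc/opt/novell/sms/tsafs.conf
# After editing it we unload and reload the file system TSA so it
# rereads the file, and restart the SMS daemon for good measure:
smsconfig -u tsafs
smsconfig -l tsafs
rcnovell-smdrd restart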

As you can see, the OES (NSS) backups are really slow (in fact much slower than with NetWare, and also slower than pure Linux), so I was wondering if anyone has been facing this issue and how it can be fixed.
Any suggestions there?
Thanks in advance.

Ricard Malvesi
Admiral

ricard1 wrote:

> [original post snipped]

You really need to give more info, like the layout. Are these all VMs sitting on one host? We need the whole layout, otherwise it is hard to figure out.
I have been down this road before. Is this a cluster?

Cadet 1st Class

How many files? Are you having the backup set the archive bit?

-- Bob
Absent Member.

No, all these servers are at different remote sites. Each of them is part of a 2-node cluster.

Site 1:
- 2x IBM x3655 (8 logical CPUs + 32 GB RAM)
- ESXi 4.1.0
- Virtual server has 2 CPUs + 4 GB RAM assigned
Site 2:
- 2x IBM x3655 (8 logical CPUs + 32 GB RAM)
- ESXi 3.5.0
- Virtual server has 2 CPUs + 2 GB RAM assigned
Site 3:
- 2x IBM x3655 (8 logical CPUs + 32 GB RAM)
- ESXi 4.0.0
- Virtual server has 1 CPU + 2 GB RAM assigned
Site 4:
- 2x IBM x3650 (4 logical CPUs + 36 GB RAM)
- ESXi 4.0.0
- Virtual server has 1 CPU + 2 GB RAM assigned
Site 5:
- 2x HP ProLiant BL460c G6 (8 logical CPUs + 72 GB RAM)
- ESXi 4.1.0
- Virtual server has 1 CPU + 4 GB RAM assigned
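
If it helps, I can capture some basic guest-level stats on the slow servers during the next backup run, to see whether the single vCPU or the disk is the bottleneck. Something along these lines with the standard SLES tools (just a sketch, nothing Commvault-specific):

# run on the OES guest while the backup job is active
vmstat 5       # CPU run queue, iowait, swapping
iostat -x 5    # per-device utilisation and service times (sysstat package)
top            # see whether smdrd (SMS) or the backup agent pins the single CPU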

Regards,

Ricard Malvesi
Absent Member.

We've had similar issues, but upgrading to SLES 11 / OES 11 made a big
improvement - backups took less than half the time. My last backup gave
me 343 GB @ 24.5 GB / hr. The four full backups I've done since
upgrading were all around 22 - 25 GB/hr.

This was still much slower than some of the straight Linux servers (got 44 GB/hr on our ZCM server), but previously I was getting maybe 8 - 10 GB/hr, and a full backup was taking 50+ hours.

Similar setup of virtualised servers on VMware.

Robert

On 19/01/2012 3:56 AM, ricard1 wrote:
> [original post snipped]
Absent Member.

Hi Robert,

Could you please tell me how smooth the update to SLES 11/OES 11 was?
And what is your experience with it? We are also thinking of starting to deploy it, but currently we do not have any experience with it.

Thanks in advance,

Ricard
Absent Member.

Hi Bob,

At one site I have 540,964 files and at another site 467,619 files.
As for the archive bit, in our case it is always disabled.
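
For anyone who wants to compare: assuming the volumes are mounted under the default NSS mount point (/media/nss/<VOLNAME> on our boxes), a quick way to count the files is something like the following. The volume name DATA is just a placeholder:

# count files on an NSS volume (placeholder volume name)
find /media/nss/DATA -type f | wc -l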

Regards,

Ricard
Absent Member.

ricard1;2171000 wrote:

Could you please tell me how smooth the update to SLES 11/OES 11 was?
And what is your experience with it? We are also thinking of starting to deploy it, but currently we do not have any experience with it.


From my experience: smooth. If you just take care of the prerequisites you should be fine. Personally, I only had to change the mount options of volumes to 'by-path' for some early-installed SLES/OES 2 boxes. After that, upgrading was a matter of next, next...
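
Just to illustrate what I mean by 'by-path' in general terms (where exactly you change it depends on whether it is an NSS device or a traditional Linux volume): instead of a plain /dev/sdX device name you reference the stable link under /dev/disk/by-path/. A made-up /etc/fstab line for a traditional ext3 volume would look something like this, with a fictional device path:

# hypothetical example - use the by-path link of your own device
/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:1:0-part1  /data  ext3  defaults  1 2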

The experience has been very good so far; I especially like the much improved patch speeds on SLES 11. The first upgrades have been running for a month now, and I have not seen any issues so far.

Best regards, Sebastiaan Veld
Absent Member.

Hi Sebastiaan,

Thanks for your comments. Did you have to modify the configuration for the following services?
- DHCP
- CIFS
- FTP
- iPrint
- iFolder

Thanks in advance!

Ricard Malvesi
Absent Member.

We "had" a similar Problem

We have two OES2 SP2 xen vm's (both on the same Host, attached to the same FC SAN) with "good" backup performance last week we upgraded the machines to OES2SP3 (SLES 10 SP4) and the backup perfromance problems started,
The bigger machine with with 6 Volumes runs smooth on Backups (nearly the same as before the upgrade)

The second one (hosting Groupwise 8.XX) dramatical droped down (Backup is beeing done on Both machines using SEP Sesan (tsafs)) I compared every SMS releated parameter and did not found any problem, tried different things nothing helped.
I even found the TID 7005707, saying that there was a problem with xen vm's with 4,8,16 etc GB having backup performance problems while using tsafs and this issues is fixed.

I was on your way to switch to OES11, and made a last test, wich helped in our case, i reconfigured Lum on the slow machine and made a tsatest wich looked ok now a test backup is running on that machine actually and it looks like the problem is gone away.
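
From memory, and without claiming these are the exact commands I used, the LUM part boiled down to re-running the LUM configuration and refreshing its cache, roughly like this:

# refresh LUM and restart its daemon
namconfig cache_refresh
rcnamcd restart
# then check raw TSA read speed with the SMS test tool
# (on our boxes it lives under /opt/novell/sms/bin; run it without
# arguments to see the options for user, password and path)
/opt/novell/sms/bin/tsatest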

I will report the final result tomorrow.
Absent Member.

Unfortunately the "poor backup performance" still exists, in the first look the backup had good performance, during the backup the performance broke down, so that backing up 109,2 GB took about 10 hours !!!

I will open a SR and will report back.
Knowledge Partner

heinsohn-wibo;2181369 wrote:
Unfortunately the "poor backup performance" still exists, in the first look the backup had good performance, during the backup the performance broke down, so that backing up 109,2 GB took about 10 hours !!!

I will open a SR and will report back.


Tagging on as I'm interested to hear your findings/progress on this.

If supported by Novell/SEP, going non-TSA might be another avenue to take?

Cheers,
Willem