DP 9.x, StoreOnce, and Fiber

So I am running DP 9.05 on an HP-UX 11.31 server. My clients are also HP-UX 11.31. Both have a fiber connection to a private fiber switch. When I run ioscan on a client, it can 'see' the fiber 'tapes' (not 100% sure of the terminology).

So, my question is: why does my 4 Gb/s switch max out at 80 Mb/s throughput? I created a Catalyst store using the fiber connection but can't get anything faster than Ethernet speeds. How do I make sure the backup uses fiber instead of Ethernet?
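As a back-of-envelope check, converting the nominal link rates to megabytes per second shows how far 80 Mb/s is from what the links can carry (a rough sketch; it ignores protocol framing overhead, and the 8b/10b factor applies because 1/2/4 Gb Fibre Channel uses that encoding):

```python
def raw_mb_per_s(gigabits):
    """Raw line rate in MB/s: megabits divided by 8."""
    return gigabits * 1000 / 8

def fc_payload_mb_per_s(gigabits):
    """Usable FC payload rate: 8b/10b encoding leaves 80% of the line rate."""
    return raw_mb_per_s(gigabits) * 0.8

print(raw_mb_per_s(0.1))       # 100 Mbit Ethernet: 12.5 MB/s at best
print(fc_payload_mb_per_s(4))  # 4 Gb FC: about 400 MB/s of payload
```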

And yes, a StoreOnce class is in my future..

Thanks for any advice.

  • This could be more about infrastructure than DP. I would suggest doing some sniffing first to confirm which network your jobs are actually using.

    Also, did you create the gateways using the hostname associated with the FC card?

  • I have the fiber visible, created a new gateway, made sure I am on DP 9.05, and updated the fiber drivers on the server, BUT now I am getting MUCH, MUCH slower backup speeds than over Ethernet. The Ethernet is 100 Mb/s and I get 60-70 Mb/s throughput. The 4 Gb/s fiber gives me 25 Mb/s. Not sure where to go from here.

  • Do you use the same backup data for comparison? Are both devices configured identically when it comes to concurrency, block size, buffers, etc.? Is the data really read locally and passed to the local device path when using fibre? Do you maybe have some omnirc variables set up that interfere with performance, like OB2SHMIPC=0?


  • wrote:

    Do you use the same backup data for comparison? Are both devices configured identically when it comes to concurrency, block size, buffers, etc.? Is the data really read locally and passed to the local device path when using fibre? Do you maybe have some omnirc variables set up that interfere with performance, like OB2SHMIPC=0?


    To answer your questions:

    1. Yes, same backup spec, just different targets.

    2. One is a VLS set up to emulate an LTO Ultrium tape drive. The backup to the VLS only uses 2 'tapes' with a concurrency of 4. The one going to the StoreOnce has up to 7 devices available for load balancing, and concurrency is grayed out in the device properties. The StoreOnce device has the block size set to the default (256), a segment size of 10K, and 32 disk agent buffers.

    3. As far as I can tell, yes. During the VLS backup the network is slammed (I can watch it via netstat or Glance). The network is relatively quiet during the StoreOnce backup, and the fiber port on the switch shows activity.

    4. It was set to 0, per what HP told me to do to solve a different issue. I took it out, bounced DP, and reran the backup. No difference.
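    One quick sanity check after removing the variable is to make sure no active OB2SHMIPC line is left in any omnirc file (a sketch; /opt/omni/.omnirc is the usual location on HP-UX hosts, and the sample content here is made up):

```python
def active_settings(omnirc_text, var="OB2SHMIPC"):
    """Return non-comment lines in omnirc content that set the given variable."""
    hits = []
    for line in omnirc_text.splitlines():
        stripped = line.strip()
        if stripped.startswith(var + "="):  # commented-out copies start with '#'
            hits.append(stripped)
    return hits

# e.g. omnirc_text = open("/opt/omni/.omnirc").read()
sample = "# OB2SHMIPC=0\nOB2SHMIPC=0\nOB2PORT=5555\n"
print(active_settings(sample))  # ['OB2SHMIPC=0'] -- still active despite the commented copy
```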

    Thanks for the questions. 

  • If you want to compare properly, please use the same number of devices and the same concurrency on both the VTL drive and the StoreOnce writer. In our tests we saw pretty big differences between sending one stream and sending multiple streams to a VTL drive.

    OB2SHMIPC is an ancient relic that messes up communication between components; no idea why Support still recommends it. Personally, I recommend removing it from all hosts.


  • Here are some quick tests.

    Interesting that all the backups took more or less the same total amount of time but had different backup speeds.
    Still nowhere near what I expected from 4 Gb/s fiber vs. 100 Mb Ethernet.

    Server name: sapjbp

                        StoreOnce
                       Orig    Mod     VLS
    Media agents          6      2       2
    Start              7:38   7:52    8:07
    Finish             7:49   8:04    8:18
    Data (MB)         16946  16944   16689

    Reported speeds (MB/s)
    /usr/sap/DAA       4.23  15.24    5.94
    /                  1.44   3.87    1.46
    /usr               6.16  12.61    6.76
    /var               8.27  13.54    9.84
    /opt               9.39  15.32   11.52

    So I tried a different server

    Server name: sapbck

                        StoreOnce
                       Orig    Mod     VLS
    Media agents          6      2       2
    Start              8:36   8:44    8:27
    Finish             8:43   8:53    8:35
    Data (MB)         16866  16889   16856

    Reported speeds (MB/s)
    /tmp               2.86   5.36    3.22
    /                  8.13  16.28   12.19
    /usr              14.36  18.40   13.53
    /opt              14.95  19.95   15.73
    /var              17.94  21.62   15.40
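    Working the session averages out of the tables (total MB over elapsed wall-clock time) puts the runs in the same ballpark regardless of the per-object numbers; for sapjbp, for instance:

```python
def session_mb_per_s(data_mb, start, finish):
    """Average session throughput from 'H:MM' wall-clock start/finish times."""
    def to_minutes(t):
        h, m = t.split(":")
        return int(h) * 60 + int(m)
    elapsed_s = (to_minutes(finish) - to_minutes(start)) * 60
    return data_mb / elapsed_s

# sapjbp: the 6-agent StoreOnce run vs. the VLS run, both 11 minutes
print(round(session_mb_per_s(16946, "7:38", "7:49"), 1))  # 25.7
print(round(session_mb_per_s(16689, "8:07", "8:18"), 1))  # 25.3
```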

  • Your server sapjbp writes out around 25 MB/s; that is more than a 100 Mbit network can even deliver, and only around 1/4 of what a 1 Gbit LAN can do. No current tape drive will run at full speed with that. Even LTO-4 wants at least 60 MB/s to start streaming.

    Please create a standalone device writing to the local null device on your server and run the same backup there. If that delivers similar speed, your bottleneck is the server itself, and moving to fibre will not result in any better performance.

    HPE just released a nice article about finding bottlenecks like in your case; I suggest checking out KM1449474.


  • |   Your server sapjbp writes out around 25 MB/s; that is more than a 100 Mbit network can even deliver, and only around 1/4 of what a 1 Gbit LAN can do. No current tape drive will run at full speed with that. Even LTO-4 wants at least 60 MB/s to start streaming.

    I don't follow. I can watch the network and the VLS and see faster throughput than 25 MB/s. I'll work up some screenshots.

    |   Please create a standalone device writing to the local null device on your server and run the same backup there. If that delivers similar speed, your bottleneck is the server itself, and moving to fibre will not result in any better performance.

    No idea how to do this. Time to google, I guess...

    |   HPE just released a nice article about finding bottlenecks like in your case; I suggest checking out KM1449474.

    Found it, but it seems a bit dated (updated 2012-Aug-03). Know of a newer version?

    Thanks for all the help.

  • I found it here:

    https://softwaresupport.hp.com/group/softwaresupport/search-result/-/facetsearch/document/KM1449474

    Regarding the performance, I referred to the overall average over the session runtime. That you temporarily see higher values is normal. Still, everything shows you are not talking to a 100 Mbit network, but at least 1 Gbit.

    A null device is a simple thing; the definition could look like this:

    NAME "Null_Device"
    DESCRIPTION "Standalone Null Device"
    HOST sapjbp
    POLICY Standalone
    TYPE File
    POOL "Null_Pool"
    CONCURRENCY 6
    BUFFERS 32
    DRIVES
            "/dev/null"
    BLKSIZE 256
    DEVSERIAL ""
    RESTOREDEVICEPOOL NO
    COPYDEVICEPOOL NO



  • |   A null device is a simple thing; the definition could look like this:

    It would be easy if I knew how to do it. When I try through the GUI, DP asks for a lot more information, and I'm not sure how to create it at the filesystem level.

  • The sample I posted previously was one I retrieved from my IDB using omnidownload.

    You can put the text in a file on your Cell Server, modify the hostname to fit, and create the device with omniupload -create_device <filename>. No witchcraft required.
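    Scripted, that round trip might look like this (a sketch; the /opt/omni/bin path and the device name are assumptions, and the commands are only assembled here, not executed):

```python
import subprocess  # used only in the commented example flow below

OMNI_BIN = "/opt/omni/bin"  # typical Data Protector CLI location on HP-UX (assumption)

def download_cmd(device):
    """Command to dump an existing device definition from the IDB to stdout."""
    return [OMNI_BIN + "/omnidownload", "-device", device]

def upload_cmd(filename):
    """Command to create a device in the IDB from an edited definition file."""
    return [OMNI_BIN + "/omniupload", "-create_device", filename]

# Typical flow: dump a definition, edit the HOST line, re-create the device.
# text = subprocess.run(download_cmd("Null_Device"), capture_output=True, text=True).stdout
# open("null_device.txt", "w").write(text.replace("HOST oldhost", "HOST sapjbp"))
# subprocess.run(upload_cmd("null_device.txt"), check=True)
```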