Anonymous_User

Stripe size on raid array and NSS

I have an HP MSA 30, which is direct-attached storage connected to a
Smart Array 6404 four-channel controller with a 256K battery-backed
cache accelerator. It will be attached to a NetWare 6.5.5 server that is
currently running NSS volumes. My question: when I create these volumes
as RAID 5, I can set the stripe size on the SA controller, and the
default is 16K. I would like to go to a 64K stripe size, but my
understanding is that NSS can only deal with a 4K stripe size. Do I
leave it at the 16K default? My other option is to drop down to 8K, but
I didn't want to go that route. Looking for some input. Thanks, everyone.


Anonymous_User

Re: Stripe size on raid array and NSS

In novell.support.open-enterprise-server.netware.storage-media Brian Howard <bjhow@myrealbox.com> wrote:
> I have an HP MSA 30, which is direct-attached storage connected to a
> Smart Array 6404 four-channel controller with a 256K battery-backed
> cache accelerator. [snip]


I have a similar question m'self, but I'm running some tests to answer my
own question. I'm doing a testing series on a 64K stripe size since that's
what I've used elsewhere. On the other hand, I've never used SATA in
production and I know that changes things. The next test run will be with
the default stripe sizes (128K for RAID0, and 16K for RAID5) in order to
see if there are any real differences in performance.

The tool I'm using for this is iozone (www.iozone.com). Running it from a
network client works just fine, but if you plop a Windows or Linux server
in front of the MSA you can run local tests that don't stress the network.
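
For example, a run covering write, read, and random I/O might look like
this (just a sketch; the drive letter and file name are placeholders, and
-g caps the largest file tested in auto mode):

  iozone -a -i 0 -i 1 -i 2 -g 4g -f D:\iozone.tmp -b results.xls

Here -i 0 is write/rewrite, -i 1 is read/reread, -i 2 is random
read/write, and -b writes an Excel-compatible spreadsheet of the results.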

As for NSS, its block size is 4K, so as long as the stripe size is a
multiple of that it should be fine; a 64K stripe, for example, holds
exactly sixteen 4K blocks per segment. I've had NSS on RAID sets with a
64K stripe with no problems in the past. Back in the TFS days, IIRC, the
consensus was to stripe your drives the same width as your block size.
That doesn't hold as true with NSS.

--
Novell, it does a network good

Anonymous_User

Re: Stripe size on raid array and NSS

IIRC, the limitations apply only if NetWare itself is doing the RAID
processing. With the HP setup, the controller does all the hard work and
just presents the MSA30 tray as one drive to NetWare.

If you have time to fully test, give her a go.

--
Timothy Leerhoff
Novell Support Forum Sysop

Success is getting what you want; happiness is wanting what you get.
Anonymous_User

Re: Stripe size on raid array and NSS

Thanks for the information. I will give iozone a whirl.
<corwin@visi.com> wrote in message
news:PlfTg.4049$0h7.890@prv-forum2.provo.novell.com...
> [snip]


Anonymous_User

Re: Stripe size on raid array and NSS

Brian,

> [snip] My understanding is that NSS can only deal with a 4K stripe
> size. Do I leave it at the 16K default? [snip]


NSS uses a 4K cluster size, which is fixed. The RAID stripe size is
independent of the NSS cluster size; NetWare is basically ignorant of
the RAID stripe size, it just sees a disk.

Going to 64K stripes is probably going to be more efficient for the RAID
controller, so I'd use that. You may want to experiment with the nss
/ReadAheadBlocks setting to match NSS reads to the stripe size; there's
potentially a performance advantage.
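
For instance, with a 64K stripe and 4K blocks, reading ahead 16 blocks
fetches one full stripe segment. At the server console that would be
something like this (a sketch; VOL1 is a placeholder volume name, and
IIRC the docs spell the switch ReadAheadBlks, so check the exact name and
limits for your release):

  nss /ReadAheadBlks=VOL1:16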


--
Hamish Speirs
Novell Support Forums Volunteer Sysop.

http://haitch.net

(Please, no email unless requested. Unsolicited support emails will
probably be ignored)
Anonymous_User

Re: Stripe size on raid array and NSS

In novell.support.open-enterprise-server.netware.storage-media Hamish Speirs <hamish@haitch.net> wrote:
> Brian,


> > [snip]


> NSS uses a 4K cluster size, which is fixed. The RAID stripe size is
> independent of the NSS cluster size; NetWare is basically ignorant of
> the RAID stripe size, it just sees a disk.


> Going to 64K stripes is probably going to be more efficient for the RAID
> controller, so I'd use that. You may want to experiment with the nss
> /ReadAheadBlocks setting to match NSS reads to the stripe size; there's
> potentially a performance advantage.


I'll keep that in mind for later on down the testing series. Right now I'm
testing as close to raw performance as I can get. THEN I'll get to
OS-specific tuning.

As for striping the RAID5, I noticed that the default for the MSA is 16K.
It strikes me that computing parity for a 16K chunk is probably faster
than computing it for a 64K chunk for writes in the 4K range. So my guess
is that the 16K stripe will probably run faster in the 'random write'
test than the 64K stripe, but slower in the 'write huge file
sequentially' test.
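
For reference, the classic RAID5 small-write path is a read-modify-write:
the controller reads the old data chunk and the old parity, then computes

  new_parity = old_parity XOR old_data XOR new_data

so the XOR cost scales with the data actually written rather than with
the stripe width; the stripe size mostly determines how often a large
write can go out as a cheaper full-stripe write. The tests should show
which effect wins here.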

--
Novell, it does a network good

Anonymous_User

Re: Stripe size on raid array and NSS

Brian,

> As for striping the RAID5, I noticed that the default for the MSA is 16K.
> It strikes me that computing parity for a 16K chunk is probably faster
> than computing it for a 64K chunk for writes in the 4K range. So my
> guess is that the 16K stripe will probably run faster in the 'random
> write' test than the 64K stripe, but slower in the 'write huge file
> sequentially' test.



Depending on the size of your writes, possibly. If you're writing more
than 16KB at a time, multiple 16KB chunks are liable to be more
time-consuming than a single 64KB chunk; a 64KB write, for instance,
spans four chunks on a 16K stripe but only one on a 64K stripe. I think
you'd need to get into empirical testing with your data set to find out
the situation in your environment, but I suspect that the difference, if
there is any, is going to be minute.
--
Hamish Speirs
Novell Support Forums Volunteer Sysop.

http://haitch.net

(Please, no email unless requested. Unsolicited support emails will
probably be ignored)
Anonymous_User

Re: Stripe size on raid array and NSS

In novell.support.open-enterprise-server.netware.storage-media Hamish Speirs <hamish@haitch.net> wrote:
> Brian,


> > [snip]



> Depending on the size of your writes, possibly. If you're writing more
> than 16KB at a time, multiple 16KB chunks are liable to be more
> time-consuming than a single 64KB chunk. I think you'd need to get into
> empirical testing with your data set to find out the situation in your
> environment, but I suspect that the difference, if there is any, is
> going to be minute.


The big crunch test is underway with the 16K stripe size. The data isn't
conclusive yet, but some trends are really showing up.

* 16K stripe RAID5 is much faster on sequential reads of large files
than 64K stripe RAID5
* 16K stripe RAID5 is much slower on writes, across the board, than 64K
stripe RAID5
* No significant differences (so far) in random read performance

For a file-server, I'd probably go with a 64K stripe at this point. Better
all-around performance. For a database... probably 64K stripe for the
transaction logs, and 16K stripe for the main DB files. Hmm.

--
Novell, it does a network good

Anonymous_User

Re: Stripe size on raid array and NSS

My first round of benchmarking is done, so I have a pretty good comparison
of 64K stripe-size versus 16K stripe-size for RAID5 arrays on the
MSA1500cs/MSA200.

My testing was done with iozone (www.iozone.com) on a Windows 2003
server. iozone doesn't run on NetWare, so otherwise I would have been
restricted to network clients, and that's not a real test. I would have
used an OES-Linux server, but we're not going to be attaching one of
those to this thing in the next year or two.

* Generally speaking, WRITE operations with the 16K stripe-size are 1/5th
as fast as those with the 64K stripe-size. Caching can significantly
affect this, though.
* For files under 2GB, READ performance is comparable.
* For files over 2GB, Sequential READ performance with the 16K
stripe-size is significantly faster than with the 64K stripe-size. The
improvement is almost double in some cases, and more than 10% in all
cases.
* For files over 2GB, Random READ performance with the 16K stripe-size is
slightly slower than with the 64K stripe-size for record-reads under 512K,
but significantly better (though not as much as with Sequential) above
that size.
* For files over 2GB, Random and Sequential WRITE performance with the
16K stripe-size trails that of the 64K stripe-size.
* For files over 2GB and record writes over 1024KB, Random and Sequential
WRITE performance with the 16K stripe-size dramatically trails that of the
64K stripe-size.

Since I don't have a tool that'll do a good job of simulating
file-serving load (lots of random I/O to lots of separate files), I have
to draw conclusions from the above and from data on my own network. My
file-server network throughput charts suggest a 70/30 read:write ratio.
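
To put illustrative numbers on that ratio (hypothetical figures for the
sake of arithmetic, not benchmark data): if the 64K array does 100 units
on both reads and writes, and the 16K array matches it on reads but
writes at one fifth the speed, the weighted throughput works out to

  64K stripe: 0.7 x 100 + 0.3 x 100 = 100
  16K stripe: 0.7 x 100 + 0.3 x  20 =  76

so even a 30% write share drags the 16K configuration well behind.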

The read performance at the 16K stripe-size is attractive; that'll mean
faster backups. On the other hand, the write performance is
significantly degraded. The throughputs I'm seeing, especially when
caching was not involved, are VERY poor. Caching will help, of course,
but if the base I/O subsystem just can't keep up with demand, caching
won't do any good no matter how reordered the writes are.

Thanks to this, I'm strongly leaning towards a non-default 64K
stripe-size.

--
Novell, it does a network good

Anonymous_User

Re: Stripe size on raid array and NSS

www.iozone.org

<corwin@visi.com> wrote in message
news:KuzVg.8337$0h7.383@prv-forum2.provo.novell.com...
> [snip]


Anonymous_User

Re: Stripe size on raid array and NSS

In novell.support.open-enterprise-server.netware.storage-media Graham Prentice <gprentice_@_rocketmail.com> wrote:
> www.iozone.org


That's what I used.





--
Novell, it does a network good
