Absent Member.

Best Practice SAN Storage Setup (RAID Levels)?

Hi,

there have been a few discussions on this topic, but I want to pick it up again as I have not yet found a fully conclusive answer.

I know that RAID-5 is not officially supported because of I/O and performance issues, and that RAID-10 is the recommended solution, for reasons I fully understand.

I now have the problem that our SAN guys are migrating our SAN to an EMC Symmetrix platform with a RAID-6 setup (essentially RAID-5 with an additional parity drive) for the whole SAN cluster, so they cannot provide a RAID-10 setup for a portion of the SAN. However, they said that such RAID discussions are rather unusual when it comes to SANs, as all write operations go through a write cache, which should make up for any performance disadvantages RAID-5 may have.

I am not sure if this is the case.

They asked whether there are any best practices or recommendations for the ArcSight DB setup on storage platforms such as HDS USP-V or EMC Symmetrix, especially also with regard to HBA (Host Bus Adapter) or driver configuration, which apparently can influence performance.

Does anybody have experience with this, or has anyone actually done performance testing of any kind?

Thanks & best regards

Florian

Cadet 2nd Class

Hey Flo,

You bring up a pretty important topic. It comes up again and again.

And I agree. The more spindles you have in a SAN, the less impact the RAID level will have, in my opinion. We should move the discussion toward IOPS rather than RAID levels.

We have many customers running scenarios like this. Ultimately, your storage team is responsible for providing the right performance for your system.

If it suffers, then you will notice it soon ;-).

I would recommend a quick setup of a test machine: connect it to the new SAN and run some extended bleep tests. Then you will see whether the RAID-5 impact is high or low. The only problem with a shared SAN is that you will never know how it behaves under heavy load when other applications are connected as well.
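As a rough illustration only (a dedicated benchmarking tool is the better choice for real tests), here is a minimal Python sketch of such a random-write probe. The mount point /mnt/san_test, the file size, the block size and the write count are placeholder assumptions, not anything from ArcSight documentation:

```python
# Minimal random-write probe (Python 3, Unix). TEST_DIR is a placeholder --
# point it at a directory on the new SAN LUN before running.
import os
import random
import time

TEST_DIR = "/mnt/san_test"      # hypothetical mount point on the SAN LUN
FILE_SIZE = 1 * 1024**3         # 1 GiB test file
BLOCK_SIZE = 8 * 1024           # 8 KiB blocks, roughly database-page sized
NUM_WRITES = 20_000             # number of random writes to time

path = os.path.join(TEST_DIR, "io_probe.dat")

# Pre-allocate the test file so the random offsets stay inside it.
with open(path, "wb") as f:
    f.truncate(FILE_SIZE)

block = os.urandom(BLOCK_SIZE)
fd = os.open(path, os.O_WRONLY)
start = time.time()
for _ in range(NUM_WRITES):
    offset = random.randrange(0, FILE_SIZE // BLOCK_SIZE) * BLOCK_SIZE
    os.pwrite(fd, block, offset)
    os.fsync(fd)                # force each write out to the array
elapsed = time.time() - start
os.close(fd)

print(f"~{NUM_WRITES / elapsed:.0f} fsync'd random write IOPS "
      f"({NUM_WRITES} x {BLOCK_SIZE} B in {elapsed:.1f} s)")
```

To see whether the array's write cache really hides the RAID penalty, you would have to let something like this run for hours and ideally while other applications are loading the shared SAN.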

Kind regards, Till

Absent Member.

Hi there,

my two cents regarding IOPS discussions with SAN "professionals":

Be very careful with any EPS-to-IOPS ratios. I had a never-ending discussion about why I was sure a given SAN/hardware setup could process 3,500 EPS or more, and why it is very hypothetical to talk about the ten-or-however-many thousand IOPS supposedly necessary to process 3,500 EPS!

BTW, I have heard many assumptions about what a given number of EPS means in IOPS. Storage guys always want to talk about IOPS; SIEM guys usually can't... I had to convince my customer with setup examples from other customers and their measured performance results.

I guess you will lose this discussion about EPS and IOPS capacities...

Another point to keep in mind:

Talking about EPS-to-IOPS rates may give you a good idea of what your SAN has to be able to write down to your spindles. But what about the hard-to-predict read performance running in parallel with your write operations?

A lot of Web or Console users can do a wide variety of things that end up as additional read or write operations.

Just help yourself with reference setups as close as possible to yours, add some scaling headroom, build it up, and run extensive bleep tests...

If I had done what the storage guys told me, we would not even have been able to handle 1,000 EPS. Bleep tests resulted in about 10,000 EPS...

The storage guys measured 10,000 IOPS for our SAN.

So you see the gap between real life and theoretical assumptions? From these practical results, I would have to conclude that one event results in one I/O operation, which is rather unrealistic...

Happy sizing,

Markus

Absent Member.

By the way:

Anton knows exactly what I am talking about... 😉

Markus

Absent Member.

Things like Hitachi HDP may work, but as was mentioned, they need to be tested for an extended duration to reveal any limitations in the caching mechanism.  The problem with these technologies is that if they fail to provide the advertised performance after the purchase, it can be very expensive to upgrade afterwards (not just hardware, but downtime and resources spent).

I thought I was the only person interested in researching this topic, but since I found this thread I figured I'd share a draft of a document I was writing.  🙂

While reading, keep in mind that my original EPS-to-IOPS ratio was EPS x 0.66 = IOPS.  It was suggested by others that I pad it to 0.85 (Stefan Zier), and finally we agreed on padding it to 1.0.  Also keep in mind that I'm defining IOPS as the sum of the ratings of each drive used in a RAID-10 array.  For example: 4 x 15k RPM SAS drives, assuming 175 IOPS per drive = 700 IOPS.  700 IOPS could support about 1,060 EPS.
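To make that arithmetic explicit, here is a tiny Python sketch of the sizing calculation as I understand it from the paragraph above; the 0.66 ratio and the 175-IOPS-per-drive figure come from the post, the rest is just illustration:

```python
# Sizing sketch: how many EPS a small RAID-10 array could support.
EPS_TO_IOPS_RATIO = 0.66   # original ratio (later padded to 0.85, then 1.0)
IOPS_PER_DRIVE = 175       # assumed rating of one 15k RPM SAS drive
NUM_DRIVES = 4             # drives in the RAID-10 array

array_iops = NUM_DRIVES * IOPS_PER_DRIVE          # 4 * 175 = 700 IOPS
supported_eps = array_iops / EPS_TO_IOPS_RATIO    # 700 / 0.66 ~= 1,060 EPS

print(f"Array rating: {array_iops} IOPS")
print(f"Supported load at ratio {EPS_TO_IOPS_RATIO}: ~{supported_eps:.0f} EPS")
```

Padding the ratio up to 1.0, as we eventually agreed, simply means the same 700-IOPS array would be sized for about 700 EPS instead.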

Please try to poke as many holes in my logic as possible as I want to make sure we all arrive at the correct conclusion.

-Joe

Absent Member.

I had a talk with a storage engineer today.  The USP-V should be able to support multiple RAID levels; for instance, REDO can be on RAID-10 while arc_event_data resides on a dynamic pool of RAID-5.

Not sure what the best layout is, but I'll keep you posted as I know more.

-Joe

Admiral

Florian wrote:

I now have the problem that our SAN guys are migrating our SAN to an EMC Symmetrix platform with a RAID-6 setup (essentially RAID-5 with an additional parity drive) for the whole SAN cluster, so they cannot provide a RAID-10 setup for a portion of the SAN. However, they said that such RAID discussions are rather unusual when it comes to SANs, as all write operations go through a write cache, which should make up for any performance disadvantages RAID-5 may have.

I am not sure if this is the case.

I had a presentation from some EMC guys yesterday, and there was a slide about the write penalty you pay per RAID type.  By penalty, you must understand that 1 useful write IOP at the front end equals X IOPs at the back end.  For RAID 10 the penalty is 2, for RAID 5 it is 4, and for RAID 6 it is 6.  So RAID 6 is clearly the worst solution you could think of here.

To answer your question further: in theory, as long as the cache is able to absorb all your write accesses, the penalty doesn't really matter, because the effective speed is the speed of the cache and data are written to disk in the background.  The problem comes if you reach the cache's limit, because at that point the cache has to be flushed to disk and the impact can be very severe depending on your disk speed.  When this happens, you pay the full price of a high penalty.

This means you must be sure the cache will always be sufficient to handle the write accesses, but this is not easy, because you need to think about every possibility (like a big file copy, for instance).
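For illustration, here is a small Python sketch of that penalty arithmetic once the cache is saturated. Only the penalty factors (2/4/6) come from the slide I saw; the front-end load and the read/write split below are made-up numbers (a write-heavy split, since event storage mostly writes):

```python
# Back-end IOPS estimate from a front-end load and the RAID write penalty.
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def backend_iops(frontend_iops: float, write_fraction: float, raid: str) -> float:
    """Reads pass through 1:1; each front-end write costs `penalty` back-end I/Os."""
    reads = frontend_iops * (1.0 - write_fraction)
    writes = frontend_iops * write_fraction
    return reads + writes * WRITE_PENALTY[raid]

frontend = 5000          # assumed front-end IOPS
write_fraction = 0.70    # assumed 70% writes (illustrative, write-heavy workload)
for raid in ("raid10", "raid5", "raid6"):
    print(f"{raid}: {backend_iops(frontend, write_fraction, raid):.0f} back-end IOPS")
```

With these assumed numbers the same front-end load costs roughly 8,500 back-end IOPS on RAID 10 versus about 22,500 on RAID 6, which is why the penalty matters as soon as the cache can no longer hide it.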

Let us know how your performance turns out, but I would be quite worried if I were in your situation 😞

HTH

Gaetan

Absent Member.

Gaetan, thanks for your answer; it's helpful with regard to my discussion with our SAN guys. Would you be able to send me that EMC presentation, or even post it here?

Thanks

Florian

Admiral

Sorry I don't have a copy of the presentation.

Absent Member.

Here's an article that details many of the concepts GCA mentioned.

http://blogs.techrepublic.com.com/datacenter/?p=2239

-Joe

Admiral

Interesting article, as long as you use it only as an example for your own calculation, because prices change quickly and the list price can be very far from the price you pay in the end.

Also, if I remember correctly, EMC guarantees 2,500 IOPS per SSD, so basing your calculation on 6,000 IOPS would be a bit too optimistic.  Another aspect you shouldn't forget is scalability.  As SSDs are very fast, the backbone must be able to transfer the data very quickly too.  Consequently, there is a limit on the number of SSDs that can be used in your EMC SAN, so be careful.

Absent Member.

FYI - updated storage performance doc here:

https://protect724.arcsight.com/docs/DOC-1583

-Joe
