Sentinel Network Storage over multiple Mount Points


We have a Sentinel 7 instance that we are configuring using the virtual
appliance. We will be using a SAN for network storage, but would like
to break up the storage if possible. Is there any way of using multiple
network locations for storage?


--
robertivey
------------------------------------------------------------------------
robertivey's Profile: http://forums.novell.com/member.php?userid=27938
View this thread: http://forums.novell.com/showthread.php?t=448815


Re: Sentinel Network Storage over multiple Mount Points


Sentinel itself does not really care about the system's mount points
as long as it can write where it needs to write (and read where it
needs to read) without errors. The tricky part you've identified is
splitting up the data: all of the per-day directories go into the
same 'events' directory, and there does not appear to be any way of
predicting the subdirectory names, so you cannot set up mounts or
symlinks for them ahead of time.

I'm sure there are things you could try, like moving the data to
another partition and using symbolic links to those partitions to get
the data elsewhere. But if you're already using a SAN, is there a
reason you cannot have the SAN or the OS handle the allocation of
space? For example, assign space from LUN0 and LUN1 to the same
"disk", which is then partitioned for use by Sentinel. If you can
describe what you're trying to overcome in a bit more detail, your
concerns may be clearer to me (or to others reading).
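
A minimal sketch of the symlink idea, assuming Sentinel is stopped
first and that the event store lives under
/var/opt/novell/sentinel/data (the exact paths here are illustrative;
check your install):

    # move an existing day directory onto the second partition...
    mv /var/opt/novell/sentinel/data/eventdata/events/20111125 \
       /mnt/san2/events/20111125
    # ...and link it back so Sentinel still finds it at the old path
    ln -s /mnt/san2/events/20111125 \
          /var/opt/novell/sentinel/data/eventdata/events/20111125

Whether Sentinel follows the symlinks cleanly is something you'd want
to verify on a test box first.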

Good luck.

Re: Sentinel Network Storage over multiple Mount Points


In short, we have a limitation on the SAN: when creating storage
pools, the storage must already exist in order to add it to the pool
at creation time. A limitation of the IBM 5000 series SAN is that
pools cannot be resized dynamically. This would not normally be an
issue, but when things add up we are looking at potentially needing
quite a bit of space to store all the data (500 EPS over 90 days is
around 2.2 TB, depending on your average event size). Looking ahead,
this is problematic, because we want to thin provision the storage
and add to it on an as-needed basis. With the SAN volume pools maxing
out at 2 TB, if we ever need to go larger (we probably will within
the first 3-6 months), we would need multiple SAN volumes.
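
For reference, the 2.2 TB figure is consistent with an average stored
event size of roughly 600 bytes (that per-event size is an
assumption; yours will vary with your sources and compression):

    # 500 events/sec * 86400 sec/day * 90 days = 3,888,000,000 events
    echo "500 * 86400 * 90 * 600 / 1000^4" | bc -l   # ~2.3 TB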

My initial thought was to use Linux LVM to stitch the individual
volumes together, but this seems tedious and a slightly hacky way of
accomplishing the task, especially considering we would have a SAN
breaking up the storage and exposing it separately to the OS, only to
have the OS stitch it right back together.
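
For completeness, here is roughly what that stitching would look
like. A minimal sketch, assuming the SAN LUNs appear as /dev/sdb and
/dev/sdc (the device names, volume names, and ext3 choice are all
illustrative):

    pvcreate /dev/sdb /dev/sdc               # mark both LUNs as physical volumes
    vgcreate vg_sentinel /dev/sdb /dev/sdc   # pool them into one volume group
    lvcreate -n lv_events -l 100%FREE vg_sentinel
    mkfs.ext3 /dev/vg_sentinel/lv_events
    # later, when another 2 TB LUN shows up as /dev/sdd:
    pvcreate /dev/sdd
    vgextend vg_sentinel /dev/sdd
    lvextend -l +100%FREE /dev/vg_sentinel/lv_events
    resize2fs /dev/vg_sentinel/lv_events     # grow the filesystem online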

We cannot allocate the storage pool at the full size we may
eventually want, because we could be looking at a very large number:
there will be hundreds of Cisco firewalls, VPN concentrators, and
domain controllers pointed at this environment (one domain controller
alone is estimated to generate 30 EPS in Security Audit logs, which
we want to store). Because we can't get all the storage up front (it
must be thin provisioned and then grown), we will not be able to use
raw mappings.

We decided to use the NAS head side of the SAN to provide an NFS
mount point for the volume. The NAS head can span its volumes across
multiple pools and present us with a single volume that can scale as
large as we need. It can be thin provisioned, then grown on the NAS
on an as-needed basis as we scale up the EPS we are storing within
Sentinel.
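
The mount itself is straightforward. A minimal sketch, assuming the
NAS head is reachable as nas-head and exports /vol/sentinel (the
hostname, export path, mount point, and NFS options are all
illustrative; tune them for your environment):

    # /etc/fstab entry on the Sentinel appliance
    nas-head:/vol/sentinel  /mnt/sentinel_nfs  nfs  rw,hard,intr,rsize=32768,wsize=32768  0 0

    mount /mnt/sentinel_nfs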

On a side note, when investigating storage for the VMware ESX
environment we found two big limitations with storage attached
directly to the ESX server. First, VMFS volumes have a 2 TB size
limit, so that (minus overhead) is the maximum space on a single
volume if you attach the storage to VMware and have it present a VMDK
file to the server. Second, RDM (raw device mapping) has limitations
when moving the server between VMware clusters for DR purposes.

NFS seemed to be the way to go. It works better for us than a raw
device given to VMware because we are not limited in which cluster
the VM runs on. It also removes the limitation on growing the volume,
since we do not need to allocate all of the storage we will ever need
into the storage pool when we initially provision it. With the NFS
route, we can grow the volume across multiple storage pools if
required, to obtain as much storage as we may need.


--
robertivey
------------------------------------------------------------------------
robertivey's Profile: http://forums.novell.com/member.php?userid=27938
View this thread: http://forums.novell.com/showthread.php?t=448815


Re: Sentinel Network Storage over multiple Mount Points


I've entered an enhancement request for this as Bugzilla entry
735790. For now I think LVM is probably your best option. I'm not
sure how that is really different from having Sentinel point to
multiple locations, since in the end something at the endpoint is
making up for the SAN's deficiency either way, but this could be a
nice feature for a number of other reasons. NFS sounds like a good
option for you as well.

I cannot give any kind of ETA worth its weight in digital characters,
as the request was just entered and needs to be prioritized against
all of the other improvements being made, but hopefully it can be fit
in by 7.1.

Good luck.

Re: Sentinel Network Storage over multiple Mount Points


Aaron,

A complete hypothetical:

You already have some natural divisions in the online, rawdata, and
rawdata_archive arrangement. Since the rawdata and archive files are
zipped, they take up a lot less space than the online files, so
perhaps that is good enough.

The gibberish folder names are the event source UUIDs. Since data
accumulates in the online folder tree, and -then- is moved to the
rawdata folder, and -then- to the rawdata archive folder, it should
be possible to construct a script which looks for ..../online/UUID
and then ensures there is a corresponding bind mount (mount -o bind)
for that UUID which tosses the data onto any number of storage areas.
Alternatively, if Sentinel works with them, a symlink to stub the
folder over to some other place you want.

But you DO have a couple of natural breaks: the online data, the
processed raw data, and the "network" storage ( cough ). And since
you have a "heads up" as to the UUIDs ahead of time, you should be
able to do something in a script, and cron it to manage things every
10 minutes, keeping well ahead of the hourly dredge Sentinel does.
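
A minimal sketch of such a cron job, assuming the online tree lives
under /var/opt/novell/sentinel/data/eventdata/online and the extra
storage is mounted at /mnt/san2 (both paths are illustrative, and
this assumes Sentinel tolerates bind-mounted source folders; test
before trusting it with production data):

    #!/bin/bash
    # redirect each event source UUID folder onto secondary storage
    ONLINE=/var/opt/novell/sentinel/data/eventdata/online
    TARGET=/mnt/san2/online

    for dir in "$ONLINE"/*/; do
        [ -d "$dir" ] || continue               # skip if glob matched nothing
        uuid=$(basename "$dir")
        mountpoint -q "$dir" && continue        # already redirected
        mkdir -p "$TARGET/$uuid"
        # relocate anything already written, then bind-mount over it
        mv "$dir"* "$TARGET/$uuid"/ 2>/dev/null
        mount -o bind "$TARGET/$uuid" "$dir"
    done

Run it from cron every 10 minutes, e.g.
*/10 * * * * root /usr/local/bin/redirect-online.sh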

Also, in the absence of bind mounts, if Sentinel works properly with
the UUID folders being symlinks, then you can spray them all over any
storage you like and it should happily "just work."
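
The symlink variant of the same loop body would be (same illustrative
paths, same caveat about verifying Sentinel is happy with it):

    mv "$ONLINE/$uuid" "$TARGET/$uuid"
    ln -s "$TARGET/$uuid" "$ONLINE/$uuid"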

But this sounds like a good enhancement request: allow Sentinel to
work with multiple storage areas and either hash the data across them
or pick the one with the most free space.

-- Bob


--
Bob Mahar -- Novell Knowledge Partner
Do you do what you do at a .EDU? http://novell.com/ttp
"Programming is like teaching a jellyfish to build a house."
More Bob: 'Twitter' (http://twitter.com/BobMahar) 'Blog'
(http://blog.trafficshaper.com) 'Vimeo' (http://vimeo.com/boborama) <--
Click And Be Amazed!
------------------------------------------------------------------------
Bob-O-Rama's Profile: http://forums.novell.com/member.php?userid=5269
View this thread: http://forums.novell.com/showthread.php?t=448815


Re: Sentinel Network Storage over multiple Mount Points


I created an entry in Bugzilla as bug 735790. It links to this
thread to provide a bit more background. Nice ideas, Bob.

Good luck.