Spetsnaz_br

OES2 SP1 in a VM + HB + DRBD: does it work?

Excuse my English... 🙂
I have been following this forum's discussion of OES2 SP1 on VMs (specifically the "OES2 SP1 on XEN - good idea?" thread) with great interest, because I want to migrate my non-virtualized OES1 + NSS volumes environment to VMs. Beyond performance, I want to know whether eDirectory and the NSS volumes work well under Xen, and I want to use HB + DRBD to mirror the volumes and VMs to a second physical machine.
I have:
1 IBM x3650: 2 CPUs - 16 GB RAM - 2x146 GB RAID 1 - 4x146 GB RAID 5
1 Dell PE2800: 1 CPU - 6 GB RAM - 2x96 GB RAID 1 - 4x146 GB RAID 5

My thinking: the IBM would be the master, with the VMs on the RAID 1 array and all data / NSS volumes on the RAID 5 array, and HB + DRBD would mirror everything to the Dell. My big doubt is NSS: does it work well in VMs? And even more so on top of DRBD?
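
Roughly, I imagine the DRBD side of the mirror would look something like this (only a sketch to illustrate the idea; the hostnames, IP addresses and LV names below are placeholders):

  resource r0 {
    protocol C;                        # synchronous replication between the two machines
    on ibm-x3650 {
      device    /dev/drbd0;
      disk      /dev/vg_data/lv_oes;   # logical volume carved out of the RAID 5 array
      address   10.0.0.1:7788;
      meta-disk internal;
    }
    on dell-pe2800 {
      device    /dev/drbd0;
      disk      /dev/vg_data/lv_oes;
      address   10.0.0.2:7788;
      meta-disk internal;
    }
  }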

Any hint will be of great help. Thanks.
Brunold Rainer

Re: OES2 SP1 in a VM + HB + DRBD: does it work?

Spetsnaz_br,

we currently have no NSS volumes in Xen domUs, but we run about 30 OES 2 SP1 systems carrying our whole meta directory infrastructure in Xen / Heartbeat / DRBD clusters. They run fine, without any problems. We created logical volumes that we assigned to DRBD, which does a synchronous mirror, created a Reiser filesystem on top, and configured Heartbeat to control the DRBD mirror as well as a resource group that mounts the filesystem on the primary DRBD node and starts the Xen domU in it. Working perfectly for us.

The dom0s are SLES 10 SP2 x86_64 with online updates.
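
In Pacemaker crm-shell notation the layout boils down to something like the following (a rough sketch only; on plain Heartbeat 2 the same resources are defined in the CIB XML, the DRBD agent may be ocf:heartbeat:drbd instead of ocf:linbit:drbd depending on the version, and all names are placeholders):

  primitive res_drbd ocf:linbit:drbd \
          params drbd_resource="r0" \
          op monitor interval="15s"
  ms ms_drbd res_drbd \
          meta master-max="1" clone-max="2" notify="true"
  primitive res_fs ocf:heartbeat:Filesystem \
          params device="/dev/drbd0" directory="/vm" fstype="reiserfs"
  primitive res_domU ocf:heartbeat:Xen \
          params xmfile="/etc/xen/vm/oes2-vm"
  group grp_vm res_fs res_domU
  colocation col_vm_with_master inf: grp_vm ms_drbd:Master
  order ord_drbd_before_vm inf: ms_drbd:promote grp_vm:start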

Rainer
Anonymous_User

Re: OES2 SP1 in a VM + HB + DRBD: does it work?

Spetsnaz,

It appears that in the past few days you have not received a response to your
posting. That concerns us, and has triggered this automated reply.

Has your problem been resolved? If not, you might try one of the following options:

- Visit http://support.novell.com and search the knowledgebase and/or check all
the other self support options and support programs available.
- You could also try posting your message again. Make sure it is posted in the
correct newsgroup. (http://forums.novell.com)

Be sure to read the forum FAQ about what to expect in the way of responses:
http://forums.novell.com/faq.php

If this is a reply to a duplicate posting, please ignore and accept our apologies
and rest assured we will issue a stern reprimand to our posting bot.

Good luck!

Your Novell Forums Team
http://forums.novell.com


Spetsnaz_br

Re: OES2 SP1 in a VM + HB + DRBD: does it work?

brunold;1798348 wrote:
Spetsnaz_br,

we currently have no NSS volumes in Xen domUs, but we run about 30 OES 2 SP1 systems carrying our whole meta directory infrastructure in Xen / Heartbeat / DRBD clusters. They run fine, without any problems. We created logical volumes that we assigned to DRBD, which does a synchronous mirror, created a Reiser filesystem on top, and configured Heartbeat to control the DRBD mirror as well as a resource group that mounts the filesystem on the primary DRBD node and starts the Xen domU in it. Working perfectly for us.

The dom0s are SLES 10 SP2 x86_64 with online updates.

Rainer

Thanks for your answer, Brunold.

I'll try some configurations this week. I have SLES 10 SP2 and SLES 11, both x86_64, but here is the question: SLES 10 SP2 ships Heartbeat, while SLES 11 has better Xen support but uses OpenAIS (which is new to me). Regarding NSS volumes inside a VM, I've read that raw devices work better than file-backed images on XFS, but if NSS has mount/dismount problems under HB, an NCP volume is an alternative. In any case, I have to analyse everything first, so as not to risk creating problems in my production environment with my current NSS volumes, eDirectory and other applications.
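
For illustration, passing a raw logical volume straight through to the domU would look something like this in the domU configuration file (just a sketch; the LV paths and device names are placeholders):

  # disk section of the domU config, e.g. /etc/xen/vm/oes2
  disk = [ 'phy:/dev/vg_data/lv_oes_sys,xvda,w',    # system disk of the guest
           'phy:/dev/vg_data/lv_oes_nss,xvdb,w' ]   # raw LV handed to the guest for the NSS pool
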
If anyone has other considerations, please let me know.

Mipis
Anonymous_User

Re: OES2 SP1 in a VM + HB + DRBD: does it work?

brunold wrote:
> Spetsnaz_br,
>
> we currently have no NSS volumes in Xen domUs, but we run about 30 OES
> 2 SP1 systems carrying our whole meta directory infrastructure in Xen /
> Heartbeat / DRBD clusters. They run fine, without any problems. We
> created logical volumes that we assigned to DRBD, which does a
> synchronous mirror, created a Reiser filesystem on top, and configured
> Heartbeat to control the DRBD mirror as well as a resource group that
> mounts the filesystem on the primary DRBD node and starts the Xen domU
> in it. Working perfectly for us.
>
> The dom0s are SLES 10 SP2 x86_64 with online updates.
>
> Rainer


Hi Rainer,

Do you have any documentation on the specifics of your setup, or can you
recommend which documentation was the most helpful in developing your
system? I've been trying to set up a system like this and I keep coming
up against fine points that turn into show-stoppers.

Thanks,
Paul
Brunold Rainer

Re: OES2 SP1 in a VM + HB + DRBD: does it work?

Paul,

Do you have any documentation on the specifics of your setup


What exactly would you like to have?

We build our Xen host systems (dom0) from a standard AutoYaST template, and then I have a small shell script that uses Heartbeat templates to build the two-node cluster. It configures all the Heartbeat pieces, starting with a resource group, the DRBD master/slave resource, the STONITH devices, the colocation rules and all those things.
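
Roughly speaking, that kind of wrapper comes down to substituting node and resource names into a prepared CIB template and loading it with cibadmin. A bare-bones sketch only, with made-up template path, token names and arguments:

  #!/bin/bash
  # Fill a prepared CIB XML template with node/resource names and load it
  # into the running Heartbeat CIB. All paths and names are placeholders.
  NODE1="$1"; NODE2="$2"; RES="$3"
  TEMPLATE=/root/hb-templates/two-node-drbd-xen.xml
  sed -e "s/@NODE1@/$NODE1/g" \
      -e "s/@NODE2@/$NODE2/g" \
      -e "s/@RES@/$RES/g" "$TEMPLATE" > "/tmp/cib-$RES.xml"
  cibadmin -C -o resources -x "/tmp/cib-$RES.xml"   # create the resource definitions in the CIB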

Rainer
Anonymous_User

Re: OES2 SP1 in a VM + HB + DRBD: does it work?

brunold wrote:
> Paul
>
>> Do you have any documentation on the specifics of your setup

>
> What exactly would you like to have?
>
> We build our Xen host systems (dom0) from a standard AutoYaST template,
> and then I have a small shell script that uses Heartbeat templates to
> build the two-node cluster. It configures all the Heartbeat pieces,
> starting with a resource group, the DRBD master/slave resource, the
> STONITH devices, the colocation rules and all those things.


If you're feeling generous, I'd love to see your script. 🙂

Paul
Anonymous_User

Re: OES2 SP1 in a VM + HB + DRBD: does it work?

DRBD for VMs makes me a little nervous every time I hear it. While DRBD is
awesome, for a VM you'll have to be very careful how you construct your
constraints, and even more careful about the reliability of your network
connectivity. I have seen re-syncs take a very, very long time, so you need
to be prepared for a disaster case where a node is down for an extended
period of time.

I would also NOT suggest Heartbeat for this project. While Pacemaker costs
more money, Pacemaker is a godsend compared to Heartbeat. Pacemaker can
handle OCFS2's o2cb and it can run DRBD in active/active mode. I would
strongly advise you not to use DRBD in active/active with OCFS2 on top of it
as the store for your VM images. I would use DRBD in active/active mode with
OCFS2 only for holding the VM configurations, and configure Pacemaker to
fence a node in the event of a failure to prevent a split-brain scenario.
(The scenario sounds really cool on paper until you actually spend the time
to work it out; the danger is that a split-brain scenario can happen with
DRBD in active/active.) You can also implement cLVM.
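
If you do go dual-primary for the shared configuration area, the DRBD side of
that fencing advice looks roughly like this (a sketch in DRBD 8.3-style
syntax; the two handler scripts ship with DRBD and assume Pacemaker is
managing the resource, and the resource name is a placeholder):

  resource r_vmconfig {
    net {
      allow-two-primaries;              # required for OCFS2 mounted on both nodes
    }
    disk {
      fencing resource-and-stonith;     # freeze I/O and fence the peer when replication is lost
    }
    handlers {
      fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
      after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
    # ... the usual on/device/disk/address sections follow ...
  }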

I would also suggest you use a physical STONITH device. The fact that DRBD
has been introduced to this discussion would indicate that cost is a
factor. Many people mistakenly think that they can get away without a
STONITH device. If you have cluster problems while using an SSH STONITH
device, or no STONITH device at all, and you end up with data corruption,
Novell might not even give you the time of day.

In my humble experience with DRBD and Heartbeat, I much prefer DRBD on
Pacemaker.

Brunold Rainer

Re: OES2 SP1 in a VM + HB + DRBD: does it work?

utlemming,

we run maybe ten two-node Xen clusters with Heartbeat and DRBD in active/passive mode.
The HP Integrated Lights-Out (iLO) is used as the STONITH device, and inside the DRBD mirror there is a Reiser filesystem holding a file-backed domU. This has worked very well for us for more than two years. With SLES 10 GM and SP1 there were some Heartbeat problems regarding placement rules and the like, but with SP2 and the post-SP2 Heartbeat patches all of that seems to be fixed.
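
For anyone wanting to reproduce the iLO fencing: the plugin name (external/riloe in some heartbeat/cluster-glue versions) and its parameters vary between releases, so check what your nodes actually ship before wiring it into the CIB, along the lines of:

  stonith -L                      # list the STONITH plugins installed on this node
  stonith -t external/riloe -n    # show the parameters that the iLO plugin expects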

Rainer
Brunold Rainer

Re: OES2 SP1 in a VM + HB + DRBD: does it work?

Paul,

can you write me an email?

rainer dot brunold <at> allianz dot at

Rainer
Anonymous_User

Re: OES2 SP1 in a VM + HB + DRBD: does it work?

Brunold,

I appreciate the success story about your environment. The reason I raise
the caution flag is because of my perspective -- I get to see all the broken
systems. Obviously my point of view is skewed by that perspective. My main
reason for saying it makes me nervous is to help anyone implementing this
to think it through.

But I still stand by my statements about Pacemaker -- it is much better, and a
whole lot more user friendly.



Anonymous_User

Re: OES2 SP1 in a VM + HB + DRBD: does it work?

utlemming wrote:
> DRBD for VMs makes me a little nervous every time I hear it. While DRBD is
> awesome, for a VM you'll have to be very careful how you construct your
> constraints, and even more careful about the reliability of your network
> connectivity. I have seen re-syncs take a very, very long time, so you need
> to be prepared for a disaster case where a node is down for an extended
> period of time.
>
> I would also NOT suggest Heartbeat for this project. While Pacemaker costs
> more money, Pacemaker is a godsend compared to Heartbeat. Pacemaker can
> handle OCFS2's o2cb and it can run DRBD in active/active mode. I would
> strongly advise you not to use DRBD in active/active with OCFS2 on top of it
> as the store for your VM images. I would use DRBD in active/active mode with
> OCFS2 only for holding the VM configurations, and configure Pacemaker to
> fence a node in the event of a failure to prevent a split-brain scenario.
> (The scenario sounds really cool on paper until you actually spend the time
> to work it out; the danger is that a split-brain scenario can happen with
> DRBD in active/active.) You can also implement cLVM.
>
> I would also suggest you use a physical STONITH device. The fact that DRBD
> has been introduced to this discussion would indicate that cost is a
> factor. Many people mistakenly think that they can get away without a
> STONITH device. If you have cluster problems while using an SSH STONITH
> device, or no STONITH device at all, and you end up with data corruption,
> Novell might not even give you the time of day.
>
> In my humble experience with DRBD and Heartbeat, I much prefer DRBD on
> Pacemaker.


When I read stories like this, it makes me long for the "good old days"
of Heartbeat 1 and Novell Cluster Services. Under Heartbeat 2, it's
actually more difficult to set up a basic DRBD/filesystem/server-process
stack than it ever was. Honestly, why does it have to be so complicated
for basic cases?

Paul
Brunold Rainer

Re: OES2 SP1 in a VM + HB + DRBD: does it work?

utlemming,

But I still stand by my statements about Pacemaker -- it is much better, and a
whole lot more user friendly.


I understand what you mean, because I remember how much time it took to get it all working together. You have to know in depth the parts you are bringing together and what impact they have on each other. Our solution might work in our infrastructure, but might not work in a different one.

As Pacemaker is relatively new in SLES 11, do you know of anybody (yourself included) who could write a Cool Solutions article on how to configure SLES 11 with Heartbeat and DRBD? Or does Novell plan to provide such documentation in the SLES 11 docs?

Rainer