Re: High availability on two boxes

My idea is to run virtual machines that move over to some other physical box in case of a hardware failure. Even if you have fast boxes with lots of memory, there is a limit to how many virtual machines you can run on one box and still get decent performance, so chances are that I will need more physical boxes. At the very least there should be a plan for how to add them, and it should be possible to expand the system without taking it down.

How long in the future? As soon as possible, or rather as soon as I'm confident enough to use this in a production environment. By then I'll probably move to CentOS or RHEL, but I'm used to Fedora so I'll start testing on that.

As you ask about the future: are there any new things on the horizon that I should wait for?



2008/5/7 Todd Denniston <Todd.Denniston@xxxxxxxxxxxxxxxxxx>:
Uno Engborg wrote, On 05/07/2008 03:33 AM:

The problem is that it can't be expanded to more than 2 boxes in the future.


what do you mean by more than 2 boxes[1], and how far in the future?

http://www.linbit.com/en/products-services/drbd/drbd-plus/
I believe they will probably roll this functionality into the openly released version later, but even so, they have said on the mailing list that its setup is very difficult, i.e., you'll want to pay them to set it up.

[1] with the current setup you can have several nodes in one drbd cluster, but _each_ drbd _resource_ is only mirrored between two nodes.
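For reference, a minimal two-node resource definition for a classic DRBD 8 setup looks roughly like the sketch below (the hostnames, backing devices, and addresses are made up for illustration):

```
# /etc/drbd.conf -- minimal two-node resource (hypothetical names/addresses)
resource r0 {
  protocol C;                     # fully synchronous replication
  on boxa {
    device    /dev/drbd0;
    disk      /dev/sda3;          # backing partition on box A
    address   192.168.1.1:7788;
    meta-disk internal;
  }
  on boxb {
    device    /dev/drbd0;
    disk      /dev/sda3;          # backing partition on box B
    address   192.168.1.2:7788;
    meta-disk internal;
  }
}
```

Note how each resource names exactly two "on <host>" sections, which is the two-node limit being discussed.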




Thanks!
/uno


2008/5/5, Gijs <info@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>:
You might want to take a look at http://www.drbd.org/.
It basically does what you suggest: it creates a software RAID 1 partition
across a local and a networked partition.
Perfect for real-time synchronisation between disks across a network, without
having to resort to expensive stuff like SANs and the like.

Regards,

Gijs

Uno Engborg wrote:

Hi,
I'm experimenting with a HA setup on two physical boxes. The idea is to
create virtual Xen machines that automagically do live migration from one
physical box to the other in case of e.g. a hardware failure on one of the
physical boxes.

My idea is to create a software raid of networked block devices and real
disks.  I.e:

Box A:
Software raid consisting of:
Physical disk in box A
Network block device exported from box B

Box B:
Software raid consisting of:
Physical disk in box B
Network block device exported from box A
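For what it's worth, the box A half of this could be wired up roughly as below, using nbd and mdadm (the host name boxb, port, and device names are made up):

```
# On box B: export its spare partition over the network (port is arbitrary)
nbd-server 2000 /dev/sdb1

# On box A: attach box B's export, then mirror it with the local disk.
# Marking the networked half --write-mostly keeps reads on the local disk.
nbd-client boxb 2000 /dev/nbd0
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/sdb1 --write-mostly /dev/nbd0
```

Box B would do the mirror image of this with box A's export.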


On top of this I plan to use GFS2 and a cluster that handles the
migration of the virtual machines.

Any comments on this? Would it work, or would it result in deadlock?
What about a quorum disk? I suppose I need one. Most setups of this
kind usually have their storage on a separate SAN, but as this is
mostly for experimentation and testing purposes, I would hope that I
could do without that for now.
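In case it helps frame the quorum question: with a two-node Red Hat cluster you can either declare two-node mode in cluster.conf, or run qdiskd against a small shared block device. A fragment might look like this (the label is made up):

```
<!-- cluster.conf fragment: two-node mode, no quorum disk -->
<cman two_node="1" expected_votes="1"/>

<!-- ...or instead, a quorum disk arbitrated by qdiskd -->
<quorumd interval="1" tko="10" votes="1" label="myqdisk"/>
```

Two-node mode avoids needing shared storage at all, which may suit a test setup; the trade-off is that fencing has to resolve split-brain on its own.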

What would be the best way to export the network block devices? I'm
thinking iSCSI or GNBD?
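On the iSCSI option: with the iSCSI Enterprise Target (one of the common targets on Fedora at the time), exporting a partition is a couple of lines of config. The IQN and path below are invented for illustration:

```
# /etc/ietd.conf -- iSCSI Enterprise Target (hypothetical IQN and path)
Target iqn.2008-05.com.example:boxb.disk1
        Lun 0 Path=/dev/sdb1,Type=blockio
```

The initiator side would then log in with iscsiadm and see the export as a local SCSI disk.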

If this doesn't look like a good idea, is there any other way to do
this?

Regards
Uno Engborg


--
fedora-list mailing list
fedora-list@xxxxxxxxxx
To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-list




--
Todd Denniston
Crane Division, Naval Surface Warfare Center (NSWC Crane)
Harnessing the Power of Technology for the Warfighter

