On Mon, 22 Aug 2005, Jonathan Horne wrote:
> I never understood this argument of hardware raid and software raid.
> To me, software raid is operating-system dependent: Linux raid partitions,
> or Microsoft partitions mirrored across disks. Load up fdisk, and what do
> you see but two or more independent disks?
>
> However, with an HP Smart Array the logical drive is configured in the BIOS
> of the card. Why would you call other vendors' cards "fake raid" when theirs
> is almost the same process? Just because it's not SCSI? I'm not following
> you here.
A hardware raid controller hands you a LUN; you have no way of knowing
what the underpinnings of the block device are. This should be OS
independent.
Consider two examples. This is a 3ware:
3ware 9000 Storage Controller device driver for Linux v2.26.02.001.
ACPI: PCI interrupt 0000:03:01.0[A] -> GSI 24 (level, low) -> IRQ 209
scsi0 : 3ware 9000 Storage Controller
3w-9xxx: scsi0: Found a 3ware 9000 Storage Controller at 0xdd200000, IRQ: 209.
3w-9xxx: scsi0: Firmware FE9X 2.06.00.009, BIOS BE9X 2.03.01.051, Ports: 4.
Vendor: AMCC Model: 9500S-4LP DISK Rev: 2.06
Type: Direct-Access ANSI SCSI revision: 03
SCSI device sda: 136697856 512-byte hdwr sectors (69989 MB)
SCSI device sda: drive cache: write back, no read (daft)
sda: sda1 sda2 sda3
Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
That's actually two 73GB drives in a raid-1 mirror.
This is two FC-SATA chassis, each with a raid controller. There are 32 disks
here, but the OS only sees the raid stripes that I'm exporting:
Emulex LightPulse Fibre Channel SCSI driver 8.0.16
ACPI: PCI interrupt 0000:04:01.0[A] -> GSI 48 (level, low) -> IRQ 217
lpfc 0000:04:01.0: 0:1303 Link Up Event x1 received Data: x1 x1 x8 x2
scsi1 : Emulex LightPulse LP1050 2 Gigabit PCI Fibre Channel Adapter on PCI bus 04 device 08 irq 217
Vendor: SC Model: FC2/SATA16R Rev: 603A
Type: Direct-Access ANSI SCSI revision: 03
SCSI device sdb: 3121954816 512-byte hdwr sectors (1598441 MB)
SCSI device sdb: drive cache: write back
sdb: sdb1
Attached scsi disk sdb at scsi1, channel 0, id 0, lun 0
Vendor: SC Model: FC2/SATA16R Rev: 603A
Type: Direct-Access ANSI SCSI revision: 03
SCSI device sdc: 3121954816 512-byte hdwr sectors (1598441 MB)
SCSI device sdc: drive cache: write back
sdc: sdc1
Attached scsi disk sdc at scsi1, channel 0, id 0, lun 1
Vendor: SC Model: FC2/SATA16R Rev: 603A
Type: Direct-Access ANSI SCSI revision: 03
SCSI device sdd: 3121954816 512-byte hdwr sectors (1598441 MB)
SCSI device sdd: drive cache: write back
sdd: sdd1
Attached scsi disk sdd at scsi1, channel 0, id 0, lun 2
ACPI: PCI interrupt 0000:04:01.1[B] -> GSI 49 (level, low) -> IRQ 225
lpfc 0000:04:01.1: 1:1303 Link Up Event x1 received Data: x1 x1 x8 x2
scsi2 : Emulex LightPulse LP1050 2 Gigabit PCI Fibre Channel Adapter on PCI bus 04 device 09 irq 225
Vendor: SC Model: FC2/SATA16R Rev: 603A
Type: Direct-Access ANSI SCSI revision: 03
SCSI device sde: 3121954816 512-byte hdwr sectors (1598441 MB)
SCSI device sde: drive cache: write back
sde: sde1
Attached scsi disk sde at scsi2, channel 0, id 0, lun 0
Vendor: SC Model: FC2/SATA16R Rev: 603A
Type: Direct-Access ANSI SCSI revision: 03
SCSI device sdf: 577175552 512-byte hdwr sectors (295514 MB)
SCSI device sdf: drive cache: write back
sdf: sdf1
Attached scsi disk sdf at scsi2, channel 0, id 0, lun 1
So those raid controllers aren't even in the machine; they are attached
via the Emulex LightPulse dual-port Fibre Channel HBA.

We then further aggregate the raid stripes using LVM (to create one big
volume out of the four 1.5TB volumes). It ends up looking like this in df -h:
/dev/mapper/stor1-storage1 5.8T 1.6T 4.2T 27% /mnt/storage1
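For anyone curious how that aggregation is done, the LVM command sequence
looks roughly like this. This is a sketch, not the exact commands from this
box; the device names are taken from the LUNs above, and the volume group
(stor1) and logical volume (storage1) names are inferred from the df output.

```shell
# Label each exported raid stripe as an LVM physical volume
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Pool them into a single volume group named "stor1"
vgcreate stor1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Carve one logical volume out of all the free space in the group
lvcreate -l 100%FREE -n storage1 stor1

# Put a filesystem on it and mount it
mkfs.ext3 /dev/stor1/storage1
mount /dev/stor1/storage1 /mnt/storage1
```

The nice part is that none of this cares what the underlying block devices
are; raid LUNs, plain disks, and partitions all pool the same way.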
> If fdisk shows one logical drive to work with, and not independent
> disks, then why is it fake raid?
It's the raid controller, not the raid, that's fake.
> Was a software driver loaded to make this possible?
Yes. There are two ways to do this: the vendor's way, which requires a
special driver that is different for each controller flavor and has some
stupid limitations (like not being able to build raids across multiple
controllers); or the Linux software raid way, which is controller
agnostic, more flexible (raid 4, 5, 6, 50, 60, etc.), and frequently faster.
You can use device mapper under 2.6.x so that there is some congruency
between what is represented in the fake raid controller's BIOS and what
you do to the disks in software.
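As an illustration of that device-mapper route, the dmraid tool (assuming
your distribution ships it) can read the metadata the fake-raid BIOS writes
to the disks and activate matching device-mapper block devices:

```shell
# List the raid sets defined in the controller's on-disk metadata
dmraid -s

# Activate all of them as block devices under /dev/mapper
dmraid -ay
```

That way the same array is visible whether you boot the vendor's driver
or plain Linux.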
> Is it less reliable?
That's hard to answer in a simple fashion. I'd vastly prefer to deal with
Linux software raid than with two or three different raid drivers that all
work differently, particularly when it comes to the important part of raid:
dealing with failures. mdadm is your friend; hot spares, on-the-fly
rebuilds, Linux software raid has you covered.
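A minimal sketch of that workflow with mdadm (the device names here are
hypothetical, and you'd obviously run this on partitions you can afford
to lose):

```shell
# Create a raid-1 mirror with one hot spare standing by
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --spare-devices=1 /dev/sda3 /dev/sdb3 /dev/sdc3

# Watch the array state and rebuild progress
mdadm --detail /dev/md0
cat /proc/mdstat

# Mark a member as failed; the hot spare is pulled in
# automatically and the rebuild happens on the fly
mdadm /dev/md0 --fail /dev/sda3
```

One tool, one behavior, regardless of which controller the disks happen
to hang off of.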
Hardware raid controllers have the luxury of isolating the disks, so they
can hide the magic they perform from the OS of the machine they are
connected to. Being proprietary can be something of a liability, though
(I've pulled my hair out a couple of times trying to get a Dell PERC3, DPT,
or Promise SX6000 to actually do what it was supposed to). You'll find, if
you benchmark enough of this stuff, that software raid is faster, which
might be counter-intuitive, but the CPU in your computer is a heck of a lot
faster than the one on the raid controller. A real hardware raid controller
is a small (or not so small) computer, with an operating system in flash,
some RAM, a real CPU, and some I/O interfaces (PCI, SATA, SCSI, whatever).
In a Promise controller this is an ASIC (the 5xxx series was actually an
FPGA); in others (the old Promise SX6000 was an i960) you can actually see
all the components.
> jonathan
>
> -----Original Message-----
> From: fedora-list-bounces@xxxxxxxxxx [mailto:fedora-list-bounces@xxxxxxxxxx]
> On Behalf Of Claude Jones
> Sent: Monday, August 22, 2005 8:08 AM
> To: For users of Fedora Core releases
> Subject: Re: SuperMicro 5013
>
> On Mon August 22 2005 8:50 am, Markku Kolkka wrote:
> > Claude Jones wrote in his message (sent Monday, 22.
> > > ASUS P4C800 ?? RAID 0, RAID 1, RAID 0+1, Multiple RAID, built
> > > in, but maybe that's not what you're talking about
> > It's not a *hardware* raid controller, it's BIOS-based software
> > RAID a.k.a. fake-RAID.
> > http://linux.yyz.us/sata/faq-sata-raid.html
>
> Thanks for that link, Markku - there's some interesting info elsewhere on
> that site, as well. I didn't know of the distinction between BIOS-based
> software RAID and true RAID...
--
--------------------------------------------------------------------------
Joel Jaeggli Unix Consulting joelja@xxxxxxxxxxxxxxxxxxxx
GPG Key Fingerprint: 5C6E 0104 BAF0 40B0 5BD3 C38B F000 35AB B67F 56B2