On Mon, 2005-04-04 at 11:13 -0400, Scot L. Harris wrote:
> On Mon, 2005-04-04 at 10:45, Masopust Christian wrote:
> > Dear all,
> >
> > maybe this is a dumb question from a newbie, but... ;-)
> >
> > I've connected a RAID of approx. 2.6 TB to my newly installed
> > Fedora system (Fedora Core 3, Adaptec 39160; the RAID is an
> > EonStore).
> >
> > When running fdisk, the biggest partition I can create is 2 TB!
> >
> > When trying to create the filesystem with mkfs.xfs directly
> > on /dev/sda, I also get only 2 TB!
> >
> > So, what am I doing wrong? Is the Adaptec 39160 the problem?
> > (I ask because I read something about such big RAIDs working
> > with 3ware 9000 RAID cards.)
>
> What RAID level are you trying to set up?
>
> Each RAID level utilizes the available drives in a different way, which
> results in a different amount of usable space. If you have 2.6 TB of raw
> disk space and use RAID 1 (simple mirroring), you would only be able to
> create a file system with a size of 1.3 TB.
>
> The different RAID levels trade off usable space vs. write speed vs.
> read speed vs. resiliency.
>
> RAID 5 will have a 20% overhead due to the way it replicates data
> across multiple drives.

Your description is good, but the RAID 5 part is slightly off. A RAID 5
array loses exactly one drive's worth of capacity to parity, no matter how
many drives it contains. With 3 drives it loses 1/3 of the total space;
with 10 drives it loses only 1/10. The 20% figure is therefore correct
only when the array contains exactly 5 physical drives. That parity
redundancy is what lets a RAID 5 array lose one drive completely without
losing any data.

The OP did not say how many physical drives are in the array, so I can't
say whether the usable space he is seeing (2 TB) is what should be
expected.
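
For anyone who wants to sanity-check the arithmetic, here is a quick
back-of-the-envelope sketch (my own illustration, not from the thread),
in Python, assuming equal-size drives and ignoring filesystem and
controller overhead:

    # Hypothetical helper for illustration only.
    def usable_tb(drive_tb, n_drives, level):
        """Rough usable capacity of an array of n equal-size drives."""
        if level == 0:                  # striping: no redundancy
            return drive_tb * n_drives
        if level == 1:                  # mirroring: half the raw space
            return drive_tb * n_drives / 2
        if level == 5:                  # parity: lose one drive's worth
            return drive_tb * (n_drives - 1)
        raise ValueError("unhandled RAID level")

    # 2.6 TB raw across 5 drives: RAID 5 loses 1/5 (the 20% case)
    print(usable_tb(0.52, 5, 5))    # 2.08
    # 2.6 TB raw across 10 drives: RAID 5 loses only 1/10
    print(usable_tb(0.26, 10, 5))   # 2.34

Note how the parity cost shrinks as a fraction of the array as you add
drives, which is exactly the point above.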