Re: Huge Partition

Rodolfo J. Paiz wrote:

At 14:44 4/22/2004, you wrote:

The filesystem normally reserves space for root to prevent the disk from filling up completely and crashing the system, but that's typically just 5%, so where's the other 5% (100GB) going? You can check how much space is "reserved" for root with:
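(The command itself got snipped in the quoting; one common way to check on an ext2/ext3 filesystem, with /dev/hda2 here purely as an example device, is tune2fs:)

        $ tune2fs -l /dev/hda2 | grep -i "reserved block count"

tune2fs -m can also change the reserved percentage if 5% is more than you want to give up.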


With this in mind, how do you figure such a large space consumption for overhead? I interpret that as minimal and normal overhead -- formatting factors, etc.

In fact, my 30G physical drive, formatted as a single partition, reports a filesystem size of 27.94G (a loss of ~7% due to the difference between the stated size and the binary size, plus overhead).


You'll note that 1000/1024 is just over 2% short per factor of 1,000, which compounds to about 7% at the gigabyte level. That should be all you lose to disk naming conventions, but you may lose other space to (as others mentioned) inodes, overhead, etc. So you get to 27.94GB *filesystem size*. But if you add the "used" and "available" numbers from "df -m", do they add up to that? Mine don't... they add up to about 5% less than the filesystem size. That's the reserved blocks.

On one disk I could check very quickly, I have this (edited for brevity):

<snipped>


root@rita [~]# df -m
Filesystem           1M-blocks      Used Available Use% Mounted on
/dev/hda2                55236     29532     22898  57% /
/dev/hda1                   99         7        86   8% /boot
none                       243         0       243   0% /dev/shm

So, the loss of space to which you refer is independent of, and additional to, the one I mentioned. In reality,

        * Filesystem size = used + available + reserved

        * Disk space = filesystem size + overhead
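As a quick check with the /dev/hda2 numbers above (bc here just for illustration):

        $ echo "55236 - (29532 + 22898)" | bc
        2806
        $ echo "scale=4; 2806/55236" | bc
        .0508

About 5.1% of the filesystem is reserved, matching the 5% default for ext2/ext3.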

We are both right. <grin>

AFAICT you are correct as it relates to the management and overhead of the filesystem.

However, my point was not directly related to the overhead.
The OP on this thread (IIRC) said he had created a raid5 array from 9 250G drives, and expected to have a 2T array for use.


As I said earlier, the mfgr says the drive is 250G, but that is 250 X 1,000,000,000 bytes (the mfgr definition of 1G). In binary terms a mfgr "gigabyte" is only about 0.93 of a real one (~954M), due to the difference between the stated size and the size in binary terms. Thus each drive is a maximum of ~233G. A 9-drive raid5 array gives you 8 drives' worth of usable space (one drive's worth goes to parity), so 8 X 233G = ~1864G, or ~1.82T, which is then further reduced by the overhead of formatting, inodes, reserved blocks, etc. as you have stated.
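The arithmetic, checked with bc (again just for illustration):

        $ echo "scale=2; 250*10^9/2^30" | bc    # one "250G" drive in binary gigabytes
        232.83
        $ echo "scale=3; 8*250*10^9/2^40" | bc  # 8 data drives in binary terabytes
        1.818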

Yes, we are both right. About 7% lost due to the difference between stated size and binary size, plus ~5% lost due to overhead of various types, easily makes this 2T array into only ~1.8T formatted and usable.

I was pointing out that a mfgr's stated size is usually not the actual usable size of the drive, and you provided the actual math showing how filesystems consume space as overhead.

There are a few cases where a mfgr states that its drive is XXG formatted, but in my experience that is rare, and it would also usually be based on a DOS/Windows format rather than *nix formatting.

The box for an 80G Maxtor drive I recently bought states "A gigabyte (GB) means 1 billion bytes". Their stated definition of 1 gigabyte is 1,000,000,000 bytes, but a gigabyte in binary terms is 1,073,741,824 bytes. The actual size of this drive as seen by the operating system is therefore 74.5G or less.
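The same decimal-vs-binary arithmetic, again with bc:

        $ echo "scale=1; 80*10^9/2^30" | bc
        74.5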



