From: "Martin Stone" <martin.stone@xxxxxx> > Martin Stone wrote: > > Bill Gradwohl wrote: > > > >>> what did it say? > >> > >> > >> > >> Here's the script I used: > >> > >> for ((i=0; i<2100; ++i)); do > >> cp x FILE.$i > >> done > >> > >> The "x" file is just under 1GB big. I expected the script to fail at > >> roughly > >> the i=2000 mark. > >> > >> At the i=1001 point, I just got a string of "... no space left on > >> device." > >> messages from cp. > >> > >> BTW - while this loop was executing, I was getting about 1GB/min via > >> the cp > >> command. You say you're getting 1GB every 10 seconds. How are you > >> doing it? > >> The box I'm using is a 2 CPU XEON @2.4Ghz with 2G RAM running absolutely > >> stock FC1 - no patches whatsoever. I don't believe any patches would > >> address > >> this issue. > >> > > > > I'm doing: > > > > while true; do dd if=/dev/zero of="`date`" bs=1048576 count=1024; done > > > > Because I'm extremely lazy ;-) My system is dual 2.8 GHz Xeon with 2 GB > > RAM, with two Escalade 7506 controllers, each of which has 8 300 GB > > drives attached as a RAID 10 - 300GB * 8drives / 2mirrored = 1200 GB / > > 1024megs-in-a-gig = 1.17 TB. I created my ext3 fs'es with a very large > > number of inodes (1 per 4K) so the overhead is higher than normal. df > > -k reports the total size as 1135170164 K = 1.05 TB. I'm also running > > stock FC1, the only exception being that I did install the 2.4.22 kernel > > update build 2179. > > > > 36 GB to go... I'll wait until it completes to send this... > > > > OK it got done. Final output from df -k is: > > > > /dev/sdc1 1135170164 1135170164 0 100% /export/home > > /dev/sdd1 1135170164 699617620 376959076 65% /export/data > > > > So the full 1.05 TB got filled up with nary a problem. > > > > Here's my strong suggestion to you: Using fdisk, remove your partition > > and recreate it. Then remake your ext3 filesystem, and see what > > happens. I still have this feeling like parted is wack. > > > > Good luck, and please let me know how it goes! > > > > Martin > > Oh, also, just to be clear, I created these filesystems before I ever installed > that updated kernel RPM... so I don't think that's the problem. I am voting for Rick's solution if the disk array is formatted with extremely inefficient 512k blocks. Note that 2^32 * 512 equals about 2e12. So if you use a 2^31 size integer plus one bit sign you only get 1 terabyte. It appears it is time for the OS to seriously learn how to handle 2k block sizes or larger, learn to address with more than 32 bits worth of block numbers, or throw its hands in the air and forget being competitive. (I'm amused it is hitting users this soon. I saw it coming when I was doing SCSI device drivers for the Amiga back in the 80s and 90s. Everyone figured even one terabyte was outlandishly large suited only for main frame storage arrays. So I rather uncharacteristically kept my mouth shut. Now it's biting fairly advanced PC users. I've been out of SCSI for awhile. I don't know if they have amended the spec to allow more than 32 bits of block address or not. If not it's time for the SCSI spec to grow up as well as the user operating systems.) {^_-}