On 2/21/07, Les Mikesell <[email protected]> wrote:
Mikkel L. Ellertson wrote:
> It isn't that someone decided to implement a limitation, but that
> they didn't program around a limitation of 32 bit processors.

Somebody decided on the size of an off_t back then.

 > The
> limit is imposed by using the standard GNU libc as compiled by gcc
> on 32 bit processors. Considering that a 1G hard drive was a large
> drive at the time it was implemented, it was not unreasonable to
> accept a 2G size limit.

I think it was unreasonable for a year or two of probably unmeasurably
small performance gain to force an ugly workaround to be needed for the
rest of the life of 32-bit systems.

Then what is reasonable?  Make an off_t 64 bits long?  Why stop there?
Sure, a 2^63 byte file sounds huge (admittedly it is, 8 uh...
Exabytes...), but remember that it wasn't that long ago that a 2^31
byte file sounded enormous.  Just some food for thought.

There are many instances where people in this industry have been
rather short-sighted.  A famous Bill Gates quote comes to mind.  And
then there was the "Y2K bug," even though it never amounted to much in
reality.  We are not in the habit of programming for growth.  Whether
we should seems easily answerable as "yes."  The "how much?" question
is much more debatable.

That said, with where we are, efforts should be made to ensure that
all programs can deal with things like > 2 (or 4) GB files.  Things
are quickly progressing toward 64-bit, and even under 32-bit we can
easily have files much bigger than that.  The longer this "legacy"
code hangs around, the more painful it will be to fix later.

Jonathan

