Mikkel L. Ellertson wrote:
> It isn't that someone decided to implement a limitation, but that
> they didn't program around a limitation of 32 bit processors.
Somebody decided on the size of off_t back then.
> The limit is imposed by using the standard GNU libc as compiled by
> gcc on 32 bit processors. Considering that a 1G hard drive was a
> large drive at the time it was implemented, it was not unreasonable
> to accept a 2G size limit.
I think it was unreasonable to force an ugly workaround on the rest of
the life of 32-bit systems for a year or two of probably unmeasurably
small performance gain.
> The fact that system memory and hard drive sizes progressed much
> faster than processor word size has led to ways of handling larger
> files on 32 bit processors, but you have to use them. It tends to be
> more than just changing a function call.
Preprocessor macros do all the grunt work, and pretty much every
program has made the change by now. You just run into one that didn't
bother every now and then.
> It also tends to make the program run slower, because you need more
> cpu instructions to do the same thing.
But compared to anything involving a disk access, a couple of cpu
cycles aren't going to make a difference.
--
Les Mikesell
lesmikesell@xxxxxxxxx