James Wilkinson wrote:
> > A better question is why is there still anything with a 2 gig file
> > size limit? Or why was there ever one in Linux, given that Unix
> > should have already been going through the pain of conversion by the
> > time Linux distributions were being built?
> When Linux distributions were first being built, the Minix filesystem
> Linux started out with had a 64 MB limit. That's for the entire
> filesystem...

I would hope everyone involved knew that would be a temporary limitation
and that imposing any particular filesystem's limits on the kernel would
be short-sighted. Now 2 gigs is a dollar's worth of disk space.

> As for why there are still 2 gig limits -- for one thing, if you're
> going to memory-map a file and use memory operations to read and write
> it, and you're using a 32-bit computer, then the 2 gig limit comes
> with the territory. Memory-mapping files is a very useful technique,
> and 64-bit file accesses are inherently much slower on a 32-bit
> processor (and that matters with memory-mapped files).
> But that shouldn't limit your file size -- and doesn't anymore for
> nearly everything.
> The other main reason (and the one I suspect applies here) is that it's
> not considered worth the complexity: not worth paying the real price of
> extra code complexity for the theoretical benefit of having log files
> over 2 GB on 32-bit systems.
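
For anyone following along, the memory-mapping technique described above
looks roughly like this -- a minimal sketch in plain POSIX C, with error
handling kept short. mmap()'s length argument is a size_t, so on a 32-bit
build the whole file has to fit in the process's address space, which is
where the 2 gig ceiling comes from:

    /* mmap_count.c (illustrative): map a whole file and read it with
     * plain memory operations -- here, counting newlines. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the whole file.  With a 32-bit size_t, this length can
         * never describe more than the address space holds. */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Read the file with ordinary memory accesses. */
        long lines = 0;
        for (off_t i = 0; i < st.st_size; i++)
            if (p[i] == '\n')
                lines++;
        printf("%ld lines\n", lines);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }
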
The benefit isn't theoretical if you have more than 2 gigs of data. I
thought the default now was to compile with large file support, and had
been for some time. Does that mean someone is still intentionally
imposing tiny limits, or that parts of the system haven't been rebuilt
for ages? I've had another program or two croak when hitting this
no-longer-relevant limit.
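
A quick way to check whether a given build actually has large file
support -- a sketch, assuming glibc, where compiling with
-D_FILE_OFFSET_BITS=64 transparently widens off_t to 64 bits even on a
32-bit machine:

    /* lfs_probe.c (illustrative name): report whether this build can
     * handle files over 2 GB.  Without large file support, off_t is
     * 4 bytes on a 32-bit build and writes past 2^31 - 1 fail with
     * EFBIG. */
    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        printf("sizeof(off_t) = %zu: %s\n", sizeof(off_t),
               sizeof(off_t) >= 8 ? "large files OK"
                                  : "still stuck at 2 gigs");
        return 0;
    }

Compile it both ways on a 32-bit box and compare: gcc -m32 lfs_probe.c
versus gcc -m32 -D_FILE_OFFSET_BITS=64 lfs_probe.c. A program that
croaks right at 2 gigs was almost certainly built without the macro (or
links against a library that was), and the fix is a rebuild, not a new
kernel. (glibc also has explicit off64_t/lseek64() interfaces for code
that wants both sizes side by side -- that's the extra complexity
mentioned above.)
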
--
Les Mikesell
lesmikesell@xxxxxxxxx