Aren't there already APIs to query for the holes in a file, and
doesn't tar already use them to efficiently back up sparse files? I
seem to remember seeing that somewhere.
Jim Dennis wrote:
Perhaps I should have been a bit more clear. /var/log/lastlog has
been a sparse file in most implementations for ... well ... forever.
The example issue is that support for large UIDs and the convention
of setting nfsnobody to -2 (4294967294) combine to create a file whose
apparent size is very large. The du of the file is (in my case) only
about 100KiB. So there's a small cluster of used blocks for the valid
corporate UIDs that have ever accessed this machine ... then a huge
allocation hole, and then one block storing the lastlog timestamp for
nfsnobody.
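For context, lastlog is a flat file indexed by UID, one fixed-size
record per user, so a single huge UID inflates the apparent size all
by itself. A back-of-the-envelope sketch, assuming the common glibc
record layout (a 32-bit timestamp plus 32-byte line and 256-byte host
fields; the exact layout varies by platform):

#include <stdio.h>
#include <stdint.h>

/* Approximation of the common glibc lastlog record: a 32-bit
 * timestamp plus fixed-size tty and host fields (UT_LINESIZE=32,
 * UT_HOSTSIZE=256). Layouts differ across platforms. */
struct lastlog_rec {
    int32_t ll_time;
    char    ll_line[32];
    char    ll_host[256];
};

int main(void)
{
    uint32_t nfsnobody = 4294967294u;   /* i.e. (uint32_t)-2 */

    /* One record per UID, stored at offset uid * record size, so
     * writing the record for nfsnobody extends the file to over a
     * terabyte of apparent size, nearly all of it one hole. */
    uint64_t apparent =
        (uint64_t)(nfsnobody + 1) * sizeof(struct lastlog_rec);

    printf("record size:   %zu bytes\n", sizeof(struct lastlog_rec));
    printf("apparent size: %llu bytes (~%.2f TiB)\n",
           (unsigned long long)apparent,
           apparent / (1024.0 * 1024 * 1024 * 1024));
    return 0;
}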
However, this message was not intended to dwell on the cause of that
huge sparse file ... but rather to raise the core issue:
how do we efficiently handle skipping over (potentially huge)
allocation holes in a portable fashion that might be adopted by
archiving and other tools? I provided this example simply to point
out that it does happen in the real world, and that it carries a
significant cost (40 minutes to scan through the NULs with which the
filesystem fills the hole on read()).
OpenSolaris has implemented a mechanism for doing this (the SEEK_HOLE
and SEEK_DATA whence values for lseek()), and it sounds reasonable
from my admittedly superficial perspective.
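For reference, a minimal sketch of how an archiver might use that
interface to walk only the data extents, assuming the filesystem
actually reports holes (one that doesn't may legitimately report the
whole file as a single data extent); error handling is trimmed for
brevity:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>

#ifndef SEEK_DATA       /* OpenSolaris values; not yet in all headers */
#define SEEK_DATA 3
#define SEEK_HOLE 4
#endif

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0)
        return 1;

    off_t end = lseek(fd, 0, SEEK_END);
    off_t data = 0;

    /* Hop from one data extent to the next; the holes are never
     * read(), so the NUL-filled gaps cost nothing to skip. */
    while ((data = lseek(fd, data, SEEK_DATA)) >= 0 && data < end) {
        off_t hole = lseek(fd, data, SEEK_HOLE);   /* end of extent */
        printf("data: %lld..%lld\n", (long long)data, (long long)hole);
        data = hole;            /* resume the scan past this extent */
    }
    close(fd);
    return 0;
}

An archiver could then read() only the reported data ranges and
record the holes symbolically, instead of scanning every NUL.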