> Contrived thing and all, but what it does show is exactly how bad seeking
> all over swap-space is. If you push it out to swap before hitting enter,
> the time it takes easily grows past 10 minutes (with my 768M) versus
> sub-second (!) when it's all in memory to start with.
Think in "operations/second" and you get a better view of the disk.
> What are the tradeoffs here? What wants small chunks? Also, as far as I'm
> aware, Linux does not do things like increase the granularity when it
> notices it's swapping in heavily? That sounds sort of promising...
Small chunks mean better efficiency of memory use - large chunks mean you
may well page in a lot more than you needed each time (and cause more
paging in turn). Your disk, on the other hand, would prefer to be fed big
linear I/Os - 512KB would probably be my first guess at a paging chunk
size when tuning a large box under load.
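(If I remember right, the page-cluster sysctl is the knob for this -
2^page-cluster pages per swap-in cluster - so 512KB of 4K pages would be
page-cluster 7.) Roughly what clustering the swap-in means - this is just
the arithmetic, not the kernel's actual code:

#include <stdio.h>

#define PAGE_SIZE   4096
#define CHUNK_BYTES (512 * 1024)
#define CHUNK_SLOTS (CHUNK_BYTES / PAGE_SIZE)   /* 128 slots */

/* Round the faulting swap slot down to a chunk boundary and read the
 * whole aligned window as one linear I/O.  One seek per 128 pages
 * instead of one per page - but the slots we didn't actually need are
 * the memory-efficiency cost mentioned above. */
static void swapin_cluster(unsigned long fault_slot)
{
        unsigned long start = fault_slot & ~(unsigned long)(CHUNK_SLOTS - 1);
        unsigned long slot;

        for (slot = start; slot < start + CHUNK_SLOTS; slot++)
                printf("read swap slot %lu%s\n", slot,
                       slot == fault_slot ? "  <- the faulting page" : "");
}

int main(void)
{
        swapin_cluster(1000);   /* reads slots 896..1023 for one fault */
        return 0;
}
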
More radically, if anyone wants to do real researchy type work - how about
log-structured swap with a cleaner?
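Very roughly - and this is only a sketch of the idea, not anything that
exists - write-out would always append at the head of a log so the device
sees one linear write stream, and a cleaner would migrate the still-live
slots out of old segments so whole segments can be reused:

#include <stdio.h>

#define SEGMENTS      64
#define SLOTS_PER_SEG 128                       /* 512K segments of 4K pages */

static int live[SEGMENTS][SLOTS_PER_SEG];       /* slot still holds a page? */
static int nr_live[SEGMENTS];
static int head, head_slot;                     /* current tail of the log */

/* Swap write-out never seeks: every page goes to the next slot at the
 * log head, so the device sees one linear write stream. */
static long swap_log_alloc(void)
{
        if (head_slot == SLOTS_PER_SEG) {
                head = (head + 1) % SEGMENTS;
                head_slot = 0;
        }
        live[head][head_slot] = 1;
        nr_live[head]++;
        return (long)head * SLOTS_PER_SEG + head_slot++;
}

/* The cleaner: copy the still-live slots of an old segment forward to
 * the head (a real one would also fix up the owning swap entries), so
 * the whole segment becomes writable again as one linear chunk. */
static void swap_log_clean(int victim)
{
        int i;

        for (i = 0; i < SLOTS_PER_SEG; i++) {
                if (!live[victim][i])
                        continue;
                live[victim][i] = 0;
                nr_live[victim]--;
                swap_log_alloc();
        }
}

int main(void)
{
        int i;

        /* fill segment 0 and spill into segment 1 */
        for (i = 0; i < SLOTS_PER_SEG + 4; i++)
                swap_log_alloc();

        /* pretend half of segment 0 died (pages freed or faulted back in) */
        for (i = 0; i < SLOTS_PER_SEG; i += 2) {
                live[0][i] = 0;
                nr_live[0]--;
        }

        swap_log_clean(0);
        printf("segment 0 is now empty: %d live slots\n", nr_live[0]);
        return 0;
}
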
Alan