Testing has confirmed much larger prefetch values work well.
Con
---
The current number of pages prefetched by swap_prefetch is very gentle,
and since we are very cautious about whether prefetching is appropriate or
not, we can safely increase the size substantially.
Make the prefetch size orders of magnitude larger, proportional to memory
size instead of the logarithm of it.
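
To see the scale of the change, here is a minimal userspace sketch (mine,
not part of the patch) evaluating the old and new formulas side by side; it
assumes 4K pages and the kernel's usual SWAP_CLUSTER_MAX of 32, and the
memory sizes are only examples:

#include <stdio.h>

#define SWAP_CLUSTER_MAX 32

int main(void)
{
        unsigned long sizes_mb[] = { 256, 1024, 4096 };
        int i;

        for (i = 0; i < 3; i++) {
                /* pages of memory, assuming 4K pages */
                unsigned long mem = sizes_mb[i] << (20 - 12);
                unsigned long shifted = mem / (SWAP_CLUSTER_MAX * 1000);
                int old_prefetch = 1, new_prefetch = mem / 10000;

                /* old sizing: 1 + log2 of the scaled memory size */
                while ((shifted >>= 1))
                        old_prefetch++;
                printf("%5luMB: old %2d clusters, new %3d clusters\n",
                        sizes_mb[i], old_prefetch, new_prefetch);
        }
        return 0;
}

At 1GB this goes from 4 clusters (128 pages) per interval to 26 (832
pages), and because the new formula is linear rather than logarithmic the
gap keeps widening as memory grows.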
Fix the last_free calculation to work better with larger prefetch sizes.
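
To spell out what the fix does: last_free is a free-page baseline, and
prefetching backs off whenever free memory falls more than a cluster below
it, so real consumers of memory (eg file reads) take priority. Previously
the comparison excused a whole prefetch_pages() worth of slack up front;
with the much larger sizes above, that slack could hide genuine vm
activity, so instead each successfully prefetched page now lowers the
baseline by one. A rough userspace sketch of that accounting, with
illustrative names rather than the kernel's:

#include <stdbool.h>
#include <stdio.h>

#define SWAP_CLUSTER_MAX 32

static unsigned long last_free;

/* Back off if something other than the prefetcher is eating memory. */
static bool free_pages_stable(unsigned long temp_free)
{
        if (!last_free) {
                last_free = temp_free;  /* first sample sets the baseline */
                return true;
        }
        if (temp_free + SWAP_CLUSTER_MAX < last_free) {
                last_free = temp_free;  /* vm is busy; reset and back off */
                return false;
        }
        return true;
}

int main(void)
{
        last_free = 100000;
        last_free -= 500;       /* charge 500 pages we prefetched ourselves */
        /* the 500-page drop is already accounted for: still stable (1) */
        printf("%d\n", free_pages_stable(100000 - 500));
        /* a further 200-page drop must be someone else: back off (0) */
        printf("%d\n", free_pages_stable(100000 - 700));
        return 0;
}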
Signed-off-by: Con Kolivas <[email protected]>
Index: linux-2.6.14-rc5-ck1/mm/swap_prefetch.c
===================================================================
--- linux-2.6.14-rc5-ck1.orig/mm/swap_prefetch.c 2005-10-22 10:14:26.000000000 +1000
+++ linux-2.6.14-rc5-ck1/mm/swap_prefetch.c 2005-10-22 10:36:30.000000000 +1000
@@ -24,7 +24,7 @@
 #define PREFETCH_INTERVAL (HZ)
 
 /* sysctl - how many SWAP_CLUSTER_MAX pages to prefetch at a time */
-int swap_prefetch = 1;
+int swap_prefetch;
 
 struct swapped_root {
         unsigned long busy;     /* vm busy */
@@ -71,9 +71,7 @@ void __init prepare_prefetch(void)
         mapped_limit = mem / 3 * 2;
 
         /* Set initial swap_prefetch value according to memory size */
-        mem /= SWAP_CLUSTER_MAX * 1000;
-        while ((mem >>= 1))
-                swap_prefetch++;
+        swap_prefetch = mem / 10000;
 }
 
 /*
@@ -307,10 +305,9 @@ static int prefetch_suitable(void)
          * (eg during file reads)
          */
         if (last_free) {
-                if (temp_free + SWAP_CLUSTER_MAX + prefetch_pages() <
-                                last_free) {
-                        last_free = temp_free;
-                        goto out;
+                if (temp_free + SWAP_CLUSTER_MAX < last_free) {
+                        last_free = temp_free;
+                        goto out;
                 }
         } else
                 last_free = temp_free;
@@ -375,6 +372,7 @@ static enum trickle_return trickle_swap(
         case TRICKLE_FAILED:
                 break;
         case TRICKLE_SUCCESS:
+                last_free--;
                 pages++;
                 break;
         case TRICKLE_DELAY: