Remove the unnecessary size limit on setting read_ahead_kb, and make
even very large values harmless: the stock readahead logic always
consults the available memory before applying this number, and the
other readahead paths already do so.

read_ahead_kb used to be capped by the queue's max_sectors, which can
be too rigid because some devices set max_sectors to values as small
as 64KB. That led to many user complaints.
Signed-off-by: Wu Fengguang <[email protected]>
---
block/ll_rw_blk.c | 5 -----
1 files changed, 5 deletions(-)
--- linux-2.6.17-rc6-mm1.orig/block/ll_rw_blk.c
+++ linux-2.6.17-rc6-mm1/block/ll_rw_blk.c
@@ -3810,12 +3810,7 @@ queue_ra_store(struct request_queue *q,
unsigned long ra_kb;
ssize_t ret = queue_var_store(&ra_kb, page, count);
- spin_lock_irq(q->queue_lock);
- if (ra_kb > (q->max_sectors >> 1))
- ra_kb = (q->max_sectors >> 1);
-
q->backing_dev_info.ra_pages = ra_kb >> (PAGE_CACHE_SHIFT - 10);
- spin_unlock_irq(q->queue_lock);
return ret;
}
--- linux-2.6.17-rc6-mm1.orig/mm/readahead.c
+++ linux-2.6.17-rc6-mm1/mm/readahead.c
@@ -156,7 +156,7 @@ EXPORT_SYMBOL_GPL(file_ra_state_init);
*/
static inline unsigned long get_max_readahead(struct file_ra_state *ra)
{
- return ra->ra_pages;
+ return max_sane_readahead(ra->ra_pages);
}
static inline unsigned long get_min_readahead(struct file_ra_state *ra)