Re: 2.6.12-rc6-mm1 & 2K lun testing

On Thu, 2005-06-16 at 13:37, Andrew Morton wrote:
> Badari Pulavarty <[email protected]> wrote:
> >
> > > 
> > > We seem to be always ooming when allocating scsi command structures. 
> > > Perhaps the block-level request structures are being allocated with
> > > __GFP_WAIT, but it's a bit odd.  Which I/O scheduler?  If cfq, does
> > > reducing /sys/block/*/queue/nr_requests help?
> > 
> > Yes. I am using CFQ scheduler. I changed nr_requests to 4 for all
> > my devices. I also changed "min_free_kbytes" to 64M.
> 
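(For anyone reading the archive: a rough userspace sketch of how those
two knobs get set.  write_knob() is just an illustrative helper, and
64M is written as 65536 since min_free_kbytes takes kilobytes.)

#include <glob.h>
#include <stdio.h>

static void write_knob(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return;
	}
	fprintf(f, "%s\n", val);
	fclose(f);
}

int main(void)
{
	glob_t g;
	size_t i;

	/* shrink every block device's request queue */
	if (glob("/sys/block/*/queue/nr_requests", 0, NULL, &g) == 0) {
		for (i = 0; i < g.gl_pathc; i++)
			write_knob(g.gl_pathv[i], "4");
		globfree(&g);
	}

	/* 64M, expressed in kilobytes */
	write_knob("/proc/sys/vm/min_free_kbytes", "65536");
	return 0;
}
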
> Yeah, that monster cfq queue depth continues to hurt in corner cases.
> 
> > Response time is still bad. Here is the vmstat, meminfo, slabinfo
> > and profile output. I am not sure why profile output shows 
> > default_idle(), when vmstat shows 100% CPU sys.
> 
> (please inline text rather than using attachments)
> 
> > MemTotal:      7209056 kB
> > ...
> > Dirty:         5896240 kB
> 
> That's not going to help - we're way over 40% there, so the VM is getting
> into some trouble.
> 
> Try reducing the dirty limits in /proc/sys/vm by a lot to confirm that it
> helps.
> 
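(For anyone reading along: the dirty limits in question are
/proc/sys/vm/dirty_ratio and /proc/sys/vm/dirty_background_ratio,
which default to 40 and 10 on this kernel if I remember right.  As a
sanity check on "way over 40%": Dirty at 5896240 kB is over 80% of the
7209056 kB MemTotal, roughly double the 40% limit.  Lowering them
works the same way as the knobs sketched above, just with these two
files and much smaller percentages.)
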
> There are various bits of slop and hysteresis and deliberate overshoot in
> page-writeback.c which are there to enhance IO batching and to reduce CPU
> consumption.  A few megs here and there adds up when you multiply it by
> 2000...
> 
> Try this:
> 
> diff -puN mm/page-writeback.c~a mm/page-writeback.c
> --- 25/mm/page-writeback.c~a	Thu Jun 16 13:36:29 2005
> +++ 25-akpm/mm/page-writeback.c	Thu Jun 16 13:36:54 2005
> @@ -501,6 +501,8 @@ void laptop_sync_completion(void)
>  
>  static void set_ratelimit(void)
>  {
> +	ratelimit_pages = 32;
> +	return;
>  	ratelimit_pages = total_pages / (num_online_cpus() * 32);
>  	if (ratelimit_pages < 16)
>  		ratelimit_pages = 16;
> _
> 
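(My reading of the hack, for the archive: paraphrasing from memory
rather than from the real mm/page-writeback.c, ratelimit_pages is how
many pages each CPU may dirty between calls to balance_dirty_pages(),
i.e. the slop before the dirty limits get enforced again.  A quick
back-of-envelope below; the CPU count is a guess since it isn't in
this thread, and the stock clamp values are from memory too.)

#include <stdio.h>

int main(void)
{
	unsigned long page_size = 4096;			/* 4K pages */
	unsigned long total_pages = 7209056UL / 4;	/* MemTotal above, in pages */
	unsigned long cpus = 4;				/* guess */
	unsigned long ratelimit;

	/* stock set_ratelimit(), as I remember it */
	ratelimit = total_pages / (cpus * 32);
	if (ratelimit < 16)
		ratelimit = 16;
	if (ratelimit * page_size > 4096 * 1024)
		ratelimit = (4096 * 1024) / page_size;

	printf("stock:   %lu pages (%lu kB) dirtied per CPU between checks\n",
	       ratelimit, ratelimit * page_size / 1024);
	printf("patched: 32 pages (%lu kB) dirtied per CPU between checks\n",
	       32 * page_size / 1024);
	return 0;
}
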

Wow !! Reducing the dirty ratios and the above patch did the trick.
Instead of 100% sys CPU, now I have only 50% in sys.

Of course, my IO rate is not so great, but the machine responds really,
really well. :) 

Thanks,
Badari

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 4 3667      8  76068 285016 4777976    0    0    51 22883  419  1900  0 49  0 51
20 3667      8  76068 285744 4779312    0    0    50 23108  433  1908  0 53  0 47
10 3680      8  76080 286492 4772888    0    0    58 26266  419  1805  0 56  0 44
 6 3661      8  76024 287116 4768136    0    0    50 27894  426  1765  0 59  0 41
 7 3679      8  76156 288052 4764620    0    0   270 24391  442  1852  0 53  0 47
 3 3691      8  77604 288732 4759296    0    0    44 24312  425  1809  0 57  0 43
 3 3697      8  75896 288868 4747808    0    0    82 29504  868  3605  2 64  0 34


