Re: oom-killings, but I'm not out of memory!

Hi Roy,

On Sun, Jul 03, 2005 at 09:44:37PM -0500, Roy Keene wrote:
> I think I'm having the same issue.
> 
> I've 2 systems with 4GB of RAM and 2GB of swap that kill processes when 
> they get a lot of disk I/O.  I've attached the full dmesg output which 
> includes portions where it killed stuff despite having massive amounts of 
> free memory.

What kernel version is that?

> oom-killer: gfp_mask=0xd0
> Mem-info:
> DMA per-cpu:
> cpu 0 hot: low 2, high 6, batch 1
> cpu 0 cold: low 0, high 2, batch 1
> cpu 1 hot: low 2, high 6, batch 1
> cpu 1 cold: low 0, high 2, batch 1
> cpu 2 hot: low 2, high 6, batch 1
> cpu 2 cold: low 0, high 2, batch 1
> cpu 3 hot: low 2, high 6, batch 1
> cpu 3 cold: low 0, high 2, batch 1
> Normal per-cpu:
> cpu 0 hot: low 32, high 96, batch 16
> cpu 0 cold: low 0, high 32, batch 16
> cpu 1 hot: low 32, high 96, batch 16
> cpu 1 cold: low 0, high 32, batch 16
> cpu 2 hot: low 32, high 96, batch 16
> cpu 2 cold: low 0, high 32, batch 16
> cpu 3 hot: low 32, high 96, batch 16
> cpu 3 cold: low 0, high 32, batch 16
> HighMem per-cpu:
> cpu 0 hot: low 32, high 96, batch 16
> cpu 0 cold: low 0, high 32, batch 16
> cpu 1 hot: low 32, high 96, batch 16
> cpu 1 cold: low 0, high 32, batch 16
> cpu 2 hot: low 32, high 96, batch 16
> cpu 2 cold: low 0, high 32, batch 16
> cpu 3 hot: low 32, high 96, batch 16
> cpu 3 cold: low 0, high 32, batch 16
> 
> Free pages:       14304kB (1664kB HighMem)
> Active:7971 inactive:994335 dirty:327523 writeback:25721 unstable:0 free:3576 slab:29113 mapped:7996 pagetables:341

There are about 100MB of writeout data in flight (writeback:25721 pages * 4kB ~= 100MB) - I suppose that's too much.

A guess: can you switch to an I/O scheduler other than CFQ, or reduce its queue size?

IIRC you can do the latter by reducing /sys/block/<device>/queue/nr_requests.
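
Untested sketch, assuming the disk is sda and that your kernel supports
runtime elevator switching (otherwise elevator=deadline on the boot
command line should do the same):

    # see which scheduler is active and how deep the queue is
    cat /sys/block/sda/queue/scheduler
    cat /sys/block/sda/queue/nr_requests

    # try the deadline scheduler and a smaller queue, e.g. 64 requests
    echo deadline > /sys/block/sda/queue/scheduler
    echo 64 > /sys/block/sda/queue/nr_requests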

> DMA free:12640kB min:16kB low:32kB high:48kB active:0kB inactive:0kB present:16384kB pages_scanned:877 all_unreclaimable? yes
> protections[]: 0 0 0
>
> Normal free:0kB min:928kB low:1856kB high:2784kB active:0kB inactive:739100kB present:901120kB pages_scanned:1556742 all_unreclaimable? yes
> protections[]: 0 0 0

You've got no lowmem reservations for the Normal zone either.

What does /proc/sys/vm/lowmem_reserve_ratio look like?
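
Assuming your kernel is new enough to have it (IIRC older kernels used
the protections[] mechanism instead), something like this would show
and tune it - the numbers below are only an illustration, and a smaller
ratio for a zone means a larger reserve in that zone:

    cat /proc/sys/vm/lowmem_reserve_ratio
    # reserve more of the Normal zone by lowering its (second) ratio
    echo "256 32 32" > /proc/sys/vm/lowmem_reserve_ratio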

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
