Re: OOM behavior in constrained memory situations

On Monday 06 February 2006 23:16, Christoph Lameter wrote:
> On Mon, 6 Feb 2006, Andi Kleen wrote:
> 
> > At least remnants of my old 80% hack to avoid this (huge_pages_needed)
> > seem to still be there in mainline:
> > 
> > fs/hugetlbfs/inode.c:hugetlbfs_file_mmap
> > 
> >    bytes = huge_pages_needed(mapping, vma);
> >    if (!is_hugepage_mem_enough(bytes))
> >           return -ENOMEM;
> > 
> > 
> > So something must be broken if this doesn't work. Or did you allocate
> > the pages in some other way? 
> 
Huge pages are now allocated in the huge fault handler. If that 
returns an OOM then the OOM killer may be activated.

Sorry Christoph - somehow I have the feeling we've been miscommunicating
all day.

Of course they are allocated in the huge fault handler. But the point
of that check is to first check whether enough huge pages are free and
fail the mmap early with -ENOMEM, just to catch the easy mistakes (it
has races, which is why I called it an 80% solution). Just like Linux
mmap traditionally worked and still works if you don't enable
strict overcommit checking.

-Andi