Aubrey wrote:
> On 3/1/06, Andrew Morton <[email protected]> wrote:
>> You mean 10MB.
> Sorry for the typo.
>> The chances of finding 10MB of contiguous free pages are basically nil, so
>> the page allocator doesn't even try to free up pages to attempt to satisfy
>> such a large request. If it can't find the 10MB of free memory
>> immediately, it just gives up.
> Nope. I've tested the case on the host. See below. The allocation for
> 300MB was successful when the cached memory was close to the total
> memory.
> Any thoughts why?
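For reference, a rough back-of-the-envelope for Andrew's point (a sketch of mine, not from the test above; it assumes 4k pages and the buddy allocator's usual MAX_ORDER of 11, which caps a single contiguous block at 2^10 pages = 4MB):

/* Not from the original thread: why a single 10MB request is hopeless
 * if it has to come out of the buddy allocator in one piece. */
#include <stdio.h>

int main(void)
{
	unsigned long req = 10UL << 20;		/* the 10MB request */
	unsigned long pages = req >> 12;	/* = 2560 4k pages */
	int order = 0;

	while ((1UL << order) < pages)
		order++;			/* round up to a power of two */

	/* order ends up 12: 2^12 pages = 16MB is the smallest buddy block
	 * covering 10MB, but with MAX_ORDER 11 nothing above order 10
	 * exists, so the request fails straight away. */
	printf("need order %d, largest available is order 10\n", order);
	return 0;
}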
At a guess, this machine is using an mmu and a kernel compiled with
CONFIG_MMU, while the previous one wasn't?
Having an mmu means that all userspace allocations can be satisfied
with arbitrary collections of pages; not having one means you need
a contiguous, linear area of physical memory of the required size.
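To make that concrete, here is a trivial userspace sketch (mine, not Aubrey's actual test) of why the host succeeds:

/* A minimal sketch: on a CONFIG_MMU kernel this only needs 300MB of
 * contiguous *virtual* address space; the backing physical pages are
 * faulted in 4k at a time and can be scattered anywhere. On nommu the
 * same request needs one physically contiguous 300MB block. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	size_t size = 300UL << 20;	/* 300MB */
	char *p = malloc(size);

	if (!p) {
		perror("malloc");
		return 1;
	}
	memset(p, 0, size);	/* touch every page so they're really allocated */
	free(p);
	printf("300MB allocated and touched\n");
	return 0;
}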
I've never used one of those nommu systems, but I imagine that there
are conventions that need to be used that don't exist with general
purpose systems.
To start with: I'd try really hard to keep allocations <= 4k, even
if that means inefficiencies (eg building your own "page tables" in
the form of a radix tree, or using a list or other data structure to
allocate even a simple vector like you have there).
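Something along these lines, say. This is only a sketch: chunk_vec, cv_alloc and cv_at are made-up names, and note the pointer table itself outgrows 4k for really big vectors, which is exactly where the extra levels of a radix tree come in.

#include <stdlib.h>

#define CHUNK_SIZE	4096UL

struct chunk_vec {
	size_t nchunks;
	char **chunks;		/* one pointer per 4k chunk; for a huge
				 * vector this table itself exceeds 4k,
				 * hence a multi-level radix tree */
};

static struct chunk_vec *cv_alloc(size_t bytes)
{
	struct chunk_vec *cv = calloc(1, sizeof(*cv));
	size_t i;

	if (!cv)
		return NULL;
	cv->nchunks = (bytes + CHUNK_SIZE - 1) / CHUNK_SIZE;
	cv->chunks = calloc(cv->nchunks, sizeof(char *));
	if (!cv->chunks)
		goto fail;
	for (i = 0; i < cv->nchunks; i++) {
		cv->chunks[i] = malloc(CHUNK_SIZE);	/* never > one page */
		if (!cv->chunks[i])
			goto fail;
	}
	return cv;
fail:
	if (cv->chunks) {
		for (i = 0; i < cv->nchunks && cv->chunks[i]; i++)
			free(cv->chunks[i]);
		free(cv->chunks);
	}
	free(cv);
	return NULL;
}

/* byte n lives in chunk n / 4096 at offset n % 4096 */
static char *cv_at(struct chunk_vec *cv, size_t n)
{
	return &cv->chunks[n / CHUNK_SIZE][n % CHUNK_SIZE];
}

int main(void)
{
	struct chunk_vec *cv = cv_alloc(10UL << 20);	/* "10MB" vector */

	if (!cv)
		return 1;
	*cv_at(cv, 12345) = 42;		/* use it like a flat array */
	return 0;
}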
If you really need a big linear area, allocate this once when your
system boots and keep it allocated forever. You could even write
a custom allocator to manage this area, for example.
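Even something as dumb as this would do (hypothetical names throughout, and a real allocator would want freeing and reuse): grab the region once at startup, while memory is still unfragmented, then carve pieces off it forever after.

#include <stddef.h>
#include <stdlib.h>

static char *pool;		/* the one big linear area */
static size_t pool_size, pool_used;

int pool_init(size_t size)	/* call once, early, at system startup */
{
	pool = malloc(size);
	if (!pool)
		return -1;
	pool_size = size;
	pool_used = 0;
	return 0;
}

void *big_alloc(size_t size)	/* trivial bump allocator over the pool */
{
	void *p;

	size = (size + 7) & ~7UL;	/* keep 8-byte alignment */
	if (pool_used + size > pool_size)
		return NULL;
	p = pool + pool_used;
	pool_used += size;
	return p;
}

int main(void)
{
	void *buf;

	if (pool_init(10UL << 20))	/* reserve 10MB up front, keep it */
		return 1;
	buf = big_alloc(1 << 20);	/* later "big" requests can't fail
					 * from fragmentation */
	return buf ? 0 : 1;
}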
But asking on the uclinux list would probably be your best bet.
--
SUSE Labs, Novell Inc.