Hi,
I first asked this question on the uClinux mailing list; my original message
is <http://mailman.uclinux.org/pipermail/uclinux-dev/2005-December/036042.html>.
Although I found this issue on uClinux, it can also be demonstrated
on Linux. Here is a small program:
$> cat test2.c
#include <stdio.h>
#include <stdlib.h>

int
main ()
{
  char *p;
  int i, j;

  for (i = 0;; i++)
    {
      /* Keep allocating 8MB chunks and touching every byte
         until malloc () reports failure.  */
      p = (char *) malloc (8 * 1024 * 1024);
      if (p)
        for (j = 0; j < 8 * 1024 * 1024; j++)
          p[j] = 0x55;
      else
        {
          printf ("%d fail\n", i);
          break;
        }
    }
  return 0;
}
When I run it on my Linux notebook, it gets killed. I expect it to
print "fail" instead.
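For comparison, one way to see the expected "fail" on a desktop Linux box
with an MMU is to cap the address space with setrlimit () before the
allocation loop. This is only a sketch to illustrate the behaviour I expect
from malloc (); the 64MB limit is an arbitrary example value:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int
main ()
{
  char *p;
  int i, j;
  struct rlimit rl;

  /* Example limit: cap the address space at 64MB so that malloc ()
     runs out of address space and returns NULL, instead of the
     process being killed later when memory is exhausted.  */
  rl.rlim_cur = rl.rlim_max = 64 * 1024 * 1024;
  if (setrlimit (RLIMIT_AS, &rl) != 0)
    {
      perror ("setrlimit");
      return 1;
    }

  for (i = 0;; i++)
    {
      p = (char *) malloc (8 * 1024 * 1024);
      if (p)
        for (j = 0; j < 8 * 1024 * 1024; j++)
          p[j] = 0x55;
      else
        {
          printf ("%d fail\n", i);
          break;
        }
    }
  return 0;
}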
Thanks,
Jie
---------- Forwarded message ----------
From: Jie Zhang <[email protected]>
Date: Dec 22, 2005 1:10 AM
Subject: Re: [uClinux-dev] Question on the current behaviour of malloc () on uClinux
To: uClinux development list <[email protected]>
On 12/21/05, Friedrich, Lars <[email protected]> wrote:
> > I traced this issue to __alloc_pages (). If there are not
> > enough free pages for an allocation, it calls
> > try_to_free_pages () to reclaim page frames. If
> > try_to_free_pages () cannot help, which usually occurs when
> > an application whose malloc () failed is run a second time,
> > __alloc_pages () calls out_of_memory (), which in turn kills something.
> >
> > Can we make __alloc_pages () not call out_of_memory () if the
> > allocation request comes from malloc (), and just return NULL?
>
> This is intended Linux behaviour. It is assumed that it is more
> important for you to run your new application than it is to
> keep an old application running. After all, if you didn't want
> your new application to run, you wouldn't have started it in
> the first place.
>
> Imagine, for example, that you can't call kill because there is not
> enough memory left, and you can't free memory because kill can't be called.
>
> Despite this, it seems to be in accordance with SuSv3. Linux is able
> to allocate memory for the application. There is, AFAIK, no requirement
> on how this has to be achieved. Anyway, requests to change this behaviour
> need to be posted on the Linux kernel mailing list.
>
Here is a very small program:
$> cat test.c
#include <stdio.h>
#include <stdlib.h>

int
main ()
{
  char *p;

  /* A single large allocation; the memory is never touched.  */
  p = (char *) malloc (32 * 1024 * 1024);
  if (p)
    printf ("success\n");
  else
    printf ("fail\n");
  return 0;
}
When I run it on uClinux for Blackfin (a BF533-STAMP board, which has
64MB of SDRAM), the first invocation usually prints "fail" as
expected. But the following invocations are killed by the kernel for
being out of memory. The question is why the second and later runs
should be killed. (The first run is not killed because
try_to_free_pages () made some progress.) In these failed malloc ()
cases, the 32MB of memory has never actually been allocated, so there
is no out-of-memory situation that would prevent the kernel from
killing a process. There is no difference in memory consumption
between this program and a program with an empty main () function.
Why should it be killed?
It's true that Linux can allocate memory for the application, but
SuSv3 also says that if the space cannot be allocated, a null pointer
shall be returned. Instead, uClinux kills this application, even
though there is plenty of memory, just because it cannot reclaim
enough pages. That is very strange. It would be better to tell the
application gracefully that malloc () has failed.
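To be concrete, what I mean by telling the application gracefully is just
the usual SuSv3 failure path, roughly as sketched below. SuSv3 specifies
ENOMEM for this case and glibc behaves that way; I assume uClibc does the
same:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main ()
{
  char *p;

  p = (char *) malloc (32 * 1024 * 1024);
  if (p == NULL)
    {
      /* Conforming failure: NULL is returned and errno is ENOMEM,
         and the process keeps running instead of being killed.  */
      printf ("fail: %s\n", strerror (errno));
      return 1;
    }
  printf ("success\n");
  free (p);
  return 0;
}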
Thanks,
Jie