On Fri, 28 Sep 2007, Daniel Spång wrote:
> On 9/28/07, linux-os (Dick Johnson) <[email protected]> wrote:
>>
>> On Fri, 28 Sep 2007, Daniel Spång wrote:
>>
>>> Applications with dynamic input and dynamic memory usage have some
>>> issues with the current overcommitting kernel. A high memory usage
>>> situation eventually results in a process being killed by the OOM
>>> killer. This is especially evident in embedded systems with limited
>>> memory and no swap available.
>>>
>>> Some kind of notification to the application that available memory is
>>> scarce, letting the application free up some memory (e.g., by flushing
>>> caches), could improve the situation and avoid the OOM killer. I am
>>> currently not aware of any general solution to this problem, but I have
>>> found some approaches that might (or might not) work:
>>>
>>> o Turn off overcommit. Results in wasted memory. (A minimal config
>>> sketch follows after this list.)
>>>
>>> o Nokia uses a lowmem security module to signal on predetermined
>>> thresholds. It is currently available in the -omap tree, but it
>>> requires manual tuning of the thresholds.
>>> http://www.linuxjournal.com/article/8502
>>>
>>> o Using madvise() with MADV_FREE to get the kernel to free mmapped
>>> memory, typically application caches, when the kernel needs the
>>> memory. (A rough sketch follows after this list.)
>>>
>>> o An OOM handler that the application registers with the kernel, and
>>> that the kernel executes before the OOM killer steps in.
>>>
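>>> For the first item above, the standard knob is vm.overcommit_memory;
>>> a minimal /etc/sysctl.conf fragment might look like this (the ratio
>>> shown is just the kernel's default, not a recommendation):
>>>
>>>     # strict accounting: commit limit = swap + overcommit_ratio% of RAM
>>>     vm.overcommit_memory = 2
>>>     vm.overcommit_ratio = 50
>>>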
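>>> And a rough, hypothetical sketch of the MADV_FREE idea: keep the cache
>>> in an anonymous mmap()ed region and tell the kernel that its pages may
>>> be reclaimed when memory gets tight. This assumes a kernel and libc
>>> that actually define MADV_FREE (mainline does not, as far as I know),
>>> so it falls back to MADV_DONTNEED, which simply discards the pages
>>> right away:
>>>
>>> #define _GNU_SOURCE
>>> #include <stdio.h>
>>> #include <sys/mman.h>
>>>
>>> #define CACHE_SIZE (4 * 1024 * 1024)
>>>
>>> int main(void)
>>> {
>>>         char *cache = mmap(NULL, CACHE_SIZE, PROT_READ | PROT_WRITE,
>>>                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>>>         if (cache == MAP_FAILED) {
>>>                 perror("mmap");
>>>                 return 1;
>>>         }
>>>
>>>         /* ... fill and use the cache ... */
>>>
>>> #ifdef MADV_FREE
>>>         /* Pages keep their contents until the kernel needs the memory;
>>>          * the application must be able to regenerate them afterwards. */
>>>         if (madvise(cache, CACHE_SIZE, MADV_FREE) != 0)
>>>                 perror("madvise(MADV_FREE)");
>>> #else
>>>         /* Fallback: MADV_DONTNEED throws the pages away immediately. */
>>>         if (madvise(cache, CACHE_SIZE, MADV_DONTNEED) != 0)
>>>                 perror("madvise(MADV_DONTNEED)");
>>> #endif
>>>
>>>         munmap(cache, CACHE_SIZE);
>>>         return 0;
>>> }
>>>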
>>> Do any other solutions to this problem exist?
>>>
>>> Daniel
>>> -
>>
>> But an embedded system contains all the software that will
>> ever be executed on that system! If it is properly designed,
>> it can never run out of memory because everything it will
>> ever do is known at design time.
>
> Not if its input is not known beforehand. Take a browser on a mobile
> phone as an example: it does not know at design time how big the web
> pages are. On the other hand, we want to use as much memory as
> possible, for caches etc.; a method that involves the kernel would
> simplify this and avoid setting manual limits.
>
> Daniel
>
Any networked appliance can (and will) throw data away if there are
no resources available.
The length of a web page is not relevant, nor is the length
of any external data. Unless it is broken, your example will buffer
whatever it can and not read anything more from the external source
until it has resources available.
Cheers,
Dick Johnson
Penguin : Linux version 2.6.22.1 on an i686 machine (5588.29 BogoMips).
My book : http://www.AbominableFirebug.com/