On Sat, 2007-11-17 at 16:53 +0100, Diego Calleja wrote:
> On Sat, 17 Nov 2007 09:42:51 -0800, Martin Olsson <[email protected]> wrote:
>
> > I don't think that setting a max process count by default is a
> > good/viable solution.
>
>
> I don't see why... OS X had a default limit of 100 processes per uid (increased
> to 266 in 10.5) and "it works" (many people notice it, but that's not surprising
> since the limit is so restrictive).
>
> If you don't have limits, you can't easily avoid starvation. In my experience
> since I switched to CFS, fork/compile bombs (forgetting to put a number after
> make -j...) mostly just make the machine very sluggish, mainly because the
> whole graphics subsystem gets paged out.
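
(Aside: Linux already has a per-user knob along those lines, RLIMIT_NPROC,
reachable via "ulimit -u" or the nproc entries in /etc/security/limits.conf;
it just typically isn't restricted by default. A minimal, purely illustrative
C sketch of lowering it from within a process, reusing the 266 figure quoted
above:)

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;

	/* Read the current per-user process limit. */
	if (getrlimit(RLIMIT_NPROC, &rl) != 0) {
		perror("getrlimit");
		return 1;
	}
	printf("nproc: soft=%llu hard=%llu\n",
	       (unsigned long long)rl.rlim_cur,
	       (unsigned long long)rl.rlim_max);

	/* Lower the soft limit to 266 (the 10.5 number quoted above);
	 * once this user owns that many processes, further fork()/clone()
	 * calls fail with EAGAIN.  Assumes the hard limit is >= 266. */
	rl.rlim_cur = 266;
	if (setrlimit(RLIMIT_NPROC, &rl) != 0) {
		perror("setrlimit");
		return 1;
	}
	return 0;
}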

I don't know if this is at all feasible, but would it be possible to have a
mechanism that detects a fork bomb in progress and either stops the fork or
allows the user to cancel the operation? For example, are there any legitimate
workloads (i.e. ones that really need to fork like crazy) that would need to
create 200+ processes in less than one second?

(Note: I'm not a programmer; I'm just throwing out the idea.)
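
For illustration only, here is a crude userspace sketch of that heuristic:
sample how many processes a user owns once per second and complain when the
count jumps by more than a threshold. The 200/second threshold is just the
number guessed at above, and counting /proc entries by owner is only an
approximation; a real kernel-side check would look quite different.

#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Approximate count of processes owned by 'uid' by scanning /proc. */
static int count_procs(uid_t uid)
{
	DIR *d = opendir("/proc");
	struct dirent *e;
	int n = 0;

	if (!d)
		return -1;
	while ((e = readdir(d)) != NULL) {
		char path[64];
		struct stat st;

		/* Only the numeric entries are pids. */
		if (e->d_name[0] < '0' || e->d_name[0] > '9')
			continue;
		snprintf(path, sizeof(path), "/proc/%s", e->d_name);
		if (stat(path, &st) == 0 && st.st_uid == uid)
			n++;
	}
	closedir(d);
	return n;
}

int main(void)
{
	const int threshold = 200;	/* "200+ processes per second" */
	int prev = count_procs(getuid());

	for (;;) {
		int cur;

		sleep(1);
		cur = count_procs(getuid());
		if (prev >= 0 && cur >= 0 && cur - prev > threshold)
			fprintf(stderr, "possible fork bomb: +%d procs in 1s\n",
				cur - prev);
		prev = cur;
	}
}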
-Dane