Hey,
I read a document on SecurityFocus about fork bombing a Linux system.
Although it didn't discuss how effective resource limits are, I think
that deserves discussion, because it's possible to make a Linux machine
extremely slow (compared to FreeBSD, for instance) even with
well-configured resource limits.
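To make that concrete, here is a small userspace test of my own (not
taken from the SecurityFocus document) showing what a working per-user
limit looks like: once RLIMIT_NPROC is set, fork() starts failing with
EAGAIN instead of letting the loop run away.

/* Minimal sketch: cap the per-user process count, then fork until
 * the kernel refuses. Run as an unprivileged user. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/resource.h>
#include <sys/wait.h>

int main(void)
{
    struct rlimit rl = { 64, 64 };   /* arbitrary low cap for the demo */
    pid_t pid;
    int i;

    if (setrlimit(RLIMIT_NPROC, &rl) < 0) {
        perror("setrlimit");
        return 1;
    }

    for (i = 0; ; i++) {
        pid = fork();
        if (pid < 0) {
            /* with RLIMIT_NPROC in place, fork() fails with EAGAIN
             * here instead of spawning processes forever */
            fprintf(stderr, "fork #%d failed: %s\n", i, strerror(errno));
            break;
        }
        if (pid == 0) {
            pause();             /* children just sit and count against the limit */
            _exit(0);
        }
    }

    signal(SIGTERM, SIG_IGN);    /* don't terminate ourselves below */
    kill(0, SIGTERM);            /* clean up every child in our process group */
    while (wait(NULL) > 0)
        ;
    return 0;
}

The point is that the limit does kick in, yet the box can still become
nearly unusable before it does.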
I went through kernel/fork.c and found a way to prevent this problem by
removing all processes associated with the parent, but that's far from
portable and shouldn't be relied on, for compatibility's sake. I think
fork() itself should be revisited.
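To be clear about what I mean by "removing all processes associated
with the parent": from userspace the rough equivalent is signalling the
whole process group, as in the illustration below. The pgid is a
made-up value, and this is obviously not the kernel/fork.c change
itself, just what an admin does by hand today.

/* Illustration only: terminate a runaway fork tree from userspace by
 * signalling its process group. The pgid is a hypothetical example;
 * in practice you would get it from ps(1) or getpgid(2). */
#include <stdio.h>
#include <signal.h>
#include <sys/types.h>

int main(void)
{
    pid_t pgid = 12345;          /* hypothetical process group of the fork bomb */

    /* kill(2) with a negative pid delivers the signal to every
     * member of that process group in one shot */
    if (kill(-pgid, SIGKILL) < 0) {
        perror("kill");
        return 1;
    }
    return 0;
}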
And what about adding a 'maxprocs' sysctl variable (even if it defaults
to a high value) once the resource limits problem is fixed? It would
help security where it's needed and wouldn't get in the way of other
applications. RLIMITs set at login aren't trustworthy on their own;
there should be a global limit in case someone manages to spawn a shell
without limits through some flawed application.
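For comparison, the kernel already exposes a global threads-max knob
under /proc/sys/kernel/, and a 'maxprocs' sysctl could be read the same
way. Rough sketch below; the /proc/sys/kernel/maxprocs path is
hypothetical, since the knob is only my proposal.

/* Sketch: how a global 'maxprocs' sysctl would be consumed from
 * userspace, using the existing kernel.threads-max knob as the model.
 * The maxprocs path does not exist today. */
#include <stdio.h>

static long read_sysctl_long(const char *path)
{
    FILE *f = fopen(path, "r");
    long val = -1;

    if (f) {
        if (fscanf(f, "%ld", &val) != 1)
            val = -1;
        fclose(f);
    }
    return val;
}

int main(void)
{
    /* existing global limit on tasks, for comparison */
    printf("kernel.threads-max = %ld\n",
           read_sysctl_long("/proc/sys/kernel/threads-max"));

    /* the proposed knob (hypothetical path) */
    printf("kernel.maxprocs    = %ld\n",
           read_sysctl_long("/proc/sys/kernel/maxprocs"));
    return 0;
}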
Thanks, and please advise.