> On Mon, 2005-03-21 at 11:23, Aleksandar Milivojevic wrote:
>> Linux does not protect user space processes from each other.
>
> That statement is incorrect. Linux and Unix in general have done a better job of this than Windows ever did. I think what you mean is that without setting appropriate ulimits there is nothing to keep a user process from using all available resources on a system.
No, I meant that Linux does not protect processes from each other. Yes, it allocates a separate memory address space to each process, and one process can't corrupt the address space of another process and trash or crash it. But it isn't really Linux doing that; it's the hardware. Your MMU is doing the job here. All Linux does is use what already exists in the hardware: your MMU.
This kind of protection is really trivial to implement. It requires hardware that is capable of performing it, and once you have that hardware, there's very little left for the OS to do other than actually use it.
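Just to make that concrete, here's a minimal sketch (my own toy example, nothing Linux-specific about it): after fork(), parent and child each have their own address space, so a write in one is invisible to the other. The kernel merely sets up the page tables; the MMU does the actual enforcement.

/* Sketch: separate address spaces after fork(). The variable is
 * copied; the child's write never reaches the parent's copy. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int x = 1;
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        x = 999;                          /* scribble on the child's copy */
        printf("child:  x = %d\n", x);    /* prints 999 */
        _exit(0);
    }
    wait(NULL);
    printf("parent: x = %d\n", x);        /* still prints 1 */
    return 0;
}

Run it and the parent still prints 1 after the child printed 999. Getting that behavior is basically free once the MMU exists.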
The more complex protection to implement is keeping processes from starving each other to death and stealing all system resources from each other. An operating system I would describe as "rock solid" would have this kind of protection in place. Then you don't really need to set most of those ulimits: the system stays usable even under extremely high stress, regardless of whether the stress comes from legitimate usage or abuse. Note the "legitimate usage" phrase here. Even legitimate usage can bring a Linux box down if it puts too much stress on the available physical resources.
Also, note that you can't limit access to all vital system resources using ulimit. Check the ulimit man page. No matter how you set the limits, it is possible for an abusive user to bring the system down, unless you set them so low that the system is unusable for anything other than demonstrating ulimit usage.
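For example (the sizes and limits below are invented purely for illustration): the limits that ulimit sets are applied per process via setrlimit(), not per user. Each child below stays comfortably inside its own RLIMIT_AS, but crank NCHILDREN up and together they can still eat all physical memory:

/* Sketch of the per-process gap: the numbers here are made up for
 * illustration.  Each child is capped at 64 MB of address space (the
 * kind of limit "ulimit -v" sets) and allocates only 16 MB, yet the
 * total memory used grows with NCHILDREN, which RLIMIT_AS never caps. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NCHILDREN 4                       /* scale this up and watch RAM go */
#define CHUNK (16UL * 1024 * 1024)        /* 16 MB per child */

int main(void)
{
    /* Per-process limit, inherited across fork(); there is no
     * per-user equivalent here. */
    struct rlimit rl = { 64UL * 1024 * 1024, 64UL * 1024 * 1024 };
    int i;

    if (setrlimit(RLIMIT_AS, &rl) != 0)
        perror("setrlimit");

    for (i = 0; i < NCHILDREN; i++) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            break;
        }
        if (pid == 0) {
            char *p = malloc(CHUNK);      /* well inside this child's limit */
            if (p != NULL)
                memset(p, 0, CHUNK);      /* touch the pages so they're real */
            sleep(5);                     /* hold the memory for a while */
            _exit(0);
        }
    }
    while (wait(NULL) > 0)                /* reap all children */
        ;
    return 0;
}

Per-process limits multiply by the number of processes, so the abusive user just forks more of them.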
I like Linux, and I like it a lot. But it is not a "rock solid" operating system. Linux is designed to post high numbers in benchmark tests in a controlled laboratory environment, and it does great in that area. It is not designed to be rock stable in real-world applications. It does great in the real world because it is designed to be fast, and adding resources is relatively cheap these days.

But then you end up with a system that has 10 gigs of RAM only because once a month there's a half-hour period when all that RAM might be used, while the rest of the time the system would do just fine with 100 or 200 megs. That is something I don't like to see. If a process is going to run for 2 hours, slowing it down so it runs for 2 hours and 3 minutes is more than acceptable if it makes the entire system perform stably under high load.
Yeah, I'd rather see Linux scoring #2 in benchmarks and being rock solid than scoring #1 in benchmarks and being vulnerable to some basic resource attacks.
-- 
Aleksandar Milivojevic <amilivojevic@xxxxxx>    Pollard Banknote Limited
Systems Administrator                           1499 Buffalo Place
Tel: (204) 474-2323 ext 276                     Winnipeg, MB R3T 1L7