Daniel Phillips wrote:
> Andrew Morton wrote:
> > Daniel Phillips <[email protected]> wrote:
> > > What happened to the case where we just fill memory full of dirty
> > > file pages backed by a remote disk?
> >
> > Processes which are dirtying those pages throttle at
> > /proc/sys/vm/dirty_ratio% of memory dirty. So it is not possible to
> > "fill" memory with dirty pages. If the amount of physical memory
> > which is dirty exceeds 40%: bug.
>
> So we make 400 MB of a 1 GB system unavailable for write caching just
> to get around the network receive starvation issue?
>
> What happens if some in-kernel user grabs 68% of kernel memory to do
> some very important thing? Does this starvation avoidance scheme
> still work?

Also think about e.g. scientific calculations, or anonymous memory.
People want to be able to use a larger percentage of their memory
for dirty data, without swapping...
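
For reference, the throttle point Andrew describes is easy to check from
userspace. Below is a minimal sketch (an illustration only, not the kernel
code) that reads /proc/sys/vm/dirty_ratio and MemTotal from /proc/meminfo
and prints how much memory may be dirty before writers start getting
throttled; with the 40% mentioned above on a 1 GB box that works out to
the ~400 MB figure.

/* Userspace sketch: print the dirty-page throttle threshold implied by
 * vm.dirty_ratio.  Assumes the standard /proc/sys/vm/dirty_ratio and
 * /proc/meminfo interfaces.
 */
#include <stdio.h>

int main(void)
{
	FILE *f;
	long ratio = 0, mem_total_kb = 0;
	char line[256];

	f = fopen("/proc/sys/vm/dirty_ratio", "r");
	if (!f) {
		perror("/proc/sys/vm/dirty_ratio");
		return 1;
	}
	if (fscanf(f, "%ld", &ratio) != 1) {
		fclose(f);
		return 1;
	}
	fclose(f);

	f = fopen("/proc/meminfo", "r");
	if (!f) {
		perror("/proc/meminfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "MemTotal: %ld kB", &mem_total_kb) == 1)
			break;
	fclose(f);

	/* Writers start throttling once roughly this much memory is dirty. */
	printf("dirty_ratio = %ld%%: throttle after ~%ld MB dirty of %ld MB total\n",
	       ratio, mem_total_kb * ratio / 100 / 1024, mem_total_kb / 1024);
	return 0;
}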
--
What is important? What you want to be true, or what is true?