* Al Boldi <[email protected]> wrote:
> There is one workload that still isn't performing well; it's a
> web-server workload that spawns 1K+ client procs. It can be emulated
> by using this:
>
> for i in `seq 1 to 3333`; do ping 10.1 -A > /dev/null & done
On bash I did this as:

for ((i=0; i<3333; i++)); do ping 10.1 -A > /dev/null & done

and this quickly creates a monster-runqueue with tons of ping tasks
pending. (I replaced 10.1 with the IP of another box on the same LAN as
the testbox.) Is this what should happen?
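(As a sanity check while the loop runs, the pile-up can be confirmed with
the usual procps tools; this is only an illustrative sketch, assuming
pgrep/ps/killall are installed and the pings target 10.1 as above:)

# how many ping processes the loop has managed to start so far
pgrep -c ping

# how many of them are currently runnable (state "R"), i.e. sitting on
# the runqueue rather than sleeping in the network stack
ps -eo state,comm | awk '$1 == "R" && $2 == "ping"' | wc -l

# the 1-minute load average roughly tracks the runqueue length here
cat /proc/loadavg

# clean up all the backgrounded pings when done
killall ping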
> The problem is that consecutive runs don't give consistent results and
> sometimes stalls. You may want to try that.
Well, there's a natural saturation point after a few hundred tasks
(depending on your CPU's speed), at which point there's no idle time
left. From that point on things get progressively slower (and the
ability of the shell to start new ping tasks is impacted as well), but
that's expected on an overloaded system, isn't it?
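(To watch that saturation point directly, a rough sketch, assuming the
stock procps vmstat with the 'r' and 'id' columns in their usual
positions: run this alongside the loop; once %idle reaches 0 the box is
saturated and the runqueue only keeps growing from there:)

# sample once a second: 'r' = runnable tasks, 'id' = %idle CPU
vmstat 1 | awk '$1 ~ /^[0-9]+$/ { print "runnable:", $1, "  %idle:", $15 }'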
Ingo