RE: Postal 56% waits for flock_lock_file_wait

> On which filesystem were the above results obtained if it was not ext2?
The default ext3 fs was used.

> All that the above results are telling you is that your test involves
> several processes contending for the same lock, and so all of them
> barring the one process that actually holds the lock are idle.

Yes. It is flock_lock_file_wait.
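
As a sanity check, the same wchan picture can be reproduced without
Postal by a trivial flock contention loop. This is a minimal sketch, not
what Postal or postfix actually does; the lock file path and worker
count are made up:

    for i in $(seq 16); do
            # every worker takes an exclusive flock on the same file,
            # so all but the current holder should sleep in
            # flock_lock_file_wait
            flock /tmp/contend.lock sleep 1 &
    done
    ps -o user,wchan=WIDE-WCHAN-COLUMN,comm | grep flock
    wait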

Leonid
-----Original Message-----
From: Trond Myklebust [mailto:[email protected]] 
Sent: Saturday, September 30, 2006 7:06 PM
To: Ananiev, Leonid I
Cc: Linux Kernel Mailing List
Subject: Re: Postal 56% waits for flock_lock_file_wait

On Sat, 2006-09-30 at 09:25 +0400, Ananiev, Leonid I wrote:
> A benchmark
>              'postal -p 16 localhost list_of_1000_users'
> spends 56% of its run time waiting in flock_lock_file_wait;
> vmstat reports that 66% of the cpu is idle and bi+bo=3600 (far from
> the maximum).
> A Postfix server with FD_SETSIZE=2048 was used.
> Similar results were obtained for sendmail.
> Wchan is counted by
>             while :; do
>                         ps -o user,wchan=WIDE-WCHAN-COLUMN,comm
>                         sleep 1
>             done | awk '/ postfix /{a[$2]++}END{for (i in a) print a[i]"\t"i}'
> If the ext2 fs is used, the Postal throughput is twice as high and
> bi+bo is 50% lower, while flock_lock_file_wait still accounts for
> 60%.

On which filesystem were the above results obtained if it was not ext2?

> Is flock_lock_file_wait considered a performance-limiting wait for
> similar applications on SMP?

All that the above results are telling you is that your test involves
several processes contending for the same lock, and so all of them
barring the one process that actually holds the lock are idle.

As for the throughput issue, that really depends on the filesystem you
are measuring. For remote filesystems like NFS, locks can _really_ slow
down performance because they are often required to flush all dirty data
to disk prior to releasing the lock (so that it becomes visible to
processes on other clients that might subsequently obtain the lock).
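
As a rough illustration (the mount point below is hypothetical, and
this is only a sketch of the cost, not NFS lock handling itself), the
flush that a lock release forces can be timed separately from the write
that produced the dirty data:

    # dirty some pages on the NFS mount, then time the flush that a
    # lock release would have to perform before another client can
    # safely take the lock and read the data
    dd if=/dev/zero of=/mnt/nfs/test.dat bs=64k count=256 2>/dev/null
    time sync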

Cheers,
  Trond
