Re: CIFS slowness & crashes

>>nor umount -f
> What are the errors?  What is the version of cifs.ko module?

umount2: Device or resource busy
umount: /tmpmnt: device is busy
umount2: Device or resource busy
umount: /tmpmnt: device is busy

Without -f it doesn't print those umount2 errors, just the other two.
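For what it's worth, the usual way to see what is keeping the mount point busy would be something like the following (mount point is just the one from above), though these tools may well block themselves once the server is gone, since they stat files under the mount:

  fuser -vm /tmpmnt
  lsof /tmpmnt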

The version is whatever comes with 2.6.12 or 2.6.13rc1.

> My tests of reconnection after server crash or network reconnection with
> smbfs works (for the past year or two) and also of course for cifs. 
> cifs also reconnects state (open files) not just the connection to
> \\server\share

Reconnection usually (or perhaps always, with recent versions) works, but
nothing can be done if the server goes permanently offline (or at least not
until it comes back online). While the server is down, any program that was
using the CIFS mount, or that tries to access it, simply hangs.

> My informal tests (cifs->samba) showed a maximum of about 20%
> utilization of gigabit doing large file writes and about double that for
> large file reads with a single cifs client to Samba over gigabit. Should
> be somewhat similar to Windows server.

This seems quite bad. Is it waiting for packet confirmations or what?

However, I have never got anything better than about 40 Mo/s with a
gigabit network so far. Jumbo frames and using a direct cross-over cable
between the clients make no difference. Still, all protocols that I have
tried, except for SMB and CIFS, can reach that easily. I'll try to get
two Windows machines that I can test with. Perhaps the problem is with
Linux network drivers.

Anyway, 20 Mo/s or something like that would not be a big problem. The
real problem occurs when the speed drops under 3 Mo/s.

> The most common cause of widely varying speeds are the following:
> 1) memory fragmentation - some versions of the kernel badly fragment
> slab allocations greater than 4 pages (cifs by default allocates 16.5K
> buffers - which results in larger size allocations when more than 5
> processes are accessing the mount and the cifs buffer pool is exceeded)

Wouldn't this show up as increased system load?
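In case it helps, next time the speed drops I can look at the slab and
free-page situation with something like this (the grep pattern is only a
guess at the cifs cache names):

  grep -i cifs /proc/slabinfo
  cat /proc/buddyinfo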

> 2) write behind due to oplock - smbfs does not do oplock, cifs does -
> which means that cifs client caching can cause a large amount of
> writebehind data to accumulate (with great performance for a while) -
> then when memory gets tight due to the inode caching in linux's mm
> layer, the cifs client is asked to write out a large amount of file data
> at one time (which it does synchronously). 
> 
> Both of these are being improved.  You can bypass the inode caching on
> the client by using the cifs mount option
> 	"forcedirectio"

This option has little or no effect; it still runs slowly (2.4 Mo/s right now).
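For completeness, the mount line I am testing with is roughly the following
(server, share and the options other than forcedirectio are placeholders):

  mount -t cifs //server/share /tmpmnt -o ro,forcedirectio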

My benchmark reads data in 50 Mio chunks, as quickly as it can, and
calculates the average read speed. The files being read are bigger than
4 Gio. I haven't tried writing anything (the shares that I play with are
read-only).

For the record, I am using O_RDONLY | O_LARGEFILE on open and then pread
for all reads. However, I am getting similar results with all programs,
so I don't think that the reading method really matters that much.
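
In case the exact read pattern matters, here is a minimal sketch of that
loop (file name, chunk size and the stripped-down error handling are only
for illustration):

#define _GNU_SOURCE             /* O_LARGEFILE, pread */
#define _FILE_OFFSET_BITS 64    /* 64-bit off_t for files bigger than 4 Gio */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

#define CHUNK (50 * 1024 * 1024)    /* 50 Mio per read */

int main(int argc, char **argv)
{
	char *buf = malloc(CHUNK);
	int fd;
	off_t off = 0;
	ssize_t n;
	struct timeval t0, t1;
	double secs;

	if (argc < 2 || !buf)
		return 1;
	fd = open(argv[1], O_RDONLY | O_LARGEFILE);
	if (fd < 0)
		return 1;

	gettimeofday(&t0, NULL);
	while ((n = pread(fd, buf, CHUNK, off)) > 0)   /* read until EOF */
		off += n;
	gettimeofday(&t1, NULL);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
	printf("%lld bytes in %.1f s = %.2f Mio/s\n",
	       (long long)off, secs, (double)off / secs / (1024 * 1024));

	close(fd);
	free(buf);
	return 0;
}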

- Tronic -


