Trond Myklebust wrote:
> However if you know the cases where time is set implicitly by the
> server, why can't you simply optimise away the ATTR_CTIME and/or
> ATTR_MTIME?
I don't see any case, other than an explicit application call to utime on
the client, in which we would ever want the client to send its own
timestamp to the server.
The cases that have to be checked for the cifs client are few: setting the
file size (truncate/ftruncate) and chmod/chown (which set CTIME on the
client, not just the mode or owner), and in my traces those setattr calls
do look different from the setattr calls that come through utimes. For
example, it is probably safe to assume that ctime never has to be sent to
the server when one or more of the mode, owner or size is also being set;
no user space application would ever call notify_change (setattr) for both
size and time except via the truncate system call. It still makes me
nervous, though, to throw away any request to update the timestamps
remotely when the size is also changing (the change of the file size
itself does need to go to the server).
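
To make that case analysis concrete, here is a rough, self-contained
sketch (not actual cifs code) of the optimistic rule suggested above: drop
the time bits whenever the setattr also carries a size, mode or owner
change, on the assumption that the server will bump ctime (and mtime for
truncate) itself. The ATTR_* values mirror include/linux/fs.h but are
redefined locally so the snippet compiles on its own, and
filter_remote_time_flags() is a hypothetical helper name, not an existing
function.

    /* Sketch: drop implicit time updates before sending a setattr to
     * the server.  ATTR_* bit values mirror include/linux/fs.h; they
     * are redefined here only so the example is self-contained. */
    #include <stdio.h>

    #define ATTR_MODE   (1 << 0)
    #define ATTR_UID    (1 << 1)
    #define ATTR_GID    (1 << 2)
    #define ATTR_SIZE   (1 << 3)
    #define ATTR_ATIME  (1 << 4)
    #define ATTR_MTIME  (1 << 5)
    #define ATTR_CTIME  (1 << 6)

    /* Hypothetical helper: given the ia_valid mask passed to
     * notify_change(), return the mask of attributes worth sending to
     * the server.  Time bits are dropped whenever a size/mode/owner
     * change accompanies them, since the server updates those
     * timestamps itself. */
    static unsigned int filter_remote_time_flags(unsigned int ia_valid)
    {
            const unsigned int implicit =
                    ATTR_SIZE | ATTR_MODE | ATTR_UID | ATTR_GID;

            if (ia_valid & implicit)
                    ia_valid &= ~(ATTR_ATIME | ATTR_MTIME | ATTR_CTIME);
            return ia_valid;
    }

    int main(void)
    {
            /* truncate(): size plus the implicit mtime/ctime bump */
            unsigned int truncate_mask = ATTR_SIZE | ATTR_MTIME | ATTR_CTIME;
            /* chmod(): mode plus the implicit ctime bump */
            unsigned int chmod_mask = ATTR_MODE | ATTR_CTIME;

            printf("truncate -> send 0x%x\n",
                   filter_remote_time_flags(truncate_mask));
            printf("chmod    -> send 0x%x\n",
                   filter_remote_time_flags(chmod_mask));
            return 0;
    }

As noted above, the ATTR_SIZE case is the one I would be least
comfortable applying this to without more testing.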
It does seem that
utime(filename, timeval)
may be the only case in which we want to send time changes to the server,
but even after scanning fs/open.c I am not certain how risky it would be
to ignore time changes except when ATIME, MTIME and CTIME are all set at
the same time (as they are in sys_utime and do_utimes). Most people
probably don't care whose timestamp is used as long as the server and
client clocks are not too far apart, but it does affect performance
(presumably noticeably even on something like the fsx test).
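
For comparison, a sketch of that narrower rule: treat the setattr as an
explicit utime/utimes call only when ATTR_ATIME, ATTR_MTIME and ATTR_CTIME
are all present at once, the way sys_utime and do_utimes set them, and
only then forward the client timestamps. Again the flag values just
mirror include/linux/fs.h so this compiles standalone, and
looks_like_explicit_utime() is a made-up name, not an existing cifs
function.

    /* Sketch: only forward client timestamps when the setattr looks
     * like an explicit utime(2)/utimes(2) call, i.e. ATTR_ATIME,
     * ATTR_MTIME and ATTR_CTIME are all set at once, as sys_utime()
     * and do_utimes() set them. */
    #include <stdio.h>

    #define ATTR_SIZE   (1 << 3)
    #define ATTR_ATIME  (1 << 4)
    #define ATTR_MTIME  (1 << 5)
    #define ATTR_CTIME  (1 << 6)

    static int looks_like_explicit_utime(unsigned int ia_valid)
    {
            const unsigned int all_times =
                    ATTR_ATIME | ATTR_MTIME | ATTR_CTIME;

            return (ia_valid & all_times) == all_times;
    }

    int main(void)
    {
            /* utime(filename, timeval): all three time bits present */
            printf("utime    -> %d\n",
                   looks_like_explicit_utime(ATTR_ATIME | ATTR_MTIME |
                                             ATTR_CTIME));
            /* truncate(): only the implicit mtime/ctime bump, no atime */
            printf("truncate -> %d\n",
                   looks_like_explicit_utime(ATTR_SIZE | ATTR_MTIME |
                                             ATTR_CTIME));
            return 0;
    }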