On Mon, 19.09.2005 at 19:35 (-0500), Steve French wrote:
> The CIFS time format is similar to "DCE time", i.e. expressed in
> 100-nanosecond units since 1601 UTC. In many large networks, CIFS
> clients in a domain will share an NTP (or equivalent) time source
> with the servers in the domain, and will agree reasonably closely
> on what the UTC time is. Ideally (at least from a performance
> perspective) most time stamps would be set implicitly by the
> server rather than explicitly by the client (in particular the
> creation time, which is set by the server when the file is
> created, and the last write time, set by the server after every
> write). Truncate, of course, is another example where the time
> could be set implicitly by the server rather than explicitly by
> the client, but it is hard to distinguish from an explicit setattr.
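For reference, converting between that format and the Unix epoch is
just a fixed offset: 1601-01-01 to 1970-01-01 is 11644473600 seconds
(369 years, including 89 leap days). A quick user-space sketch of the
arithmetic (the helper names are mine, not anything in fs/cifs):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Seconds between the CIFS/Windows epoch (1601-01-01 UTC) and the
 * Unix epoch (1970-01-01 UTC). */
#define EPOCH_DELTA_SECS   11644473600ULL
#define HUNDRED_NS_PER_SEC 10000000ULL

static uint64_t unix_to_cifs_time(time_t t)
{
	return ((uint64_t)t + EPOCH_DELTA_SECS) * HUNDRED_NS_PER_SEC;
}

static time_t cifs_to_unix_time(uint64_t nt)
{
	return (time_t)(nt / HUNDRED_NS_PER_SEC - EPOCH_DELTA_SECS);
}

int main(void)
{
	time_t now = time(NULL);
	uint64_t nt = unix_to_cifs_time(now);

	printf("unix %lld -> cifs %llu -> unix %lld\n",
	       (long long)now, (unsigned long long)nt,
	       (long long)cifs_to_unix_time(nt));
	return 0;
}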
That sucks... Even if the clients are synchronised using NTP, there
is no guarantee that two almost-simultaneous setattr RPC calls won't
race and cause time to go backwards when each has to set the
timestamp explicitly to the "current" client time (e.g. client A's
stamp of 10:00:00.002 is applied first, then client B's slightly
stale 10:00:00.001 lands after it).
However, if you know the cases where the time is set implicitly by
the server, why can't you simply optimise away the ATTR_CTIME and/or
ATTR_MTIME updates?
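Something along these lines would be the idea, I imagine. A minimal
sketch (the flag values mirror include/linux/fs.h, but the helper and
its policy are hypothetical; per Steve's point, the hard part is
telling an implicit truncate-time update apart from an explicit
utimes(), which is what ATTR_MTIME_SET at least distinguishes):

#include <stdio.h>

/* Flag values as in include/linux/fs.h. */
#define ATTR_SIZE      (1 << 3)
#define ATTR_MTIME     (1 << 5)
#define ATTR_CTIME     (1 << 6)
#define ATTR_MTIME_SET (1 << 8)

/* Hypothetical helper: strip explicit time updates from a setattr
 * when the server will stamp the file itself anyway.  'ia_valid'
 * plays the role of iattr->ia_valid. */
static unsigned int optimise_setattr_flags(unsigned int ia_valid)
{
	/* On truncate (ATTR_SIZE) the server updates mtime/ctime
	 * implicitly; only drop the client's times when they are
	 * "set to now" updates, not an explicit utimes() request
	 * (ATTR_MTIME_SET). */
	if ((ia_valid & ATTR_SIZE) && !(ia_valid & ATTR_MTIME_SET))
		ia_valid &= ~(ATTR_MTIME | ATTR_CTIME);
	return ia_valid;
}

int main(void)
{
	unsigned int flags = ATTR_SIZE | ATTR_MTIME | ATTR_CTIME;

	printf("before: %#x after: %#x\n",
	       flags, optimise_setattr_flags(flags));
	return 0;
}

A truncate would then carry no explicit times at all, the server's
implicit stamp would win, and the backwards-clock race above goes
away for that case.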
Cheers,
Trond