Re: Way to make distant servers appear to have the same data?

Phil Meyer wrote:
David Timms wrote:
What is the most effective and robust way to allow servers that are quite distant, on slow networks, to "appear" to have the same content?

In my example, the content is read/write at 4 sites. Ideally the system would make caching possible for files that originated at another site; if a file were not already cached, it would be loaded across the slow network.

...
In the olden days, we used to require users to 'promote' content from a staging server to a production server. The act of 'promotion' included an rsync to the remote servers, and then a 'roll out' of that content at the same moment on all servers once all had acknowledged receipt of the promotion. It was all automated and not hard to do.

Steps:

    rsync staging/content primary::staging/content

wait for completions, then on all servers at once:

    mv content content.$$ ; mv staging/content content

Rollback was easy:

    mv content.NNNNN content
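
As a rough sketch (not my actual old tooling), the whole promote/swap/rollback cycle could be scripted along these lines; the server names, paths, the ssh/rsync transport, and using a timestamp suffix instead of $$ are all assumptions here:

    #!/bin/sh
    # promote.sh - sketch of the staging -> production promotion described above
    SERVERS="www1 www2 www3 www4"      # assumed hostnames of the four sites
    CONTENT=/srv/content               # live tree (assumed path)
    STAGING=/srv/staging/content       # tree the rsync lands in (assumed path)
    STAMP=$(date +%Y%m%d%H%M%S)        # timestamp rather than $$ so the old
                                       # version's name is known for rollback

    # push the staged tree to every server; stop if any copy fails
    for h in $SERVERS; do
        rsync -a --delete staging/content/ "$h:$STAGING/" || exit 1
    done

    # once every server has the data, swap the trees everywhere "at once"
    for h in $SERVERS; do
        ssh "$h" "mv $CONTENT $CONTENT.$STAMP && mv $STAGING $CONTENT" &
    done
    wait

    # rollback, run on each server if the promotion has to be undone:
    #   mv /srv/content.<STAMP> /srv/content

Launching the swaps in the background and wait-ing keeps the cutover close to simultaneous, and keeping the old tree around as content.<STAMP> is what makes the one-command rollback possible.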

I have been out of that game for a few years, but it was easy enough then.

Nowadays you have much more dynamic content, more middleware, broader use of server-based cookies, etc.

It can be difficult to ensure that all of that data is preserved across remote servers.

But for fairly static content, the old way should work just fine.

So, with this method, would you keep multiple versions of the "content"? For a large amount of data, this would seem to require very large disk space.

Or would you promote each small folder etc. individually?

Can the system automate this, or does a user need to run some script to activate it? How would you trigger the script if the only clients connecting are WinXP machines using Samba?

DaveT.

