Re: yum local { partial } mirror

Timothy Murphy wrote:
Kam Leo wrote:

I asked a while ago if there was any way
of only downloading a package once
and then installing it on several machines,
and I was advised that the yum-downloadonly plugin
would serve this purpose.

You do not need the yum-downloadonly plugin. Edit /etc/yum.conf and
change "keepcache=0" to "keepcache=1". All the rpms that yum downloads
will remain in the /var/cache/yum directories (separate directory for
each repo) until you dispose of them (yum clean packages, rm xxx, mv
yyy, etc.).

If you want other machines on your network to be able to use those
downloaded rpm files, copy or move them to another directory, e.g. your
localrepo directory.
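A quick sketch of that, assuming the usual per-repo cache layout; /srv/localrepo is just a made-up path, and createrepo is only needed if you want to serve the directory as a yum repo:

        # /etc/yum.conf, [main] section
        keepcache=1

        # later, sweep the cached rpms into a local repo directory
        mkdir -p /srv/localrepo
        cp /var/cache/yum/*/packages/*.rpm /srv/localrepo/
        createrepo /srv/localrepo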

Thanks very much to you and the other responders.
Having read all the suggestions, what I'm proposing is:

0. Set keepcache=1 in /etc/yum.conf on each machine.

1. Soft-link /var/cache/yum on each machine to /common/yum,
where /common is NFS-shared by all my computers.
You might run into locking issues if multiple machines try to access the central cached repodata at the same time, since each machine stores its yum lock file on its own local disk, i.e. a different location. This is likely to lead to corruption of the downloaded metadata IMHO.
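A minimal sketch of step 1 as proposed, assuming /common is already NFS-mounted everywhere (the yum.local name is just an example):

        # on each machine
        mv /var/cache/yum /var/cache/yum.local   # keep the old cache around
        mkdir -p /common/yum
        ln -s /common/yum /var/cache/yum

        # note: yum's own lock {/var/run/yum.pid} stays on the local disk,
        # so nothing stops two boxes writing to the shared cache at once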

2. Now on each machine run - perhaps daily -
        yum localupdate /common/yum/*.rpm
You don't need the localupdate {unless you want to vet updated packages before releasing them to other machines}. You want machines with differing installed packages to be able to get packages from your nearest mirror. A normal yum update will use already downloaded packages in the /var/cache/yum/fedora/packages dir {as long as the package has the correct checksum according to the repodata}. This saves yum from having to download the header as well {AFAICT}.
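Roughly what that looks like on a client {the fedora repo id and package name are only placeholders}:

        ls /var/cache/yum/fedora/packages/
        # somepkg-1.2-3.i386.rpm   <- already fetched by another box
        yum update    # reuses any cached rpm whose checksum matches the repodata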

3. Every week run "yum clean cache" on the machine carrying /common.
"clean cache" used to be a yum option but isn't any longer; it is now split into "clean dbcache" {sqlite} and "clean metadata". Yum also uses the conf setting metadata_expire=1800 {the value is in seconds, so 1800 => 30 mins; the default might be 15 mins}. If the repodata hasn't been retrieved within that time, it is refetched. You could avoid this for the extra machines by setting the expire time to 24 hours or so. Might help with {1} as well.
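e.g. in /etc/yum.conf on the extra machines:

        [main]
        metadata_expire=86400   # in seconds; 86400 = 24 hours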

I'd be interested to know what the experts think of this proposal.
No expert, but:

Might just work?
===
My version:
main machine:
- /etc/yum.conf:
  - keepcache=1
  - failovermethod=priority {go through baseurls/mirrorlist in order, not random}
- /etc/yum.repos.d/*.repo
  - add baseurls for my nearest mirrors {my own isp first = free downloads}.
- install every package that is installed on any other machine.
- to not install every update automatically on the main machine:
  - yum install yum-downloadonly, and add --downloadonly to the yum update line below.
- yum -y update every day in a cron job {I align the time to within an hour of when my nearest mirror gets synced}.
- {6} copy the repodata from each active repo's yum cache to a folder so that it matches the original relative paths alongside the /packages folder.
- vsftpd server with anon_root pointing at those files {rough sketch of the cron job, {6} and the vsftpd config below}.
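A rough sketch of the above {the fedora repo id, the /srv/ftp/pub path and the times are only examples, the cache file names vary between yum versions, and the vsftpd.conf location may differ on your release}:

        # /etc/crontab on the main machine
        30 4 * * * root yum -y update --downloadonly

        # {6}: lay the cache out mirror-style so clients can use it as a baseurl;
        # the rpms must end up at whatever relative paths the repodata records for them
        REPO=fedora
        DEST=/srv/ftp/pub/$REPO
        mkdir -p $DEST/repodata $DEST/packages
        cp /var/cache/yum/$REPO/packages/*.rpm $DEST/packages/
        cp /var/cache/yum/$REPO/repomd.xml /var/cache/yum/$REPO/*.xml.gz $DEST/repodata/

        # /etc/vsftpd/vsftpd.conf
        anonymous_enable=YES
        anon_root=/srv/ftp/pub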

other machines:
- /etc/yum.conf:
  - failovermethod=priority {go through baseurls/mirrorlist in order, not random}
- /etc/yum.repos.d/*.repo files
  - put the main machine as the first mirror baseurl {example .repo below}.
- cron: schedule yum -y update to run soon after the main machine is usually complete.
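On the clients the .repo edit might look something like this {the hostname, mirror URL and repo id are placeholders}:

        # /etc/yum.repos.d/fedora.repo
        [fedora]
        name=Fedora
        failovermethod=priority
        baseurl=ftp://mainbox.lan/fedora/
                http://your.nearest.mirror/path/to/fedora/
        enabled=1

        # client crontab, a little after the main machine's run
        15 5 * * * root yum -y update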

Your method {shared cache folder} makes part {6} unnecessary, so let us know if it works OK, and how many client machines are sharing it.

DaveT.

