Re: yum gone crazy

Tim:
>>> I've seen plenty of times where you'll do an update that worked, then
>>> moments later try to do another (on the same PC or another), and you've
>>> unfortunately picked the moment that the repo has been changed.  You're
>>> stuck with repeatedly doing a "yum clean metadata", "yum update", for a
>>> while until the repos get their act together.  
 

Timothy Murphy:
>> I'm surprised there is not some simple way
>> of informing the user that the repo is being updated.
>> One would imagine the repodata could contain a message to that effect
>> (with an estimate of how long it would take)
>> which yum could read and repeat.
 

oldman:
> Sure, that could be done! With the net effect of you trying to update
> again, which of course you should be smart enough to figure out without
> being told.

If some scripting were involved in presenting the data to the yum
client, rather than the client just fetching the repo file straight
over HTTP, it would be doable.

For instance, the following HTTP response code:  

10.5.4 503 Service Unavailable
The server is currently unable to handle the request due to a temporary
overloading or maintenance of the server. The implication is that this
is a temporary condition which will be alleviated after some delay. If
known, the length of the delay MAY be indicated in a Retry-After header.
If no Retry-After is given, the client SHOULD handle the response as it
would for a 500 response.

Refer to:
<http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.4>
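A minimal sketch of the server side, assuming the mirror admin is
willing to put a CGI script in front of repodata/ and that the
repo-update job drops a lock file while it rewrites the metadata (the
paths and lock-file name here are invented for the example):

#!/usr/bin/python
# Hypothetical CGI handler sitting in front of repodata/ on a mirror.
# Assumption: the update job creates ".repo-updating" while it rewrites
# the metadata and removes it when it is finished.
import os, sys

LOCK = "/var/www/repo/fedora/.repo-updating"
TARGET = "/var/www/repo/fedora/repodata/repomd.xml"

if os.path.exists(LOCK):
    # Repo is mid-update: tell the client to come back later instead of
    # handing it half-consistent metadata.
    sys.stdout.write("Status: 503 Service Unavailable\r\n")
    sys.stdout.write("Retry-After: 300\r\n")
    sys.stdout.write("Content-Type: text/plain\r\n\r\n")
    sys.stdout.write("Repository metadata is being regenerated; retry shortly.\n")
else:
    data = open(TARGET).read()
    sys.stdout.write("Content-Type: text/xml\r\n")
    sys.stdout.write("Content-Length: %d\r\n\r\n" % len(data))
    sys.stdout.write(data)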

Doing it that way would allow yum to try that repo again in X minutes,
or try another repo immediately, but more intelligently than the
current scheme.  It would allow yum to progress very quickly down a
list of repos just fetching HTTP headers (a minuscule amount of data),
not spending several minutes downloading a 1 meg file over and over
until you've downloaded 20 megs, or more, of the same thing.
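Roughly what I mean on the client side (only a sketch, not yum's actual
code; the mirror hostnames are made up):

# Walk a mirror list with HEAD requests, skip any mirror answering 503,
# and note the Retry-After hint so the busy one can be revisited later.
import httplib

MIRRORS = [
    ("download.example.org", "/fedora/repodata/repomd.xml"),
    ("mirror2.example.org", "/fedora/repodata/repomd.xml"),
]

def pick_mirror(mirrors):
    for host, path in mirrors:
        conn = httplib.HTTPConnection(host)
        conn.request("HEAD", path)
        resp = conn.getresponse()
        conn.close()
        if resp.status == 200:
            return host, path              # usable right now
        if resp.status == 503:
            delay = resp.getheader("retry-after")
            print "%s busy, retry in %s seconds" % (host, delay or "?")
    return None

print pick_mirror(MIRRORS)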

All you'd have to do is have the request for the XML data handled by a
script on the server, rather than having it dumbly served out and
relying on the client to make all the decisions.  On that note, why
isn't the rather small repomd.xml file the deciding factor in the
current scheme, instead of the much larger metadata file it points to?
It doesn't seem very well planned out.
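To illustrate that last point: since repomd.xml is tiny, the client
could fetch it first and only pull the big metadata file when it has
actually changed (URL and cache path invented for the example):

# Fetch repomd.xml, compare it with the cached copy, and only go after
# the large metadata file when the small one differs.
import os, urllib2

REPOMD_URL = "http://download.example.org/fedora/repodata/repomd.xml"
CACHED = "/var/cache/yum/example/repomd.xml"

new = urllib2.urlopen(REPOMD_URL).read()
if os.path.exists(CACHED):
    old = open(CACHED).read()
else:
    old = ""

if new == old:
    print "repomd.xml unchanged -- no need to refetch the large metadata"
else:
    open(CACHED, "w").write(new)
    print "repomd.xml changed -- now download the files it lists"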

-- 
(Currently running FC4, occasionally trying FC5.)

Don't send private replies to my address, the mailbox is ignored.
I read messages from the public lists.

