On Fri, 2006-07-07 at 05:38 -0500, Gene Heskett wrote:
> After all my network probs, I thought I'd let yumex check things again.
> It got hung, so I killed it.  Then I had it clean the cache.  It's
> stuck now downloading primary.xml.gz for updates-testing in a repeating
> forever loop, wtf?

I've yet to see a case where you needed to clear the cache (all the
downloaded RPMs and headers); doing so is just a drain on bandwidth
(yours and the server's), as all the headers will have to be downloaded
again to resolve dependencies.

Cleaning the metadata is all I've ever seen that needs doing (it removes
the file that's used to check for updates, and the one yum complains
about when it isn't working).  If you abort in the middle of downloading
an RPM header or package, the broken download is resumed or restarted
automatically next time.

I've seen plenty of times where you do an update that works, then
moments later try another (on the same PC or a different one), and
you've unfortunately picked the moment the repo is being changed.
You're stuck repeatedly doing a "yum clean metadata" then a "yum update"
(sketched below my sig) for a while until the repos get their act
together.  They don't seem too quick at finishing updating themselves.

-- 
(Currently running FC4, occasionally trying FC5.)

Don't send private replies to my address, the mailbox is ignored.
I read messages from the public lists.
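
A minimal sketch of that recovery sequence, assuming you're doing it as
root at a shell prompt rather than through yumex:

  # Throw away only the repository metadata (repomd.xml, primary.xml.gz
  # and friends); the cached RPMs and headers are left alone.
  yum clean metadata

  # Fetch fresh metadata and retry the update; repeat both steps if the
  # mirror is still mid-sync.
  yum update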