Tim <ignored_mailbox <at> yahoo.com.au> writes:

> [...]
> ln -s /usr/bin/GET /usr/local/bin/GET
> (Make a link to the real file, from where the script is looking for it.
> That saves on duplicate files -- or, worse, on having two different
> versions of a file because one got automatically updated and the other
> didn't.)

[root@localhost ~]# ln -s /usr/bin/GET /usr/local/bin/GET
[root@localhost ~]# ll /usr/bin/GET
-rwxr-xr-x 1 root root 14673 Jul 12  2006 /usr/bin/GET
[root@localhost ~]# ll /usr/local/bin/GET
lrwxrwxrwx 1 root root 12 Apr 28 08:01 /usr/local/bin/GET -> /usr/bin/GET
[root@localhost ~]# date
Sat Apr 28 08:01:44 BST 2007

Thanks, that's exactly why I was hesitant to, for example, create the
directory myself, use "touch" to make the file, or find the file and
copy it over. Of course, I just created the link at one minute past the
hour, so I'll have to wait a bit to see what happens :)

> > So why does the cronjob example on the webpage have a URL in it, then?
>
> I don't really know; I don't know how the author's mind works. It'd be
> easier if they'd provided a comment saying "put the URI of your feed
> where I've placed a dummy URI", or prefaced it with a comment saying
> "auto-update URI". But I didn't see a hint in either direction. I got a
> 404 error when I tried that link. Try experimenting with the URI of a
> feed there.
>
> NB: I haven't downloaded and looked at it; I've just perused their
> website, as I noticed this thread.

Good to know that I'm not alone in finding the documentation on that
lacking. Yes, if I knew what was in the author's mind, that'd help :)
I might e-mail him; I don't know if he'll respond. It's a really great
piece of software otherwise, by the way.

-Thufir
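P.S. For anyone following along, here's a small sketch of the symlink
approach Tim describes, done in a throwaway temp directory instead of the
real /usr/bin and /usr/local/bin (the paths and the dummy GET script below
are stand-ins, not the actual libwww-perl tool):

```shell
#!/bin/sh
# Sketch: one real file, one symlink, so the two paths can never drift
# out of sync the way two independent copies could.
set -e

tmp=$(mktemp -d)
mkdir -p "$tmp/usr/bin" "$tmp/usr/local/bin"

# Stand-in for the real /usr/bin/GET script.
printf '#!/bin/sh\necho GET\n' > "$tmp/usr/bin/GET"
chmod +x "$tmp/usr/bin/GET"

# Make the link where the script expects to find the program.
ln -s "$tmp/usr/bin/GET" "$tmp/usr/local/bin/GET"

# Verify: the link points at the real file, and running it through the
# link executes the real file.
readlink "$tmp/usr/local/bin/GET"
"$tmp/usr/local/bin/GET"

rm -rf "$tmp"
```

Because the link resolves to the real file at run time, updating
/usr/bin/GET (e.g. via the package manager) is immediately visible
through /usr/local/bin/GET with no second copy to forget about.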