On Sun, August 03 at 5:39 AM EDT "Michael D. Crawford" <[EMAIL PROTECTED]> wrote:
>If wget fails partway through recursively downloading a big website, is
>there some way to start it over again, to have it download the rest of
>the site without downloading everything a second time?
>
>What I envision is for wget to see that I already have some HTML files
>in its download directory, and to follow the links in them, rather than
>fetching the files from the website. Only if it comes across a file
>that has not already been downloaded would it get the file from the
>web.
>
>I've experimented with lots of wget's command line options, and it
>seems to me like it ought to be able to do this, but I haven't been
>able to get it to work.

I'll have to assume you tried the "--continue" option? It may have
seemed to be redownloading, but it may just have been checking the
sizes of the files that already exist in the directories in question.
This option works great for large files, but if you are dealing with a
large number of small files it is probably just as quick to start
over, who knows?

HTH
Shawn Lamson
[EMAIL PROTECTED]
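P.S. Just to be concrete, the invocation I have in mind looks
something like this (with http://example.com/ standing in for your
actual site):

    wget --continue --recursive http://example.com/

--continue (-c) tells wget to resume partially transferred files
instead of fetching them from scratch. Note that it still contacts the
server once per file to compare sizes, which is why a tree of many
small files can take nearly as long the second time around.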