On Sat, Aug 31, 2002 at 05:19:19AM -0400, Michael D. Crawford wrote:
> I'd like to download a sequence of pages which are produced by someone's
> asp application so that I may read them while I am offline.
>
> Is there a parameter to wget that will allow me to do this?
>
> The URL for the first page is something like
>
> http://www.something.com/junk.asp&thepageIwant=1
>
> I can use the "--html-extension" to cause the page I download to have a
> .html extension, so my web browsers know what to do with the file.
> However, I don't seem to be able to get wget to follow the link within
> that page to the next page, because the link is given as a parameter to
> an asp application. That is, there is HTML like this:
>
> <p>Click the following to go to the
> <a href="http://www.something.com/junk.asp&thepageIwant=2">next
> page</a>.</p>
>
> What I need is for wget to understand that stuff following an "&" in a
> URL indicates that it's a distinctly different page, and it should go
> recursively retrieve it. The --recursive option doesn't seem to help.
>
> Any help you can give me is appreciated.
I use a sequence I learned from the Linux Journal site:

    wget -m -L -t 5 -w 5 'http://someplace.com/some.asp&page=1'

(Note the quotes around the URL -- without them the shell treats the "&"
as its background-job operator, and wget never sees the rest of the URL.)
When I've used this on asp or cgi pages, it has done just fine. The -m
flag turns on mirroring, which is recursion to infinite depth plus
timestamping; -L restricts wget to relative links, -t 5 sets the retry
count, and -w 5 waits five seconds between requests. Since the links in
your example are absolute URLs, you may need to drop -L for wget to
follow them.

--Matthew
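If recursion still won't follow the links, a blunt workaround is to fetch
the numbered pages explicitly in a shell loop. A rough sketch, assuming
the URL shape from your message and that the pages run from 1 to 20
(adjust both to taste):

    # Fetch pages 1..20 one at a time, waiting 5 seconds between
    # requests; the quotes keep the shell from treating the "&" as
    # its background-job operator.
    for i in $(seq 1 20); do
        wget --html-extension -w 5 "http://www.something.com/junk.asp&thepageIwant=$i"
    done

One more thing worth checking: in a normal query string the first
parameter is introduced by "?" (junk.asp?thepageIwant=1) and "&" only
separates subsequent parameters. If the links really do start with "&"
right after ".asp", the server is doing something unusual there, and that
alone may be confusing wget's link-following.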