>
> I would use perl and Net::HTTP for this. But then I'm
> familiar with both.
>
Really? You spider the site with that? One problem I always had with
wget is that it only gathers URLs from HTML. I wanted it to support at
least CSS, and maybe even JavaScript/VBScript URLs.
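There's no built-in support for that, but a rough sketch of pulling `url(...)` references out of a CSS file (so they can be handed back to wget) might look like the following. The function name and quoting handling are my own assumptions; it won't catch the bare `@import "x";` form or URLs built by scripts:

```shell
# Hypothetical helper: list the url(...) targets found in a CSS file,
# stripping optional single or double quotes around each URL.
extract_css_urls() {
  grep -oE 'url\([^)]+\)' "$1" \
    | sed -E "s/^url\(['\"]?//; s/['\"]?\)$//"
}

# Demo input and run:
printf 'body { background: url("img/bg.png"); }\n' >  demo.css
printf '.f { src: url(fonts/f.woff); }\n'          >> demo.css
extract_css_urls demo.css
```

Each extracted URL could then be resolved against the stylesheet's base URL and fetched with wget.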
doblejota wrote:
> Hi,
> I'm trying to get a section of a web page using wget. I edit a
> previously downloaded page source, leaving just an initial section.
> Then I execute

wget is a good tool for downloading full websites, and a good enough
tool for downloading a single URL (though there are better ones for
this, e.g. curl). But
Hi,
I'm trying to get a section of a web page using wget. I edit a
previously downloaded page source, leaving just an initial section.
Then I execute

    wget -c url

and I obtain:

    Continued download failed on this file, which conflicts with `-c'.
    Refusing to truncate existing file 'name.ext'. (Name of file I'v
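That error is expected: `-c` only resumes an interrupted download of the same, unmodified file, so wget refuses to touch a local copy that has been edited down. If the goal is keeping just the initial section of a page, one workaround is to cut a saved copy locally; the content and byte count below are placeholders for illustration:

```shell
# Stand-in for a previously downloaded page:
printf '<html><head>...</head><body>long body</body></html>' > full.html

# Keep only the first 18 bytes as the "initial section":
head -c 18 full.html > section.html
```

Where the server supports HTTP Range requests, `curl -r 0-2047 url -o section.html` would fetch only the first 2 KiB in the first place, without downloading the rest.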