Re: Fetch the content of a website

2005-09-11 Thread Todd Lewis
If you know what you are looking for on a particular site, some helpful tools can be found on CPAN: http://www.cpan.org/ I've found HTML::TableExtract to be very valuable for retrieving info, since a lot of the info on a web page is stored in table format. Mads N. Vestergaard wrote:
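A minimal sketch of the HTML::TableExtract approach Todd describes, run against an inline HTML string standing in for a fetched page (the table contents and header names here are hypothetical, purely for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::TableExtract;

# Hypothetical HTML standing in for a page fetched from the site.
my $html = <<'HTML';
<table>
  <tr><th>Name</th><th>Price</th></tr>
  <tr><td>Widget</td><td>9.99</td></tr>
  <tr><td>Gadget</td><td>4.50</td></tr>
</table>
HTML

# Match only tables whose header row contains these column names;
# the data rows then come back without the header row itself.
my $te = HTML::TableExtract->new( headers => [ 'Name', 'Price' ] );
$te->parse($html);

for my $ts ( $te->tables ) {
    for my $row ( $ts->rows ) {
        print join( ', ', @$row ), "\n";
    }
}
```

The `headers` option is what makes the module valuable on messy real-world pages: you name the columns you care about and it finds the matching table for you.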

Re: Fetch the content of a website

2005-09-11 Thread Chris Devers
On Sun, 11 Sep 2005, Mads N. Vestergaard wrote: > I have a few minor problems. > I need to get the content of a website, and search a bit in it. > I'm using the package called LWP::Simple Not to complicate things, but have you looked at WWW::Mechanize? http://search.cpan.org/~petdance/WWW-Me
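A short sketch of what the WWW::Mechanize route might look like for this task, assuming the poster's placeholder URL and a hypothetical search for links (the URL and the "download" pattern are illustrative, not from the thread):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;

# autocheck => 0 so a failed fetch of the placeholder URL warns
# instead of dying.
my $mech = WWW::Mechanize->new( autocheck => 0 );
my $url  = 'http://domain.tld/site.ext';    # placeholder from the thread

$mech->get($url);
if ( $mech->success ) {
    my $content = $mech->content;           # the raw page, as with LWP::Simple

    # "search a bit in it" -- Mechanize also parses links for you:
    for my $link ( $mech->find_all_links( text_regex => qr/download/i ) ) {
        print $link->url, "\n";
    }
}
```

Compared with LWP::Simple, Mechanize keeps cookies, can follow links and submit forms, so it scales better once "search a bit" turns into navigating the site.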

Re: Fetch the content of a website

2005-09-11 Thread Mads N. Vestergaard
Hi, Well, it has to be Perl, since there is also more to it than that, but my debugging shows that it is the slow part. Mads Stephen York wrote: | Hi, | Does it have to be perl? | I'd personally use the shell command called wget. | Steve
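For reference, the wget approach Steve suggests is a one-liner; the URL below is the placeholder from the original post, not a real site:

```shell
# Fetch the page quietly and write it to stdout,
# where a pipe or a Perl script can pick it up.
wget -q -O - http://domain.tld/site.ext
```

Since Mads says the fetch is the slow part, switching tools alone is unlikely to help; the bottleneck is most likely the network round-trip, whichever client makes it.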

Fetch the content of a website

2005-09-11 Thread Mads N. Vestergaard
Hey Perl Beginners, I have a few minor problems. I need to get the content of a website, and search a bit in it. I'm using the package called LWP::Simple, and then I can get it like this: $url = "http://domain.tld/site.ext"; my $cont = get $url;
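Filling out the snippet from the post into a runnable sketch: `get` from LWP::Simple returns undef on failure, so it is worth checking before searching. The URL is the poster's placeholder, and the title-matching regex is just one hypothetical way to "search a bit" in the content:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;

my $url  = 'http://domain.tld/site.ext';    # placeholder URL from the post
my $cont = get($url);                       # returns undef on failure

if ( defined $cont ) {
    # Example search: pull out the page title, if any.
    if ( $cont =~ m{<title>([^<]*)</title>}i ) {
        print "Title: $1\n";
    }
}
else {
    warn "Couldn't fetch $url\n";
}
```

Checking for undef also surfaces the failure case cleanly, which matters when the slow or flaky part of the script is the fetch itself.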