Thanks, but complicated for true beginners. First issue was which of
three choices was XML::Simple - I chose to install XML-Simple-DTDReader
over XML-Simpler or Test-XML-Simple. I later read that XML::Simple
probably comes with ActivePerl.
Then I read the FAQ for XML::Simple and found t
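For reference, a minimal XML::Simple sketch. The sample record below is a made-up stand-in, shaped only loosely like one entry from a vote page:

```perl
use strict;
use warnings;
use XML::Simple;

# Hypothetical sample record (not the real vote-page schema)
my $xml = <<'XML';
<recorded-vote>
  <legislator party="D" state="HI">Abercrombie</legislator>
  <vote>Yea</vote>
</recorded-vote>
XML

my $ref = XMLin($xml);

# When an element has attributes, its text lands under the 'content' key
print "$ref->{legislator}{content} voted $ref->{vote}\n";
```

With the default options, XMLin strips the root element and returns a hash of its children, so the legislator's name is reached through the 'content' key.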
The script below scrapes a House of Representatives vote page, which is in
XML, and saves it in a spreadsheet that is best opened read-only as an
.xls file. How can I:
1) scrape multiple vote pages into individual spreadsheets with a single
script?
2) Only scrape columns C, F, G, H in the resu
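For 1), one sketch is to loop over roll-call numbers and write one file per vote. The URL pattern in the comment and the field names below are guesses, not the real vote-page schema; the network fetch is left as a comment and a sample record stands in for parsed XML:

```perl
use strict;
use warnings;

# To fetch for real you might use LWP::Simple inside the loop:
#   use LWP::Simple;
#   my $xml = get(sprintf 'http://clerk.house.gov/evs/2006/roll%03d.xml', $roll);

# Keep only the columns of interest (field names are hypothetical)
sub vote_row {
    my ($rec) = @_;
    return join ',', @{$rec}{qw(name party state vote)};
}

for my $roll (140, 141) {
    my $out = "roll$roll.csv";
    open my $fh, '>', $out or die "Can't write $out: $!";
    # In a real run you would loop over the parsed vote records here;
    # a single stand-in record shows the shape:
    print {$fh} vote_row({ name => 'Abercrombie', party => 'D',
                           state => 'HI', vote => 'Yea' }), "\n";
    close $fh or die "close failed: $!";
}
```

Writing comma-separated values keeps the "spreadsheet" part simple: Excel opens a .csv directly, which sidesteps the read-only .xls workaround.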
On Thu, 01 Jun 2006 20:14:36 -0400, David Romano <[EMAIL PROTECTED]>
wrote:
Hi kc68,
On 6/1/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
I'm not getting past printing to the screen and to a file the page in the
script below but without the list of names in the middle.
I'm not getting past printing to the screen and to a file the page in the
script below but without the list of names in the middle. Without the if
line I get an endless scroll. I want to be able to pull in all names and
then isolate and print one (e.g. abercrombie). Guidance and actual scr
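Once all the names are in a list, isolating one comes down to a grep with a case-insensitive regex. A self-contained sketch (the three-name list is a stand-in for the scraped names):

```perl
use strict;
use warnings;

my @names = qw(Abercrombie Ackerman Aderholt);   # stand-in for scraped names
my ($match) = grep { /abercrombie/i } @names;    # isolate one member
print "$match\n" if defined $match;
```

The /i flag lets a lowercase search term like "abercrombie" match the capitalized name on the page.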
On Mon, 22 May 2006 18:00:25 -0400, Jaime Murillo <[EMAIL PROTECTED]>
wrote:
On Monday 22 May 2006 14:49, [EMAIL PROTECTED] wrote:
When I execute the script below, I get the error message "No such file or
directory at simple2.pl line 21." Line 21 is the Open OUT statement.
This script para
When I execute the script below, I get the error message "No such file or
directory at simple2.pl line 21." Line 21 is the Open OUT statement.
This script parallels a tutorial script that does work and I don't see the
error. It does print to the screen if I comment out the Open OUT line.
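That error from open usually means the directory part of the path doesn't exist (open creates files, not directories) or the path is mistyped. Including $! in the die message shows the operating system's reason. A minimal sketch using the three-argument form of open:

```perl
use strict;
use warnings;

my $path = 'out.txt';   # a path whose directory already exists
open my $out, '>', $path or die "Can't open $path: $!";
print {$out} "hello\n";
close $out or die "close failed: $!";
```

The lexical filehandle and three-argument open are also safer than a bareword OUT handle, since the mode and the path can never run together.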
On Tue, 11 Apr 2006 18:12:16 -0400, <[EMAIL PROTECTED]> wrote:
I am slowly making my way through the process of scraping the data
behind a form and can now get five results plus a series of links using
the script below. I need help in doing the following: 1) Eliminating
all material on the
I am slowly making my way through the process of scraping the data behind
a form and can now get five results plus a series of links using the
script below. I need help in doing the following: 1) Eliminating all
material on the page other than the list and the links (and ultimately
elimina
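For point 1), one way to keep just the links is to match href attributes out of the fetched HTML. A regex sketch on an inline sample string (a real parser such as HTML::LinkExtor is more robust against messy markup):

```perl
use strict;
use warnings;

# Inline sample standing in for the fetched page
my $html = '<a href="/page2">next</a> <a href="/page3">more</a>';

my @links = $html =~ /href="([^"]+)"/g;   # all captured href values
print "$_\n" for @links;
```

In list context a /g match returns every capture, so @links holds each link target in page order.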
On Mon, 10 Apr 2006 19:03:25 -0400, Charles K. Clarkson
<[EMAIL PROTECTED]> wrote:
[EMAIL PROTECTED] wrote:
: I'm trying to scrape the data behind the form at
: http://www.theblackchurch.com/modules.php?name=Locator As a true
: beginner with Perl (I know some php), I'm working from training
I'm trying to scrape the data behind the form at
http://www.theblackchurch.com/modules.php?name=Locator As a true beginner
with Perl (I know some php), I'm working from training scripts that scrape
from another site. There are four scripts of increasing complexity, but
on the simplest I g
On Fri, 07 Apr 2006 16:02:53 -0400, Oliver Block <[EMAIL PROTECTED]>
wrote:
Hi,
I understand regex, but the following fails:
open PAGE, 'c://redcross.htm';
while( my $line = <PAGE> ) {
$line =~ /Health and Safety Classes/
print "$1\n";
}
What fails? You forgot a ';' after the regex, but I guess
I'm trying to learn web scraping and am stopped at the basic point of
scraping a portion
of a web page. I'm able to scrape a full page and save it as *.xml or
*.htm, and I think
I understand regex, but the following fails:
# Prints a portion of a red cross web page to a new h
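A corrected, self-contained version of the snippet quoted above: it writes a small stand-in for redcross.htm first so it runs anywhere, adds the missing ';', reads from the filehandle, and wraps the pattern in parentheses so $1 is actually set:

```perl
use strict;
use warnings;

# Create a small stand-in input so the example is self-contained
open my $w, '>', 'redcross.htm' or die "Can't write: $!";
print {$w} "<p>Health and Safety Classes</p>\n";
close $w;

my @found;
open my $page, '<', 'redcross.htm' or die "Can't open: $!";
while ( my $line = <$page> ) {
    if ( $line =~ /(Health and Safety Classes)/ ) {   # parens populate $1
        push @found, $1;
    }
}
close $page;
print "$_\n" for @found;
```

Without the capturing parentheses, $1 stays undefined even when the match succeeds, which is why the original printed nothing useful.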