On Mon, Jan 12, 2009 at 15:26,  <tt.traduto...@gmail.com> wrote:
> Dear all,
> I cannot perform a search using Perl.
> I get the error:
>
> Couldn't get http://www.google.com/search?q=traducao at browser.pl
> line 13.
>
> Here is the bare script:
>
> #!/usr/bin/perl
>
> use strict;
> use CGI::Carp qw(fatalsToBrowser);
> use CGI qw(:standard);
> print header;
>
> my $url = 'http://www.google.com/search?q=traducao';
> # Just an example: the Google search URL we want to fetch
>
> use LWP::Simple;
> my $content = get $url;
> die "Couldn't get $url" unless defined $content;
>
> # Then go do things with $content, like this:
>
> print $content;

If you take a look at Google's robots.txt, you will see

Disallow: /search

This means that all URL paths starting with /search should not be
accessed by automated processes.  The error you are getting is Google
trying to stop you from violating their Terms of Service (your
User-Agent string is obviously that of a robot).  Google used to
provide a SOAP interface for automated searches*, but I think they
have shut that service down.  It looks like it is shut down for new
users while old users are unaffected, so you may be able to get a key
from an existing user who no longer uses it.
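
By the way, LWP::Simple's get hides the HTTP status from you, so the
die tells you nothing about why the request failed.  Here is a
minimal sketch using LWP::UserAgent instead (nothing beyond stock
libwww-perl; you will most likely see a 403 Forbidden from Google):

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

my $url = 'http://www.google.com/search?q=traducao';
my $ua  = LWP::UserAgent->new;   # identifies itself as libwww-perl

my $response = $ua->get($url);

# Report the status line instead of dying with no detail.
die "Couldn't get $url: ", $response->status_line, "\n"
    unless $response->is_success;

print $response->decoded_content;

If you would rather have a client that plays by the rules for you,
look at LWP::RobotUA, which fetches and obeys robots.txt
automatically and refuses disallowed requests itself.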

Another allowed option is to use their AJAX search API** (see the
sketch below the links).

* http://code.google.com/apis/soapsearch/
** http://code.google.com/apis/ajaxsearch/
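
To give a flavour of the AJAX route, here is a rough sketch against
its REST endpoint.  The URL and the JSON field names below are from
memory of the docs at the ** link, so double-check them there before
relying on this:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple qw(get);
use URI::Escape qw(uri_escape);
use JSON qw(decode_json);

my $query = 'traducao';

# v=1.0 is the protocol version the docs ask for.
my $url = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q='
        . uri_escape($query);

my $json = get($url);
die "Couldn't get $url" unless defined $json;

# The interesting bits live under responseData/results.
my $data = decode_json($json);
for my $result (@{ $data->{responseData}{results} }) {
    print $result->{titleNoFormatting}, "\n    ", $result->{url}, "\n";
}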

-- 
Chas. Owens
wonkden.net
The most important skill a programmer can have is the ability to read.
