Hi all, I'm learning web scraping with Python from the following link: http://www.packtpub.com/article/web-scraping-with-python
To work with it, mechanize needs to be installed. I installed it with:

    sudo apt-get install python-mechanize

As given in the tutorial, I tried the following code:

    import mechanize

    BASE_URL = "http://www.packtpub.com/article-network"
    br = mechanize.Browser()
    data = br.open(BASE_URL).get_data()

and received the following error:

    File "webscrap.py", line 4, in <module>
      data = br.open(BASE_URL).get_data()
    File "/usr/lib/python2.6/dist-packages/mechanize/_mechanize.py", line 209, in open
      return self._mech_open(url, data, timeout=timeout)
    File "/usr/lib/python2.6/dist-packages/mechanize/_mechanize.py", line 261, in _mech_open
      raise response
    mechanize._response.httperror_seek_wrapper: HTTP Error 403: request disallowed by robots.txt

Any ideas? Welcome.
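For context, the 403 here comes from mechanize itself: by default it downloads the site's robots.txt and refuses to open any URL those rules disallow. A minimal offline sketch of that mechanism, using the standard library's robots.txt parser and a hypothetical rule set (the Disallow line below is invented for illustration, not packtpub.com's real policy):

```python
# Illustrates why mechanize raised "HTTP Error 403: request disallowed by
# robots.txt": a robots.txt rule set can forbid specific paths.
# These rules are hypothetical, chosen to mirror the failing URL.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /article-network",
])

# A disallowed path is rejected; other paths are allowed.
print(rp.can_fetch("*", "http://www.packtpub.com/article-network"))  # False
print(rp.can_fetch("*", "http://www.packtpub.com/"))                 # True
```

If the site's terms of use permit scraping, mechanize can be told to skip this check by calling br.set_handle_robots(False) on the Browser before br.open().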
-- http://mail.python.org/mailman/listinfo/python-list