Re: python 2 urlopen vs python 3 urlopen

2018-08-27 Thread Terry Reedy
On 8/27/2018 1:25 PM, Sean Darcy wrote: python 2 : python Python 2.7.15 (default, May 15 2018, 15:37:31) . import urllib2 res = urllib2.urlopen('https://api.ipify.org').read() print res www.xxx.yyy.zzz In Python 2, this is the printed representation of a bytestring. python

Re: python 2 urlopen vs python 3 urlopen

2018-08-27 Thread Chris Angelico
On Tue, Aug 28, 2018 at 3:25 AM, Sean Darcy wrote: > python 2 : > > python > Python 2.7.15 (default, May 15 2018, 15:37:31) > . >>>> import urllib2 >>>> res = urllib2.urlopen('https://api.ipify.org').read() >>>> print res > www.

python 2 urlopen vs python 3 urlopen

2018-08-27 Thread Sean Darcy
python 2 : python Python 2.7.15 (default, May 15 2018, 15:37:31) . >>> import urllib2 >>> res = urllib2.urlopen('https://api.ipify.org').read() >>> print res www.xxx.yyy.zzz python3 python3 Python 3.6.6 (default, Jul 19 2018, 16:29:00) ... >>&
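The difference the thread is chasing: in Python 3 the module moved to urllib.request and read() returns bytes, not str, so the same session prints a bytes repr instead of the bare address. A minimal sketch of the fix (the decode-to-ASCII assumption is mine; api.ipify.org returns a plain dotted address per the thread):

```python
from urllib.request import urlopen  # Python 2 spelling: urllib2.urlopen

body = b"203.0.113.7"            # stand-in for what resp.read() returns in Python 3
print(body)                      # bytes repr: b'203.0.113.7'
print(body.decode("ascii"))      # the text itself, as Python 2's print showed it

def fetch_ip(url="https://api.ipify.org"):
    # Hypothetical helper: decode the bytes body to get Python 2-style output.
    with urlopen(url) as resp:
        return resp.read().decode("ascii")
```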

Re: urllib2 urlopen takes too much time

2017-12-08 Thread Python
traced > > >down to _socket.recv. I am calling some web services and each of them > > >uses about 0.2 sec and 99% of this time is spent on urllib2.urlopen, > > >while the rest of the call is finished in milliseconds. > > > > What happens if you use urlopen

Re: urllib2 urlopen takes too much time

2017-12-03 Thread Python
traced > > >down to _socket.recv. I am calling some web services and each of them > > >uses about 0.2 sec and 99% of this time is spent on urllib2.urlopen, > > >while the rest of the call is finished in milliseconds. > > > > What happens if you use urlopen(

Re: urllib2 urlopen takes too much time

2017-12-03 Thread cpollio
> >uses about 0.2 sec and 99% of this time is spent on urllib2.urlopen, > >while the rest of the call is finished in milliseconds. > > What happens if you use urlopen() by itself? > -- > Aahz (a...@pythoncraft.com) <*> http://www.pythoncraft.com/ >

Strange urlopen error

2016-04-12 Thread Mike Driscoll
token} values = json.dumps(payload) req = urllib2.Request(url, values, headers) try: response = urllib2.urlopen(req, timeout=30) break except IOError, e: if e.errno != errno.EINTR: print e.errno raise We log the errno and the raised exception. The exception is: IOError: And

Re: urlopen, six, and py2

2016-03-02 Thread Chris Angelico
m the first one having this > problem... > > (until this difference with urlopen I have found six to be extremely good at > helping not caring about python versions at all) What happens if you use 'requests' rather than urlopen? My guess is that requests will already have dea

Re: urlopen, six, and py2

2016-03-02 Thread Fabien
On 03/02/2016 03:35 PM, Matt Wheeler wrote: I agree that six should probably handle this, Thank you Matt and Chris for your answers. Do you think I should open an issue on six? It sounds unlikely that I am the first one having this problem... (until this difference with urlopen I have

Re: urlopen, six, and py2

2016-03-02 Thread Chris Angelico
On Thu, Mar 3, 2016 at 1:35 AM, Matt Wheeler wrote: >> from six.moves.urllib.request import urlopen >> >> try: >> with urlopen('http://www.google.com') as resp: >> _ = resp.read() >> except AttributeError: >> # p

Re: urlopen, six, and py2

2016-03-02 Thread Matt Wheeler
eed", using the "with" construction is an added feature, not a burden! > from six.moves.urllib.request import urlopen > > try: > with urlopen('http://www.google.com') as resp: > _ = resp.read() > except AttributeError: > # python 2 >

urlopen, six, and py2

2016-03-02 Thread Fabien
Hi, it seems that urlopen had no context manager for versions < 3. The following code therefore will crash on py2 but not on py3. from six.moves.urllib.request import urlopen with urlopen('http://www.google.com') as resp: _ = resp.read() Error: AttributeError: addinfourl in
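One workaround the thread converges on: contextlib.closing supplies the missing __enter__/__exit__ on Python 2's response object, so one `with` line works on both versions. A network-free sketch using a stand-in class for the Python 2 response:

```python
from contextlib import closing

class LegacyResponse:
    """Stand-in for Python 2's urlopen() result: has close(), no __enter__/__exit__."""
    def __init__(self):
        self.closed = False
    def read(self):
        return b"payload"
    def close(self):
        self.closed = True

resp = LegacyResponse()
with closing(resp):   # closing() adds the context-manager protocol around close()
    data = resp.read()

assert data == b"payload"
assert resp.closed    # close() ran on exiting the with-block
```

In real code the same pattern is `with closing(urlopen(url)) as resp:` on either Python version.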

Re: urllib2.urlopen() crashes on Windows 2008 Server

2015-12-06 Thread Ulli Horlacher
Dennis Lee Bieber wrote: > >> Connection reset by peer. > >> > >> An existing connection was forcibly closed by the remote host. > > > >This is not true. > >The server is under my control. The client has terminated the connection > >(or a router between). > The odds are still good

Re: urllib2.urlopen() crashes on Windows 2008 Server

2015-12-04 Thread Ulli Horlacher
line 1992, in > > File "", line 180, in main > > File "", line 329, in get_ID > > File "", line 1627, in check_7z > > File "C:\Software\Python\lib\urllib2.py", line 154, in urlopen > > File "C:\Software\Python\lib\urllib2.p

urllib2.urlopen() crashes on Windows 2008 Server

2015-12-03 Thread Ulli Horlacher
et_ID File "", line 1627, in check_7z File "C:\Software\Python\lib\urllib2.py", line 154, in urlopen File "C:\Software\Python\lib\urllib2.py", line 431, in open File "C:\Software\Python\lib\urllib2.py", line 449, in _open File "C:\Software\Pyth

Re: Using urlopen in Python2 and Python3

2015-08-24 Thread Cecil Westerhof
On Monday 24 Aug 2015 19:37 CEST, Ned Batchelder wrote: > On Monday, August 24, 2015 at 1:14:20 PM UTC-4, Cecil Westerhof wrote: >> In Python2 urlopen is part of urllib, but in Python3 it is part of >> urllib.request. I solved this by the

Re: Using urlopen in Python2 and Python3

2015-08-24 Thread Ned Batchelder
On Monday, August 24, 2015 at 1:14:20 PM UTC-4, Cecil Westerhof wrote: > In Python2 urlopen is part of urllib, but in Python3 it is part of > urllib.request. I solved this by the following code: > > from platf

Using urlopen in Python2 and Python3

2015-08-24 Thread Cecil Westerhof
In Python2 urlopen is part of urllib, but in Python3 it is part of urllib.request. I solved this by the following code: from platform import python_version if python_version()[0] < '3': from urllib
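A common alternative to comparing python_version() strings (which also misfires once a major version has two digits) is to try the Python 3 import and fall back on ImportError:

```python
try:
    from urllib.request import urlopen   # Python 3 location
except ImportError:
    from urllib import urlopen           # Python 2 fallback

# Either branch binds the same name; the rest of the program just calls urlopen(...).
```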

Re: urllib2.urlopen error "socket.error: [Errno 104] Connection reset by peer"

2015-05-12 Thread dieter
"Jia CHEN" writes: > I have the error below when trying to download the html content of a webpage. > I can open this webpage in a browser without any problem. "Connection reset by peer" means that the other side (the HTTP server in your case) has closed the connection. It may have looked at th

urllib2.urlopen error "socket.error: [Errno 104] Connection reset by peer"

2015-05-04 Thread Jia CHEN
(default, Mar 22 2014, 22:59:56) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import urllib2 request = urllib2.Request('http://guggenheiminvestments.com/products/etf/gsy/holdings') re

Re: Do I need to call close on the handle returned by urlopen?

2014-07-14 Thread Mark Lawrence
On 14/07/2014 15:59, krzysztof.zelechow...@syncron.com wrote: The tutorial says that I should use "with open" to close the file handle properly. The reference documentation for urlopen mentions that the handle returned is like a file handle but the code samples below do not bother to

Re: Do I need to call close on the handle returned by urlopen?

2014-07-14 Thread krzysztof.zelechowski
http://bugs.python.org/issue12955 User wrote in the discussion-group message: lq0sar$r6e$1...@mx1.internetia.pl... The tutorial says that I should use "with open" to close the file handle properly. The reference documentation for urlopen mentions that the handle returned

Re: Do I need to call close on the handle returned by urlopen?

2014-07-14 Thread Skip Montanaro
> The tutorial says that I should use "with open" to close the file > handle properly. The reference documentation for urlopen mentions > that the handle returned is like a file handle but the code samples > below do not bother to close the handle at all. Isn’t it > i

Do I need to call close on the handle returned by urlopen?

2014-07-14 Thread krzysztof.zelechowski
The tutorial says that I should use "with open" to close the file handle properly. The reference documentation for urlopen mentions that the handle returned is like a file handle but the code samples below do not bother to close the handle at all. Isn’t it inconsistent?
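The practical answer in the thread: closing explicitly releases the socket promptly instead of leaving it to garbage collection, even if the docs' short samples skip it. A sketch without a context manager (so it reads the same on Python 2 and 3):

```python
from urllib.request import urlopen

def fetch(url):
    resp = urlopen(url)
    try:
        return resp.read()
    finally:
        resp.close()   # release the socket promptly, even if read() raises
```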

Re: fork seems to make urlopen into a black hole?

2014-01-14 Thread Chris Angelico
On Wed, Jan 15, 2014 at 7:04 AM, BobAalsma wrote: > A program took much too long to check some texts collected from web pages. > As this could be made parallel easily, I put in fork. Rather than using the low-level fork() function, you may find it easier to manage things if you use the multiproce

fork seems to make urlopen into a black hole?

2014-01-14 Thread BobAalsma
A program took much too long to check some texts collected from web pages. As this could be made parallel easily, I put in fork. And the result seems to be that the program simply stops in the line with urlopen. Any suggestions? Relevant part: try: print 'urlopen by', k
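Following the reply's suggestion, a sketch of the same fan-out with multiprocessing instead of raw fork (function names are mine; the timeout is an assumption that keeps one stuck connection from hanging a worker):

```python
from multiprocessing import Pool
from urllib.request import urlopen

def fetch(url):
    # Bound the wait so a silent connection fails instead of blocking forever.
    with urlopen(url, timeout=10) as resp:
        return resp.read()

def fetch_all(urls, workers=4):
    # Pool handles the forking, result passing, and cleanup that os.fork() leaves to you.
    with Pool(workers) as pool:
        return pool.map(fetch, urls)
```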

Re: Send array back in result from urllib2.urlopen(request, postData)

2014-01-11 Thread vanommen . robert
I understand the problem now. The echo is a string, which can contain text but no array. I've changed the PHP script so I get only text separated with commas, and in Python I separate the text fields and declare them in the array, with the split method I saw in the answer of J. Gordon. Thank yo

Re: Send array back in result from urllib2.urlopen(request, postData)

2014-01-10 Thread MRAB
On 2014-01-10 20:57, vanommen.rob...@gmail.com wrote: Hello, I have a Raspberry Pi with 10 temperature sensors. I send the data from the sensors and some other values with json encoding and: result = urllib2.urlopen(request, postData) to an online PHP script which places the data in a mysql

Re: Send array back in result from urllib2.urlopen(request, postData)

2014-01-10 Thread Denis McMahon
On Fri, 10 Jan 2014 12:57:59 -0800, vanommen.robert wrote: > Hello, > > I have a Raspberry Pi with 10 temperature sensors. I send the data from > the sensors and some other values with json encoding and: > > result = urllib2.urlopen(request, postData) > > to a online PH

Re: Send array back in result from urllib2.urlopen(request, postData)

2014-01-10 Thread Dave Angel
On Fri, 10 Jan 2014 12:57:59 -0800 (PST), vanommen.rob...@gmail.com wrote: No idea about the php.. In Python, when I do para = result.read() print para the output is: [null,null,null,null,null,"J"] That's a string that just looks like a list. This is correct according to the data in P

Re: Send array back in result from urllib2.urlopen(request, postData)

2014-01-10 Thread John Gordon
In vanommen.rob...@gmail.com writes: > result = urllib2.urlopen(request, postData) > para = result.read() > print para > the output is: > [null,null,null,null,null,"J"] > print para[1] > the output is: > n Probably because para is a string with the v

Send array back in result from urllib2.urlopen(request, postData)

2014-01-10 Thread vanommen . robert
Hello, I have a Raspberry Pi with 10 temperature sensors. I send the data from the sensors and some other values with json encoding and: result = urllib2.urlopen(request, postData) to an online PHP script which places the data in a mysql database. In the result: result.read() I am trying to
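As the replies point out, the `[null,null,null,null,null,"J"]` body is JSON text, so indexing the string yields single characters; parsing it yields the real list:

```python
import json

body = '[null,null,null,null,null,"J"]'   # what result.read() returned, as text
values = json.loads(body)                  # now a real Python list

print(body[1])     # n      -- indexing the string gives a character
print(values[1])   # None   -- indexing the parsed list gives the value
print(values[5])   # J
```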

Re: "The urlopen() and urlretrieve() functions can cause arbitrarily long delays"

2013-02-24 Thread Thomas Rachel
On 24.02.2013 20:27, 7segment wrote: When in doubt, check some other way, such as with a browser. Thank you Ian. Browser is not a good idea, because I need this tool to work automatically. I don't have time to check and compare the response times manually and put them into the database. Of

Re: "The urlopen() and urlretrieve() functions can cause arbitrarily long delays"

2013-02-24 Thread 7segment
On Sun, 24 Feb 2013 11:55:09 -0700, Ian Kelly wrote: > On Sun, Feb 24, 2013 at 10:48 AM, 7segment <7segm...@live.com> wrote: >> Hi! >> >> The subject is a segment of a sentence which I copied from Python's >> official homepage. In whole, it reads: >> &

Re: "The urlopen() and urlretrieve() functions can cause arbitrarily long delays"

2013-02-24 Thread 7segment
's >>> official homepage. In whole, it reads: >>> >>> "The urlopen() and urlretrieve() functions can cause arbitrarily long >>> delays while waiting for a network connection to be set up. This means >>> that it is difficult to build an interactive Web clien

Re: "The urlopen() and urlretrieve() functions can cause arbitrarily long delays"

2013-02-24 Thread MRAB
On 2013-02-24 18:55, Ian Kelly wrote: On Sun, Feb 24, 2013 at 10:48 AM, 7segment <7segm...@live.com> wrote: Hi! The subject is a segment of a sentence which I copied from Python's official homepage. In whole, it reads: "The urlopen() and urlretrieve() functions can cause

Re: "The urlopen() and urlretrieve() functions can cause arbitrarily long delays"

2013-02-24 Thread Ian Kelly
On Sun, Feb 24, 2013 at 10:48 AM, 7segment <7segm...@live.com> wrote: > Hi! > > The subject is a segment of a sentence which I copied from Python's > official homepage. In whole, it reads: > > "The urlopen() and urlretrieve() functions can cause arbitrarily

"The urlopen() and urlretrieve() functions can cause arbitrarily long delays"

2013-02-24 Thread 7segment
Hi! The subject is a segment of a sentence which I copied from Python's official homepage. In whole, it reads: "The urlopen() and urlretrieve() functions can cause arbitrarily long delays while waiting for a network connection to be set up. This means that it is difficult t
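The delay the docs warn about can be bounded with urlopen's timeout argument (or socket.setdefaulttimeout for a global default). A sketch; treating expiry as "no data" is a hypothetical policy, not the library's behavior:

```python
import socket
import urllib.error
from urllib.request import urlopen   # urllib2.urlopen in Python 2

def fetch_with_deadline(url, seconds=5.0):
    """Give up after `seconds` instead of waiting arbitrarily long."""
    try:
        with urlopen(url, timeout=seconds) as resp:
            return resp.read()
    except (socket.timeout, urllib.error.URLError):
        # Hypothetical policy: treat a timeout or unreachable host as "no data".
        return None
```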

Re: Urllib's urlopen and urlretrieve

2013-02-22 Thread MRAB
[snip] > As for which version if Python, I have been using Python 2 to learn on > as I heard that Python 3 was still largely unadopted due to a lack of > library support etc... by comparison. Are people adopting it fast > enough now that I should consider learning on 3 instead of 2? > [snip] You s

Re: Urllib's urlopen and urlretrieve

2013-02-22 Thread Dave Angel
On 02/22/2013 12:09 AM, qoresu...@gmail.com wrote: Initially I was just trying the html, but later when I attempted more complicated sites that weren't my own I noticed that large bulks of the site were lost in the process. The urllib code essentially looks like what I was trying but it didn't

Re: Urllib's urlopen and urlretrieve

2013-02-21 Thread qoresucks
Initially I was just trying the html, but later when I attempted more complicated sites that weren't my own I noticed that large bulks of the site were lost in the process. The urllib code essentially looks like what I was trying but it didn't work as I had expected. To be more specific, after

Re: Urllib's urlopen and urlretrieve

2013-02-21 Thread Dave Angel
On 02/21/2013 07:12 AM, qoresu...@gmail.com wrote: Why is it that when using urllib.urlopen then reading or urllib.urlretrieve, does it only give me parts of the sites, losing the formatting, images, etc...? How can I get around this? Start by telling us if you're using Python2 or Python

Re: Urllib's urlopen and urlretrieve

2013-02-21 Thread Dave Angel
On 02/21/2013 12:47 PM, rh wrote: On Thu, 21 Feb 2013 10:56:15 -0500 Dave Angel wrote: On 02/21/2013 07:12 AM, qoresu...@gmail.com wrote: I only just started Python and given that I know nothing about network programming or internet programming of any kind really, I thought it would be interes

Re: Urllib's urlopen and urlretrieve

2013-02-21 Thread Dave Angel
On 02/21/2013 07:12 AM, qoresu...@gmail.com wrote: I only just started Python and given that I know nothing about network programming or internet programming of any kind really, I thought it would be interesting to try write something that could create an archive of a website for myself. Ple

Re: Urllib's urlopen and urlretrieve

2013-02-21 Thread Michael Herman
Are you just trying to get the html? If so, you can use this code- import urllib # fetch and download a webpage, naming it test.html urllib.urlretrieve("http://www.web2py.com/", filename="test.html") I recommend using the requests library, as it's easier to use and more powerful:

Urllib's urlopen and urlretrieve

2013-02-21 Thread qoresucks
I only just started Python and given that I know nothing about network programming or internet programming of any kind really, I thought it would be interesting to try write something that could create an archive of a website for myself. With this I started trying to use the urllib library, howe

Re: urlopen in python3

2012-12-05 Thread Olive
Nick Cash wrote: > > In python2, this work if "something" is a regular file on the > > system as well as a remote URL. The 2to3 script convert this to > > urllib.request.urlopen. But it does not work anymore if "something" > > is just a file name. > > > > My aim is to let the user specify a "file

RE: urlopen in python3

2012-12-05 Thread Nick Cash
> In python2, this work if "something" is a regular file on the system as > well as a remote URL. The 2to3 script convert this to > urllib.request.urlopen. But it does not work anymore if "something" > is just a file name. > > My aim is to let the user specify a "file" on the command line and have

urlopen in python3

2012-12-05 Thread Olive
In python2, I use this code: a=urllib.urlopen(something) In python2, this works if "something" is a regular file on the system as well as a remote URL. The 2to3 script converts this to urllib.request.urlopen. But it does not work anymore if "something" is just a file name. My aim is to let the us
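In Python 3 a bare filename is no longer accepted, so one way to keep the "URL or local path" behavior is to convert pathnames to file:// URLs before calling urlopen. A hypothetical helper sketching that:

```python
import pathlib
from urllib.parse import urlparse
from urllib.request import urlopen

def open_anything(name):
    """Accept either a URL or a plain filesystem path (hypothetical helper).

    A bare Windows path like C:/x parses with scheme 'c', so only schemes
    longer than one character are trusted as real URL schemes.
    """
    if len(urlparse(name).scheme) <= 1:
        name = pathlib.Path(name).absolute().as_uri()   # path -> file:// URL
    return urlopen(name)
```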

Re: Internationalized domain names not working with URLopen

2012-06-13 Thread John Nagle
On 6/12/2012 11:42 PM, Andrew Berg wrote: On 6/13/2012 1:17 AM, John Nagle wrote: What does "urllib2" want? Percent escapes? Punycode? Looks like Punycode is the correct answer: https://en.wikipedia.org/wiki/Internationalized_domain_name#ToASCII_and_ToUnicode I haven't tried it, though.

Re: Internationalized domain names not working with URLopen

2012-06-13 Thread Hemanth H.M
/lib/python2.6/urllib2.py", line 126, in urlopen > return _opener.open(url, data, timeout) > File "/usr/lib/python2.6/urllib2.py", line 391, in open > response = self._open(req, data) > File "/usr/lib/python2.6/urllib2.py", line 409, in _open >

Re: Internationalized domain names not working with URLopen

2012-06-13 Thread Hemanth H.M
Well not really! does not work with '☃.net' Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.6/urllib2.py", line 126, in urlopen return _opener.open(url, data, timeout) File "/usr/lib/python2.6/urllib2.py", line 39

Re: Internationalized domain names not working with URLopen

2012-06-12 Thread Виталий Волков
byknj4f> > > with > > urllib2.urlopen(s1) > > in Python 2.7 on Windows 7. This produces a Unicode exception: > > >>> s1 > u'http://\u043f\u0440\u0438\u043c\u0435\u0440.\u0438\u0441\u043f\u044b\u0442\u0430\u043d\u0438\u0435' > >>> fd

Re: Internationalized domain names not working with URLopen

2012-06-12 Thread Andrew Berg
On 6/13/2012 1:17 AM, John Nagle wrote: > What does "urllib2" want? Percent escapes? Punycode? Looks like Punycode is the correct answer: https://en.wikipedia.org/wiki/Internationalized_domain_name#ToASCII_and_ToUnicode I haven't tried it, though. -- CPython 3.3.0a3 | Windows NT 6.1.7601.17790

Internationalized domain names not working with URLopen

2012-06-12 Thread John Nagle
I'm trying to open http://пример.испытание with urllib2.urlopen(s1) in Python 2.7 on Windows 7. This produces a Unicode exception: >>> s1 u'http://\u043f\u0440\u0438\u043c\u0435\u0440.\u0438\u0441\u043f\u044b\u0442\u0430\u043d\u0438\u0435' >>> fd = urllib2.u
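As the replies conclude, urllib2 wants the hostname in its ASCII (Punycode) form. Python's built-in "idna" codec (IDNA 2003) does the conversion; a sketch using the domain from this thread:

```python
# Convert an internationalized hostname to Punycode before building the URL.
host = "пример.испытание"
ascii_host = host.encode("idna").decode("ascii")   # built-in IDNA 2003 codec
print(ascii_host)   # xn--e1afmkfd.xn--80akhbyknj4f
url = "http://" + ascii_host + "/"
```

Only the hostname may be IDNA-encoded; path and query components need ordinary percent-escaping instead.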

Re: IOError 35 when trying to read the result of call to urllib2.urlopen

2011-09-10 Thread matt
On Sep 9, 6:02 pm, Steven D'Aprano wrote: > matt wrote: > > When I try to look at "resp_body" I get this error: > > > IOError: [Errno 35] Resource temporarily unavailable > > > I posted to the same URI using curl and it worked fine, so I don't > > think it has to do with the server. > > Are your P

Re: IOError 35 when trying to read the result of call to urllib2.urlopen

2011-09-09 Thread Steven D'Aprano
matt wrote: > When I try to look at "resp_body" I get this error: > > IOError: [Errno 35] Resource temporarily unavailable > > I posted to the same URI using curl and it worked fine, so I don't > think it has to do with the server. Are your Python code and curl both using the same proxy? It may

IOError 35 when trying to read the result of call to urllib2.urlopen

2011-09-09 Thread matt
I'm using urllib2's urlopen function to post to a service which should return a rather lengthy JSON object as the body of its response. Here's the code: {{{ ctype, body = encode_multipart(fields, files) url = 'http://someservice:8080/path/to/resource' headers = {'

urllib2.urlopen+BadStatusLine+https

2011-05-12 Thread up2date.cyborg
Hi, I am new to this list, I don't really know if I should post my request here. Anyway. The following code is raising httplib.BadStatusLine on urllib2.urlopen(url) url = 'https://stat.netaffiliation.com/requete.php?login=xxx&mdp=yyy&debut=2011-05-01&fin=2011-05-12'

Re: urlopen returns forbidden

2011-02-28 Thread Chris Rebert
On Mon, Feb 28, 2011 at 9:44 AM, Terry Reedy wrote: > On 2/28/2011 10:21 AM, Grant Edwards wrote: >> As somebody else has already said, if the site provides an API that >> they want you to use you should do so rather than hammering their web >> server with a screen-scraper. > > If there any generi

Re: urlopen returns forbidden

2011-02-28 Thread Terry Reedy
On 2/28/2011 10:21 AM, Grant Edwards wrote: As somebody else has already said, if the site provides an API that they want you to use you should do so rather than hammering their web server with a screen-scraper. Is there any generic method for finding out 'if the site provides an API" and spe

Re: urlopen returns forbidden

2011-02-28 Thread Grant Edwards
On 2011-02-28, Chris Rebert wrote: > On Sun, Feb 27, 2011 at 9:38 PM, monkeys paw wrote: >> I have a working urlopen routine which opens >> a url, parses it for tags and prints out >> the links in the page. On some sites, wikipedia for >> instance, i get a >

Re: urlopen returns forbidden

2011-02-28 Thread Steven D'Aprano
On Sun, 27 Feb 2011 22:19:18 -0800, Chris Rebert wrote: > On Sun, Feb 27, 2011 at 9:38 PM, monkeys paw > wrote: >> I have a working urlopen routine which opens a url, parses it for >> tags and prints out the links in the page. On some sites, wikipedia for >> instance,

Re: urlopen returns forbidden

2011-02-27 Thread Chris Rebert
On Sun, Feb 27, 2011 at 9:38 PM, monkeys paw wrote: > I have a working urlopen routine which opens > a url, parses it for tags and prints out > the links in the page. On some sites, wikipedia for > instance, i get a > > HTTP error 403, forbidden. > > What is the differen

urlopen returns forbidden

2011-02-27 Thread monkeys paw
I have a working urlopen routine which opens a url, parses it for tags and prints out the links in the page. On some sites, wikipedia for instance, i get a HTTP error 403, forbidden. What is the difference in accessing the site through a web browser and opening/reading the URL with python
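The usual difference behind the 403: the site rejects the default "Python-urllib/x.y" User-Agent. Sending a browser-like header is the common fix (the agent string below is an arbitrary example, and many sites that do this would rather you use their API):

```python
from urllib.request import Request, urlopen

req = Request(
    "https://en.wikipedia.org/wiki/Python_(programming_language)",
    headers={"User-Agent": "Mozilla/5.0 (compatible; example-archiver)"},
)
# resp = urlopen(req)   # network call; check the site's terms/robots.txt first
```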

Re: I get an error when I used urllib2.urlopen() to open a remote file in a ftp server

2011-01-06 Thread Ariel
You are right, Thanks. On Thu, Jan 6, 2011 at 12:55 PM, Ian Kelly wrote: > On Thu, Jan 6, 2011 at 10:26 AM, Ariel wrote: > > Hi everybody: > > > > I get an error when I used urllib2.urlopen() to open a remote file in a > ftp > > server, My code is the fol

Re: I get an error when I used urllib2.urlopen() to open a remote file in a ftp server

2011-01-06 Thread Ian Kelly
On Thu, Jan 6, 2011 at 10:26 AM, Ariel wrote: > Hi everybody: > > I get an error when I used urllib2.urlopen() to open a remote file in a ftp > server, My code is the following: > >>>> file = 'ftp:/192.168.250.14:2180/RTVE/VIDEOS/Thisisit.wmv' Looks to me l

I get an error when I used urllib2.urlopen() to open a remote file in a ftp server

2011-01-06 Thread Ariel
Hi everybody: I get an error when I used urllib2.urlopen() to open a remote file in a ftp server, My code is the following: >>> file = 'ftp:/192.168.250.14:2180/RTVE/VIDEOS/Thisisit.wmv' >>> mydata = urllib2.urlopen(file) Traceback (most recent call last): File &quo

Re: http error 301 for urlopen

2010-11-10 Thread Hans-Peter Jansen
On Tuesday 09 November 2010, 03:10:24 Lawrence D'Oliveiro wrote: > In message <4cd7987e$0$1674$742ec...@news.sonic.net>, John Nagle wrote: > >It's the New York Times' paywall. They're trying to set a > > cookie, and will redirect the URL until you store and return the > > cookie. > > And if t

Re: http error 301 for urlopen

2010-11-08 Thread Lawrence D'Oliveiro
In message <4cd7987e$0$1674$742ec...@news.sonic.net>, John Nagle wrote: >It's the New York Times' paywall. They're trying to set a cookie, > and will redirect the URL until you store and return the cookie. And if they find out you're accessing them from a script, they'll probably try to find

Re: http error 301 for urlopen

2010-11-07 Thread John Nagle
On 11/7/2010 5:51 PM, D'Arcy J.M. Cain wrote: On Sun, 7 Nov 2010 19:30:23 -0600 Wenhuan Yu wrote: I tried to open a link with urlopen: import urllib2 alink = " http://feeds.nytimes.com/click.phdo?i=ff074d9e3895247a31e8e5efa5253183"; f = urllib2.urlopen(alink) print f.read

Re: http error 301 for urlopen

2010-11-07 Thread Nobody
the link in browser. Any way to get solve this? Thanks. > > I checked with my tools and was told that it redirects more than five > times. Maybe it's not infinite but too many for urlopen. The default value of urllib2.HTTPRedirectHandler.max_redirections is 10. Setting it to 11 allow

Re: http error 301 for urlopen

2010-11-07 Thread D'Arcy J.M. Cain
On Sun, 7 Nov 2010 19:30:23 -0600 Wenhuan Yu wrote: > I tried to open a link with urlopen: > > import urllib2 > alink = " > http://feeds.nytimes.com/click.phdo?i=ff074d9e3895247a31e8e5efa5253183"; > f = urllib2.urlopen(alink) > print f.read() > > and g

http error 301 for urlopen

2010-11-07 Thread Wenhuan Yu
I tried to open a link with urlopen: import urllib2 alink = "http://feeds.nytimes.com/click.phdo?i=ff074d9e3895247a31e8e5efa5253183" f = urllib2.urlopen(alink) print f.read() and got the following error: urllib2.HTTPError: HTTP Error 301: The HTTP server returned a redirect error t
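Per the diagnosis later in the thread (the server sets a cookie and redirects, and the default opener drops the cookie, so each redirect looks new), an opener that keeps cookies across redirects can be sketched as:

```python
from http.cookiejar import CookieJar
from urllib.request import build_opener, HTTPCookieProcessor

# An opener with a CookieJar returns the server's cookie on the next
# request, letting the redirect chain terminate normally.
opener = build_opener(HTTPCookieProcessor(CookieJar()))
# resp = opener.open(alink)   # network call; 'alink' as in the post above
```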

Re: Urllib2 urlopen and read - difference

2010-04-25 Thread Aahz
In article , J. Cliff Dyer wrote: >On Thu, 2010-04-15 at 11:25 -0700, koranthala wrote: >> >>Suppose I am doing the following: >> req = urllib2.urlopen('http://www.python.org') >> data = req.read() >> >>When is the actual data received? is

urllib2.urlopen taking way too much time

2010-04-19 Thread Phonethics Mobile Media
handler = urllib2.urlopen(req) is taking way too much time to retrieve the URL. The same code using sockets in PHP doesn't delay this long. I had 'Authorization':'Basic ' + base64.b64encode("username:password") in my header though. [ I didnt use HTTPPasswordMg

Re: Urllib2 urlopen and read - difference

2010-04-15 Thread J. Cliff Dyer
On Thu, 2010-04-15 at 11:25 -0700, koranthala wrote: > Hi, >Suppose I am doing the following: > req = urllib2.urlopen('http://www.python.org') > data = req.read() > >When is the actual data received? is it done by the first line? or > is it done only

Urllib2 urlopen and read - difference

2010-04-15 Thread koranthala
Hi, Suppose I am doing the following: req = urllib2.urlopen('http://www.python.org') data = req.read() When is the actual data received? is it done by the first line? or is it done only when req.read() is used? My understanding is that when urlopen is done itself, we would hav

Re: Problem with urllib2.urlopen() opening a local file

2009-10-26 Thread Gabriel Genellina
On Sat, 24 Oct 2009 20:10:21 -0300, deja user wrote: I want to use urlopen() to open either a http://... file or a local file File:C:/... I don't have problems opening and reading the file either way. But when I run the script on a server (ArcGIS server), the request won't c

Problem with urllib2.urlopen() opening a local file

2009-10-24 Thread deja user
I want to use urlopen() to open either a http://... file or a local file File:C:/... I don't have problems opening and reading the file either way. But when I run the script on a server (ArcGIS server), the request won't complete if it was trying to open a local file. Even though I

Re: urlopen errors in script

2009-08-19 Thread Sleepy Cabbage
On Tue, 18 Aug 2009 13:05:03 +, Sleepy Cabbage wrote: > Thanks for the time you've spent anyway Peter. I have superkaramba > installed and the rest of the script is running fine, it's only when I > put the urlopen part in that it comes back with errors. The quotes ar

Re: urlopen errors in script

2009-08-18 Thread Sleepy Cabbage
Thanks for the time you've spent anyway Peter. I have superkaramba installed and the rest of the script is running fine, it's only when I put the urlopen part in that it comes back with errors. The quotes are just to make it readable on here as my first attempt at posting muted

Re: urlopen errors in script

2009-08-18 Thread Peter Otten
Sleepy Cabbage wrote: > This is the script up to where the error seems to fall: > > "#!/usr/bin/env superkaramba" > "# -*- coding: iso-8859-1 -*-" > > "import karamba" > "import subprocess" > "from subprocess import Popen

Re: urlopen errors in script

2009-08-18 Thread Sleepy Cabbage
added to my playlist. >> >> If I open a python console and add the following: >> >> ">>>import urllib2" >> ">>>from urllib2 import urlopen" >> >> ">>>nowplaying = str.split(urlopen('http://www.heartea

Re: urlopen errors in script

2009-08-18 Thread Peter Otten
: > > ">>>import urllib2" > ">>>from urllib2 import urlopen" > > ">>>nowplaying = str.split(urlopen('http://www.hearteastmids.co.uk// > jsfiles/NowPlayingDisplay.aspx?f=http%3A%2F%2Frope.ccap.fimc.net%2Ffeeds% > 2Fnowpl

urlopen errors in script

2009-08-18 Thread Sleepy Cabbage
I'm scripting a superkaramba theme using python and have integrated output from amarok. I would also like to show the artist and song title from a radio stream I've added to my playlist. If I open a python console and add the following: ">>>import urllib2"

urlopen errors in script

2009-08-18 Thread Sleepy Cabbage
I'm scripting a superkaramba theme using python and have integrated output from amarok. I would also like to show the artist and song title from a radio stream I've added to my playlist. If I open a python console and add the following: >>>import urllib2 >>>

Re: Problem when fetching page using urllib2.urlopen

2009-08-11 Thread dorzey
On 10 Aug, 18:11, "Diez B. Roggisch" wrote: > dorzey wrote: > > "geturl - this returns the real URL of the page fetched. This is > > useful because urlopen (or the opener object used) may have followed a > > redirect. The URL of the page fetched may not be the

Re: Problem when fetching page using urllib2.urlopen

2009-08-11 Thread Diez B. Roggisch
dorzey wrote: > "geturl - this returns the real URL of the page fetched. This is > useful because urlopen (or the opener object used) may have followed a > redirect. The URL of the page fetched may not be the same as the URL > requested." from > http://www.voidspace.org.u

Re: Problem when fetching page using urllib2.urlopen

2009-08-10 Thread jitu
e  having > >j> a semicolon  in the url , while fetching the page using > >j> urllib2.urlopen, all such href's  containing  'semicolons' are > >j> truncated. > >j> For example the >href http://travel.yahoo.com/p-travelguide-6901959-pune_restaur

Re: Problem when fetching page using urllib2.urlopen

2009-08-10 Thread Piet van Oostrum
>>>>> jitu (j) wrote: >j> Hi, >j> A html page contains 'anchor' elements with 'href' attribute having >j> a semicolon in the url , while fetching the page using >j> urllib2.urlopen, all such href's containing &

Re: Problem when fetching page using urllib2.urlopen

2009-08-10 Thread dorzey
"geturl - this returns the real URL of the page fetched. This is useful because urlopen (or the opener object used) may have followed a redirect. The URL of the page fetched may not be the same as the URL requested." from http://www.voidspace.org.uk/python/articles/urllib2.shtml#info-

Re: Problem when fetching page using urllib2.urlopen

2009-08-10 Thread jitu
On Aug 10, 4:39 pm, jitu wrote: > Hi, > > A html page  contains 'anchor' elements with 'href' attribute  having > a semicolon  in the url , while fetching the page using > urllib2.urlopen, all such href's  containing  'semicolons'

Problem when fetching page using urllib2.urlopen

2009-08-10 Thread jitu
Hi, A html page contains 'anchor' elements with 'href' attribute having a semicolon in the url , while fetching the page using urllib2.urlopen, all such href's containing 'semicolons' are truncated. For example the href http://travel.yahoo.com/p-travelgu

Re: Web page data and urllib2.urlopen

2009-08-07 Thread Piet van Oostrum
> Dave Angel (DA) wrote: >DA> Piet van Oostrum wrote: >>> >DA> But the raw page didn't have any javascript. So what about that original >DA> raw page triggered additional stuff to be loaded? >DA> Is it "user agent", as someone else brought out? And is there somewhere I

Re: Web page data and urllib2.urlopen

2009-08-07 Thread Dave Angel
Piet van Oostrum wrote: DA> But the raw page didn't have any javascript. So what about that original DA> raw page triggered additional stuff to be loaded? DA> Is it "user agent", as someone else brought out? And is there somewhere I DA> can read more about that aspect of thing

Re: Web page data and urllib2.urlopen

2009-08-07 Thread Piet van Oostrum
> Dave Angel (DA) wrote: >DA> Piet van Oostrum wrote: >>> >DA> If Mozilla had seen a page with this line in an appropriate place, it'd >DA> immediately begin loading the other page, at "someotherurl" But there's no >DA> such line. >>> >>> >DA> Next, I looked for javascript. The Moz

Re: Re: Web page data and urllib2.urlopen

2009-08-06 Thread Kushal Kumaran
On Fri, Aug 7, 2009 at 3:47 AM, Dave Angel wrote: > > > Piet van Oostrum wrote: >> >> >>> >>> DA> All I can guess is that it has something to do with "browser type" or >>> DA> cookies.  And that would make lots of sense if this was a cgi page. >>>  But >>> DA> the URL doesn't look like that, as it

Re: Re: Web page data and urllib2.urlopen

2009-08-06 Thread Dave Angel
Piet van Oostrum wrote: DA> If Mozilla had seen a page with this line in an appropriate place, it'd DA> immediately begin loading the other page, at "someotherurl" But there's no DA> such line. DA> Next, I looked for javascript. The Mozilla page contains lots of DA> javascript, b
