On 8/27/2018 1:25 PM, Sean Darcy wrote:
python 2 :
python
Python 2.7.15 (default, May 15 2018, 15:37:31)
.
import urllib2
res = urllib2.urlopen('https://api.ipify.org').read()
print res
www.xxx.yyy.zzz
In Python 2, this is the printed representation of a bytestring.
python
On Tue, Aug 28, 2018 at 3:25 AM, Sean Darcy wrote:
> python 2 :
>
> python
> Python 2.7.15 (default, May 15 2018, 15:37:31)
> .
>>>> import urllib2
>>>> res = urllib2.urlopen('https://api.ipify.org').read()
>>>> print res
> www.
python 2 :
python
Python 2.7.15 (default, May 15 2018, 15:37:31)
.
>>> import urllib2
>>> res = urllib2.urlopen('https://api.ipify.org').read()
>>> print res
www.xxx.yyy.zzz
python3
python3
Python 3.6.6 (default, Jul 19 2018, 16:29:00)
...
>>>
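For reference, a minimal Python 3 sketch of the same request (the session above is cut off; urllib2 became urllib.request, and read() returns bytes that must be decoded before printing as text):

from urllib.request import urlopen

res = urlopen('https://api.ipify.org').read()
print(res.decode('ascii'))   # decode the bytes to get the bare IP text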
traced
> > >down to _socket.recv. I am calling some web services and each of them
> > >uses about 0.2 sec and 99% of this time is spent on urllib2.urlopen,
> > >while the rest of the call is finished in milliseconds.
> >
> > What happens if you use urlopen(
> >uses about 0.2 sec and 99% of this time is spent on urllib2.urlopen,
> >while the rest of the call is finished in milliseconds.
>
> What happens if you use urlopen() by itself?
> --
> Aahz (a...@pythoncraft.com) <*> http://www.pythoncraft.com/
>
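A minimal sketch of the measurement being suggested here: time urlopen() (connection setup plus headers) separately from read() (body transfer). The URL is a placeholder, not from the thread:

import time
import urllib2

t0 = time.time()
resp = urllib2.urlopen('http://example.com/service')   # placeholder URL
t1 = time.time()
body = resp.read()
t2 = time.time()
print 'urlopen: %.3fs  read: %.3fs' % (t1 - t0, t2 - t1)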
token}
values = json.dumps(payload)
req = urllib2.Request(url, values, headers)
try:
    response = urllib2.urlopen(req, timeout=30)
    break
except IOError, e:
    if e.errno != errno.EINTR:
        print e.errno
        raise
We log the errno and the raised exception. The exception is:
IOError:
And
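For context, a self-contained sketch of the retry-on-EINTR pattern the snippet above implements (the helper name post_with_retry is illustrative, not from the original post):

import errno
import json
import urllib2

def post_with_retry(url, payload, headers):
    # retry only when urlopen() is interrupted by a signal (EINTR);
    # re-raise every other IOError after logging its errno
    values = json.dumps(payload)
    while True:
        req = urllib2.Request(url, values, headers)
        try:
            return urllib2.urlopen(req, timeout=30)
        except IOError, e:
            if e.errno != errno.EINTR:
                print e.errno
                raise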
m the first one having this
> problem...
>
> (until this difference with urlopen I have found six to be extremely good at
> helping not caring about python versions at all)
What happens if you use 'requests' rather than urlopen? My guess is
that requests will already have dea
On 03/02/2016 03:35 PM, Matt Wheeler wrote:
I agree that six should probably handle this,
Thank you Matt and Chris for your answers. Do you think I should open an
issue on six? It sounds unlikely that I am the first one having this
problem...
(until this difference with urlopen I have
On Thu, Mar 3, 2016 at 1:35 AM, Matt Wheeler wrote:
>> from six.moves.urllib.request import urlopen
>>
>> try:
>>     with urlopen('http://www.google.com') as resp:
>>         _ = resp.read()
>> except AttributeError:
>>     # python 2
eed", using the "with" construction is an added
feature, not a burden!
> from six.moves.urllib.request import urlopen
>
> try:
>     with urlopen('http://www.google.com') as resp:
>         _ = resp.read()
> except AttributeError:
>     # python 2
>
Hi,
it seems that urlopen had no context manager for versions < 3. The
following code therefore will crash on py2 but not on py3.
from six.moves.urllib.request import urlopen

with urlopen('http://www.google.com') as resp:
    _ = resp.read()
Error:
AttributeError: addinfourl instance has no attribute '__exit__'
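A common workaround, shown here as a sketch (it is not in the thread excerpt above): wrap the Python 2 response object in contextlib.closing, which supplies the missing context-manager methods on both versions:

from contextlib import closing
from six.moves.urllib.request import urlopen

# closing() calls resp.close() on exit, so this works on Python 2 and 3
with closing(urlopen('http://www.google.com')) as resp:
    _ = resp.read()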
Dennis Lee Bieber wrote:
> >> Connection reset by peer.
> >>
> >> An existing connection was forcibly closed by the remote host.
> >
> >This is not true.
> >The server is under my control. The client has terminated the connection
> >(or a router in between).
> The odds are still good
line 1992, in
File "", line 180, in main
File "", line 329, in get_ID
File "", line 1627, in check_7z
File "C:\Software\Python\lib\urllib2.py", line 154, in urlopen
File "C:\Software\Python\lib\urllib2.py", line 431, in open
File "C:\Software\Python\lib\urllib2.py", line 449, in _open
File "C:\Software\Pyth
On Monday 24 Aug 2015 19:37 CEST, Ned Batchelder wrote:
> On Monday, August 24, 2015 at 1:14:20 PM UTC-4, Cecil Westerhof wrote:
>> In Python2 urlopen is part of urllib, but in Python3 it is part of
>> urllib.request. I solved this by the
On Monday, August 24, 2015 at 1:14:20 PM UTC-4, Cecil Westerhof wrote:
> In Python2 urlopen is part of urllib, but in Python3 it is part of
> urllib.request. I solved this by the following code:
>
> from platf
In Python2 urlopen is part of urllib, but in Python3 it is part of
urllib.request. I solved this by the following code:
from platform import python_version
if python_version()[0] < '3':
from urllib
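The post is cut off here; the usual idiom for this, shown as a sketch, avoids comparing version strings entirely:

try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib import urlopen           # Python 2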
"Jia CHEN" writes:
> I have the error below when trying to download the html content of a webpage.
> I can open this webpage in a browser without any problem.
"Connection reset by peer" means that the other side (the HTTP server
in your case) has closed the connection.
It may have looked at th
(default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib2
>>> request = urllib2.Request('http://guggenheiminvestments.com/products/etf/gsy/holdings')
re
On 14/07/2014 15:59, krzysztof.zelechow...@syncron.com wrote:
The tutorial says that I should use "with open" to close the file handle
properly. The reference documentation for urlopen mentions that the handle
returned is like a file handle but the code samples below do not bother to
http://bugs.python.org/issue12955
The user wrote in the discussion group
message lq0sar$r6e$1...@mx1.internetia.pl...:
The tutorial says that I should use "with open" to close the file handle
properly. The reference documentation for urlopen mentions that the handle
returned
> The tutorial says that I should use "with open" to close the file
> handle properly. The reference documentation for urlopen mentions
> that the handle returned is like a file handle but the code samples
> below do not bother to close the handle at all. Isn’t it
> inconsistent?
The tutorial says that I should use "with open" to close the file handle
properly. The reference documentation for urlopen mentions that the handle
returned is like a file handle but the code samples below do not bother to
close the handle at all. Isn’t it inconsistent?
On Wed, Jan 15, 2014 at 7:04 AM, BobAalsma wrote:
> A program took much too long to check some texts collected from web pages.
> As this could be made parallel easily, I put in fork.
Rather than using the low-level fork() function, you may find it
easier to manage things if you use the multiproce
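A minimal sketch of that suggestion (the reply is cut off at "multiprocessing"); the URLs and pool size are placeholders:

import urllib2
from multiprocessing import Pool

def fetch(url):
    # each worker process fetches one page
    return urllib2.urlopen(url).read()

if __name__ == '__main__':
    urls = ['http://example.com/a', 'http://example.com/b']   # placeholders
    pool = Pool(4)
    pages = pool.map(fetch, urls)   # blocks until all pages are fetched
    pool.close()
    pool.join()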
A program took much too long to check some texts collected from web pages.
As this could be made parallel easily, I put in fork.
And the result seems to be that the program simply stops in the line with
urlopen. Any suggestions?
Relevant part:
try:
    print 'urlopen by', k
I understand the problem now. The echo is a string, which can contain text but
not an array.
I've changed the PHP script so I get only text separated with commas, and in
Python I split the text fields and store them in the array, using the split
method I saw in the answer of J. Gordon. Thank yo
On 2014-01-10 20:57, vanommen.rob...@gmail.com wrote:
Hello,
I have a Raspberry Pi with 10 temperature sensors. I send the data from the
sensors and some other values with json encoding and:
result = urllib2.urlopen(request, postData)
to an online PHP script which places the data in a mysql
On Fri, 10 Jan 2014 12:57:59 -0800, vanommen.robert wrote:
> Hello,
>
> I have a Raspberry Pi with 10 temperature sensors. I send the data from
> the sensors and some other values with json encoding and:
>
> result = urllib2.urlopen(request, postData)
>
> to an online PH
On Fri, 10 Jan 2014 12:57:59 -0800 (PST), vanommen.rob...@gmail.com
wrote:
No idea about the PHP.
In Python, when I do
para = result.read()
print para
the output is:
[null,null,null,null,null,"J"]
That's a string that just looks like a list.
This is correct according to the data in P
In
vanommen.rob...@gmail.com writes:
> result = urllib2.urlopen(request, postData)
> para = result.read()
> print para
> the output is:
> [null,null,null,null,null,"J"]
> print para[1]
> the output is:
> n
Probably because para is a string with the v
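A minimal sketch of the fix implied by these replies: the body is a JSON string, so parse it rather than indexing into the raw text:

import json

para = '[null,null,null,null,null,"J"]'   # the body shown above
values = json.loads(para)                  # a real list: [None, ..., u'J']
print values[5]                            # prints: J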
Hello,
I have a Raspberry Pi with 10 temperature sensors. I send the data from the
sensors and some other values with json encoding and:
result = urllib2.urlopen(request, postData)
to an online PHP script which places the data in a mysql database.
In the result:
result.read()
I am trying to
Am 24.02.2013 20:27 schrieb 7segment:
When in doubt, check some other way, such as with a browser.
Thank you Ian. Browser is not a good idea, because I need this tool to
work automatically. I don't have time to check and compare the response
times manually and put them into the database.
Of
On Sun, 24 Feb 2013 11:55:09 -0700, Ian Kelly wrote:
> On Sun, Feb 24, 2013 at 10:48 AM, 7segment <7segm...@live.com> wrote:
>> Hi!
>>
>> The subject is a segment of a sentence which I copied from Python's
>> official homepage. In whole, it reads:
>>
>>> The subject is a segment of a sentence which I copied from Python's
>>> official homepage. In whole, it reads:
>>>
>>> "The urlopen() and urlretrieve() functions can cause arbitrarily long
>>> delays while waiting for a network connection to be set up. This means
>>> that it is difficult to build an interactive Web clien
On 2013-02-24 18:55, Ian Kelly wrote:
On Sun, Feb 24, 2013 at 10:48 AM, 7segment <7segm...@live.com> wrote:
Hi!
The subject is a segment of a sentence which I copied from Python's
official homepage. In whole, it reads:
"The urlopen() and urlretrieve() functions can cause
On Sun, Feb 24, 2013 at 10:48 AM, 7segment <7segm...@live.com> wrote:
> Hi!
>
> The subject is a segment of a sentence which I copied from Python's
> official homepage. In whole, it reads:
>
> "The urlopen() and urlretrieve() functions can cause arbitrarily
Hi!
The subject is a segment of a sentence which I copied from Python's
official homepage. In whole, it reads:
"The urlopen() and urlretrieve() functions can cause arbitrarily long
delays while waiting for a network connection to be set up. This means
that it is difficult t
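For reference, a sketch of the standard mitigation for the delay the docs describe: give urlopen() a timeout (per call, or globally) so it fails fast instead of blocking indefinitely:

import socket
import urllib2

socket.setdefaulttimeout(10)   # global default, in seconds
f = urllib2.urlopen('http://www.python.org', timeout=5)   # per-call override
data = f.read()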
[snip]
> As for which version if Python, I have been using Python 2 to learn on
> as I heard that Python 3 was still largely unadopted due to a lack of
> library support etc... by comparison. Are people adopting it fast
> enough now that I should consider learning on 3 instead of 2?
>
[snip]
You s
On 02/22/2013 12:09 AM, qoresu...@gmail.com wrote:
Initially I was just trying the html, but later when I attempted more
complicated sites that weren't my own I noticed that large bulks of the site
were lost in the process. The urllib code essentially looks like what I was
trying but it didn't
Initially I was just trying the html, but later when I attempted more
complicated sites that weren't my own I noticed that large bulks of the site
were lost in the process. The urllib code essentially looks like what I was
trying but it didn't work as I had expected.
To be more specific, after
On 02/21/2013 07:12 AM, qoresu...@gmail.com wrote:
Why is it that when using urllib.urlopen then reading or urllib.urlretrieve,
does it only give me parts of the sites, losing the formatting, images,
etc...? How can I get around this?
Start by telling us if you're using Python2 or Python
On 02/21/2013 12:47 PM, rh wrote:
On Thu, 21 Feb 2013 10:56:15 -0500
Dave Angel wrote:
On 02/21/2013 07:12 AM, qoresu...@gmail.com wrote:
I only just started Python and given that I know nothing about
network programming or internet programming of any kind really, I
thought it would be interes
On 02/21/2013 07:12 AM, qoresu...@gmail.com wrote:
I only just started Python and given that I know nothing about network
programming or internet programming of any kind really, I thought it would be
interesting to try write something that could create an archive of a website
for myself.
Ple
Are you just trying to get the html? If so, you can use this code:
import urllib

# fetch and download a webpage, naming it test.html
urllib.urlretrieve("http://www.web2py.com/", filename="test.html")
I recommend using the requests library, as it's easier to use and more
powerful:
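The message is cut off here; a minimal requests sketch of the same download (requests is third-party: pip install requests):

import requests

resp = requests.get('http://www.web2py.com/')
resp.raise_for_status()       # raise on 4xx/5xx instead of saving an error page
with open('test.html', 'wb') as f:
    f.write(resp.content)     # raw bytes of the page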
I only just started Python and given that I know nothing about network
programming or internet programming of any kind really, I thought it would be
interesting to try write something that could create an archive of a website
for myself. With this I started trying to use the urllib library, howe
Nick Cash wrote:
> > In python2, this works if "something" is a regular file on the
> > system as well as a remote URL. The 2to3 script converts this to
> > urllib.request.urlopen. But it does not work anymore if "something"
> > is just a file name.
> >
> > My aim is to let the user specify a "file
> In python2, this works if "something" is a regular file on the system as
> well as a remote URL. The 2to3 script converts this to
> urllib.request.urlopen. But it does not work anymore if "something"
> is just a file name.
>
> My aim is to let the user specify a "file" on the command line and have
In python2, I use this code:
a=urllib.urlopen(something)
In python2, this works if "something" is a regular file on the system as
well as a remote URL. The 2to3 script converts this
to urllib.request.urlopen. But it does not work anymore if "something"
is just a file name.
My aim is to let the us
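The post is cut off here; one way to keep the Python 2 behaviour after 2to3, sketched with a hypothetical helper name:

import os
from urllib.request import urlopen

def open_file_or_url(something):
    # hypothetical helper: treat an existing local path as a file,
    # anything else as a URL
    if os.path.exists(something):
        return open(something, 'rb')
    return urlopen(something)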
On 6/12/2012 11:42 PM, Andrew Berg wrote:
On 6/13/2012 1:17 AM, John Nagle wrote:
What does "urllib2" want? Percent escapes? Punycode?
Looks like Punycode is the correct answer:
https://en.wikipedia.org/wiki/Internationalized_domain_name#ToASCII_and_ToUnicode
I haven't tried it, though.
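A sketch of that approach (untested here as well): encode the host with the built-in 'idna' codec before handing the URL to urllib2:

import urllib2
import urlparse

url = u'http://\u043f\u0440\u0438\u043c\u0435\u0440.\u0438\u0441\u043f\u044b\u0442\u0430\u043d\u0438\u0435'
parts = urlparse.urlsplit(url)
host = parts.hostname.encode('idna')   # -> 'xn--e1afmkfd.xn--80akhbyknj4f'
ascii_url = urlparse.urlunsplit((parts.scheme, host, parts.path or '/',
                                 parts.query, parts.fragment))
fd = urllib2.urlopen(ascii_url)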
/lib/python2.6/urllib2.py", line 126, in urlopen
> return _opener.open(url, data, timeout)
> File "/usr/lib/python2.6/urllib2.py", line 391, in open
> response = self._open(req, data)
> File "/usr/lib/python2.6/urllib2.py", line 409, in _open
>
Well not really! It does not work with '☃.net'
Traceback (most recent call last):
File "", line 1, in
File "/usr/lib/python2.6/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.6/urllib2.py", line 39
byknj4f>
>
> with
>
> urllib2.urlopen(s1)
>
> in Python 2.7 on Windows 7. This produces a Unicode exception:
>
> >>> s1
> u'http://\u043f\u0440\u0438\u043c\u0435\u0440.\u0438\u0441\u043f\u044b\u0442\u0430\u043d\u0438\u0435'
> >>> fd
On 6/13/2012 1:17 AM, John Nagle wrote:
> What does "urllib2" want? Percent escapes? Punycode?
Looks like Punycode is the correct answer:
https://en.wikipedia.org/wiki/Internationalized_domain_name#ToASCII_and_ToUnicode
I haven't tried it, though.
--
CPython 3.3.0a3 | Windows NT 6.1.7601.17790
I'm trying to open
http://пример.испытание
with
urllib2.urlopen(s1)
in Python 2.7 on Windows 7. This produces a Unicode exception:
>>> s1
u'http://\u043f\u0440\u0438\u043c\u0435\u0440.\u0438\u0441\u043f\u044b\u0442\u0430\u043d\u0438\u0435'
>>> fd = urllib2.u
On Sep 9, 6:02 pm, Steven D'Aprano wrote:
> matt wrote:
> > When I try to look at "resp_body" I get this error:
>
> > IOError: [Errno 35] Resource temporarily unavailable
>
> > I posted to the same URI using curl and it worked fine, so I don't
> > think it has to do with the server.
>
> Are your P
matt wrote:
> When I try to look at "resp_body" I get this error:
>
> IOError: [Errno 35] Resource temporarily unavailable
>
> I posted to the same URI using curl and it worked fine, so I don't
> think it has to do with the server.
Are your Python code and curl both using the same proxy? It may
I'm using urllib2's urlopen function to post to a service which should
return a rather lengthy JSON object as the body of its response.
Here's the code:
{{{
ctype, body = encode_multipart(fields, files)
url = 'http://someservice:8080/path/to/resource'
headers = {'
Hi,
I am new to this list; I don't really know if I should post my request here.
Anyway.
The following code is raising httplib.BadStatusLine on urllib2.urlopen(url)
url =
'https://stat.netaffiliation.com/requete.php?login=xxx&mdp=yyy&debut=2011-05-01&fin=2011-05-12'
On Mon, Feb 28, 2011 at 9:44 AM, Terry Reedy wrote:
> On 2/28/2011 10:21 AM, Grant Edwards wrote:
>> As somebody else has already said, if the site provides an API that
>> they want you to use you should do so rather than hammering their web
>> server with a screen-scraper.
>
> Is there any generi
On 2/28/2011 10:21 AM, Grant Edwards wrote:
As somebody else has already said, if the site provides an API that
they want you to use you should do so rather than hammering their web
server with a screen-scraper.
Is there any generic method for finding out 'if the site provides an
API" and spe
On 2011-02-28, Chris Rebert wrote:
> On Sun, Feb 27, 2011 at 9:38 PM, monkeys paw wrote:
>> I have a working urlopen routine which opens
>> a url, parses it for tags and prints out
>> the links in the page. On some sites, wikipedia for
>> instance, I get a
>
On Sun, 27 Feb 2011 22:19:18 -0800, Chris Rebert wrote:
> On Sun, Feb 27, 2011 at 9:38 PM, monkeys paw
> wrote:
>> I have a working urlopen routine which opens a url, parses it for
>> tags and prints out the links in the page. On some sites, wikipedia for
>> instance,
On Sun, Feb 27, 2011 at 9:38 PM, monkeys paw wrote:
> I have a working urlopen routine which opens
> a url, parses it for tags and prints out
> the links in the page. On some sites, wikipedia for
> instance, I get a
>
> HTTP error 403, forbidden.
>
> What is the differen
I have a working urlopen routine which opens
a url, parses it for tags and prints out
the links in the page. On some sites, wikipedia for
instance, I get a
HTTP error 403, forbidden.
What is the difference in accessing the site through a web browser
and opening/reading the URL with python
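A minimal sketch of the usual answer: Wikipedia rejects urllib2's default User-Agent ("Python-urllib/2.x"), so sending a browser-like header avoids the 403:

import urllib2

# the 403 comes from the default User-Agent, not from the page itself
req = urllib2.Request('http://en.wikipedia.org/wiki/Python_(programming_language)',
                      headers={'User-Agent': 'Mozilla/5.0'})
page = urllib2.urlopen(req).read()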
You are right, Thanks.
On Thu, Jan 6, 2011 at 12:55 PM, Ian Kelly wrote:
> On Thu, Jan 6, 2011 at 10:26 AM, Ariel wrote:
> > Hi everybody:
> >
> > I get an error when I used urllib2.urlopen() to open a remote file in a
> ftp
> > server, My code is the fol
On Thu, Jan 6, 2011 at 10:26 AM, Ariel wrote:
> Hi everybody:
>
> I get an error when I used urllib2.urlopen() to open a remote file in a ftp
> server, My code is the following:
>
>>>> file = 'ftp:/192.168.250.14:2180/RTVE/VIDEOS/Thisisit.wmv'
Looks to me l
Hi everybody:
I get an error when I use urllib2.urlopen() to open a remote file on an ftp
server. My code is the following:
>>> file = 'ftp:/192.168.250.14:2180/RTVE/VIDEOS/Thisisit.wmv'
>>> mydata = urllib2.urlopen(file)
Traceback (most recent call last):
File "
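The reply is cut off, but the visible problem is the scheme separator: 'ftp:/' has only one slash. A sketch of the corrected call:

import urllib2

url = 'ftp://192.168.250.14:2180/RTVE/VIDEOS/Thisisit.wmv'   # note ftp://
mydata = urllib2.urlopen(url)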
On Tuesday 09 November 2010, 03:10:24 Lawrence D'Oliveiro wrote:
> In message <4cd7987e$0$1674$742ec...@news.sonic.net>, John Nagle
wrote:
> >It's the New York Times' paywall. They're trying to set a
> > cookie, and will redirect the URL until you store and return the
> > cookie.
>
> And if t
In message <4cd7987e$0$1674$742ec...@news.sonic.net>, John Nagle wrote:
>It's the New York Times' paywall. They're trying to set a cookie,
> and will redirect the URL until you store and return the cookie.
And if they find out you’re accessing them from a script, they’ll probably
try to find
On 11/7/2010 5:51 PM, D'Arcy J.M. Cain wrote:
On Sun, 7 Nov 2010 19:30:23 -0600
Wenhuan Yu wrote:
I tried to open a link with urlopen:
import urllib2
alink = "
http://feeds.nytimes.com/click.phdo?i=ff074d9e3895247a31e8e5efa5253183";
f = urllib2.urlopen(alink)
print f.read
the link in browser. Any way to solve this? Thanks.
>
> I checked with my tools and was told that it redirects more than five
> times. Maybe it's not infinite but too many for urlopen.
The default value of urllib2.HTTPRedirectHandler.max_redirections is 10.
Setting it to 11 allow
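A sketch of that workaround (the message is cut off at "allow"): raise the class-level limit before opening the URL:

import urllib2

# default max_redirections is 10; per the reply above, 11 is enough here
urllib2.HTTPRedirectHandler.max_redirections = 11
f = urllib2.urlopen('http://feeds.nytimes.com/click.phdo?i=ff074d9e3895247a31e8e5efa5253183')
print f.read()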
On Sun, 7 Nov 2010 19:30:23 -0600
Wenhuan Yu wrote:
> I tried to open a link with urlopen:
>
> import urllib2
> alink = "
> http://feeds.nytimes.com/click.phdo?i=ff074d9e3895247a31e8e5efa5253183";
> f = urllib2.urlopen(alink)
> print f.read()
>
> and g
I tried to open a link with urlopen:
import urllib2
alink = "
http://feeds.nytimes.com/click.phdo?i=ff074d9e3895247a31e8e5efa5253183";
f = urllib2.urlopen(alink)
print f.read()
and got the followinig error:
urllib2.HTTPError: HTTP Error 301: The HTTP server returned a redirect error
that would lead to an infinite loop.
In article ,
J. Cliff Dyer wrote:
>On Thu, 2010-04-15 at 11:25 -0700, koranthala wrote:
>>
>>Suppose I am doing the following:
>> req = urllib2.urlopen('http://www.python.org')
>> data = req.read()
>>
>>When is the actual data received? is
handler = urllib2.urlopen(req) is taking way too much time to retrieve
the URL. The same code using sockets in PHP doesn't delay this long.
I had 'Authorization':'Basic ' + base64.b64encode("username:password")
in my header though.
[ I didn't use HTTPPasswordMg
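A sketch of the approach described above: sending the Basic credentials preemptively skips the 401 challenge round trip that HTTPPasswordMgr-based handlers incur. The URL and credentials are placeholders:

import base64
import urllib2

url = 'http://example.com/service'   # placeholder
headers = {'Authorization': 'Basic ' + base64.b64encode('username:password')}
req = urllib2.Request(url, headers=headers)
handler = urllib2.urlopen(req)       # no extra 401/retry round trip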
On Thu, 2010-04-15 at 11:25 -0700, koranthala wrote:
> Hi,
>Suppose I am doing the following:
> req = urllib2.urlopen('http://www.python.org')
> data = req.read()
>
>When is the actual data received? is it done by the first line? or
> is it done only
Hi,
Suppose I am doing the following:
req = urllib2.urlopen('http://www.python.org')
data = req.read()
When is the actual data received? Is it done by the first line? Or
is it done only when req.read() is used?
My understanding is that when urlopen is done itself, we would hav
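A sketch illustrating the answer the thread converges on: urlopen() returns once the status line and headers have arrived; the body is only pulled off the socket by read():

import urllib2

req = urllib2.urlopen('http://www.python.org')
print req.info()    # headers are already available right after urlopen()
data = req.read()   # the response body is transferred here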
En Sat, 24 Oct 2009 20:10:21 -0300, deja user
escribió:
I want to use urlopen() to open either a http://... file or a local
file File:C:/... I don't have problems opening and reading the file
either way. But when I run the script on a server (ArcGIS server),
the request won't c
I want to use urlopen() to open either a http://... file or a local
file File:C:/... I don't have problems opening and reading the file
either way. But when I run the script on a server (ArcGIS server),
the request won't complete if it was trying to open a local file.
Even though I
On Tue, 18 Aug 2009 13:05:03 +, Sleepy Cabbage wrote:
> Thanks for the time you've spent anyway Peter. I have superkaramba
> installed and the rest of the script is running fine, it's only when I
> put the urlopen part in that it comes back with errors. The quotes ar
Thanks for the time you've spent anyway Peter. I have superkaramba
installed and the rest of the script is running fine, it's only when I
put the urlopen part in that it comes back with errors. The quotes are
just to make it readable on here as my first attempt at posting muted
Sleepy Cabbage wrote:
> This is the script up to where the error seems to fall:
>
> "#!/usr/bin/env superkaramba"
> "# -*- coding: iso-8859-1 -*-"
>
> "import karamba"
> "import subprocess"
> "from subprocess import Popen
added to my playlist.
>>
>> If I open a python console and add the following:
>>
>> ">>>import urllib2"
>> ">>>from urllib2 import urlopen"
>>
>> ">>>nowplaying = str.split(urlopen('http://www.heartea
:
>
> ">>>import urllib2"
> ">>>from urllib2 import urlopen"
>
> ">>>nowplaying = str.split(urlopen('http://www.hearteastmids.co.uk//
> jsfiles/NowPlayingDisplay.aspx?f=http%3A%2F%2Frope.ccap.fimc.net%2Ffeeds%
> 2Fnowpl
I'm scripting a superkaramba theme using python and have integrated output
from amarok. I would also like to show the artist and song title from
a radio stream I've added to my playlist.
If I open a python console and add the following:
">>>import urllib2"
I'm scripting a superkaramba theme using python and have integrated output
from amarok. I would also like to show the artist and song title from
a radio stream I've added to my playlist.
If I open a python console and add the following:
>>>import urllib2
>>>
On 10 Aug, 18:11, "Diez B. Roggisch" wrote:
> dorzey wrote:
> > "geturl - this returns the real URL of the page fetched. This is
> > useful because urlopen (or the opener object used) may have followed a
> > redirect. The URL of the page fetched may not be the
dorzey wrote:
> "geturl - this returns the real URL of the page fetched. This is
> useful because urlopen (or the opener object used) may have followed a
> redirect. The URL of the page fetched may not be the same as the URL
> requested." from
> http://www.voidspace.org.u
e having
> >j> a semicolon in the url , while fetching the page using
> >j> urllib2.urlopen, all such href's containing 'semicolons' are
> >j> truncated.
> >j> For example the
> >href http://travel.yahoo.com/p-travelguide-6901959-pune_restaur
>>>>> jitu (j) wrote:
>j> Hi,
>j> A html page contains 'anchor' elements with 'href' attribute having
>j> a semicolon in the url , while fetching the page using
>j> urllib2.urlopen, all such href's containing &
"geturl - this returns the real URL of the page fetched. This is
useful because urlopen (or the opener object used) may have followed a
redirect. The URL of the page fetched may not be the same as the URL
requested." from
http://www.voidspace.org.uk/python/articles/urllib2.shtml#info-
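A minimal sketch of geturl() as the quoted article describes it (any URL that redirects will do):

import urllib2

resp = urllib2.urlopen('http://www.voidspace.org.uk/python/articles/urllib2.shtml')
print resp.geturl()   # the final URL, after any redirects were followed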
On Aug 10, 4:39 pm, jitu wrote:
> Hi,
>
> An HTML page contains 'anchor' elements with an 'href' attribute having
> a semicolon in the URL; when fetching the page using
> urllib2.urlopen, all such href's containing 'semicolons'
Hi,
An HTML page contains 'anchor' elements with an 'href' attribute having
a semicolon in the URL; when fetching the page using
urllib2.urlopen, all such hrefs containing 'semicolons' are
truncated.
For example the href
http://travel.yahoo.com/p-travelgu
> Dave Angel (DA) wrote:
>DA> Piet van Oostrum wrote:
>>>
>DA> But the raw page didn't have any javascript. So what about that original
>DA> raw page triggered additional stuff to be loaded?
>DA> Is it "user agent", as someone else brought out? And is there somewhere I
Piet van Oostrum wrote:
DA> But the raw page didn't have any javascript. So what about that original
DA> raw page triggered additional stuff to be loaded?
DA> Is it "user agent", as someone else brought out? And is there somewhere I
DA> can read more about that aspect of thing
> Dave Angel (DA) wrote:
>DA> Piet van Oostrum wrote:
>>>
>DA> If Mozilla had seen a page with this line in an appropriate place, it'd
>DA> immediately begin loading the other page, at "someotherurl" But there's no
>DA> such line.
>>>
>>>
>DA> Next, I looked for javascript. The Moz
On Fri, Aug 7, 2009 at 3:47 AM, Dave Angel wrote:
>
>
> Piet van Oostrum wrote:
>>
>>
>>>
>>> DA> All I can guess is that it has something to do with "browser type" or
>>> DA> cookies. And that would make lots of sense if this was a cgi page.
>>> But
>>> DA> the URL doesn't look like that, as it
Piet van Oostrum wrote:
DA> If Mozilla had seen a page with this line in an appropriate place, it'd
DA> immediately begin loading the other page, at "someotherurl" But there's no
DA> such line.
DA> Next, I looked for javascript. The Mozilla page contains lots of
DA> javascript, b