On Thu, Mar 3, 2016 at 2:36 AM, Fabien wrote:
> On 03/02/2016 03:35 PM, Matt Wheeler wrote:
>>
>> I agree that six should probably handle this,
>
> Thank you Matt and Chris for your answers. Do you think I should open an
> issue on six? It sounds unlikely that I am the first one having this
> problem...
> (until this difference with urlopen I have found
On Thu, Mar 3, 2016 at 1:35 AM, Matt Wheeler wrote:
>> from six.moves.urllib.request import urlopen
>>
>> try:
>>     with urlopen('http://www.google.com') as resp:
>>         _ = resp.read()
>> except AttributeError:
>>     # python 2
>>     resp = urlopen('http://www.google.com')
>>     _ = resp.read()
On 2 March 2016 at 14:05, Fabien wrote:
> [snip]
> My question is: why does the python3 version need a "with" block while the
> python2 version doesn't? Can I skip the "with" entirely, or should I rather
> do the following:

It's not a case of "need"; using the "with" construction is an added
feature.
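The try/except dance shown earlier works, but a more compact portable option is `contextlib.closing`, which turns any object that has a `close()` method into a context manager. A minimal sketch, using a stand-in response object instead of a live network call (the `FakeResponse` class is made up for illustration):

```python
# contextlib.closing wraps any object with a .close() method so it can be
# used in a "with" block, even when (as with Python 2's urlopen result)
# the object is not itself a context manager.
from contextlib import closing

class FakeResponse(object):
    """Stands in for the object urlopen() returns: it has read() and
    close(), but no __enter__/__exit__ of its own."""
    def __init__(self, payload):
        self._payload = payload
        self.closed = False
    def read(self):
        return self._payload
    def close(self):
        self.closed = True

resp = FakeResponse(b"<html>...</html>")
with closing(resp) as r:   # works the same on Python 2 and 3
    data = r.read()

print(data)         # b'<html>...</html>'
print(resp.closed)  # True -- closing() called close() on exit
```

The same `closing(urlopen(...))` pattern avoids the AttributeError branch entirely, at the cost of not checking that the response supports `with` natively.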
Nick Cash wrote:
> In python2, this works if "something" is a regular file on the system as
> well as a remote URL. The 2to3 script converts this to
> urllib.request.urlopen. But it does not work anymore if "something"
> is just a file name.
>
> My aim is to let the user specify a "file" on the command line and have
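Python 3's `urllib.request.urlopen` no longer falls back to plain file names the way Python 2's `urllib.urlopen` did, so one way to keep the "file name or URL" behaviour is a small dispatcher on the URL scheme. A sketch assuming Python 3; the helper name `open_anything` is made up for illustration:

```python
# Accept either a local path or a URL: dispatch on the parsed scheme and
# fall back to plain open() for bare file names.
import os
import tempfile
from urllib.parse import urlparse
from urllib.request import urlopen

def open_anything(something):
    """Return a binary file-like object for a URL or a local file name."""
    scheme = urlparse(something).scheme
    if scheme in ("http", "https", "ftp", "file"):
        return urlopen(something)
    return open(something, "rb")  # plain path: no scheme, use open()

# Exercise the local-file branch (no network needed):
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
    path = tmp.name
with open_anything(path) as f:
    content = f.read()
os.remove(path)

print(content)  # b'hello'
```

Windows drive letters parse as a one-letter "scheme", which this sketch happily treats as a path; a production version might check `os.path.exists` first.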
On Mon, Feb 28, 2011 at 9:44 AM, Terry Reedy wrote:
> On 2/28/2011 10:21 AM, Grant Edwards wrote:
>> As somebody else has already said, if the site provides an API that
>> they want you to use you should do so rather than hammering their web
>> server with a screen-scraper.
>
> Is there any generic method for finding out if the site provides an
> API, and spe
On 2011-02-28, Chris Rebert wrote:
> On Sun, Feb 27, 2011 at 9:38 PM, monkeys paw wrote:
>> I have a working urlopen routine which opens
>> a url, parses it for tags and prints out
>> the links in the page. On some sites, wikipedia for
>> instance, i get a
>>
>> HTTP error 403, forbidden.
>>
>> What is the difference in accessing the site t
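A frequent cause of a 403 here is that Wikipedia rejects urllib's default User-Agent; sending an explicit one usually helps (though, as noted elsewhere in this thread, using the site's API is the polite route). A hedged sketch that only builds the request without hitting the network; the User-Agent string is just an example:

```python
# Attach an explicit User-Agent header to a urllib request. urllib stores
# header names in capitalize()d form, hence "User-agent" in the lookup.
from urllib.request import Request

req = Request(
    "https://en.wikipedia.org/wiki/Python_(programming_language)",
    headers={"User-Agent": "link-lister/0.1 (example script)"},
)

# No network call is made; we just confirm the header is attached.
print(req.get_header("User-agent"))  # link-lister/0.1 (example script)
```

Passing this `Request` object to `urlopen(req)` sends the header with the real request.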
On Tue, 18 Aug 2009 13:05:03 +0000, Sleepy Cabbage wrote:
> Thanks for the time you've spent anyway Peter. I have superkaramba
> installed and the rest of the script is running fine, it's only when I
> put the urlopen part in that it comes back with errors. The quotes are
> just to make it readable on here as my first attempt at posting muted the
> text.
Sleepy Cabbage wrote:
> This is the script up to where the error seems to fall:
>
> #!/usr/bin/env superkaramba
> # -*- coding: iso-8859-1 -*-
>
> import karamba
> import subprocess
> from subprocess import Popen, PIPE, STDOUT, call
> import urllib
> from urllib import urlopen
On Tue, 18 Aug 2009 13:21:59 +0200, Peter Otten wrote:
> Sleepy Cabbage wrote:
>
>> I'm scripting a superkaramba theme using python and have integrated
>> output from amarok. I would also like to show the artist and song
>> title from a radio stream i've added to my playlist.
>>
>> If I open a python console and add the following:
>>
>> >>> import urllib2
oops, remove the ,80 since the port is not needed. Well, in my case it
wasn't working with the port. Notice it gives me a 404, but this is with my
domain:

>>> att=urllib2.urlopen(site+payload,80).readlines()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.6/urllib2.py",

I would try:

site="http://www.bput.org/"
payloads="alert('xss')"
attack= urllib2.urlopen(site+payloads,80).readlines()

-Alex Goretoy
http://www.alexgoretoy.com
somebodywhoca...@gmail.com
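The ",80" confusion above comes from urlopen's signature: the second positional argument is the request body (POST data), not a port number; a port belongs inside the URL itself. A small illustration using Python 3's `urllib.request.Request` (urllib2's behaviour was analogous), building requests without sending them; the URLs are the ones from the thread:

```python
# urlopen(url, data, ...): the second positional argument is the POST
# body, not a port. Supplying data silently turns a GET into a POST.
# The port goes in the URL, e.g. "http://www.bput.org:80/".
from urllib.request import Request

plain = Request("http://www.bput.org:80/alert('xss')")
with_data = Request("http://www.bput.org/", data=b"80")

print(plain.get_method())      # GET  -- no data, ordinary request
print(with_data.get_method())  # POST -- data supplied, method changes
```

So `urlopen(site+payloads, 80)` tried to POST the integer 80 as a body, which also fails because the body must be bytes, and `"www.bput.org"` without a scheme is not a valid URL in the first place.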
On Sun, Jan 11, 2009 at 2:49 AM, Steve Holden wrote:
> Paul Rubin wrote:
>> asit writes:
>>> site="www.bput.org"
>>> payloads="alert('xss')"
>>> attack= urllib2.urlopen(site+payloads,80).readlines()
>>>
>>> according to my best knowledge, the above code is correct.
>>> but why it throws exception?
>>
>> The code is incorrect. Look at the string you are sending into
>> urlopen. What on e
On Sat, Jan 10, 2009 at 9:56 AM, asit wrote:
> [snip]

Because it's not correct. It's trying to load
www.b
asit wrote:
> [snip]

What exception does it throw?
--
Steve Holden        +1 571 484 6266   +1 800 494 3119
On Apr 26, 12:39 pm, "Dave Dean" <[EMAIL PROTECTED]> wrote:
> Hi all,
> I'm running into some trouble using urllib.urlopen to grab a page from our
> corporate intranet. The name of the internal site is simply http://web (no
> www or com). I can use urlopen to grab a site like http://www.google.com
On Thu, 15 Mar 2007 21:12:46 -0300, John Nagle <[EMAIL PROTECTED]> wrote:
> I was looking at the code for "urllib", and there's
> some undocumented "FTP caching" code in there that's not thread safe.
> The documentation for "urllib"
>
> Is there any good reason to keep that code in "
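For illustration of why an unlocked module-level cache is a problem: concurrent threads can both miss the cache and both insert, or interleave mid-update. A common fix is a lock around every cache access. This is a generic sketch of the pattern, not urllib's actual code; all names here are made up:

```python
# Guard a shared per-host cache with a lock so the expensive factory runs
# at most once per key even under concurrent access.
import threading

_cache = {}
_cache_lock = threading.Lock()

def get_connection(host, factory):
    """Return the cached object for host, creating it at most once."""
    with _cache_lock:
        if host not in _cache:
            _cache[host] = factory(host)
        return _cache[host]

created = []
def fake_factory(host):
    created.append(host)          # record every real "connection" made
    return "conn-to-%s" % host

threads = [
    threading.Thread(target=get_connection,
                     args=("ftp.example.com", fake_factory))
    for _ in range(10)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(created)  # ['ftp.example.com'] -- factory ran exactly once
```

Without the lock, two threads could each see the key missing and each call the factory, which for cached FTP connections means duplicated or clobbered connections.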
Paul McNett wrote:
> Tempo wrote:
>> Hello. I am getting an error and it has gotten me stuck. I think the
>> best thing I can do is post my code and the error message, and thank
>> everybody in advance for any help that you give this issue. Thank you.
>>
>> #
>> Here's the code:
>> #
>>
>> import urllib