Emeka writes:
> Thanks, it worked when parsed with json.loads. However, it needed this
> decode('utf-8'):
>
> data = json.loads(respData.decode('utf-8'))
So it does. The response data is bytes.
There's also a way to wrap a decoding reader between the response object
and the JSON parser (json.load instead of json.loads).
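The "decoding reader" idea can be sketched like this: wrap any binary file-like object (the HTTP response, here stood in for by an io.BytesIO with an invented payload) in io.TextIOWrapper, so json.load reads text directly and no manual .decode() call is needed.

```python
import io
import json

# Stand-in for the binary HTTP response; the payload is invented
# to match the shape of the data discussed in the thread.
raw = io.BytesIO(b'{"name": "Coke - Yala Market Branch"}')

# Wrap the bytes stream in a decoding text reader...
reader = io.TextIOWrapper(raw, encoding='utf-8')

# ...so json.load can consume it directly, no .decode() required.
data = json.load(reader)
print(data["name"])
```

With a real urllib response you would pass `resp` where `raw` is used above; io.TextIOWrapper accepts any readable binary stream.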
Jussi,
Thanks, it worked when parsed with json.loads. However, it needed this
decode('utf-8'):
data = json.loads(respData.decode('utf-8'))
On Thu, Apr 7, 2016 at 6:01 AM, Jussi Piitulainen <
jussi.piitulai...@helsinki.fi> wrote:
> Emeka writes:
>
> > Hello All,
> >
> > import urllib.request
> > import re
Emeka writes:
> Hello All,
>
> import urllib.request
> import re
>
> url = 'https://www.everyday.com/'
>
>
>
> req = urllib.request.Request(url)
> resp = urllib.request.urlopen(req)
> respData = resp.read()
>
>
> paragraphs = re.findall(r'\[(.*?)\]',str(respData))
> for eachP in paragraphs:
>     print("".join(eachP.split(',')[1:-2]))
Emeka writes:
> Hello All,
>
> import urllib.request
> import re
>
> url = 'https://www.everyday.com/'
This URL doesn't resolve for me, so I can't reproduce the behaviour.
> I got the below:
> "Coke - Yala Market Branch""NO. 113 IKU BAKR WAY YALA"""
> But what I need is
>
> 'Coke - Yala Market
Hello All,
import urllib.request
import re

url = 'https://www.everyday.com/'

req = urllib.request.Request(url)
resp = urllib.request.urlopen(req)
respData = resp.read()

paragraphs = re.findall(r'\[(.*?)\]', str(respData))
for eachP in paragraphs:
    print("".join(eachP.split(',')[1:-2]))
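As the rest of the thread works out, if the endpoint actually returns JSON, decoding the bytes and parsing them is more robust than running a regex over str(respData). A minimal sketch, with an invented payload standing in for the real response:

```python
import json

# Invented stand-in for respData: a JSON array of records, where
# index 1 holds the name field the regex-and-slice code was after.
respData = (b'[["id1", "Coke - Yala Market Branch",'
            b' "NO. 113 IKU BAKR WAY YALA", "x", "y"]]')

# Decode the bytes, then let the JSON parser do the splitting.
records = json.loads(respData.decode('utf-8'))
for record in records:
    print(record[1])
```

The actual record layout of www.everyday.com is not shown in the thread, so the indices here are illustrative only.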
> def category_iterator(source):
>     source = iter(source)
>     try:
>         while True:
>             item = next(source)
This gave me a lot of inspiration. After a couple of days of banging
my head against the wall, I finally figured out a code that could
attach headers, titles, numbers,
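The truncated helper above can be fleshed out as a generator. This is a hedged reconstruction, not the poster's actual code: it assumes an all-uppercase alphabetic item marks a new category (the `s == s.upper()` test discussed elsewhere in the thread) and that everything else belongs to the current category.

```python
def category_iterator(source):
    # Hedged reconstruction of the truncated helper: walk a flat list
    # and yield (category, items) pairs. An all-uppercase alphabetic
    # string is assumed to be a category marker.
    source = iter(source)
    category, items = None, []
    try:
        while True:
            item = next(source)
            if item == item.upper() and item.isalpha():
                if category is not None:
                    yield category, items
                category, items = item, []
            else:
                items.append(item)
    except StopIteration:
        if category is not None:
            yield category, items

# Sample data invented to match the shape shown in the thread.
data = ['RULES', 'title', 'subtitle', 'pdf',
        'NOTICES', 'title1', 'subtitle1', 'pdf1']
print(list(category_iterator(data)))
```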
On Feb 29, 7:56 pm, I V <[EMAIL PROTECTED]> wrote:
> On Fri, 29 Feb 2008 08:18:54 -0800, baku wrote:
> > return s == s.upper()
>
> A couple of people in this thread have used this to test for an upper
> case string. Is there a reason to prefer it to s.isupper() ?
Premature decrepitude, officer
I V wrote:
> On Fri, 29 Feb 2008 08:18:54 -0800, baku wrote:
>> return s == s.upper()
>
> A couple of people in this thread have used this to test for an upper
> case string. Is there a reason to prefer it to s.isupper() ?
In my case you can put it down to ignorance or forgetfulness, depending
I V wrote:
> On Fri, 29 Feb 2008 08:18:54 -0800, baku wrote:
>> return s == s.upper()
>
> A couple of people in this thread have used this to test for an upper
> case string. Is there a reason to prefer it to s.isupper() ?
Note that these tests are not equivalent:
>>> s = "123"
>>> s.isupper()
False
>>> s == s.upper()
True
I V wrote:
> On Fri, 29 Feb 2008 08:18:54 -0800, baku wrote:
>> return s == s.upper()
>
> A couple of people in this thread have used this to test for an upper
> case string. Is there a reason to prefer it to s.isupper() ?
For my part? forgetfulness brought on by underuse of .isupper()
-tk
On Fri, 29 Feb 2008 08:18:54 -0800, baku wrote:
> return s == s.upper()
A couple of people in this thread have used this to test for an upper
case string. Is there a reason to prefer it to s.isupper() ?
--
http://mail.python.org/mailman/listinfo/python-list
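The two idioms can be put side by side in a few lines. They agree on alphabetic strings but disagree on strings with no cased characters (digits, punctuation, the empty string), where str.isupper() returns False while `s == s.upper()` is trivially True:

```python
# Compare the two upper-case tests on a few edge cases.
for s in ["ABC", "abc", "123", ""]:
    print(repr(s), s.isupper(), s == s.upper())
```

So the right choice depends on whether "no letters at all" should count as upper case for the caller's purposes.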
I tried to make a simple abstraction of my problem, but it's probably
better to get down to it. As for the funkiness of the data: I'm
relatively new to Python, so either I'm not processing it well or it's
an artefact of BeautifulSoup.
Basically, I'm using BeautifulSoup to strip the tables from the
Feder
On Feb 29, 4:09 pm, [EMAIL PROTECTED] wrote:
> Hi all,
>
> I have some data with some categories, titles, subtitles, and a link
> to their pdf and I need to join the title and the subtitle for every
> file and divide them into their separate groups.
>
> So the data comes in like this:
>
> data = ['
[EMAIL PROTECTED] wrote:
> Hi all,
>
> I have some data with some categories, titles, subtitles, and a link
> to their pdf and I need to join the title and the subtitle for every
> file and divide them into their separate groups.
>
> So the data comes in like this:
>
> data = ['RULES', 'title','sub
[EMAIL PROTECTED] wrote:
> Hi all,
>
> I have some data with some categories, titles, subtitles, and a link
> to their pdf and I need to join the title and the subtitle for every
> file and divide them into their separate groups.
>
> So the data comes in like this:
>
> data = ['RULES', 'title','
> I have some data with some categories, titles, subtitles, and a link
> to their pdf and I need to join the title and the subtitle for every
> file and divide them into their separate groups.
>
> So the data comes in like this:
>
> data = ['RULES', 'title','subtitle','pdf',
> 'title1','subtitle1
[EMAIL PROTECTED] wrote:
> Hi all,
>
> I have some data with some categories, titles, subtitles, and a link
> to their pdf and I need to join the title and the subtitle for every
> file and divide them into their separate groups.
>
> So the data comes in like this:
>
> data = ['RULES', 'title
Hi all,
I have some data with some categories, titles, subtitles, and a link
to their pdf and I need to join the title and the subtitle for every
file and divide them into their separate groups.
So the data comes in like this:
data = ['RULES', 'title','subtitle','pdf',
'title1','subtitle1','pdf1
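One way the joining-and-grouping could look, as a hedged sketch: assume (as the thread's `s == s.upper()` test suggests) that an all-uppercase alphabetic item such as 'RULES' starts a new category, and that the entries under it arrive in (title, subtitle, pdf) triples. The sample data is invented to match the shape shown in the post.

```python
# Sample data in the flat shape shown in the post (invented values).
data = ['RULES', 'title', 'subtitle', 'pdf',
        'title1', 'subtitle1', 'pdf1',
        'NOTICES', 'title2', 'subtitle2', 'pdf2']

groups = {}
current = None
buffer = []
for item in data:
    # Category marker, per the s == s.upper() test from the thread.
    if item == item.upper() and item.isalpha():
        current = item
        groups[current] = []
        buffer = []
    else:
        buffer.append(item)
        if len(buffer) == 3:  # one complete (title, subtitle, pdf) triple
            title, subtitle, pdf = buffer
            groups[current].append((title + ' ' + subtitle, pdf))
            buffer = []

print(groups)
```

If the real feed doesn't come in clean triples, the buffering step would need adjusting; the triple assumption is only inferred from the sample shown.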