Re: Joining Strings

2016-04-07 Thread Jussi Piitulainen
Emeka writes:

> Thanks it worked when parsed with json.load. However, it needed this
> decode('utf'):
>
> data = json.loads(respData.decode('utf-8'))

So it does. The response data is bytes. There's also a way to wrap a
decoding reader between the response object and the JSON parser (json.load in
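Jussi's suggested alternative, wrapping a decoding reader between the response object and the JSON parser, can be sketched as follows. Since the URL in the thread isn't reachable, a BytesIO stands in for the HTTP response here; any binary file-like object (such as the object urlopen returns) works the same way.

```python
import io
import json

# A BytesIO stands in for the urlopen response object in this sketch;
# the JSON payload is invented for illustration.
resp = io.BytesIO(b'{"name": "Coke - Yala Market Branch"}')

# Wrap a decoding reader around the binary stream so json.load can
# consume text directly, instead of decoding the bytes by hand first.
reader = io.TextIOWrapper(resp, encoding='utf-8')
data = json.load(reader)
print(data["name"])
```

This avoids reading the whole body into memory before decoding, which is the practical difference from the respData.decode('utf-8') approach.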

Re: Joining Strings

2016-04-07 Thread Emeka
Jussi,

Thanks it worked when parsed with json.load. However, it needed this
decode('utf'):

data = json.loads(respData.decode('utf-8'))

On Thu, Apr 7, 2016 at 6:01 AM, Jussi Piitulainen
<jussi.piitulai...@helsinki.fi> wrote:

> Emeka writes:
>
> > Hello All,
> >
> > import urllib.request
> > imp

Re: Joining Strings

2016-04-06 Thread Jussi Piitulainen
Emeka writes:

> Hello All,
>
> import urllib.request
> import re
>
> url = 'https://www.everyday.com/
>
> req = urllib.request.Request(url)
> resp = urllib.request.urlopen(req)
> respData = resp.read()
>
> paragraphs = re.findall(r'\[(.*?)\]',str(respData))
> for eachP in paragraphs:
> p

Re: Joining Strings

2016-04-06 Thread Ben Finney
Emeka writes:

> Hello All,
>
> import urllib.request
> import re
>
> url = 'https://www.everyday.com/

This URL doesn't resolve for me, so I can't reproduce the behaviour.

> I got the below:
> "Coke - Yala Market Branch""NO. 113 IKU BAKR WAY YALA"""
>
> But what I need is
>
> 'Coke - Yala Market

Joining Strings

2016-04-06 Thread Emeka
Hello All,

import urllib.request
import re

url = 'https://www.everyday.com/

req = urllib.request.Request(url)
resp = urllib.request.urlopen(req)
respData = resp.read()

paragraphs = re.findall(r'\[(.*?)\]',str(respData))
for eachP in paragraphs:
    print("".join(eachP.split(',')[1:-2]))
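The split/slice/join step in the loop above can be illustrated on a single standalone record. The exact field layout of each bracketed record is an assumption reconstructed from the sample output quoted elsewhere in the thread; the id, lat, and lng fields are invented placeholders.

```python
# One record, as re.findall(r'\[(.*?)\]', ...) would return it.
# The field layout (id, name, address, lat, lng) is assumed from the
# sample output quoted in the thread; the values are placeholders.
eachP = '"id","Coke - Yala Market Branch","NO. 113 IKU BAKR WAY YALA","lat","lng"'

# Drop the first field and the last two, then join what remains.
joined = "".join(eachP.split(',')[1:-2])
print(joined)
```

Because join is called with an empty separator, the surviving quoted fields run together with no space between them, which is exactly the "Coke - Yala Market Branch""NO. 113 IKU BAKR WAY YALA" output the original poster reported.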

Re: joining strings question

2008-03-01 Thread patrick . waldo
> def category_iterator(source):
>     source = iter(source)
>     try:
>         while True:
>             item = source.next()

This gave me a lot of inspiration. After a couple of days of banging my
head against the wall, I finally figured out a code that could attach
headers, titles, numbers,
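The iterator approach quoted above is only a fragment; one plausible completion is sketched below. It assumes, as elsewhere in the thread, that an all-uppercase string such as 'RULES' marks a category header, and it uses Python 3's next(source) in place of the Python 2 source.next() from the quote.

```python
def category_iterator(source):
    """Yield (category, items) pairs from a flat list.

    A sketch completing the fragment quoted above, under the assumption
    that an all-uppercase string starts a new category. Updated for
    Python 3: next(source) replaces the quoted source.next().
    """
    source = iter(source)
    category, items = None, []
    try:
        while True:
            item = next(source)
            if item.isupper():          # header: start a new group
                if category is not None:
                    yield category, items
                category, items = item, []
            else:
                items.append(item)
    except StopIteration:
        if category is not None:
            yield category, items

data = ['RULES', 'title', 'subtitle', 'pdf',
        'NOTICES', 'title1', 'subtitle1', 'pdf1']
for cat, items in category_iterator(data):
    print(cat, items)
```

Note that next() raising StopIteration is what ends the while loop; the except clause flushes the final group before the generator finishes.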

Re: joining strings question

2008-02-29 Thread Gerard Flanagan
On Feb 29, 7:56 pm, I V <[EMAIL PROTECTED]> wrote:

> On Fri, 29 Feb 2008 08:18:54 -0800, baku wrote:
> > return s == s.upper()
>
> A couple of people in this thread have used this to test for an upper
> case string. Is there a reason to prefer it to s.isupper() ?

Premature decrepitude, officer

Re: joining strings question

2008-02-29 Thread Steve Holden
I V wrote:

> On Fri, 29 Feb 2008 08:18:54 -0800, baku wrote:
> > return s == s.upper()
>
> A couple of people in this thread have used this to test for an upper
> case string. Is there a reason to prefer it to s.isupper() ?

In my case you can put it down to ignorance or forgetfulness, dependi

Re: joining strings question

2008-02-29 Thread Peter Otten
I V wrote:

> On Fri, 29 Feb 2008 08:18:54 -0800, baku wrote:
> > return s == s.upper()
>
> A couple of people in this thread have used this to test for an upper
> case string. Is there a reason to prefer it to s.isupper() ?

Note that these tests are not equivalent:

>>> s = "123"
>>> s.isuppe
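Peter's point, cut off above, is that the two tests disagree on strings with no cased characters: upper() leaves such a string unchanged, while isupper() requires at least one cased character to return True.

```python
s = "123"
print(s == s.upper())  # True: upper() leaves digits unchanged
print(s.isupper())     # False: isupper() needs at least one cased character
```

So for category headers like 'RULES' both tests work, but for purely numeric or punctuation-only strings they give opposite answers, which matters if such strings can appear in the data.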

Re: joining strings question

2008-02-29 Thread Tim Chase
I V wrote:

> On Fri, 29 Feb 2008 08:18:54 -0800, baku wrote:
> > return s == s.upper()
>
> A couple of people in this thread have used this to test for an upper
> case string. Is there a reason to prefer it to s.isupper() ?

For my part? forgetfulness brought on by underuse of .isupper()

-tk

Re: joining strings question

2008-02-29 Thread I V
On Fri, 29 Feb 2008 08:18:54 -0800, baku wrote:

> return s == s.upper()

A couple of people in this thread have used this to test for an upper
case string. Is there a reason to prefer it to s.isupper() ?
--
http://mail.python.org/mailman/listinfo/python-list

Re: joining strings question

2008-02-29 Thread patrick . waldo
I tried to make a simple abstraction of my problem, but it's probably better to get down to it. For the funkiness of the data, I'm relatively new to Python and I'm either not processing it well or it's because of BeautifulSoup. Basically, I'm using BeautifulSoup to strip the tables from the Feder

Re: joining strings question

2008-02-29 Thread baku
On Feb 29, 4:09 pm, [EMAIL PROTECTED] wrote:

> Hi all,
>
> I have some data with some categories, titles, subtitles, and a link
> to their pdf and I need to join the title and the subtitle for every
> file and divide them into their separate groups.
>
> So the data comes in like this:
>
> data = ['

Re: joining strings question

2008-02-29 Thread Robert Bossy
[EMAIL PROTECTED] wrote:

> Hi all,
>
> I have some data with some categories, titles, subtitles, and a link
> to their pdf and I need to join the title and the subtitle for every
> file and divide them into their separate groups.
>
> So the data comes in like this:
>
> data = ['RULES', 'title','sub

Re: joining strings question

2008-02-29 Thread Steve Holden
[EMAIL PROTECTED] wrote:

> Hi all,
>
> I have some data with some categories, titles, subtitles, and a link
> to their pdf and I need to join the title and the subtitle for every
> file and divide them into their separate groups.
>
> So the data comes in like this:
>
> data = ['RULES', 'title','

Re: joining strings question

2008-02-29 Thread Tim Chase
> I have some data with some categories, titles, subtitles, and a link
> to their pdf and I need to join the title and the subtitle for every
> file and divide them into their separate groups.
>
> So the data comes in like this:
>
> data = ['RULES', 'title','subtitle','pdf',
> 'title1','subtitle1

Re: joining strings question

2008-02-29 Thread Bruno Desthuilliers
[EMAIL PROTECTED] wrote:

> Hi all,
>
> I have some data with some categories, titles, subtitles, and a link
> to their pdf and I need to join the title and the subtitle for every
> file and divide them into their separate groups.
>
> So the data comes in like this:
>
> data = ['RULES', 'title

joining strings question

2008-02-29 Thread patrick . waldo
Hi all,

I have some data with some categories, titles, subtitles, and a link
to their pdf and I need to join the title and the subtitle for every
file and divide them into their separate groups.

So the data comes in like this:

data = ['RULES', 'title','subtitle','pdf',
'title1','subtitle1','pdf1
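The grouping asked for above can be sketched as follows. It assumes, as the replies in this thread do, that an all-uppercase entry like 'RULES' is a category header and that each file is a (title, subtitle, pdf) triple; the space separator used when joining title and subtitle is an assumption, and the sample data is abridged from the post.

```python
# Flat input: category headers in uppercase, then (title, subtitle, pdf)
# triples. Abridged from the data shown in the post; 'NOTICES' and the
# numbered entries are illustrative placeholders.
data = ['RULES', 'title', 'subtitle', 'pdf',
        'title1', 'subtitle1', 'pdf1',
        'NOTICES', 'title2', 'subtitle2', 'pdf2']

groups = {}
current = None      # the category we are filling
buffer = []         # fields collected for the current file

for item in data:
    if item.isupper():            # category header starts a new group
        current = item
        groups[current] = []
        buffer = []
    else:
        buffer.append(item)
        if len(buffer) == 3:      # title, subtitle, pdf complete
            title, subtitle, pdf = buffer
            groups[current].append((title + ' ' + subtitle, pdf))
            buffer = []

print(groups)
```

Each category maps to a list of (joined title, pdf link) pairs, so the titles and subtitles are merged while the groups stay separate, which is what the post asks for.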