urllib2 multithreading error

2007-01-16 Thread viscanti
Hi,

I'm using urllib2 to retrieve some data over HTTP in a multithreaded
application.
Here's a piece of code:
import urllib2

req = urllib2.Request(url, txdata, txheaders)
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', user_agent)]
request = opener.open(req)
data = request.read(1024)  # read only the first 1024 bytes

I'm trying to read only the first 1024 bytes to look at the HTTP headers
(if the content is HTML I then retrieve the entire page).
With a single thread everything works fine, but when I create multiple
threads execution halts and the program terminates just before the last
line (at the request.read() call). Obviously I tried to catch the
exception, but it doesn't help: the interpreter exits without raising
any exception or printing a message.
How can I solve this?
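One common culprit is a shared opener plus the default (infinite) socket timeout. A minimal sketch of the pattern, written for modern Python where urllib2 became urllib.request; the helper names (fetch_first_kb, fetch_many) are illustrative, not from the original post:

```python
import threading
import urllib.request  # Python 3 successor of urllib2
from urllib.error import URLError

def fetch_first_kb(url, user_agent="test-agent", timeout=10):
    """Fetch at most the first 1024 bytes of url; return None on failure."""
    try:
        # Build a fresh opener per call: openers are not documented as
        # thread-safe, so sharing one across threads is risky.
        opener = urllib.request.build_opener()
        opener.addheaders = [('User-agent', user_agent)]
        with opener.open(url, timeout=timeout) as resp:
            return resp.read(1024)
    except (URLError, OSError):
        # Network errors surface here instead of killing the thread.
        return None

def fetch_many(urls):
    """Fetch several URLs concurrently, one thread per URL."""
    results = {}
    def worker(u):
        results[u] = fetch_first_kb(u)
    threads = [threading.Thread(target=worker, args=(u,)) for u in urls]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The explicit timeout and the broad except clause ensure a hung or failed request returns None instead of silently taking the thread down.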

lv

-- 
http://mail.python.org/mailman/listinfo/python-list


XMLRPC Server

2007-02-06 Thread viscanti
Hi, I'm trying to create an XML-RPC server using Apache + Python (CGI).
It's not too difficult to configure everything, but I would like to
tune it to handle up to 2000 calls per minute without any problems.
Do Python CGIs use threading?
I need to make it very efficient, but I haven't found much information
about Python CGI optimization.
The called function will update a table in a MySQL db. I will use
triggers to export data from the table updated by the XML-RPC server to
other tables used by the backend application.

any hint?
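For what it's worth, CGI doesn't thread at all: Apache spawns one fresh interpreter process per request, which is expensive at 2000 calls/minute. As a point of comparison, the stdlib's standalone XML-RPC server can be made threaded with socketserver.ThreadingMixIn; a sketch (update_row is a placeholder for the real MySQL update):

```python
import threading
from socketserver import ThreadingMixIn
from xmlrpc.server import SimpleXMLRPCServer

class ThreadedXMLRPCServer(ThreadingMixIn, SimpleXMLRPCServer):
    """Handle each XML-RPC call in its own thread."""
    daemon_threads = True

def update_row(value):
    # Placeholder for the real MySQL table update; illustrative only.
    return "updated %s" % value

# Port 0 asks the OS for any free port; server_address reports the choice.
server = ThreadedXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(update_row)
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()
```

A client then calls it with xmlrpc.client.ServerProxy("http://127.0.0.1:%d/" % port). Running one long-lived threaded process also lets you keep a persistent MySQL connection instead of reconnecting on every CGI invocation.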

lv



unicode html

2006-07-17 Thread lorenzo . viscanti
Hi, I've found lots of material on the net about Unicode-to-HTML
conversions, but I'm still having many problems converting Unicode
characters to HTML entities. Is there an available function to solve
this issue?
As an example, I would like to do this kind of conversion:
\u00f4 (ô) => &ocirc;
for all available HTML entities.
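In modern Python the stdlib already ships the needed table: html.entities.codepoint2name maps code points to entity names (e.g. 0xF4 => "ocirc"). A sketch built on it (the function name to_html_entities is mine):

```python
from html.entities import codepoint2name

def to_html_entities(text):
    """Replace every non-ASCII character with a named HTML entity when
    one exists, falling back to a numeric character reference."""
    out = []
    for ch in text:
        cp = ord(ch)
        if cp < 128:
            out.append(ch)                       # plain ASCII passes through
        elif cp in codepoint2name:
            out.append("&%s;" % codepoint2name[cp])  # named entity
        else:
            out.append("&#%d;" % cp)             # numeric fallback
    return "".join(out)
```

For example, to_html_entities("côte") yields "c&ocirc;te". The numeric fallback covers the many code points that have no named entity.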

thanks,
lorenzo



Execution timeout

2006-07-30 Thread lorenzo . viscanti

Hi,
I'm using feedparser to parse some xml feeds.
As others reported
(http://sourceforge.net/tracker/index.php?func=detail&aid=1519461&group_id=112328&atid=661937
) the library halts while parsing some feeds.

To work around this I was thinking of writing some kind of wrapper for
feedparser that enforces a timeout: launch the parse method, wait a few
seconds, and if control does not return, mark the feed as bad.
I haven't got much experience with Python, so I'm not able to code it;
any hints?

Is there a better method to avoid this kind of problem?
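Python offers no safe way to kill a stuck thread, but you can run the parse in a worker and simply give up waiting after a deadline. A sketch using concurrent.futures (the wrapper name parse_with_timeout is mine; the caveat is that an abandoned worker keeps running in the background):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def parse_with_timeout(parse, url, seconds=5.0):
    """Run parse(url) in a worker thread.

    Returns (result, True) on success, or (None, False) if the call did
    not finish within `seconds` -- i.e. the feed should be marked bad."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(parse, url)
    try:
        return future.result(timeout=seconds), True
    except FutureTimeout:
        return None, False
    finally:
        # wait=False so a stuck parse doesn't block us here; the
        # abandoned worker thread is left running, not killed.
        pool.shutdown(wait=False)
```

You would call it as parse_with_timeout(feedparser.parse, feed_url). Since abandoned workers accumulate if many feeds hang, running the parse in a subprocess (which can be terminated outright) is the more robust variant of the same idea.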

Thanks,
Lorenzo
