Re: Async IO Server with Blocking DB

2012-04-04 Thread Jean-Paul Calderone
On Apr 3, 6:13 pm, looking for  wrote:
> Hi
>
> We are thinking about building a webservice server and considering
> python event-driven servers i.e. Gevent/Tornado/ Twisted or some
> combination thereof etc.
>
> We are having doubts about the db io part. Even with connection
> pooling and cache, there is a strong chance that server will block on
> db. Blocking for even few ms is bad.
>
> can someone suggest some solutions or is async-io is not at the prime-
> time yet.
>
> Thanks

Twisted provides support for any DB-API module via twisted.enterprise.adbapi,
which wraps the module in an asynchronous API (implemented using a thread
pool).

Since the calls all happen in separate threads, it doesn't matter that
they block.
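
For example, a minimal sketch (untested; the database file, table, and query
here are just made up for illustration):

from twisted.enterprise import adbapi
from twisted.internet import reactor

# Wrap a DB-API 2.0 module; each query runs on a thread pool connection.
dbpool = adbapi.ConnectionPool("sqlite3", "example.db")

def printResult(rows):
    for row in rows:
        print row
    reactor.stop()

d = dbpool.runQuery("SELECT * FROM users")
d.addCallback(printResult)
reactor.run()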

If you're not talking about a SQL database or a DB-API module, maybe be more
specific about the kind of database I/O you have in mind.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How good is security via hashing

2011-06-07 Thread Jean-Paul Calderone
On Jun 7, 6:18 am, Robin Becker  wrote:
> A python web process is producing files that are given randomized names of 
> the form
>
> hh-MMDDhhmmss-.pdf
>
> where rrr.. is a 128bit random number (encoded as base62). The intent of the
> random part is to prevent recipients of one file from being able to guess the
> names of others.
>
> The process was originally a cgi script which meant each random number was
> produced thusly
>
> pid is process id, dur is 4 bytes from /dev/urandom.
>
> random.seed(long(time.time()*someprimeint)|(pid<<64)|(dur<<32))
> rrr = random.getrandbits(128)
>
> is this algorithm safe? Is it safe if the process is switched to fastcgi and 
> the
> initialization is only carried out once and then say 50 rrr values are 
> generated.

How much randomness do you actually have in this scheme?  The PID is probably
difficult for an attacker to know, but it's allocated roughly monotonically
with a known wrap-around value.  The time is probably roughly known, so it
also contributes less than its full bits to the randomness.  Only dur is
really unpredictable.  So you have something somewhat above 4 bytes of
randomness in your seed - perhaps 8 or 10.  That's much less than even the
fairly small 16 bytes of "randomness" you expose in the filename.

The random module is entirely deterministic, so once the seed is known the
value you produce is known too.

Is 10 bytes enough to thwart your attackers?  Hard to say, what does an
attack look like?

If you want the full 16 bytes of unpredictability, why don't you just read 16
bytes from /dev/urandom and forget about all the other stuff?
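
Something along these lines (a sketch - the hex encoding just stands in for
whatever encoding you prefer, such as base62):

import os

# 16 bytes straight from the kernel CSPRNG; no seeding step to get wrong.
rrr = os.urandom(16).encode('hex')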

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How good is security via hashing

2011-06-07 Thread Jean-Paul Calderone
On Jun 7, 7:35 am, Robin Becker  wrote:
> On 07/06/2011 11:26, Nitin Pawar wrote:> Have you tried using UUID module?
>
> > Its pretty handy and comes with base64 encoding function which gives
> > extremely high quality randon strings
>
> > ref:
> >http://stackoverflow.com/questions/621649/python-and-random-keys-of-2...
>
> ..
> I didn't actually ask for a suitable method for doing this; I assumed that Tim
> Peters' algorithm (at least I think he's behind most of the python random
> support) is pretty good so that the bits produced are indeed fairly good
> approximations to random.
>
> I guess what I'm asking is whether any sequence that's using random to 
> generate
> random numbers is predictable if enough samples are drawn. In this case 
> assuming
> that fastcgi is being used can I observe a sequence of generated numbers and
> work out the state of the generator. If that is possible then the sequence
> becomes deterministic and such a scheme is useless. If I use cgi then we're
> re-initializing the sequence hopefully using some other unrelated randomness 
> for
> each number.
>
> Uuid apparently uses machine internals etc etc to try and produce randomness,
> but urandom and similar can block so are probably not entirely suitable.

/dev/urandom does not block, that's the point of it as compared to
/dev/random.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: GIL in alternative implementations

2011-06-07 Thread Jean-Paul Calderone
On Jun 7, 12:03 am, "Gabriel Genellina" 
wrote:
> En Sat, 28 May 2011 14:05:16 -0300, Steven D'Aprano  
>  escribi :
>
>
>
>
>
>
>
>
>
> > On Sat, 28 May 2011 09:39:08 -0700, John Nagle wrote:
>
> >> Python allows patching code while the code is executing.
>
> > Can you give an example of what you mean by this?
>
> > If I have a function:
>
> > def f(a, b):
> >     c = a + b
> >     d = c*3
> >     return "hello world"*d
>
> > how would I patch this function while it is executing?
>
> I think John Nagle was thinking about rebinding names:
>
> def f(self, a, b):
>    while b>0:
>      b = g(b)
>      c = a + b
>      d = self.h(c*3)
>    return "hello world"*d
>
> both g and self.h may change its meaning from one iteration to the next,  
> so a complete name lookup is required at each iteration. This is very  
> useful sometimes, but affects performance a lot.
>

And even the original example, with only + and *, can have side-effects.
Who knows how a defines __add__?
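
A contrived sketch of the sort of thing that is technically possible (the
names here are made up):

class Sneaky(object):
    def __add__(self, other):
        # Arbitrary code runs here - it could rebind module globals,
        # patch other objects, or anything else.
        globals()['h'] = lambda *args: 0
        return 3

def f(a, b):
    c = a + b          # may execute Sneaky.__add__ and its side effects
    d = c * 3
    return "hello world" * d

print f(Sneaky(), 1)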

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Secure ssl connection with wrap_socket

2011-07-05 Thread Jean-Paul Calderone
On Jul 5, 4:52 am, Andrea Di Mario  wrote:
> Hi, I'm a new python user and I'm writing a small web service with ssl.
> I want use a self-signed certificate like in 
> wiki:http://docs.python.org/dev/library/ssl.html#certificates
> I've used wrap_socket, but if i try to use
> cert_reqs=ssl.CERT_REQUIRED, it doesn't work with error:
>
> urllib2.URLError:  specified for verification of other-side certificates.>
>
> It works only with CERT_NONE (the default) but with this option i
> could access to the service in insicure mode.
>
> Have you some suggestions for my service?
>

Also specify some root certificates to use in verifying the peer's
certificate.  Certificate verification works by proceeding from a
collection of "root" certificates which are explicitly trusted.  These
are used to sign other certificates (which may in turn be used to sign
others, which in turn...).  The process of certificate verification is
the process of following the signatures from the certificate in use by
the server you connect to back up the chain until you reach a root
which you have either decided to trust or not.  If the signatures are
all valid and the root is one you trust, then you have established a
connection to a trusted entity.  If any signature is invalid, or the
root is not one you trust, then you have not.

The root certificates are also called the "ca certificates" or
"certificate authority certificates".  `wrap_socket` accepts a
`ca_certs` argument.  See 
http://docs.python.org/library/ssl.html#ssl-certificates
for details about that argument.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Secure ssl connection with wrap_socket

2011-07-06 Thread Jean-Paul Calderone
On Jul 6, 4:44 am, AndDM  wrote:
> On Jul 5, 4:08 pm, Jean-Paul Calderone 
> wrote:
>
>
>
> > On Jul 5, 4:52 am, Andrea Di Mario  wrote:
>
> > > Hi, I'm a new python user and I'm writing a small web service with ssl.
> > > I want use a self-signed certificate like in 
> > > wiki:http://docs.python.org/dev/library/ssl.html#certificates
> > > I've used wrap_socket, but if i try to use
> > > cert_reqs=ssl.CERT_REQUIRED, it doesn't work with error:
>
> > > urllib2.URLError:  > > specified for verification of other-side certificates.>
>
> > > It works only with CERT_NONE (the default) but with this option i
> > > could access to the service in insicure mode.
>
> > > Have you some suggestions for my service?
>
> > Also specify some root certificates to use in verifying the peer's
> > certificate.  Certificate verification works by proceeding from a
> > collection of "root" certificates which are explicitly trusted.  These
> > are used to sign other certificates (which may in turn be used to sign
> > others, which in turn...).  The process of certificate verification is
> > the process of following the signatures from the certificate in use by
> > the server you connect to back up the chain until you reach a root
> > which you have either decided to trust or not.  If the signatures are
> > all valid and the root is one you trust, then you have established a
> > connection to a trusted entity.  If any signature is invalid, or the
> > root is not one you trust, then you have not.
>
> > The root certificates are also called the "ca certificates" or
> > "certificate authority certificates".  `wrap_socket` accepts a
> > `ca_certs` argument.  
> > Seehttp://docs.python.org/library/ssl.html#ssl-certificates
> > for details about that argument.
>
> > Jean-Paul
>
> Hi Jean-Paul, i thought that with self-signed certificate i shouldn't
> use ca_certs option. Now, i've created a ca-authority and i use this
> command:
>
>  self.sock = ssl.wrap_socket(sock, certfile = "ca/certs/
> myfriend.cert.pem", keyfile = "ca/private/myfriend.key.pem",
> ca_certs="/home/andrea/ca/certs/cacert.pem",
> cert_reqs=ssl.CERT_REQUIRED)
>
> When i use the some machine as client-server it works, but, when i use
> another machine as client, i've this:
>
> Traceback (most recent call last):
>   File "loginsender.py", line 48, in 
>     handle = url_opener.open('https://debian.andrea.it:10700/%s+%s'%
> (DATA,IPIN))
>   File "/usr/lib/python2.6/urllib2.py", line 391, in open
>     response = self._open(req, data)
>   File "/usr/lib/python2.6/urllib2.py", line 409, in _open
>     '_open', req)
>   File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
>     result = func(*args)
>   File "loginsender.py", line 33, in https_open
>     return self.do_open(self.specialized_conn_class, req)
>   File "/usr/lib/python2.6/urllib2.py", line 1145, in do_open
>     raise URLError(err)
> urllib2.URLError:  0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib>
>
> I see that i should create a certificate with server, client and ca
> autority, but i haven't clear the ca_certs option and which path i
> should use.
> Have you any suggestion?

You need to have the CA certificate on any machine that is going to
verify the certificate used on the SSL connection.  The path just
needs to be the path to that CA certificate on the client machine.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Howto Deferred

2011-07-14 Thread Jean-Paul Calderone
On Jul 14, 3:07 am, marco  wrote:
> Hello gals and guys,
>
> I'm an experienced Python user and I'd like to begin playing with
> Twisted.
> I started RTFM the tutorial advised on the official site and I found it
> really useful and well done.
>
> Now I'd like to practice a bit by coding a little program that reads
> strings from a serial device and redirects them remotely via TCP. For
> that sake I'm trying to use deferred.
>

Deferreds probably aren't a good solution for this problem.  They're
useful
for one-time events, but you have an event that repeats over and over
again
with different data.

> In the tutorial, a deferred class is instantiated at factory level, then
> used and destroyed.
>
> And here things get harder for me.
> Now, in my test program I need to manage data which comes in a "random"
> manner, and I thought about doing it in a few possible ways:
>
> 1. create a deferred at factory level and every time I read something
> from the serial port add some callbacks:
>
> class SerToTcpProtocol(Protocol):
>
>   def dataReceived(self, data):
>     # deferred is already instantiated and launched
>     # self.factory.sendToTcp sends data to the TCP client
>     self.factory.deferred.addCallback(self.factory.sendToTcp, data)
>

Or you could do self.factory.sendToTcp(data)

> 2. or, either, create a deferred at protocol level every time I receive
> something, then let the deferred do what I need and destroy it:
>
> class SerToTcpProtocol(Protocol):
>
>   def dataReceived(self, data):
>     d = defer.Deferred()
>     d.addCallback(self.factory.sendToTcp, data)
>     d.callback(data)
>

Same here. :)

> 3. or again, use a deferred list:
>
> class SerToTcpProtocol(Protocol):
>
>   def dataReceived(self, data):
>     d = defer.Deferred()
>     d.addCallback(self.factory.sendToTcp, data)
>     self.factory.listDeferred.addCallback(lambda d)
>     d.callback(data)
>

I'm not sure what the listDeferred is there for.

Deferreds are a good abstraction for "do one thing and then tell
me what the result was".  You have a different sort of thing here,
where there isn't much of a result (sending to tcp probably always
works until you lose your connection).  A method call works well
for that.
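
In other words, something like this is probably all you need (sendToTcp is
your own factory method from the examples above, so this is just a sketch):

from twisted.internet.protocol import Protocol

class SerToTcpProtocol(Protocol):
    def dataReceived(self, data):
        # Each chunk read from the serial port is simply forwarded.
        self.factory.sendToTcp(data)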

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Checking against NULL will be eliminated?

2011-03-03 Thread Jean-Paul Calderone
On Mar 3, 8:16 am, Neil Cerutti  wrote:
> On 2011-03-03, Tom Zych  wrote:
>
> > Carl Banks wrote:
> >> Perl works deterministically and reliably.  In fact, pretty much every
> >> language works deterministically and reliably.  Total non-argument.
>
> > Well, yes. I think the real issue is, how many surprises are
> > waiting to pounce on the unwary developer. C is deterministic
> > and reliable, but full of surprises.
>
> Point of order, for expediency, C and C++ both include lots and
> lots of indeterminate stuff. A piece of specific C code can be
> totally deterministic, but the language is full of undefined
> corners.
>
> > Python is generally low in surprises. Using "if "
> > is one place where you do have to think about unintended
> > consequences.
>
> Python eschews undefined behavior.
>

C and C++ have standards, and the standards describe what they don't define.

Python has implementations.  The defined behavior is whatever the
implementation does.  Until someone changes it to do something else.

It's not much of a comparison.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Questions about GIL and web services from a n00b

2011-04-16 Thread Jean-Paul Calderone
On Apr 16, 10:44 am, a...@pythoncraft.com (Aahz) wrote:
> In article ,
> Raymond Hettinger   wrote:
>
>
>
> >Threading is really only an answer if you need to share data between
> >threads, if you only have limited scaling needs, and are I/O bound
> >rather than CPU bound
>
> Threads are also useful for user interaction (i.e. GUI apps).  
>

I suppose that's why most GUI toolkits use a multithreaded model.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Equivalent code to the bool() built-in function

2011-04-19 Thread Jean-Paul Calderone
On Apr 19, 10:23 am, Grant Edwards  wrote:
> On Tue, Apr 19, 2011 at 7:09 AM, Christian Heimes  wrote:
> > Am 18.04.2011 21:58, schrieb John Nagle:
> >> ?? ?? This is typical for languages which backed into a "bool" type,
> >> rather than having one designed in. ??The usual result is a boolean
> >> type with numerical semantics, like
>
> >> ??>>> True + True
> >> 2
>
> > I find the behavior rather useful. It allows multi-xor tests like:
>
> > if a + b + c + d != 1:
> > ?? ??raise ValueError("Exactly one of a, b, c or d must be true.")
>
> I guess I never thought about it, but there isn't an 'xor' operator to
> go along with 'or' and 'and'.  Must not be something I need very often.
>

You also can't evaluate xor without evaluating both operands, meaning there
is never a short-circuit; both "and" and "or" can short-circuit, though.
Also, boolean xor is the same as !=.
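
A quick sketch of that last point:

a = True
b = False
print (a and not b) or (not a and b)    # logical xor, spelled out
print bool(a) != bool(b)                # the same result, more compactly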

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pickling over a socket

2011-04-19 Thread Jean-Paul Calderone
On Apr 19, 6:27 pm, Roger Alexander  wrote:
> Thanks everybody, got it working.
>
>  I appreciate the help!
>
> Roger.

It's too bad none of the other respondents pointed out to you that you
_shouldn't do this_!  Pickle is not suitable for use over the network
like this.  Your server accepts arbitrary code from clients and
executes it.  It is completely insecure.  Do not use pickle and
sockets together.  Notice the large red box at the top of the pickle
documentation.
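
If you only need to move plain data around, a data-only format is one safer
alternative - JSON is just one option, and the message shape here is made up:

import json

# Sender side: serialize plain data structures only.
payload = json.dumps({"command": "add", "args": [1, 2]})

# Receiver side: json.loads() can only produce data, never executable objects.
message = json.loads(payload)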

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why doesn't this asyncore.dispatcher.handle_read() get called?

2011-04-20 Thread Jean-Paul Calderone
On Apr 20, 12:25 pm, Dun Peal  wrote:
> Hi,
>
> I'm writing and testing an asyncore-based server. Unfortunately, it
> doesn't seem to work. The code below is based on the official docs and
> examples, and starts a listening and sending dispatcher, where the
> sending dispatcher connects and sends a message to the listener - yet
> Handler.handle_read() never gets called, and I'm not sure why. Any
> ideas?
>
> Thanks, D.
>
> import asyncore, socket, sys
>
> COMM_PORT = 9345
>
> class Handler(asyncore.dispatcher):
>     def handle_read(self):
>         print 'This never prints'
>
> class Listener(asyncore.dispatcher):
>     def __init__(self, port=COMM_PORT):
>         asyncore.dispatcher.__init__(self)
>         self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
>         self.set_reuse_addr()
>         self.bind(('', port))
>         self.listen(5)
>
>     def handle_accept(self):
>         client, addr = self.accept()
>         print 'This prints.'
>         return Handler(client)
>
> class Sender(asyncore.dispatcher):
>     def __init__(self, host):
>         asyncore.dispatcher.__init__(self)
>         self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
>         self.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
>         self.connect( (host, COMM_PORT) )
>         self.buffer = 'Msg\r\n'
>
>     def handle_connect(self):
>         pass
>
>     def writable(self):
>         return len(self.buffer) > 0
>
>     def handle_write(self):
>         sent = self.send(self.buffer)
>         self.buffer = self.buffer[sent:]
>
> def test_communication():
>     from multiprocessing import Process
>     def listener():
>         l = Listener()
>         asyncore.loop(timeout=10, count=1)
>     lis = Process(target=listener)
>     lis.start()
>     def sender():
>         s = Sender('localhost')
>         asyncore.loop(timeout=10, count=1)
>     sen = Process(target=sender)
>     sen.start()
>     lis.join()
>
> test_communication()

You didn't let the program run long enough for the later events to
happen.  loop(count=1) basically means one I/O event will be processed
- in the case of your example, that's an accept().  Then asyncore is
done and it never gets to your custom handle_read.

So you can try passing a higher count to loop, or you can add your own
loop around the loop call.  Or you can switch to Twisted which
actually makes testing a lot easier than this - no need to spawn
multiple processes or call accept or recv yourself.  Here's a somewhat
equivalent Twisted-based version of your program:

from twisted.internet.protocol import ServerFactory, Protocol
from twisted.internet import reactor

factory = ServerFactory()
factory.protocol = Protocol

reactor.listenTCP(0, factory)
reactor.run()

It's hard to write the equivalent unit test, because the test you
wrote for the asyncore-based version is testing lots of low level
details which, as you can see, don't actually appear in the Twisted-
based version because Twisted does them for you already.  However,
once you get past all that low-level stuff and get to the part where
you actually implement some of your application logic, you might have
tests for your protocol implementation that look something like this:

from twisted.trial.unittest import TestCase
from twisted.test.proto_helpers import StringTransport

from yourapp import Handler # Or a better name

class HandlerTests(TestCase):
    def test_someMessage(self):
        """
        When the "X" message is received, the "Y" response is sent
        back.
        """
        transport = StringTransport()
        protocol = Handler()
        protocol.makeConnection(transport)
        protocol.dataReceived("X")
        self.assertEqual(transport.value(), "Y")

Hope this helps,
Jean-Paul

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: When is PEP necessary?

2011-04-23 Thread Jean-Paul Calderone
On Apr 23, 5:09 pm, Daniel Kluev  wrote:
> On Sat, Apr 23, 2011 at 11:16 PM, Disc Magnet  wrote:
> > Is PEP necessary to add a new package to the standard library?
> > *skip*
>
> Don't forget that Python is not limited to CPython. Other
> implementations need these PEPs to provide compliant packages.
> While its not that important for pure-python modules, anything tied to
> C-API better be documented, or it becomes a nightmare to keep
> non-CPython version having identical interface.
>

Unit tests actually serve this purpose much better than do PEPs.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: sockets: bind to external interface

2011-04-25 Thread Jean-Paul Calderone
On Apr 25, 3:49 pm, Chris Angelico  wrote:
> On Tue, Apr 26, 2011 at 5:37 AM, Hans Georg Schaathun  
> wrote:
>
> > Has anyone found a simple solution that can be administered without
> > root privileges?  I mean simpler than passing the ip address
> > manually :-)
>
> You can run 'ifconfig' without being root, so there must be a way. At
> very worst, parse ifconfig's output.
>
> The way you talk of "the" external interface, I'm assuming this
> computer has only one. Is there a reason for not simply binding to
> INADDR_ANY aka 0.0.0.0? Do you specifically need to *not* bind to
> 127.0.0.1?
>
> Chris Angelico

Binding to 0.0.0.0 is usually the right thing to do.  The OP should
probably do that unless he has some particular reason for doing
otherwise.  The comment about "the standard solution of binding to
whatever socket.gethostname() returns" suggests that perhaps he wasn't
aware that actually the standard solution is to bind to 0.0.0.0.

However, the system stack can usually be tricked into revealing some
more information this way:

>>> import socket
>>> s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>>> s.connect(('1.2.3.4', 1234))
>>> s.getsockname()
('192.168.1.148', 47679)

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: connect SIGINT to custom interrupt handler

2011-05-18 Thread Jean-Paul Calderone
On May 18, 9:28 am, Christoph Scheingraber 
wrote:
> On 2011-05-15, Miki Tebeka  wrote:
>
> > Why not just catch KeyboardInterrupt?
>
> Would it be possible to continue my program as nothing had happened in
> that case (like I did before, setting a flag to tell main() to finish the
> running data download and quit instead of starting the next data download
> {it's a for-loop})?
>
> I have tried it, but after catching the KeyboardInterrupt I could only
> continue to the next iteration.

No, since the exception being raised represents a different flow of control
through the program, one that is mutually exclusive with the flow of control
which would be involved with continuing the processing in the "current"
iteration of your loop.

Setting SA_RESTART on SIGINT is probably the right thing to do.  It's not
totally clear to me from the messages in this thread if you managed to get
that approach working.  The most commonly encountered problem with this
approach is that it means that any blocking (eg I/O) operation in progress
won't be interrupted and you'll have to wait for it to complete normally.  In
this case, it sounds like this is the behavior you actually want, though.
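
A sketch of the flag-based handler with SA_RESTART semantics (the flag and
loop structure are just illustrative, not your actual program):

import signal

interrupted = [False]

def on_sigint(signum, frame):
    # Just record the request; main() checks the flag between downloads.
    interrupted[0] = True

signal.signal(signal.SIGINT, on_sigint)
# False here means "restart interrupted system calls", i.e. SA_RESTART.
signal.siginterrupt(signal.SIGINT, False)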

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sanitizing filename strings across platforms

2011-05-31 Thread Jean-Paul Calderone
On May 31, 10:17 pm, Tim Chase  wrote:
> Scenario: a file-name from potentially untrusted sources may have
> odd filenames that need to be sanitized for the underlying OS.
> On *nix, this generally just means "don't use '/' or \x00 in your
> string", while on Win32, there are a host of verboten characters
> and file-names.  Then there's also checking the abspath/normpath
> of the resulting name to make sure it's still in the intended folder.
>
> I've read through [1] and have started to glom together various
> bits from that thread.  My current course of action is something like
>
>   SACRED_WIN32_FNAMES = set(
>     ['CON', 'PRN', 'CLOCK$', 'AUX', 'NUL'] +
>     ['LPT%i' % i for i in range(32)] +
>     ['CON%i' % i for i in range(32)] +
>
>   def sanitize_filename(fname):
>     sane = set(string.letters + string.digits + '-_.[]{}()$')
>     results = ''.join(c for c in fname if c in sane)
>     # might have to check sans-extension
>     if results.upper() in SACRED_WIN32_FNAMES:
>       results = "_" + results
>     return results
>
> but if somebody already has war-hardened code they'd be willing
> to share, I'd appreciate any thoughts.
>

There's http://pypi.python.org/pypi/filepath/0.1 (taken from
twisted.python.filepath).

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


[ANN] txaws 0.5.0

2017-12-27 Thread Jean-Paul Calderone
Hello all,

I'm pleased to announce the release of txAWS 0.5.0.  txAWS is a library for
interacting with Amazon Web Services (AWS) using Twisted.

You can download the release from PyPI .

Since the last release, the following enhancements have been made:

Features
> 
> - txaws.s3.client.S3Client.get_bucket now accepts a ``prefix`` parameter
> for
>   selecting a subset of S3 objects. (#78)
> - txaws.ec2.client.EC2Client now has a ``get_console_output`` method
> binding
>   the ``GetConsoleOutput`` API. (#82)


Thanks to everyone who contributed and to Least Authority TFA GmbH
 for sponsoring my work on this release.

Jean-Paul
-- 
https://mail.python.org/mailman/listinfo/python-list


Announcing txAWS 0.2.3.1

2017-01-09 Thread Jean-Paul Calderone
I've just release txAWS 0.2.3.1.  txAWS is a library for interacting with
Amazon Web Services (AWS) using Twisted.

AWSServiceEndpoint's ssl_hostname_verification's parameter now defaults to
True instead of False.  This affects all txAWS APIs which issue requests to
AWS endpoints.  For any application which uses the default
AWSServiceEndpoints, the server's TLS certificate will now be verified.

This resolves a security issue in which txAWS applications were vulnerable
to man-in-the-middle attacks which could either steal sensitive information
or, possibly, alter the AWS operation requested.

The new release is available on PyPI in source and wheel forms.  You can
also find txAWS at its new home on github, .

Special thanks to Least Authority Enterprises
() for
sponsoring the work to find and fix this issue and to publish this new
release.

Jean-Paul
-- 
https://mail.python.org/mailman/listinfo/python-list


[ANN] txkube 0.3.0

2018-08-08 Thread Jean-Paul Calderone
Hello all,

I'm pleased to announce a new release of txkube, a Twisted-based library
for interacting with Kubernetes using the HTTP API.  The big news for this
release is support for Python 3.6.  Also included is support for multiple
configuration files in the KUBECONFIG environment variable which allows for
better configuration management practices.

Here is an example of txkube usage, taken from the README:

   from __future__ import print_function
   from twisted.internet.task import react

   from txkube import network_kubernetes_from_context

   @react
   def main(reactor):
       k8s = network_kubernetes_from_context(reactor, u"minikube")
       d = k8s.versioned_client()
       d.addCallback(
           lambda client: client.list(client.model.v1.Namespace)
       )
       d.addCallback(print)
       return d

You can download txkube from PyPI <https://pypi.python.org/pypi>
You can contribute to its development on GitHub
<https://github.com/LeastAuthority/txkube>.

Thanks to Least Authority TFA GmbH <https://leastauthority.com/> for
sponsoring this development and to Craig Rodrigues for his efforts on
Python 3 porting work.

Jean-Paul Calderone
<https://as.ynchrono.us/>
-- 
https://mail.python.org/mailman/listinfo/python-list


Tahoe-LAFS on Python 3 - Call for Porters

2019-09-24 Thread Jean-Paul Calderone
Hello Pythonistas,


Earlier this year a number of Tahoe-LAFS
 community members began an effort
to port Tahoe-LAFS from Python 2 to Python 3.  Around five people are
currently involved in a part-time capacity.  We wish to accelerate the
effort to ensure a Python 3-compatible release of Tahoe-LAFS can be made
before the end of upstream support for CPython 2.x.


Tahoe-LAFS is a Free and Open system for private, secure, decentralized
storage.  It encrypts and distributes your data across multiple servers.
If some of the servers fail or are taken over by an attacker, the entire
file store continues to function correctly, preserving your privacy and
security.


Foolscap , a dependency of Tahoe-LAFS,
is also being ported.  Foolscap is an object-capability-based RPC protocol
with flexible serialization.


Some details of the porting effort are available in a milestone on the
Tahoe-LAFS trac instance
.


For this help, we are hoping to find a person/people with significant prior
Python 3 porting experience and, preferably, some familiarity with Twisted,
though in general the Tahoe-LAFS project welcomes contributors of all
backgrounds and skill levels.


We would prefer someone to start with us as soon as possible and no later
than October 15th. If you are interested in this opportunity, please send
us any questions you have, as well as details of your availability and any
related work you have done previously (GitHub, LinkedIn links, etc). If you
would like to find out more about this opportunity, please contact us at
jessielisbetfrance at gmail (dot) com or on IRC in #tahoe-lafs on Freenode.

Jean-Paul
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pickle to source code

2005-10-26 Thread Jean-Paul Calderone
On 26 Oct 2005 06:15:35 -0700, Gabriel Genellina <[EMAIL PROTECTED]> wrote:
>> I want to convert from pickle format to python source code. That is,
>> given an existing pickle, I want to produce a textual representation
>> which, when evaluated, yields the original object (as if I had
>> unpickled the pickle).
>> I know of some transformations pickle/xml (Zope comes with one such
>> tool, gnosis xml is another) so I believe I could build something based
>> on them.
>> But I dont want to reinvent the wheel, I wonder if anyone knows of a
>> library which could do what I want?
>
>An example to make things clear:
>
>class MyClass:
>def __init__(self,a,b):
>self.a=a
>self.b=b
>def foo(self):
>self.done=1
># construct an instance and work with it
>obj = MyClass(1,2)
>obj.foo()
># save into file
>pickle.dump(obj,file('test.dat','wb'))
>
>Then, later, another day, using another process, I read the file and
>want to print a block of python code equivalent to the pickle saved in
>the file.
>That is, I want to *generate* a block of code like this:
>
>xxx = new.instance(MyClass)
>xxx.a = 1
>xxx.b = 2
>xxx.done = 1
>
>Or perhaps:
>
>xxx = new.instance(MyClass, {'a':1,'b':2,'done':1})
>
>In other words, I need a *string* which, being sent to eval(), would
>return the original object state saved in the pickle.
>As has been pointed, repr() would do that for simple types. But I need
>a more general solution.
>
>The real case is a bit more complicated because there may be references
>to other objects, involving the persistent_id mechanism of pickles, but
>I think it should not be too difficult. In this example, if xxx.z
>points to another external instance for which persistent_id returns
>'1234', would suffice to output another line like:
>xxx.z = external_reference('1234')
>
>I hope its more clear now.

You may find twisted.persisted.aot of some use.  Here is an example:

>>> class Foo:
... def __init__(self, x, y):
... self.x = x
... self.y = y
...
>>> a = Foo(10, 20)
>>> b = Foo('hello', a)
>>> c = Foo(b, 'world')
>>> a.x = c
>>> from twisted.persisted import aot
>>> print aot.jellyToSource(a)
app=Ref(1,
  Instance('__main__.Foo',
x=Instance('__main__.Foo',
  x=Instance('__main__.Foo',
x='hello',
y=Deref(1),),
  y='world',),
y=20,))
>>> 

AOT is unmaintained in Twisted, and may not support some newer features of 
Python (eg, datetime or deque instances).  If this seems useful, you may want 
to contribute patches to bring it up to the full level of functionality you 
need.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help: Quick way to test if value lies within a list of lists of ranges?

2005-10-27 Thread Jean-Paul Calderone
On Thu, 27 Oct 2005 15:48:53 +0100 (BST), Ben O'Steen <[EMAIL PROTECTED]> wrote:
>Scenario:
>=
>
>Using PyGame in particular, I am trying to write an application that will
>run a scripted timeline of events, eg at 5.5 seconds, play xxx.mp3 and put
>the image of a ball on screen, at 7.8 seconds move the ball up and down.
>At this point, I hear you say 'Oh, like Flash'.
>
> [snip - how do I make it go fast?]

I'm sure you'll get a lot of suggestions for fast algorithms
to solve this problem, but before you do, let me suggest that
this is actually a premature optimization.

I bet there are much more interesting problems to be solved in
this project, ones that you could work on and test, even while
doing something as unbelievable slow as looping over 100 or so
objects. ;)

You can always optimize later, after you've identified that this
operation is actually a bottleneck.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: db.DB_CREATE|db.DB_INIT_MPOOL|db.DB_THREAD|db.DB_INIT_CDB

2005-10-27 Thread Jean-Paul Calderone
On Fri, 28 Oct 2005 00:17:54 +0800, "Neville C. Dempsey" <[EMAIL PROTECTED]> 
wrote:
>Why does this python program fail to read record "600"?
>
>#!/usr/bin/env python
>import bsddb # twiceopen.py
>
>key="600"
>btf=bsddb.db.DB_INIT_THREAD
>
>list1=bsddb.btopen("twiceopen.tbl",btflags=btf)
>list1[key]="we have the technology"
>
>list2=bsddb.btopen("twiceopen.tbl",btflags=btf)
>#print "key:",key,"val:",list2[key] # doesn't work...
>print "first:",list2.first() # also fails first time...
>
>list1.close()
>list2.close()
>
>Maybe the solution needs one of:
>  db.DB_CREATE|db.DB_INIT_MPOOL|db.DB_THREAD|db.DB_INIT_CDB

You really want to use transactions if you are going to access a database 
concurrently.  So that means DB_INIT_TXN.  Now since you need transactions, you 
need an environment, so you want to let bsddb create whatever environment files 
it needs.  So that means DB_CREATE.  Now since there's an environment, every 
process but the first to open it needs to *join* it, rather than opening it in 
the usual way.  So that means DB_JOINENV for all but the first open call.

Except I don't think btopen() supports half these operations.  You really want 
to use bsddb.db.DBEnv and bsddb.DB.  Or a library that wraps them more 
sensibly: 
.
  You probably don't want everything there, but the DatabaseEnvironment class 
(and supporting code) should be useful.
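
Very roughly along these lines (untested; the environment directory and flag
set are illustrative, not a drop-in recipe):

from bsddb import db

flags = (db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK |
         db.DB_INIT_LOG | db.DB_INIT_TXN)
env = db.DBEnv()
env.open("/tmp/example-env", flags)

table = db.DB(env)
table.open("twiceopen.tbl", dbtype=db.DB_BTREE, flags=db.DB_CREATE)
table.put("600", "we have the technology")
print table.get("600")
table.close()
env.close()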

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: lambda functions within list comprehensions

2005-10-29 Thread Jean-Paul Calderone
On 29 Oct 2005 14:25:24 -0700, Max Rybinsky <[EMAIL PROTECTED]> wrote:
>Hello!
>
>Please take a look at the example.
>
 a = [(x, y) for x, y in map(None, range(10), range(10))] # Just a list of 
 tuples
 a
>[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6), (7, 7), (8,
>8), (9, 9)]
>
>Now i want to get a list of functions x*y/n, for each (x, y) in a:
>
 funcs = [lambda n: x * y / n for x, y in a]
>
>It looks consistent!
>
 funcs
>[ at 0x010F3DF0>,  at 0x010F7CF0>,
> at 0x010F7730>,  at 0x010FD270>,
> at 0x010FD0B0>,  at 0x010FD5B0>,
> at 0x010FD570>,  at 0x010FD630>,
> at 0x01100270>,  at 0x011002B0>]
>
>...and functions are likely to be different.

A search of this group would reveal one or two instances in the past in which 
this question has been asked and answered:

  http://article.gmane.org/gmane.comp.python.general/427200
  http://article.gmane.org/gmane.comp.python.general/424389
  http://article.gmane.org/gmane.comp.python.general/399224
  http://article.gmane.org/gmane.comp.python.general/390097
  http://article.gmane.org/gmane.comp.python.general/389011
  http://article.gmane.org/gmane.comp.python.general/334625

I could go on for a while longer, but hopefully some of the linked material 
will answer your question.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: socket receive file does not match sent file

2005-11-06 Thread Jean-Paul Calderone
On 6 Nov 2005 09:13:03 -0800, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
>I wrote two simple socket program.
>one for sending a file and the other for receiving the file.
>but when I run it, a curious thing happened.
>The received file was samller that the sent file.

Your sender does not take care to ensure the entire file is sent.  It will 
semi-randomly drop bytes from various areas in the middle of the file.  Here's 
a sender that works correctly:

  import sys

  from twisted.internet import reactor, protocol
  from twisted.protocols import basic
  from twisted.python import log

  filename = sys.argv[1]
  host = sys.argv[2]

  class Putter(protocol.Protocol):
      def connectionMade(self):
          fs = basic.FileSender()
          d = fs.beginFileTransfer(file(filename, 'rb'), self.transport)
          d.addCallback(self.finishedTransfer)
          d.addErrback(self.transferFailed)

      def finishedTransfer(self, result):
          self.transport.loseConnection()

      def transferFailed(self, err):
          print 'Transfer failed'
          err.printTraceback()
          self.transport.loseConnection()

      def connectionLost(self, reason):
          reactor.stop()

  f = protocol.ClientFactory()
  f.protocol = Putter
  reactor.connectTCP(host, 9000, f)
  reactor.run()

Of course, this is still not entirely correct, since the protocol specified
by your code provides no mechanism for the receiving side to determine
whether the connection was dropped because the file was fully transferred or
because of some transient network problem or other error.  One solution to
this is to prefix the file's contents with its length.
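
A bare-sockets sketch of that idea (the framing format and function names are
made up):

import struct

def send_file(sock, data):
    # 4-byte big-endian length prefix followed by the payload.
    sock.sendall(struct.pack("!I", len(data)) + data)

def recv_exactly(sock, n):
    chunks = []
    while n > 0:
        chunk = sock.recv(n)
        if not chunk:
            raise IOError("connection closed with %d bytes still expected" % n)
        chunks.append(chunk)
        n -= len(chunk)
    return "".join(chunks)

def recv_file(sock):
    length, = struct.unpack("!I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)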

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: RAW_INPUT

2005-11-07 Thread Jean-Paul Calderone
On Mon, 07 Nov 2005 12:14:15 -0600, A D <[EMAIL PROTECTED]> wrote:
>On Mon, 2005-11-07 at 07:57 -0800, john boy wrote:
>> I am having trouble with the following example used in a tutorial:
>>
>> print "Halt !"
>> s = raw_input ("Who Goes there? ")
>> print "You may pass,", s
>
>at this print line you need to do
>print "you may pass, %s" % s
>
>this will allow you to enter the string s into the output sentence

No, this is totally irrelevant.  The print in the original code will work just 
fine.

>
>>
>> I run this and get the following:
>> Halt!
>> Who Goes there?
>>
>> --thats itif I hit enter again "You may pass,"
>> appears...
>>
>> In the example after running you should get:
>>
>> Halt!
>> Who Goes there? Josh
>> You may pass, Josh
>>
>> I'm assuming s=Josh...but that is not included in the statement at all
>> I don't know how you put "Josh" in or how you got it to finish running
>> w/o hitting enter after "Who goes there?"
>>
>> What am I doing wrong?

Did you try typing "Josh" and then enter?  Or any other name, for that matter...

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Using python for writing models: How to run models in restricted python mode?

2005-11-07 Thread Jean-Paul Calderone
On 7 Nov 2005 12:54:40 -0800, vinjvinj <[EMAIL PROTECTED]> wrote:
>I have an application which allows multiple users to write models.
>These models get distributed on a grid of compute engines. users submit
>their models through a web interface. I want to
>
>1. restrict the user from doing any file io, exec, import, eval, etc. I
>was thinking of writing a plugin for pylint to do all the checks? Is
>this is a good way given that there is no restricted python. What are
>the things I should serach for in python code
>
>2. restrict the amount of memory a module uses as well. For instance
>how can I restrict a user from doing a = range(100) or similar
>tasks so that my whole compute farm does not come down.

There is currently no way to do either of these things.  The most realistic 
approach at this time seems to be to rely on your operating system's 
capabilities to limit resource access and usage on a per-process basis.  That is, run 
each piece of submitted code in a separate, unprivileged process, with the 
appropriate limits in place.
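
A rough sketch of the resource-limit part on a POSIX system (the limits and
the command line are placeholders):

import resource
import subprocess

def run_limited(argv):
    def set_limits():
        # Cap the child's address space at 256 MB and CPU time at 60 seconds.
        resource.setrlimit(resource.RLIMIT_AS, (256 * 1024 * 1024,) * 2)
        resource.setrlimit(resource.RLIMIT_CPU, (60, 60))
    return subprocess.call(argv, preexec_fn=set_limits)

run_limited(["python", "untrusted_model.py"])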

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: sqlite3 decode error

2005-11-08 Thread Jean-Paul Calderone
On Tue, 08 Nov 2005 16:27:25 -0400, David Pratt <[EMAIL PROTECTED]> wrote:
>Recently I have run into an issue with sqlite where I encode strings
>going into sqlite3 as utf-8.  I guess by default sqlite3 is converting
>this to unicode since when I try to decode I get an attribute error
>like this:
>
>AttributeError: 'unicode' object has no attribute 'decode'
>
>The code and data I am preparing is to work on postgres as well a
>sqlite so there are a couple of things I could do.  I could always
>store any data as unicode to any db, or test the data to determine
>whether it is a string or unicode type when it comes out of the
>database so I can deal with this possibility without errors. I will
>likely take the first option but I looking for a simple test to
>determine my object type.
>
>if I do:
>
> >>>type('maybe string or maybe unicode')
>
>I get this:
>
> >>>
>
>I am looking for something that I can use in a comparison.
>
>How do I get the type as a string for comparison so I can do something
>like
>
>if type(some_data) == 'unicode':
>   do some stuff
>else:
>   do something else
>

You don't actually want the type as a string.  What you seem to be leaning 
towards is the builtin function "isinstance":

if isinstance(some_data, unicode):
# some stuff
elif isinstance(some_data, str):
# other stuff
...

But I think what you actually want is to be slightly more careful about what 
you place into SQLite3.  If you are storing text data, insert is as a Python 
unicode string (with no NUL bytes, unfortunately - this is a bug in SQLite3, or 
maybe the Python bindings, I forget which).  If you are storing binary data, 
insert it as a Python buffer object (eg, buffer('1234')).  When you take text 
data out of the database, you will get unicode objects.  When you take bytes 
out, you will get buffer objects (which you can convert to str objects with 
str()).

You may want to look at Axiom () to 
see how it handles each of these cases.  In particular, the "text" and "bytes" 
types defined in the attributes module 
().

By only encoding and decoding at the border between your application and the 
outside world, and the border between your application and the data, you will 
eliminate the possibility for a class of bugs where encodings are forgotten, or 
encoded strings are accidentally combined with unicode strings.

Hope this helps,

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Goto XY

2005-11-08 Thread Jean-Paul Calderone
On 8 Nov 2005 17:27:24 -0800, [EMAIL PROTECTED] wrote:
>Is there some command in python so that I can read a key's input and
>then use a gotoxy() function to move the cursor on screen?  e.g.:
>(psuedo-code)
>
>When the right arrow is pushed, cursor gotoxy(x+1,y)
>

You can use curses for this, on platforms where curses is supported.  Twisted 
Conch also includes some terminal manipulation code.  Both are basically 
POSIX-only, though Twisted might expand to work with win32 at some point.  At 
some point in the past I think there was a win32 curses port, but I don't think 
it's maintained anymore.
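
A small curses sketch of the idea (POSIX only; quitting on 'q' is an
arbitrary choice):

import curses

def main(stdscr):
    y, x = 0, 0
    while True:
        key = stdscr.getch()
        if key == curses.KEY_RIGHT:
            x += 1
            stdscr.move(y, x)       # gotoxy(x + 1, y), more or less
        elif key == ord('q'):
            break

curses.wrapper(main)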

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Storing empties

2005-11-08 Thread Jean-Paul Calderone
On Tue, 8 Nov 2005 19:28:36 -0800, Alex Martelli <[EMAIL PROTECTED]> wrote:
>Aahz <[EMAIL PROTECTED]> wrote:
>   ...
>> >For pickling, object() as a unique "nothing here, NOT EVEN a None"
>> >marker (AKA sentinel) works fine.
>>
>> How does that work?  Maybe I'm missing something obvious.
>>
>> sentinel = object()
>> class C:
>> def __init__(self, foo=sentinel):
>> self.foo = foo
>> def process(self):
>> if self.foo is not sentinel:
>> 
>>
>> Now, the way I understand this, when your application restarts and an
>> instance of C is read from a pickle, your sentinel is going to be a
>> different instance of object() and process() will no longer work
>> correctly.  Are you suggesting that you need to pickle the sentinel with
>> the instance?  Or is there some other trick I'm missing?
>
>Yes, I'd set self.sentinel=sentinel (and test wrt that) -- while in the
>abstract it would be better to set sentinel at class level, since
>classes are only pickled "by name" that wouldn't work.
>
>If you don't need the absolute ability to pass ANY argument to C(),
>there are of course all sorts of workaround to save some small amount of
>memory -- any object that's unique and you never need to pass can play
>the same role as a sentinel, obviously.

This is a reasonable trick, though:

class sentinel:
pass

Now sentinel pickles and unpickles in a manner which agrees with the above 
pattern without any extra work.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Goto XY

2005-11-08 Thread Jean-Paul Calderone
On Tue, 8 Nov 2005 22:33:47 -0500, "Chris F.A. Johnson" <[EMAIL PROTECTED]> 
wrote:
> [snip]
>
> To read a single keystroke, see Claudio Grondi's post in the
> thread "python without OO" from last January.
>
> Function and cursor keys return more than a single character, so
> more work is required to decode them. The principle is outlined in
> ;
> the code there is for the shell, but translating them to python
> should be straightforward. I'll probably do it myself when I have
> the time or the motivation.
>

Like this?

http://cvs.twistedmatrix.com/cvs/trunk/twisted/conch/insults/insults.py?view=markup&rev=14863

Jean-Paul

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Lie Hetland book: Beginning Python..

2005-11-09 Thread Jean-Paul Calderone
On Wed, 09 Nov 2005 17:57:46 +, Steve Holden <[EMAIL PROTECTED]> wrote:
>Gerhard Häring wrote:
>> Vittorio wrote:
>>
> [snip]
>>
>> I think about the only place I wrote a bit about the differences was in
>> the pysqlite 2.0 final announcement:
>>
>> http://lists.initd.org/pipermail/pysqlite/2005-May/43.html
>>
>Unfortunately this appears to mean that pysqlite2 isn't fully DB
>API-conformant.
>
> >>> import pysqlite2
> >>> pysqlite2.paramstyle
>Traceback (most recent call last):
>   File "", line 1, in ?
>AttributeError: 'module' object has no attribute 'paramstyle'
> >>>

The DB-API2ness is at pysqlite2.dbapi2:

>>> from pysqlite2 import dbapi2
>>> dbapi2.paramstyle
'qmark'
>>> dbapi2.threadsafety
1
>>> 

etc. :)

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: web interface

2005-11-09 Thread Jean-Paul Calderone
On Wed, 9 Nov 2005 19:08:28 +, Tom Anderson <[EMAIL PROTECTED]> wrote:
>On Mon, 7 Nov 2005, Ajar wrote:
>
>> I have a stand alone application which does some scientific
>> computations. I want to provide a web interface for this app. The app is
>> computationally intensive and may take long time for running. Can
>> someone suggest me a starting point for me? (like pointers to the issues
>> involved in this,
>
>You're probably best off starting a new process or thread for the
>long-running task, and having the web interface return to the user right
>after starting it; you can then provide a second page on the web interface
>where the user can poll for completion of the task, and get the results if
>it's finished. You can simulate the feel of a desktop application to some
>extent by having the output of the starter page redirect the user to the
>poller page, and having the poller page refresh itself periodically.
>
>What you really want is a 'push' mechanism, by which the web app can
>notify the browser when the task is done, but, despite what everyone was
>saying back in '97, we don't really have anything like that.

Yea, there's no way something like this will work:

# push.tac
from zope.interface import Interface, implements 

from twisted.internet import defer, reactor
from twisted.application import service, internet

from nevow import appserver, loaders, tags, rend, athena

class ICalculator(Interface):
def add(x, y):
"""
Add x and y.  Return a Deferred that fires with the result.
"""

class Adder(object):
implements(ICalculator)

def add(self, x, y):
# Go off and talk to a database or something, I don't know.  We'll
# pretend something interesting is happening here, but actually all
# we do is wait a while and then return x + y.
d = defer.Deferred()
reactor.callLater(5, d.callback, x + y)
return d


class LongTaskPage(athena.LivePage):   
docFactory = loaders.stan(tags.html[
tags.head[
tags.directive('liveglue'),
tags.title['Mystical Adding Machine of the Future'],
tags.script(type="text/javascript")["""

/* onSubmit handler for the form on the page: ask the server
 * to add two numbers together.  When the server sends the
 * result, stick it into the page.
 */
function onAdd(x, y) {
var d = server.callRemote('add', x, y);   
var resultNode = document.getElementById('result-node');
d.addCallback(function(result) {
resultNode.innerHTML = String(result);
});
d.addErrback(function(err) {
var s = "Uh oh, something went wrong: " + err + ".";
resultNode.innerHTML = s
});
resultNode.innerHTML = "Thinking...";
return false;
}""",
],
],
tags.body[
tags.form(onsubmit="return onAdd(this.x.value, this.y.value);")[
tags.input(type="text", name="x"),
"+",
tags.input(type="text", name="y"),
"=",
tags.span(id="result-node")[
tags.input(type="submit"),
],
],
],
])

class Root(rend.Page):
docFactory = loaders.stan(tags.html[
tags.head[
tags.meta(**{"http-equiv": "refresh", "content": "0;calc"})]])

def child_calc(self, ctx):
return LongTaskPage(ICalculator, Adder())

application = service.Application("Push Mechanism, like back in '97")
webserver = internet.TCPServer(8080, appserver.NevowSite(Root()))
webserver.setServiceParent(application)
# eof

Right?  But maybe you can explain why, for those of us who aren't enlightened.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python obfuscation

2005-11-09 Thread Jean-Paul Calderone
On 9 Nov 2005 11:34:38 -0800, The Eternal Squire <[EMAIL PROTECTED]> wrote:
>Perhaps this could be a PEP:
>
>1)  Add a system path for decryption keys.
>2)  Add a system path for optional decryptors supplied by user
> (to satisfy US Export Control)
>3)  When importing a module try:  import routine except importation
>error : for all decryptors present for all keys present run decryptor
>upon module and retry, finally raise importation error.
>
>With PGP encryption one could encrypt the pyc's with the private key
>and sell a public key to the end user.

What's to stop someone from publishing the decrypted code online for anyone to 
download?

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: web interface

2005-11-10 Thread Jean-Paul Calderone
On 10 Nov 2005 05:31:29 -0800, Michele Simionato <[EMAIL PROTECTED]> wrote:
>
>
>I have been looking for an example like this for a while, so thanks to
>J.P. Calderone.
>Unfortunately, this kind of solution is pretty much browser-dependent.
>For instance,
>I tried it and it worked with Firefox, but not with MSIE 5.01 and it
>will not work with any
>browser if you disable Javascript. So, I don't think there is a real
>solution
>for this kind of problem as of today (I would love to be wrong,
>though).
>

It depends on JavaScript, yes.  It'll probably work with IE in an upcoming 
release, though.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Dynamically Update Class Definitions?

2005-11-11 Thread Jean-Paul Calderone
On Sat, 12 Nov 2005 06:24:57 GMT, Chris Spencer <[EMAIL PROTECTED]> wrote:
>Chris Spencer wrote:
>> Alex Martelli wrote:
>
>>> If you're in no hurry, you COULD loop over all of gc.get_objects(),
>>> identify all those which are instances of old_class and "somehow" change
>>> their classes to new_class -- of course, x.__class__ = new_class may
>>> well not be sufficient, in which case you'll have to pass to update a
>>> callable to do the instance-per-instance job.
>>
>>
>> Couldn't I just loop over gc.get_referrers(cls), checking for instances
>> of the class object? Since class instances refer to their class, the gc
>> seems to be doing the exact same thing as Hudson's fancy metaclass. Or
>> am I missing something?
>>
>> Chris
>
>In fact, the following code seems to work, and doesn't require any
>modification to new-style class based code:

There are lots of cases where you cannot rebind the __class__ attribute.  For a 
comprehensive treatment of this idea (but still not a completely functional 
implementation), take a look at 
.
  On another note, the usage of threads in this code is totally insane and 
unsafe.  Even for strictly development purposes, I would expect it to introduce 
so many non-deterministic and undebuggable failures as to make it cost more 
time than it saves.  You really want to stop the rest of the program, then 
update things, then let everything get going again.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: XUL behavior in Python via XPCOM, Mozilla

2005-11-12 Thread Jean-Paul Calderone
On Sat, 12 Nov 2005 14:25:51 -0600, Terry Hancock <[EMAIL PROTECTED]> wrote:
>I recently saw a claim that Mozilla XUL behaviors (normally
>scripted in Javascript) can (or perhaps will) be scriptable
>in Python.
>
>Also, "other languages such as Java or Python are supported
>through XPCOM", said about Mozilla (from Luxor website).
>
>Yes, I know several ways to *generate* XUL from Python, and
>at least one way to use XUL to create interfaces for Python
>programs, but in this case, I'm talking about defining
>button action behavior in XUL by calling Python scripts.
>
>I know that Javascript is the preferred language, but I've
>seen several references to being able to do this in Python,
>including a claim that a release was targeted for early
>November (2005), to provide this.
>
>Now I can't find it again.  Anyway, I was hoping someone
>on c.l.p / python.org would have a reliable reference on
>this.

I'm not sure which claim you read, but perhaps it was in reference to PyXPCOM?  


I'm not quite sure if you are looking for the product itself or the 
announcement about it.  Anyway, hope this helps.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: is parameter an iterable?

2005-11-15 Thread Jean-Paul Calderone
On 15 Nov 2005 11:26:23 -0800, py <[EMAIL PROTECTED]> wrote:
>Dan Sommers wrote:
>> Just do it.  If one of foo's callers passes in a non-iterable, foo will
>> raise an exception, and you'll catch it during testing
>
>That's exactly what I don't want.  I don't want an exception, instead I
>want to check to see if it's an iterableif it is continue, if not
>return an error code.

Error codes are not the common way to do things in Python.  Exceptions are.  
There's generally no reason to avoid exceptions.  Error codes allow errors to 
pass silently, which leads to bugs that nobody notices for long periods of time.

You should let the exception be raised.  You shouldn't try to return an error 
code.

>I can't catch it during testing since this is going to be used by 
>other people.

Then they'll catch it during their testing >:)  If you return an error code 
instead, they are just as likely to pass in bad data, and even *more* likely to 
not see that an error has occurred, causing their programs to be incorrect.
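
For example (the function is made up, but it shows the difference):

def total(items):
    # No pre-flight check: passing a non-iterable raises TypeError right
    # here, at the point of the mistake, instead of returning a code the
    # caller might silently ignore.
    result = 0
    for item in items:
        result += item
    return result

total([1, 2, 3])    # 6
total(None)         # raises TypeError: 'NoneType' object is not iterable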

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Creating (rather) generic plugin framework?

2005-11-16 Thread Jean-Paul Calderone
On Wed, 16 Nov 2005 17:14:27 +0200, Edvard Majakari <[EMAIL PROTECTED]> wrote:
>Hi,
>
>My idea is to create a system working as follows: each module knows
>path to plugin directory, and that directory contains modules which
>may add hooks to some points in the code.
>
>Inspired by http://www.python.org/pycon/2005/papers/7/pyconHooking.html

You may want to look at a few existing Python plugin systems.  To get you 
started, here's a link to the Twisted plugin system documentation: 


Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: General question about Python design goals

2005-11-27 Thread Jean-Paul Calderone
On 27 Nov 2005 19:49:26 -0800, Paul Rubin <"http://phr.cx"@nospam.invalid> 
wrote:
>Robert Kern <[EMAIL PROTECTED]> writes:
>> Use cases are the primary tool for communicating those practical
>> needs. If you can't think of a single use case, what's the point of
>> implementing something? Or rather, why should someone else implement
>> it if you don't know how you would use it?
>
>I can't think of a single use case for the addition (+) operator
>working where either of the operands happens to be the number
>0x15f1ef02d9f0c2297e37d44236d8e8ddde4a34c96a8200561de00492cb94b82 (a
>random number I just got out of /dev/urandom).  I've never heard of
>any application using that number, and the chances of it happening by
>coincidence are impossibly low.  But if Python were coded in a way
>that made the interpreter crash on seeing that number, I'd call that
>a bug needing fixing.

If you seriously believe what you just wrote, you have failed to
understand the phrase "use case" (and possibly a lot of other
things related to programming ;)

However (fortunately for you) I suspect you don't.  If you really
did, you may want to pick up one of those platitude-filled XP books
and give it a careful read.  You may find there's more there than
you were previously aware.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is there anything that pickle + copy_reg cannot serialize?

2005-12-08 Thread Jean-Paul Calderone
On Thu, 08 Dec 2005 22:42:32 +0800, Maurice LING <[EMAIL PROTECTED]> wrote:
>Hi,
>
>I need to look into serialization for python objects, including codes,
>recursive types etc etc. Currently, I have no idea exactly what needs to
>be serialized, so my scope is to be as wide as possible.
>
>I understand that marshal is extended by pickle to serialize class
>instances, shared elements, and recursive data structures
>(http://www.effbot.org/librarybook/pickle.htm) but cannot handle code
>types. pickle can be used together with copy_reg and marshal to
>serialize code types as well
>(http://www.effbot.org/librarybook/copy-reg.htm).
>
>So my question will be, are there anything that pickle/copy_reg/marshal
>combination cannot serialize? If so, what are the workarounds?

Since copy_reg lets you specify arbitrary code to serialize arbitrary 
objects, you shouldn't run into any single object that you cannot 
serialize to a pickle.

However, both pickle implementations are recursive, so you will be 
limited by the amount of memory you can allocate for your stack.  By 
default, this will limit you to something like object graphs 333 edges 
deep or so (if I'm counting stack frames correctly).  Note that this 
does not mean you cannot serialize more than 333 objects at a time, 
merely that if it takes 333 or more steps to go from the first object 
to any other object in the graph (using the traversal order pickle 
uses), the pickling will fail.  You can raise this limit, to a point, 
with sys.setrecursionlimit().
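
For example (untested, and the exact depth at which it falls over will
vary a bit):

  import pickle, sys

  def nested(depth):
      obj = []
      for i in range(depth):
          obj = [obj]
      return obj

  pickle.dumps(nested(100))       # shallow enough, works fine
  try:
      pickle.dumps(nested(sys.getrecursionlimit() * 2))
  except RuntimeError, e:
      print 'too deep:', e        # maximum recursion depth exceeded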

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is there anything that pickle + copy_reg cannot serialize?

2005-12-08 Thread Jean-Paul Calderone
On Fri, 09 Dec 2005 02:17:10 +0800, Maurice LING <[EMAIL PROTECTED]> wrote:
>
>> Since copy_reg lets you specify arbitrary code to serialize arbitrary
>> objects, you shouldn't run into any single object that you cannot
>> serialize to a pickle.
>
> [snip - example of pickling code objects]
>
>
>I cannot understand 2 things, which I seek assistance for:
>1. Is code object the only thing can cannot be pickled (less facing
>recursion limits)?

No.  There are lots of objects that cannot be pickled by default.  Any 
extension type which does not explicitly support it cannot be pickled.  
Generators cannot be pickled.  Method descriptors can't be pickled.  Et 
cetera.

>2. In the above example, how copy_reg works with pickle?

Any time pickle thinks it has found something it cannot pickle, it asks 
the copy_reg module for some help.  The above example basically teaches 
the copy_reg module how to give the pickle module the help it needs for 
code objects.
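
The snipped example was along these lines (a sketch in the spirit of the
effbot copy_reg page referenced earlier - round-trip code objects through
marshal):

  import copy_reg, marshal, pickle, types

  def code_unpickler(data):
      return marshal.loads(data)

  def code_pickler(code):
      return code_unpickler, (marshal.dumps(code),)

  # Teach pickle what to do when it trips over a code object.
  copy_reg.pickle(types.CodeType, code_pickler, code_unpickler)

  def f():
      return 42

  restored = pickle.loads(pickle.dumps(f.func_code))
  print type(restored)            # <type 'code'>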

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Validating an email address

2005-12-09 Thread Jean-Paul Calderone
On Fri, 9 Dec 2005 11:10:04 +, Tom Anderson <[EMAIL PROTECTED]> wrote:
>Hi all,
>
>A hoary old chestnut this - any advice on how to syntactically validate an
>email address? I'd like to support both the display-name-and-angle-bracket
>and bare-address forms, and to allow everything that RFC 2822 allows (and
>nothing more!).
>
>Currently, i've got some regexps which recognise a common subset of
>possible addresses, but it would be nice to do this properly - i don't
>currently support quoted pairs, quoted strings, or whitespace in various
>places where it's allowed. Adding support for those things using regexps
>is really hard. See:
>
>http://www.ex-parrot.com/~pdw/Mail-RFC822-Address.html
>
>For a level to which i am not prepared to stoop.
>
>I hear the email-sig are open to adding a validation function to the email
>package, if a satisfactory one can be written; i would definitely support
>their doing that.

The top part of 

 contains a parser that, IIRC, is basically complete.  There are unit tests 
nearby, too.  The code is MIT licensed.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Dectecting dir changes

2005-12-09 Thread Jean-Paul Calderone
On Fri, 09 Dec 2005 16:50:05 +, Steve Holden <[EMAIL PROTECTED]> wrote:
>chuck wrote:
>> I need to write a daemon for Solaris that monitors a directory for
>> incoming FTP transfers.  Under certain conditions, when the transfer is
>> complete I need to send an email notification, and do other stuff.
>> Win32 provides FindFirstChangeNotification(), but as best I can tell
>> this isn't supported on Solaris.
>>
>[...]
>>
>> Suggestions?
>>
>Write an FTP server in Python, then it will know exactly when each file
>transfer is complete, and it can do the mailing itself!

Or use an existing Python FTP server! ;)

Untested:

  from zope.interface import implements
  from twisted.protocols import ftp

  class UploadNotifyingAvatar(ftp.FTPShell):
      def uploadCompleted(self, path):
          """
          Override me!
          """
          # But, for example:
          from twisted.mail import smtp
          smtp.sendmail(
              '127.0.0.1',
              '[EMAIL PROTECTED]',
              ['[EMAIL PROTECTED]'],
              'Hey!  Someone uploaded a file: %r' % (path,))

      def openForWriting(self, path):
          writer = ftp.FTPShell.openForWriting(self, path)
          return CloseNotifyingWriter(
              writer,
              lambda: self.uploadCompleted(path))

  class CloseNotifyingWriter(object):
      implements(ftp.IWriteFile)

      def __init__(self, wrappedWriter, onUploadCompletion):
          self.wrappedWriter = wrappedWriter
          self.onUploadCompletion = onUploadCompletion

      def receive(self):
          receiver = self.wrappedWriter.receive()
          receiver.addCallback(
              CloseNotifyingReceiver,
              self.onUploadCompletion)
          return receiver

  class CloseNotifyingReceiver(object):
      def __init__(self, wrappedReceiver, onUploadCompletion):
          self.wrappedReceiver = wrappedReceiver
          self.onUploadCompletion = onUploadCompletion

      def registerProducer(self, producer, streaming):
          return self.wrappedReceiver.registerProducer(producer, streaming)

      def unregisterProducer(self):
          # Upload's done!
          result = self.wrappedReceiver.unregisterProducer()
          self.onUploadCompletion()
          return result

      def write(self, bytes):
          return self.wrappedReceiver.write(bytes)

Hook it up to a Realm and a Portal and override uploadCompleted to taste and 
you've got an FTP server that should do you (granted, it's a little long - some 
of this code could be be factored to take advantage of parameterizable 
factories to remove a bunch of the boilerplate).

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Overloading

2005-12-09 Thread Jean-Paul Calderone
On Fri, 09 Dec 2005 18:29:12 +0100, Johannes Reichel <[EMAIL PROTECTED]> wrote:
>Hi!
>
>In C++ you can overload functions and constructors. For example if I have a
>class that represents a complex number, than it would be nice if I can
>write two seperate constructors
>
>class Complex:
>
>def __init__(self):
>self.real=0
>self.imag=0
>
>def __init__self(self,r,i):
>self.real=r
>self.imag=i
>

class Complex:
    def __init__(self, r=0, i=0):
        self.real = r
        self.imag = i

>
>How would I do this in python?
>
>And by the way, is it possible to overload operators like +,-,*?
>
>def operator+(self,complex2):
>Complex res
>res.real=self.real+complex2.real
>res.imag=self.imag+complex2.imag
>
>return res

def __add__(self, complex2):
    res = Complex(self.real + complex2.real, self.imag + complex2.imag)
    return res

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Dectecting dir changes

2005-12-09 Thread Jean-Paul Calderone
On 9 Dec 2005 09:56:03 -0800, chuck <[EMAIL PROTECTED]> wrote:
>Hmmm, that is an interesting idea.  I've noticed the new book on
>Twisted, thinking about picking it up.
>
>I assume that this little snippet will handle multiple/concurrent
>incoming transfers via threading/sub-process, is scalable, secure, etc?

Correct, except for the threading/sub-process part.  Events from sockets are 
handled in a single thread in a single process.

>
>I could even run it on a non-standard port making it a bit more
>(ob)secure.

Indeed, although I mainly suggested it because I thought you were tied to FTP.  
If you actually have security concerns, you might want to use SFTP instead 
(which Twisted also supports, but to which the code I sent is only partly 
applicable).

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Managing import statements

2005-12-10 Thread Jean-Paul Calderone
On Sat, 10 Dec 2005 02:21:39 -0700, Shane Hathaway <[EMAIL PROTECTED]> wrote:
>Let's talk about the problem I really want help with.  I brought up a
>proposal earlier, but it was only half serious.  I realize Python is too
>sacred to accept such a heretical change. ;-)
>
>Here's the real problem: maintaining import statements when moving
>sizable blocks of code between modules is hairy and error prone.
>
>I move major code sections almost every day.  I'm constantly
>restructuring the code to make it clearer and simpler, to minimize
>duplication, and to meet new requirements.  To give you an idea of the
>size I'm talking about, just today I moved around 600 lines between
>about 8 modules, resulting in a 1400 line diff.  It wasn't just
>cut-n-paste, either: nearly every line I moved needed adjustment to work
>in its new context.
>
>While moving and adjusting the code, I also adjusted the import
>statements.  When I finished, I ran the test suite, and sure enough, I
>had missed some imports.  While the test suite helps a lot, it's
>prohibitively difficult to cover all code in the test suite, and I had

I don't know about this :)

>lingering doubts about the correctness of all those import statements.
>So I examined them some more and found a couple more mistakes.
>Altogether I estimate I spent 20% of my time just examining and fixing
>import statements, and who knows what other imports I missed.
>
>I'm surprised this problem isn't more familiar to the group.  Perhaps
>some thought I was asking a newbie question.  I'm definitely a newbie in
>the sum of human knowledge, but at least I've learned some tiny fraction
>of it that includes Python, DRY, test-first methodology, OOP, design
>patterns, XP, and other things that are commonly understood by this
>group.  Let's move beyond that.  I'm looking for ways to gain just a
>little more productivity, and improving the process of managing imports
>could be low-hanging fruit.
>
>So, how about PyDev?  Does it generate import statements for you?  I've
>never succeeded in configuring PyDev to perform autocompletion, but if
>someone can say it's worth the effort, I'd be willing to spend time
>debugging my PyDev configuration.
>
>How about PyLint / PyChecker?  Can I configure one of them to tell me
>only about missing / extra imports?  Last time I used one of those
>tools, it spewed excessively pedantic warnings.  Should I reconsider?

I use pyflakes for this: .  The 
*only* things it tells me about are modules that are imported but never used 
and names that are used but not defined.  Its false positive rate is something 
like 1 in 10,000.

>
>Is there a tool that simply scans each module and updates the import
>statements, subject to my approval?  Maybe someone has worked on this,
>but I haven't found the right Google incantation to discover it.

This is something I've long wanted to add to pyflakes (or as another feature of 
pyflakes/emacs integration).

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Managing import statements

2005-12-10 Thread Jean-Paul Calderone
On Sat, 10 Dec 2005 13:40:12 -0500, Kent Johnson <[EMAIL PROTECTED]> wrote:
>Jean-Paul Calderone wrote:
>> On Sat, 10 Dec 2005 02:21:39 -0700, Shane Hathaway
>> <[EMAIL PROTECTED]> wrote:
>>> How about PyLint / PyChecker?  Can I configure one of them to tell me
>>> only about missing / extra imports?  Last time I used one of those
>>> tools, it spewed excessively pedantic warnings.  Should I reconsider?
>>
>>
>> I use pyflakes for this: <http://divmod.org/trac/wiki/DivmodPyflakes>.
>> The *only* things it tells me about are modules that are imported but
>> never used and names that are used but n
>
>Do any of these tools (PyLint, PyChecker, pyflakes) work with Jython? To
>do so they would have to work with Python 2.1, primarily...

Pyflakes will *check* Python 2.1, though you will have to run pyflakes 
itself using Python 2.3 or newer.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Managing import statements

2005-12-10 Thread Jean-Paul Calderone
On Sat, 10 Dec 2005 11:54:47 -0700, Shane Hathaway <[EMAIL PROTECTED]> wrote:
>Jean-Paul Calderone wrote:
>> On Sat, 10 Dec 2005 02:21:39 -0700, Shane Hathaway <[EMAIL PROTECTED]> wrote:
>>>How about PyLint / PyChecker?  Can I configure one of them to tell me
>>>only about missing / extra imports?  Last time I used one of those
>>>tools, it spewed excessively pedantic warnings.  Should I reconsider?
>>
>>
>> I use pyflakes for this: <http://divmod.org/trac/wiki/DivmodPyflakes>.  The 
>> *only* things it tells me about are modules that are imported but never used 
>> and names that are used but not defined.  It's false positive rate is 
>> something like 1 in 10,000.
>
>That's definitely a good lead.  Thanks.
>
>> This is something I've long wanted to add to pyflakes (or as another feature 
>> of pyflakes/emacs integration).
>
>Is there a community around pyflakes?  If I wanted to contribute to it,
>could I?
>

A bit of one.  Things are pretty quiet (since pyflakes does pretty 
much everything it set out to do, and all the bugs seem to have been 
fixed (holy crap I'm going to regret saying that)), but if you mail
[EMAIL PROTECTED] with questions/comments/patches, or open a 
ticket in the tracker for a fix or enhancement, someone will 
definitely pay attention.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Question about tuple lengths

2005-12-14 Thread Jean-Paul Calderone
On Wed, 14 Dec 2005 09:54:31 -0800, "Carl J. Van Arsdall" <[EMAIL PROTECTED]> 
wrote:
>
> From my interpreter prompt:
>
> >>> tuple = ("blah")
> >>> len(tuple)
>4
> >>> tuple2 = ("blah",)
> >>> len (tuple2)
>1
>
>So why is a tuple containing the string "blah" without the comma of
>length four? Is there a good reason for this or is this a bug?

It's not a tuple :)

>>> t = ("blah")
>>> type(t)

>>> t2 = ("blah",)
>>> type(t2)

>>> t3 = "blah",
>>> type(t3)

>>> 

It's the comma that makes it a tuple.  The parentheses are only required in 
cases where the expression might mean something else without them.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Threading in python

2005-12-14 Thread Jean-Paul Calderone
On 14 Dec 2005 10:15:08 -0800, Aahz <[EMAIL PROTECTED]> wrote:
>In article <[EMAIL PROTECTED]>,
>Carl J. Van Arsdall <[EMAIL PROTECTED]> wrote:
>
>>Because of this global interpreter lock does this mean its impossible to
>>get speed up with threading on multiple processor systems?  I would
>>think so because only one python thread can execute at any one time.  Is
>>there a way to get around this?  This isn't something I need to do, I'm
>>just curious at this point.
>
>You either need to run multiple processes or run code that mostly calls
>into C libraries that release the GIL.  For example, a threaded spider
>scales nicely on SMP.

Yes.  Nearly as well as a single-threaded spider ;)

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Wed Development - Dynamically Generated News Index

2005-12-17 Thread Jean-Paul Calderone
On 17 Dec 2005 23:14:33 -0800, [EMAIL PROTECTED] wrote:
>Hi to all,
>
>I am somewhat somewhat new to Python, but apart from this I am just not
>seeing lots of resources on what I am trying to do.  I have seen them
>in other languages like PHP and ASP.
>
>I am building a simple MySQL news database,  which would contain, a
>headline, a date, main story(body) and a graphic associated with each
>story.  I would like to generate an index of the pages in this database
>( ie a news index with links to the articles) an to have a news
>administrator upload and delete stories graphic etc.
>
>I have read many articles on Python CGI programming and I have Googled
>extensively, but have not seen any kind of examples of how this can be
>done in Python.
>
>I would be grateful for any assistance or pointers.

Using Nevow and Twisted Web, this might look something like (untested)...

  from nevow import rend, loaders, tags

  class NewsIndex(rend.Page):
      docFactory = loaders.stan(tags.html[
          tags.body(render=tags.directive('index'))])

      def __init__(self, connpool):
          super(NewsIndex, self).__init__()
          self.connpool = connpool

      def _retrieveIndex(self):
          return self.connpool.runQuery(
              "SELECT articleId, articleTitle, articleImage FROM articles")

      def _retrieveArticle(self, articleId):
          return self.connpool.runQuery(
              "SELECT articleBody FROM articles WHERE articleId = ?",
              (articleId,))

      def render_index(self, ctx, data):
          return self._retrieveIndex().addCallback(lambda index: [
              tags.div[
                  tags.a(href=articleID)[
                      tags.img(src=articleImage),
                      articleTitle]]
              for (articleID, articleTitle, articleImage)
              in index])

      def childFactory(self, ctx, name):
          return self._retrieveArticle(name).addCallback(lambda articleBody:
              ArticlePage(articleBody))


  class ArticlePage(rend.Page):
      docFactory = loaders.stan(tags.html[
          tags.body(render=tags.directive('article'))])

      def __init__(self, articleBody):
          super(ArticlePage, self).__init__()
          self.articleBody = articleBody

      def render_article(self, ctx, data):
          return self.articleBody

  from twisted.enterprise import adbapi

  # Whatever DB-API 2.0 stuff you want
  cp = adbapi.ConnectionPool('pypgsql', ...)

  from twisted.application import service, internet
  from nevow import appserver

  application = service.Application("News Site")
  webserver = appserver.NevowSite(NewsIndex(cp))
  internet.TCPServer(80, webserver).setServiceParent(application)

  # Run with twistd -noy 

For more information about Nevow, checkout the Wiki - 
 - or the mailing list - 
 - or the IRC 
channel - #twisted.web on freenode.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Wed Development - Dynamically Generated News Index

2005-12-18 Thread Jean-Paul Calderone
On 18 Dec 2005 12:27:55 -0800, [EMAIL PROTECTED] wrote:
>Hi Jean-Paul,
>
>I am a bit lost in you code.  Is it possible for you to step through
>it?

For in depth-assistance, it would probably be best to continue on [EMAIL 
PROTECTED]  You might also want to check out some of the links on 
.  There is also a friendly, helpful 
IRC channel on freenode: #twisted.web.

I'll give a brief overview of what's going on, though:

  from nevow import rend, loaders, tags

  # Subclass rend.Page - this is the base class for pretty
  # much anything that represents an entire HTTP Resource
  class NewsIndex(rend.Page):

      # Set the template for this page.  Since it's just a
      # throw-away example, use some really cheesy stan (the
      # stuff with the square brackets).  The below template
      # turns into something like
      # <html><body>[whatever render_index returns]</body></html>
      docFactory = loaders.stan(tags.html[
          tags.body(render=tags.directive('index'))])

      # Boring initializer that takes a database connection
      # pool to issue queries against.
      def __init__(self, connpool):
          super(NewsIndex, self).__init__()
          self.connpool = connpool

      # "model" function - doesn't know about web, just
      # knows the db schema; returns a Deferred that fires
      # with information about all the articles in the database.
      def _retrieveIndex(self):
          return self.connpool.runQuery(
              "SELECT articleId, articleTitle, articleImage FROM articles")

      # Another model function.  This one gets the body text for
      # one particular article.
      def _retrieveArticle(self, articleId):
          return self.connpool.runQuery(
              "SELECT articleBody FROM articles WHERE articleId = ?",
              (articleId,))

      # The render function for the guts of the index page.  This
      # uses one of the model functions to get all the articles in
      # the database.  Since _retrieveIndex returns a Deferred, it
      # doesn't use the results right away - instead it defines a
      # lambda that will get invoked with the article information
      # when it is received.

      # There's some display logic here, too.  Some more stan, which
      # renders to something like:
      # <div>
      #   <a href="...">
      #     <img src="..." />
      #     article title
      #   </a>
      # </div>
      # for each article in the index
      def render_index(self, ctx, data):
          return self._retrieveIndex().addCallback(lambda index: ctx.tag[[
              tags.div[
                  tags.a(href=articleID)[
                      tags.img(src=articleImage),
                      articleTitle]]
              for (articleID, articleTitle, articleImage)
              in index]])

      # Allow this Resource to have children.  Above, we generated links
      # to "articleID" for each article in the index.  This function
      # handles those links by using the other model function to retrieve
      # the body of the requested article and return another Page subclass
      # instance for it.
      def childFactory(self, ctx, name):
          return self._retrieveArticle(name).addCallback(lambda articleBody:
              ArticlePage(articleBody))


  # The other display class.  This one is pretty boring.  It just
  # displays the body of an article.
  class ArticlePage(rend.Page):
      docFactory = loaders.stan(tags.html[
          tags.body(render=tags.directive('article'))])

      def __init__(self, articleBody):
          super(ArticlePage, self).__init__()
          self.articleBody = articleBody

      # Just spit out the article body.  Note I changed one thing here.
      # In the original version, I left out the "ctx.tag[...]" around
      # the article body.  This would have removed the <body> tag from
      # the resulting document!  Ooops.  I also left this out of
      # render_index above (so I've added it there as well).
      def render_article(self, ctx, data):
          return ctx.tag[self.articleBody]

  from twisted.enterprise import adbapi

  # Whatever DB-API 2.0 stuff you want
  cp = adbapi.ConnectionPool('pypgsql', ...)

  from twisted.application import service, internet
  from nevow import appserver

  application = service.Application("News Site")
  webserver = appserver.NevowSite(NewsIndex(cp))
  internet.TCPServer(80, webserver).setServiceParent(application)

  # Run with twistd -noy 

Hope this helps,

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Which Python web framework is most like Ruby on Rails?

2005-12-22 Thread Jean-Paul Calderone
On 21 Dec 2005 13:53:55 -0800, Pierre Quentel <[EMAIL PROTECTED]> wrote:
>Just to add some more confusion to the discussion, here is what I've
>found about other web frameworks :
>CherryPy : BSD
>Django : BSD
>Jonpy : Python licence
>Quixote : CNRI
>Skunkweb : GPL or BSD
>Snakelets : MIT
>Subway : ? + licence of the components
>PythonWeb : LGPL (will consider BSD-Style or Python if LGPL is a
>problem)
>Turbogears : MIT + licence of the components
>Twisted : LGPL

Not for more than a year.  Twisted is MIT licensed.  Also, Twisted isn't a web 
framework, though it includes an HTTP server.

>Webware : Python licence
>Zope : ZPL (Zope Public Licence)
>
>There doesn't seem to be an obvious choice, but the GPL isn't used much
>here
>

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python IMAP4 Memory Error

2005-12-23 Thread Jean-Paul Calderone
On Fri, 23 Dec 2005 14:21:27 +1100, Dody Suria Wijaya <[EMAIL PROTECTED]> wrote:
>Noah wrote:
>> This looks like a bug in your build of Python 2.4.2 for Windows.
>> Basically it means that C's malloc() function in the Python interpreter
>> failed.
>>
>
>On a second trial, it's also failed on Python 2.3.5 for Windows, Python
>2.3.3 for Windows, and Python 2.2.3 for Windows. So this seems to me as
>a Windows system related bug, not a particular version of Python bug.

Arguably, it's a bug in Python's imaplib module.  Sure, the Windows memory 
allocator is feeble and falls over when asked to do perfectly reasonable 
things.  But Python runs on Windows, so Python should do what it takes to work 
on Windows (or mark imaplib UNIX-only).

This particular issue can be avoided most of the time by reading in smaller 
chunks.
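
For instance (untested sketch - the server name, credentials and message
number are all made up): fetch the body in bounded slices with a partial
BODY section instead of one giant RFC822 fetch.

  import imaplib

  M = imaplib.IMAP4('imap.example.com')
  M.login('user', 'secret')
  M.select('INBOX')

  CHUNK = 256 * 1024
  pieces = []
  offset = 0
  while True:
      # BODY.PEEK[]<offset.length> asks the server for one slice only.
      typ, data = M.fetch('1', '(BODY.PEEK[]<%d.%d>)' % (offset, CHUNK))
      if not data or not isinstance(data[0], tuple):
          break
      piece = data[0][1]
      pieces.append(piece)
      offset += len(piece)
      if len(piece) < CHUNK:
          break
  message = ''.join(pieces)
  M.logout()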

You might also address it as a deployment issue, and run fewer programs on the 
host in question, or reboot it more frequently.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: XPath-like filtering for ElementTree

2005-12-26 Thread Jean-Paul Calderone
On 26 Dec 2005 15:42:56 -0800, Gerard Flanagan <[EMAIL PROTECTED]> wrote:
>Pseudo-XPath support for ElementTree with the emphasis on 'Pseudo'.
>
>http://gflanagan.net/site/python/pagliacci/ElementFilter.html
>
> [snip]
>
>ns = "xmlns1"
>path = r"{%s}To/[EMAIL PROTECTED]'Mrs Jones' and @test==3]" %
>(ns,ns)

How about promoting the query to a set of Python objects?  eg,

path = Query(
ns.xmlns1.To / ns.xmlns1.mailto['name': 'Mrs Jones',
'test': '3'])

That's just off the top of my head as an example of the kind of thing 
I mean.  There is probably a better (more consistent, flexible, easier 
to read, etc) spelling possible.

The advantage of this over strings is that you can break the query up 
into multiple pieces, pass parts of it around as real, live objects with 
introspectable APIs, allow for mutation of portions of the query, 
re-arrange it, etc.  All this is possible with strings too, just way 
harder :)
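
Just to make the idea concrete, here's a very rough, untested sketch of how
such query objects might look on top of ElementTree.  All the names here
(Namespace, Query, _Step) are made up, and the spelling differs a little
from the one above:

  class _Step(object):
      def __init__(self, uri, tag, attrs=None):
          self.uri = uri
          self.tag = tag
          self.attrs = attrs or {}

      def __getitem__(self, criteria):
          # mailto['name':'Mrs Jones', 'test':'3'] - each slice is one
          # attribute constraint (start is the name, stop is the value).
          if isinstance(criteria, slice):
              criteria = (criteria,)
          return _Step(self.uri, self.tag,
                       dict((c.start, c.stop) for c in criteria))

      def __div__(self, other):
          # To / mailto - combine steps into a path (Python 2 spelling).
          return Query([self, other])

  class Namespace(object):
      def __init__(self, uri):
          self.uri = uri

      def __getattr__(self, tag):
          return _Step(self.uri, tag)

  class Query(object):
      def __init__(self, steps):
          self.steps = steps

      def __div__(self, step):
          return Query(self.steps + [step])

      def select(self, element):
          # Walk the tree one step at a time, filtering on attributes.
          found = [element]
          for step in self.steps:
              matches = []
              for parent in found:
                  pattern = '{%s}%s' % (step.uri, step.tag)
                  for child in parent.findall(pattern):
                      for name, value in step.attrs.items():
                          if child.get(name) != value:
                              break
                      else:
                          matches.append(child)
              found = matches
          return found

  ns = Namespace("xmlns1")
  path = ns.To / ns.mailto['name':'Mrs Jones', 'test':'3']
  # matches = path.select(some_elementtree_element)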

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python coding contest

2005-12-27 Thread Jean-Paul Calderone
On Tue, 27 Dec 2005 14:02:57 -0700, Tim Hochberg <[EMAIL PROTECTED]> wrote:
>Shane Hathaway wrote:
>> Paul McGuire wrote:
>>
>>
>> Also, here's another cheat version.  (No, 7seg.com does not exist.)
>>
>>import urllib2
>>def seven_seg(x):return urllib2.urlopen('http://7seg.com/'+x).read()
>>
>And another one from me as well.
>
>class a:
>  def __eq__(s,o):return 1
>seven_seg=lambda i:a()
>

This is shorter as "__eq__=lambda s,o:1".

But I can't find the first post in this thread... What are you 
guys talking about?

Jean-Paul

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Simple question on Parameters...

2005-12-28 Thread Jean-Paul Calderone
On 28 Dec 2005 12:37:32 -0800, KraftDiner <[EMAIL PROTECTED]> wrote:
>I have defined a method as follows:
>
>def renderABezierPath(self, path, closePath=True, r=1.0, g=1.0, b=1.0,
>a=1.0, fr=0.0, fg=0.0, fb=0.0, fa=.25):
>
>Now wouldn't it be simpler if it was:
>
>def renderABezierPath(self, path, closePath=True, outlineColor,
>fillColor):
>
>But how do you set default vaules for outlineColor and fillColors?
>Like should these be simple lists or should they be structures or
>classes...

  def renderABezierPath(self, path, closePath=True,
                        outlineColor=(1, 1, 1),
                        fillColor=(0, 0, 0.25)):
      ...

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is this a refrence issue?

2005-12-28 Thread Jean-Paul Calderone
On Wed, 28 Dec 2005 14:40:45 -0800, "Carl J. Van Arsdall" <[EMAIL PROTECTED]> 
wrote:
>KraftDiner wrote:
>> I understand that everything in python is a refrence
>>
>> I have a small problem..
>>
>> I have a list and want to make a copy of it and add an element to the
>> end of the new list,
>> but keep the original intact
>>
>> so:
>> tmp = myList
>>
>
>tmp = myList is a shallow copy
>

"tmp = myList" isn't a copy at all.  A shallow copy is like this:

  tmp = myList[:]

or like this:

  import copy
  tmp = copy.copy(myList)

This is as opposed to a deep copy, which is like this:

  import copy
  tmp = copy.deepcopy(myList)

What "tmp = myList" does is to create a new *reference* to the 
very same list object.
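
For example:

  >>> myList = [1, 2, 3]
  >>> alias = myList        # no copy at all - same object
  >>> shallow = myList[:]   # shallow copy - a new list object
  >>> alias.append(4)
  >>> shallow.append(5)
  >>> myList
  [1, 2, 3, 4]
  >>> shallow
  [1, 2, 3, 5]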

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is len() restricted to (positive) 32-bit values?

2005-12-29 Thread Jean-Paul Calderone
On 29 Dec 2005 19:14:36 -0800, Josh Taylor <[EMAIL PROTECTED]> wrote:
>I have a class that wraps a large file and tries to make it look like a
>string w.r.t. slicing.  Here, "large file" means on the order of
>hundreds of GB.  All the slicing/indexing stuff through __getitem__()
>works fine, but len() is quite broken.  It seems to be converting the
>value returned by __len__() to a 32-bit integer.  If the conversion
>yields a negative number, it raises an exception.
>
>I'm running Python 2.4.1 on an Opteron running RedHat FC3.  It's a
>64-bit processor, and Python ints appear to be 64-bit as well, so even
>if len() only works with ints, it should still be able to handle 64-bit
>values.

Conspicuous timing:

  

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python as a Server vs Running Under Apache

2006-01-01 Thread Jean-Paul Calderone
On 1 Jan 2006 14:44:07 -0800, mojosam <[EMAIL PROTECTED]> wrote:
>I guess I'm a little confused, and this certainly comes from not yet
>having tried to do anything with Python on a web server.
>
>I remarked once to a Python programmer that it appeared to me that if I
>had a web page that called a Python program, that the server would:
>1. Load Python
>2. Run the program
>3. Unload Python

This is true of any CGI.  It is part of the definition of CGI.

>
>Then the next time it has to serve up that page, it would have to
>repeat the process.  This seems inefficient, and it would slow the site
>down.  The programmer confirmed this.  He said that's why I should use
>mod_python.  It stays resident.

There are lots of ways to write web applications aside from CGIs.  mod_python 
is one.

>
>Is this advice accurate?  Are there other things to consider?  Isn't
>there just some way (short of running something like Zope) that would
>keep Python resident in the server's RAM?  This is a shared server, so
>the web host probably doesn't like stuff sitting around in RAM.

Using Twisted, FastCGI, SCGI, or even BaseHTTPServer in the standard library 
will address /this/ particular issue (there are lots of other solutions, too, 
not just these four).  Some of them may address other issues better or worse 
than others.
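
For instance, even with nothing but the standard library, a long-running
process can be as small as this (untested sketch; the port number is made
up):

  from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

  class Hello(BaseHTTPRequestHandler):
      def do_GET(self):
          self.send_response(200)
          self.send_header('Content-Type', 'text/plain')
          self.end_headers()
          self.wfile.write('Hello from a persistent Python process\n')

  if __name__ == '__main__':
      # The interpreter stays resident between requests - unlike a CGI.
      HTTPServer(('', 8080), Hello).serve_forever()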

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Threading for a newbie

2006-01-05 Thread Jean-Paul Calderone
On Thu, 05 Jan 2006 17:15:20 -0500, Koncept <[EMAIL PROTECTED]> wrote:
>
>Hi. I am fairly new to Python programming and am having some trouble
>wrapping my head around threading.
>

It's pretty much the standard threading model.

>This is a very basic example of what I am trying to do, and would
>greatly appreciate having this code hacked to pieces so that I can
>learn from you folks with experience.
>
>What I would like to learn from this example is how to use threads to
>call on other classes and append/modify results in a list outside of
>scope (basically keep track of variables throught the total threading
>process and return them somehow afterwards ). I was thinking about
>using some sort of global, but I am not sure what the best approach to
>this is.

Why would anyone want to learn that?  What you want to do is learn how 
to *avoid* doing that :P  Writing threaded programs that do anything 
other than allow individual threads to operate on an isolated chunk of 
input and return an isolated chunk of output is extremely difficult; 
perhaps even impossible.

>
>Thanks kindly for any assistance you may be able to offer.
>
>-- code --
>
>import time, random, threading
>
>order = []
>
>class Foo:
>   def __init__(self, person):
>  print "\nFoo() recieved %s\n" % person
>
>class Bar(threading.Thread, Foo):
>   def __init__(self, name):
>  threading.Thread.__init__(self, name = name)
>  self.curName = name
>   def run(self):
>  global order
>  sleepTime = random.randrange(1,6)
>  print "Starting thread for %s in %d seconds" % \
> (self.getName(), sleepTime)
>  time.sleep(sleepTime)
>  Foo.__init__(self,self.getName())
>  print "%s's thread has completed" % self.getName()
>  order.append(self.getName())

Note that above you are sharing not only `order' between different 
threads (the append method of which is threadsafe, so this is 
nominally okay) but you are *also* sharing standard out, which is 
not safe to use in the manner you are using it.  The program contains 
a subtle race condition which can cause output to be misformated.
You need to change the print statements to single sys.stdout.write 
calls, or use a lock around any print.

>
>def main():
>   for person in ['Bill','Jane','Steve','Sally','Kim']:
>  thread = Bar(person)
>  thread.start()

  To answer the question you asked, accumulate the threads in a 
list.  After you have started them all, loop over the list calling 
join on each one.  join will block until the thread's function 
returns.
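
Untested, but your main() reworked along those lines (reusing your Bar
class and the order list):

  def main():
      threads = []
      for person in ['Bill', 'Jane', 'Steve', 'Sally', 'Kim']:
          thread = Bar(person)
          threads.append(thread)
          thread.start()

      # Wait for every worker to finish before looking at `order'.
      for thread in threads:
          thread.join()

      print "\nThreads were processed in the following order:"
      for i, person in enumerate(order):
          print "%d. %s" % (i + 1, person)
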

>
>   # How do I print "order" after all the threads are complete?
>   print "\nThreads were processed in the following order:"
>   for i, person in enumerate(order): print "%d. %s" % (i+1,person)
>
>if __name__ == "__main__":
>   main()
>

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Spelling mistakes!

2006-01-06 Thread Jean-Paul Calderone
On 6 Jan 2006 07:57:04 -0800, KraftDiner <[EMAIL PROTECTED]> wrote:
>try this:
>
>class x(object):
>   def __init__(self):
>  self.someName = "hello"
>   def someMethod(self):
>  self.sumName = "bye"
>
>find that bug.
>

[EMAIL PROTECTED]:~$ cat > xobj.py
class x(object):
    def __init__(self):
        self.someName = "hello"
    def someMethod(self):
        self.sumName = "bye"
[EMAIL PROTECTED]:~$ cat > test_xobj.py
from twisted.trial import unittest

import xobj

class XObjTestCase(unittest.TestCase):
    def testSomeName(self):
        x = xobj.x()
        self.assertEquals(x.someName, "hello")
        x.someMethod()
        self.assertEquals(x.someName, "bye")
[EMAIL PROTECTED]:~$ trial test_xobj.py 
Running 1 tests.
test_xobj
  XObjTestCase
testSomeName ...   [FAIL]

===============================================================================
[FAIL]: test_xobj.XObjTestCase.testSomeName

  File "/home/exarkun/test_xobj.py", line 10, in testSomeName
    self.assertEquals(x.someName, "bye")
twisted.trial.unittest.FailTest: 'hello' != 'bye'
-------------------------------------------------------------------------------
Ran 1 tests in 0.278s

FAILED (failures=1)
[EMAIL PROTECTED]:~$ 

Hope this helps,

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: question about mutex.py

2006-01-06 Thread Jean-Paul Calderone
On 6 Jan 2006 14:44:39 -0800, [EMAIL PROTECTED] wrote:
>Hi, I was looking at the code in the standard lib's mutex.py, which is
>used for queuing function calls. Here is how it lets you acquire a
>lock:

Did you read the module docstring?

    Of course, no multi-threading is implied -- hence the funny interface
    for lock, where a function is called once the lock is aquired.

If you are looking for a mutex suitable for multithreaded use, see the 
threading module.
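
For example (a minimal sketch):

  import threading

  lock = threading.Lock()

  def critical_section():
      # acquire() blocks the calling thread until the lock is free -
      # no callback interface involved.
      lock.acquire()
      try:
          pass  # ... touch the shared state here ...
      finally:
          lock.release()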

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Calling foreign functions from Python? ctypes?

2006-01-06 Thread Jean-Paul Calderone
On Sat, 07 Jan 2006 00:55:45 GMT, Neil Hodgson <[EMAIL PROTECTED]> wrote:
>Paul Watson:
>
>> Neil Hodgson wrote:
>>>It is unlikely that ctypes will be included in the standard Python
>>> build as it allows unsafe memory access making it much easier to crash
>>> Python.
>> Does extending Python with any C/C++ function not do the same thing?
>
>No. It is the responsibility of the extension author to ensure that
>there is no possibility of crashing Python. With ctypes, you have a
>generic mechanism that enables Python code to cause a crash.

Aahhh, come on.  ctypes is crazy useful.  Besides:

  [EMAIL PROTECTED]:~$ python < .
  Segmentation fault
  [EMAIL PROTECTED]:~$ python
  Python 2.4.2 (#2, Sep 30 2005, 21:19:01) 
  [GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu8)] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import os
  >>> import marshal
  >>> for i in range(1024):
  ...     try:
  ...         marshal.loads(os.urandom(16))
  ...     except:
  ...         pass
  ... 
  Segmentation fault
  [EMAIL PROTECTED]:~$ python
  Python 2.4.2 (#2, Sep 30 2005, 21:19:01) 
  [GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu8)] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import dl
  >>> dl.open('/lib/libc.so.6').call('memcpy', 1, 2, 3)
  Segmentation fault
  [EMAIL PROTECTED]:~$ python
  Python 2.4.2 (#2, Sep 30 2005, 21:19:01) 
  [GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu8)] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import sys
  >>> sys.setrecursionlimit(10)
  __main__:1: DeprecationWarning: integer argument expected, got float
  >>> (lambda f: f(f))(lambda f: f(f))
  Segmentation fault
  [EMAIL PROTECTED]:~$ 

I could probably dig up a few more, if you want.  So what's ctypes on top of 
this?

Jean-Paul



>
>Neil
>--
>http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Failing unittest Test cases

2006-01-10 Thread Jean-Paul Calderone
On 10 Jan 2006 13:49:17 -0800, Paul Rubin <"http://phr.cx"@nospam.invalid> 
wrote:
>[EMAIL PROTECTED] writes:
>> Got any ideas how that is to be accomplished short of jiggering the
>> names so they sort in the order you want them to run?
>
>How about with a decorator instead of the testFuncName convention,
>i.e. instead of
>
>   def testJiggle():   # "test" in the func name means it's a test case
>  ...
>
>use:
>
>@test
>def jiggletest():  # nothing special about the name "jiggletest"
>   ...
>
>The hack of searching the module for functions with special names was
>always a big kludge and now that Python has decorators, that seems
>like a cleaner way to do it.

It's neither a hack nor a kludge.  It's introspection and it's a 
valuable technique for solving various problems. 

Just because something has been around for a long time does not 
necessarily mean it's bad.

>
>In the above example, the 'test' decorator would register the
>decorated function with the test framework, say by appending it to a
>list.  That would make it trivial to run them in code order.

Not that I think this is a bad idea, but ordering things based on 
the side-effects of a decorator seems rather magical.  This, on top 
of the fact that unit tests are only unit tests as long as they are 
only testing something relatively isolated (let's call it a "unit"), 
would make me wary of using this technique in any unit tests I 
write. 

The kind of thing being discussed is not without value - but it 
falls into the realm of integration testing, not unit testing. 
I'm not just slinging vocabulary around here, either.  It's 
important to keep the two kinds of tests separate.  Integration 
tests are great for telling you that something has gone wrong, 
but if you get rid of all your unit tests in the process of writing 
integration tests, you will have a more difficult time determining 
*what* has gone wrong.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can I test if an argument is a sequence or a scalar?

2006-01-10 Thread Jean-Paul Calderone
On 10 Jan 2006 15:18:22 -0800, [EMAIL PROTECTED] wrote:
>I want to be able to pass a sequence (tuple, or list) of objects to a
>function, or only one.

Generally it's better to keep your API consistent.  If you are going 
to take sequences, only take sequences.  Don't try to make the 
interface more convenient by allowing single values to be passed in 
by themselves: it just leads to confusion and complexity.

>
>It's easy enough to do:
>
>isinstance(var, (tuple, list))

Above you said sequences - what about strings (buffers, unicode and 
otherwise) or arrays or xrange objects?  Those are all sequences too, 
you know.

>
>But I would also like to accept generators. How can I do this?

Ah, but generators *aren't* sequences.  I'm guessing you really want 
to accept any "iterable", rather than any sequence.  The difference is 
that all you can do with an iterable is loop over it.  Sequences are 
iterable, but you can also index them.

I hope you heed my advice above, but in case you're curious, the easiest 
way to tell an iterable from a non-iterable is by trying to iterate over 
it.  Actually, by doing what iterating over it /would/ have done:

   >>> iter("hello world")
   
   >>> iter([1, 2, 3, 4])
   
   >>> def foo():
   ...   yield 1
   ...   yield 2
   ...   yield 3
   ... 
   >>> iter(foo())
   

This works for anything that is iterable, as you can see.  However, it 
won't work for anything:

   >>> iter(5)
   Traceback (most recent call last):
 File "", line 1, in ?
   TypeError: iteration over non-sequence

This is "duck typing" in action.

Hope this helps,

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Marshal Obj is String or Binary?

2006-01-14 Thread Jean-Paul Calderone
On Sat, 14 Jan 2006 16:58:55 -0500, Mike Meyer <[EMAIL PROTECTED]> wrote:
>"Giovanni Bajo" <[EMAIL PROTECTED]> writes:
>> [EMAIL PROTECTED] wrote:
>>> Try...
>> for i in bytes: print ord(i)
>>> or
>> len(bytes)
>>> What you see isn't always what you have. Your database is capable of
>>> storing \ x 0 0 characters, but your string contains a single byte of
>>> value zero. When Python displays the string representation to you, it
>>> escapes the values so they can be displayed.
>> He can still store the repr of the string into the database, and then
>> reconstruct it with eval:
>
>repr and eval are overkill for this, and as as result create a
>security hole. Using encode('string-escape') and
>decode('string-escape') will do the same job without the security
>hole:

Using marshal at all introduces a similar security hole, so security is not an 
argument against repr()/eval() in this context.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: proposal: another file iterator

2006-01-15 Thread Jean-Paul Calderone
On 15 Jan 2006 16:44:24 -0800, Paul Rubin <"http://phr.cx"@nospam.invalid> 
wrote:
>I find pretty often that I want to loop through characters in a file:
>
>  while True:
> c = f.read(1)
> if not c: break
> ...
>
>or sometimes of some other blocksize instead of 1.  It would sure
>be easier to say something like:
>
>   for c in f.iterbytes(): ...
>
>or
>
>   for c in f.iterbytes(blocksize): ...
>
>this isn't anything terribly advanced but just seems like a matter of
>having the built-in types keep up with language features.  The current
>built-in iterator (for line in file: ...) is useful for text files but
>can potentially read strings of unbounded size, so it's inadvisable for
>arbitrary files.
>
>Does anyone else like this idea?

It's a pretty useful thing to do, but the edge-cases are somewhat complex.  
When I just want the dumb version, I tend to write this:

for chunk in iter(lambda: f.read(blocksize), ''):
    ...

Which is only very slightly longer than your version.  I would like it even 
more if iter() had been written with the impending doom of lambda in mind, so 
that this would work:

for chunk in iter('', f.read, blocksize):
    ...

But it's a bit late now.  Anyhow, here are some questions about your 
iterbytes():

  * Would it guarantee the chunks returned were read using a single read?  If 
blocksize were a multiple of the filesystem block size, would it guarantee 
reads on block-boundaries (where possible)?

  * How would it handle EOF?  Would it stop iterating immediately after the 
first short read or would it wait for an empty return?

  * What would the buffering behavior be?  Could one interleave calls to 
.next() on whatever iterbytes() returns with calls to .read() on the file?

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Space left on device

2006-01-16 Thread Jean-Paul Calderone
On 16 Jan 2006 07:52:46 -0800, sir_alex <[EMAIL PROTECTED]> wrote:
>Is there any function to see how much space is left on a device (such
>as a usb key)? I'm trying to fill in an mp3 reader in a little script,
>and this information could be very useful! Thanks!

If you are on a platform with statvfs(2):

>>> import os
>>> s = os.statvfs('/')
>>> s.f_bavail * s.f_bsize
59866619904L
>>> 

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: magical expanding hash

2006-01-17 Thread Jean-Paul Calderone
On Tue, 17 Jan 2006 16:47:15 -0800, James Stroud <[EMAIL PROTECTED]> wrote:
>braver wrote:
>> Well, I know some python, but since there are powerful and magical
>> features in it, I just wonder whether there're some which address this
>> issue better than others.
>>
>
>In python, += is short, of course, for
>
>a = a + 1
>

No it's not.  It's short for

  if isinstance(a, object):
      if hasattr(a.__class__, '__iadd__'):
          _x = a.__class__.__iadd__(a, b)
          if _x is NotImplemented:
              a = a + b
          else:
              a = _x
      else:
          a = a + b
  else:
      if hasattr(a, '__iadd__'):
          _x = a.__iadd__(b)
          if _x is NotImplemented:
              a = a + b
          else:
              a = _x
      else:
          a = a + b

Roughly speaking, anyway.  Not that this is relevant to your point.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Simultaneous connections

2006-01-20 Thread Jean-Paul Calderone
On 20 Jan 2006 06:01:15 -0800, datbenik <[EMAIL PROTECTED]> wrote:
>How can i write a program that supports simultaneous multipart
>download. So i want to open multiple connections to download one file.
>Is this possible. If so, how?

http://twistedmatrix.com/

>
>--
>http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Warning when new attributes are added to classes at run time

2006-07-19 Thread Jean-Paul Calderone
On Wed, 19 Jul 2006 20:42:40 GMT, Matthew Wilson <[EMAIL PROTECTED]> wrote:
>
>I sometimes inadvertently create a new attribute on an object rather
>update a value bound to an existing attribute.  For example:
>
> [snip]

Write more unit tests.  If you have mistakes like this, they will fail
and you will know you need to fix your code.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to force a thread to stop

2006-07-24 Thread Jean-Paul Calderone
On Mon, 24 Jul 2006 11:22:49 -0700, "Carl J. Van Arsdall" <[EMAIL PROTECTED]> 
wrote:
>Steve Holden wrote:
>> Carl J. Van Arsdall wrote:
>> [... rant ...]
>>
>>> So with this whole "hey mr. nice thread, please die for me" concept gets
>>> ugly quickly in complex situations and doesn't scale well at all.
>>> Furthermore, say you have a complex systems where users can write
>>> pluggable modules.  IF a module gets stuck inside of some screwed up
>>> loop and is unable to poll for messages there's no way to kill the
>>> module without killing the whole system.  Any of you guys thought of a
>>> way around this scenario?
>>>
>>>
>>>
>>
>> Communications through Queue.Queue objects can help. But if you research
>> the history of this design decision in the language you should discover
>> there are fairly sound rasons for not allowing arbitrary "threadicide".
>>
>>
>>
>Right, I'm wondering if there was a way to make an interrupt driven
>communication mechanism for threads?  Example: thread receives a
>message, stops everything, and processes the message.
>

And what happens if the thread was halfway through a malloc call and
the data structures used to manage the state of the heap are in an
inconsistent state when the interrupt occurs?

This has been discussed many many times in the context of many many
languages and threading libraries.  If you're really interested, do
the investigation Steve suggested.  You'll find plenty of material.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to force a thread to stop

2006-07-24 Thread Jean-Paul Calderone
On Mon, 24 Jul 2006 13:51:07 -0700, "Carl J. Van Arsdall" <[EMAIL PROTECTED]> 
wrote:
>Jean-Paul Calderone wrote:
>> On Mon, 24 Jul 2006 11:22:49 -0700, "Carl J. Van Arsdall" <[EMAIL 
>> PROTECTED]> wrote:
>>
>>> Steve Holden wrote:
>>>
>>>> Carl J. Van Arsdall wrote:
>>>> [... rant ...]
>>>>
>>>>
>>>>> So with this whole "hey mr. nice thread, please die for me" concept gets
>>>>> ugly quickly in complex situations and doesn't scale well at all.
>>>>> Furthermore, say you have a complex systems where users can write
>>>>> pluggable modules.  IF a module gets stuck inside of some screwed up
>>>>> loop and is unable to poll for messages there's no way to kill the
>>>>> module without killing the whole system.  Any of you guys thought of a
>>>>> way around this scenario?
>>>>>
>>>>>
>>>>>
>>>>>
>>>> Communications through Queue.Queue objects can help. But if you research
>>>> the history of this design decision in the language you should discover
>>>> there are fairly sound rasons for not allowing arbitrary "threadicide".
>>>>
>>>>
>>>>
>>>>
>>> Right, I'm wondering if there was a way to make an interrupt driven
>>> communication mechanism for threads?  Example: thread receives a
>>> message, stops everything, and processes the message.
>>>
>>>
>>
>> And what happens if the thread was halfway through a malloc call and
>> the data structures used to manage the state of the heap are in an
>> inconsistent state when the interrupt occurs?
>>
>> This has been discussed many many times in the context of many many
>> languages and threading libraries.  If you're really interested, do
>> the investigation Steve suggested.  You'll find plenty of material.
>>
>
>I've been digging around with Queue.Queue and have yet to come across
>any solution to this problem.

Right.  Queue.Queue doesn't even try to solve this problem.

>Queue.Queue just offers a pretty package 
>for passing around data, it doesn't solve the "polling" problem.

Exactly correct.

>I 
>wonder why malloc()'s can't be done in an atomic state (along with other
>operations that should be atomic, maybe that's a question for OS guys, I
>dunno).

(Note that malloc() is just a nice example - afaik it is threadsafe on
all systems which support threading at all)

Because it turns out that doing things atomically is difficult :)

It's not even always obvious what "atomic" means.  But it's not impossible
to do everything atomically (at least not provably ;), and in fact many
people try, most commonly by using mutexes and such.

However, unless one is extremely disciplined, critical sections generally
get overlooked.  It's often very difficult to find and fix these, and in
fact much of the time no one even notices they're broken until long after
they have been written, since many bugs in this area only surface under
particular conditions (ie, particular environments or on particular
hardware or under heavy load).

>Using Queue.Queue still puts me in a horribly inflexible
>"polling" scenario.  Yea, I understand many of the reasons why we don't
>have "threadicide", and why it was even removed from java.  What I don't
>understand is why we can't come up with something a bit better.  Why
>couldn't a thread relinquish control when its safe to do so?  

Of course, if you think about it, CPython does exactly this already.  The
GIL ensures that, at least on the level of the interpreter itself, no
thread switching can occur while data structures are in an inconsistent
state.  This works pretty well, since it means that almost anything you do
at the application level in a multithreaded application won't cause random
memory corruption or other fatal errors.

So why can't you use this to implement killable threads in CPython?  As it
turns out, you can ;)  Recent versions of CPython include the function
PyThreadState_SetAsyncExc.  This function sets an exception in another
thread, which can be used as a primitive for killing other threads.

Why does everyone say killing threads is impossible, then?  Well, for one
thing, the CPython developers don't trust you to use SetAsyncExc correctly,
so it's not exposed to Python programs :)  You have to wrap it yourself
if you want to call it.  For another thing, the granularity of exceptions
being raised is somewhat sketchy: an exception will not be raised while a
thread is blocked inside a single opcode (a long-running call into C, for
example), only in between opcodes.
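
For what it's worth, one way to wrap it from pure Python is ctypes - a
rough, untested sketch:

  import ctypes

  def async_raise(tid, exctype):
      # tid is thread.get_ident() as seen inside the target thread.
      res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
          ctypes.c_long(tid), ctypes.py_object(exctype))
      if res == 0:
          raise ValueError("invalid thread id")
      elif res > 1:
          # Hit more than one thread state somehow; pass NULL to clear
          # the pending request and give up.
          ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid), None)
          raise SystemError("PyThreadState_SetAsyncExc failed")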

Re: micro authoritative dns server

2006-07-24 Thread Jean-Paul Calderone
On 24 Jul 2006 14:45:25 -0700, [EMAIL PROTECTED] wrote:
>Hi,
>
>I'm new in python. I know that there are several classes for writing
>dns servers, but I don't understand it
>
>I just want to know if anyone could help me in writing a code for
>minimal authoritative dns server. I would like that anyone show me the
>code, minimal, for learn how expand it.
>
>The code I desireed should do the following:
>
>1) It has an hash:
>hosts = { "myhost"   => 84.34.56.67
>"localhost" => 127.0.0.1,
>"blue"  => fe80::20f:b0ff:fef2:f106
> }
>2) If any application asks if know any hosts, say nothing if this host
>is not in the hash (hosts). If this host is in the hash, return the ip
>3) And not do anything more
>
>So if we put this server in /etc/resolv.conf then the browsers only
>recognize the hosts we want

Twisted includes a DNS server which is trivially configurable to perform
this task.  Take a look.

  http://twistedmatrix.com/
  http://twistedmatrix.com/projects/names/documentation/howto/names.html
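
Roughly (an untested sketch - the hosts file name is made up, and you need
the usual privileges to bind port 53):

  from twisted.internet import reactor
  from twisted.names import server, dns, hosts

  # myhosts.txt holds hosts(5)-style lines, e.g. "84.34.56.67 myhost";
  # names that aren't listed just fail to resolve.
  resolver = hosts.Resolver(file='myhosts.txt')
  factory = server.DNSServerFactory(authorities=[resolver])
  protocol = dns.DNSDatagramProtocol(factory)

  reactor.listenUDP(53, protocol)
  reactor.listenTCP(53, factory)
  reactor.run()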

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie Q: Class Privacy (or lack of)

2006-07-24 Thread Jean-Paul Calderone
On Tue, 25 Jul 2006 02:49:06 GMT, Steve Jobless <[EMAIL PROTECTED]> wrote:
>Hi,
>
>I just started learning Python. I went through most of the tutorial at
>python.org. But I noticed something weird. I'm not talking about the
>__private hack.
>
>Let's say the class is defined as:
>
>  class MyClass:
>def __init__(self):
>  pass
>def func(self):
>  return 123
>
>But from the outside of the class my interpreter let me do:
>
>  x = MyClass()
>  x.instance_var_not_defined_in_the_class = 456
>
>or even:
>
>  x.func = 789
>
>After "x.func = 789", the function is totally shot.
>
>Are these bugs or features? If they are features, don't they create
>problems as the project gets larger?

If you do things like this, you will probably encounter problems, yes.

Fortunately the solution is simple: don't do things like this ;)

It is allowed at all because, to the runtime, "x.someattr = someval" is
no different from "self.someattr = someval".  The fact that a different
name is bound to a particular object doesn't matter.
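
For example:

  >>> class MyClass:
  ...     def func(self):
  ...         return 123
  ... 
  >>> x = MyClass()
  >>> x.func = 789      # an instance attribute shadows the class one
  >>> x.func
  789
  >>> del x.func        # remove the shadow; lookup finds the method again
  >>> x.func()
  123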

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Stack trace in C

2006-07-25 Thread Jean-Paul Calderone
On Tue, 25 Jul 2006 14:20:41 +0200, Andre Poenitz <[EMAIL PROTECTED]> wrote:
>
>
>Bear with me - I am new to Python. (And redirect me to a more suitable
>newsgroup in case this one is not appropriate.)
>
>I am trying to embed Python into a C++ application and want to get back
>a backtrace in case of errors in the python code.

I think you'd have more luck with the traceback module, which has such
methods as format_exception and print_tb.
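
For example, a small helper (untested) your C++ code could call while the
exception is being handled, to get the whole traceback back as one string:

  import sys
  import traceback

  def format_last_error():
      etype, value, tb = sys.exc_info()
      return ''.join(traceback.format_exception(etype, value, tb))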

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to force a thread to stop

2006-07-25 Thread Jean-Paul Calderone
On 25 Jul 2006 05:51:47 -0700, Paul Rubin <"http://phr.cx"@nospam.invalid> 
wrote:
>[EMAIL PROTECTED] writes:
>> Threadicide would not solve the problems you actually have, and it
>> tends to create other problems. What is the condition that makes
>> you want to kill the thread? Make the victim thread respond to that
>> condition itself.
>
>If the condition is a timeout, one way to notice it is with sigalarm,
>which raises an exception in the main thread.  But then you need a way
>to make something happen in the remote thread.

Raising an exception in your own thread is pretty trivial.  SIGALRM does
no good whatsoever here. :)

Besides, CPython will only raise exceptions between opcodes.  If a
misbehaving thread hangs inside an opcode, you'll never see the exception
from SIGALRM.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python stack trace on blocked running script.

2006-07-25 Thread Jean-Paul Calderone
On Tue, 25 Jul 2006 13:20:18 -0700, Rich Burridge <[EMAIL PROTECTED]> wrote:
>
>Hi all,
>
>If this is a frequently asked question, then just slap me silly and point me
>in the right direction.
>
>We are currently experiencing a hanging problem with Orca [1], a
>screen reader/magnifier written in Python. We know how to get a
>stack trace of the current thread in a Python problem, but the problem
>is that the script is currently blocked on a Bonobo [2] call, that is
>preventing us from doing this.
>
>What we are looking for is a way to get a complete stack
>trace of each running thread at this point. Sort of the Python equivalent
>of Control-\ in Java. Is there any way we can do this?
>
>Or any other suggestions on how to hone in and solve this problem?

You may be interested in this recent python-dev thread:

  http://article.gmane.org/gmane.comp.python.devel/82129
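
For what it's worth, once Python 2.5 is available this becomes possible
without extension modules, assuming you can get a few lines of Python to
run inside the hung process (from a signal handler, for instance).  A
rough, untested sketch:

import sys, traceback

def dump_all_thread_stacks():
    # sys._current_frames() (new in Python 2.5) maps each thread id to
    # that thread's topmost frame.
    for thread_id, frame in sys._current_frames().items():
        print 'Thread %s:' % thread_id
        traceback.print_stack(frame)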

Hope this helps,

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Need a compelling argument to use Django instead of Rails

2006-07-26 Thread Jean-Paul Calderone
On 26 Jul 2006 08:16:21 -0700, [EMAIL PROTECTED] wrote:
>
>Jaroslaw Zabiello wrote:
>> On Wed, 26 Jul 2006 16:25:48 +0200, Bruno Desthuilliers wrote:
>>
>> > I have difficulty imagining how a language could be more dynamic than
>> > Python...
>>
>> E.g. try to extends or redefine builtin Python classes on fly. Ruby is so
>> flexible that it can be used to create Domain-specific Programming
>> Languages.
>
>This, of course, is really cool if you are working
>all by yourself on a dissertation or something,
>but can be completely disasterous if you are
>actually working with other people who need to
>know what the expressions of the programming
>language mean and do.  Back in the day every
>good Common Lisp programmer wrote in a
>dialect completely incomprehensible to any other
>Common Lisp programmer.  Kinda fun, but not
>"best practice."
>   -- Aaron Watters
>

I think this is exactly correct.  However, note that Python does not
necessarily really have this limitation.  _CPython_ may prevent you
from changing the behavior of builtins, but that doesn't mean all
Python implementations do (in fact, even CPython used to let you add,
remove, or change the implementation of methods on builtin types).

For example:

[EMAIL PROTECTED]:~/Projects/pypy/trunk$ pypy/bin/py.py 
PyPy 0.9.0 in StdObjSpace on top of Python 2.4.3 (startuptime: 1.83 secs)
>>>> list.append = lambda self, value: 'hello, world'
>>>> x = []
>>>> x.append(10)
'hello, world'
>>>> x
[]
>>>>

As alternate implementations become more widely used, it will be important
to nail down exactly which behaviors are implementation details and which
are language features.  Right now the line is pretty fuzzy in many cases.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to force a thread to stop

2006-07-27 Thread Jean-Paul Calderone
On Thu, 27 Jul 2006 07:07:05 GMT, Dennis Lee Bieber <[EMAIL PROTECTED]> wrote:
>On Wed, 26 Jul 2006 17:38:06 -0700, "Carl J. Van Arsdall"
><[EMAIL PROTECTED]> declaimed the following in comp.lang.python:
>
>> Well, I guess I'm thinking of an event driven mechanism, kinda like
>> setting up signal handlers.  I don't necessarily know how it works under
>> the hood, but I don't poll for a signal.  I setup a handler, when the
>> signal comes, if it comes, the handler gets thrown into action.  That's
>> what I'd be interesting in doing with threads.
>>
>   Well, first off, something in the run-time IS doing the equivalent
>of polling. When IT detects the condition of a registered event has
>taken place, it calls the registered handler as a normal subroutine.
>Now, that polling may seem transparent if it is tied to, say the OS I/O
>operations. This means that, if the process never performs any I/O, the
>"polling" of the events never happens. Basically, as part of the I/O
>operation, the OS checks for whatever data signals an event (some bit
>set in the process table, etc.) has happened and actively calls the
>registered handler. But that handler runs as a subroutine and returns.
>
>   How would this handler terminate a thread? It doesn't run /as/ the
>thread, it runs in the context of whatever invoked the I/O, and a return
>is made to that context after the handler finishes.
>
>   You are back to the original problem -- such an event handler, when
>called by the OS, could do nothing more than set some data item that is
>part of the thread, and return... The thread is STILL responsible for
>periodically testing the data item and exiting if it is set.
>
>   If your thread is waiting for one of those many os.system() calls to
>return, you need to find some way to kill the child process (which may
>be something running on the remote end, since you emphasize using SSH).
>

While you are correct that signals are not the solution to this problem,
the details of this post are mostly incorrect.

If a thread never performs any I/O operations, signal handlers will still
get invoked on the arrival of a signal.

Signal handlers have to run in some context, and that ends up being the
context of some thread.  A signal handler is perfectly capable of exiting
the thread in which it is running.  It is also perfectly capable of
terminating any subprocess that thread may be responsible for.

However, since sending signals to specific threads is difficult at best
and impossible at worst, combined with the fact that in Python you
/still/ cannot handle a signal until the interpreter is ready to let you
do so (a fact that seems to have been ignored in this thread repeatedly),
signals end up not being a solution to this problem.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to force a thread to stop

2006-07-27 Thread Jean-Paul Calderone
On Thu, 27 Jul 2006 02:30:03 -0500, Nick Craig-Wood <[EMAIL PROTECTED]> wrote:
>[EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>>  Hans wrote:
>> > Is there a way that the program that created and started a thread also 
>> > stops
>> > it.
>> > (My usage is a time-out).
>> >
>> > E.g.
>> >
>> > thread = threading.Thread(target=Loop.testLoop)
>> > thread.start() # This thread is expected to finish within a second
>> > thread.join(2)# Or time.sleep(2) ?
>>
>>  No, Python has no threadicide method
>
>Actually it does in the C API, but it isn't exported to python.
>ctypes can fix that though.

No, it has a method for raising an exception.  This isn't quite the
same as a method for killing a thread.  Also, this has been mentioned
in this thread before.  Unfortunately:

  import os, threading, time, ctypes

  class ThreadKilledError(Exception):
  pass

  _PyThreadState_SetAsyncExc = ctypes.pythonapi.PyThreadState_SetAsyncExc
  _c_ThreadKilledError = ctypes.py_object(ThreadKilledError)

  def timeSleep():
  time.sleep(100)

  def childSleep():
  os.system("sleep 100") # time.sleep(100)

  def catchException():
  while 1:
  try:
  while 1:
  time.sleep(0.0)
  except Exception, e:
  print 'Not exiting because of', e


  class KillableThread(threading.Thread):
  """
  Show how to kill a thread -- almost
  """
  def __init__(self, name="thread", *args, **kwargs):
  threading.Thread.__init__(self, *args, **kwargs)
  self.name = name
  print "Starting %s" % self.name

  def kill(self):
  """Kill this thread"""
  print "Killing %s" % self.name
  _PyThreadState_SetAsyncExc(self.id, _c_ThreadKilledError)

  def run(self):
  self.id = threading._get_ident()
  try:
  return threading.Thread.run(self)
  except Exception, e:
  print 'Exiting', e

  def main():
  threads = []
  for f in timeSleep, childSleep, catchException:
  t = KillableThread(target=f)
  t.start()
  threads.append(t)
  time.sleep(1)
  for t in threads:
  print 'Killing', t
  t.kill()
  for t in threads:
  print 'Joining', t
  t.join()

  if __name__ == '__main__':
  main()


Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Info on continuations?

2006-08-08 Thread Jean-Paul Calderone
On 8 Aug 2006 08:07:02 -0700, [EMAIL PROTECTED] wrote:
>
>vasudevram wrote:
>> Hi,
>>
>> I am Googling and will do more, found some stuff, but interested to get
>> viewpoints of list members on:
>>
>> Continuations in Python.
>>
>> Saw a few URLs which had some info, some of which I understood. But
>> like I said, personal viewpoints are good to have.
>>
>> Thanks
>> Vasudev
>
>Could you be a little more specific on what you're looking for?
>Continuations are a big can of worms.
>
>In general, some of the historical uses of continuations are better
>represented as classes in python.  Generators provide some limited
>functionality as well, and will be able to send info both ways in
>python 2.5 to enable limited co-routines.  Stackless python allows you

Coroutines can switch out of stacks deeper than one frame.  Generators
cannot, even in Python 2.5.  You seem to be partially aware of this,
given your comments below, but have suffered some of the confusion PEP
342's naming seems intended to generate.

Python doesn't have continuations or coroutines at all.  It has generators,
and calling them anything else can't help but lead to misunderstandings.
Even "limited coroutines" is misleading: depending on what "limited" means
(which no one really knows), Python 2.4 already had these - the ability to
pass information into a generator is not new, only the syntax for doing it is.
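
To make the distinction concrete, a small sketch: a yield only suspends the
generator frame it appears in, so a nested helper cannot suspend its caller
the way a real coroutine could.

def helper():
    # This yield suspends helper() only; it cannot suspend produce(),
    # let alone produce()'s caller.
    yield 'from helper'

def produce():
    # To pass helper()'s values outward, produce() must itself yield at
    # every level of the "stack" - exactly what a real coroutine would
    # not require.
    for value in helper():
        yield value

for item in produce():
    print item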

>to *really* suspend the stack at a given time and do a bunch of crazy
>stuff, but doesn't currently support 'full continuations'.
>

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: do people really complain about significant whitespace?

2006-08-09 Thread Jean-Paul Calderone
On Wed, 09 Aug 2006 13:47:03 +0100, Pierre Barbier de Reuille <[EMAIL 
PROTECTED]> wrote:
>Carl Banks wrote:
>> Michiel Sikma wrote:
>>> Op 8-aug-2006, om 1:49 heeft Ben Finney het volgende geschreven:
>>>
 As others have pointed out, these people really do exist, and they
 each believe their preconception -- that significant whitespace is
 intrinsically wrong -- is valid, and automatically makes Python a
 lesser language.
>>> Well, I most certainly disagree with that, of course, but you gotta
>>> admit that there's something really charming about running an auto-
>>> formatting script on a large piece of C code, turning it from an
>>> unreadable mess into a beautifully indented and organized document.
>>
>> The only time I get that satisfaction is when I run the formatter to
>> format some C code I'm asked to debug.  Quite often the problem was
>> something that could have been easily spotted if the coder had used
>> good indentation in the first place.  Though they probably wouldn't
>> have seen it anyways, considering the poor programming skills of most
>> engineers (the classical definition, not computer engineers).
>>
>> The very fact the code formatters exist should tell you that grouping
>> by indentation is superior.
>>
>>
>> Carl Banks
>>
>
>Problem being : grouping by indentation do *not* imply good indentation.
>For example, I had to read a piece of (almost working) code which looked
>like that :
>
>
>  if cond1 : stmt1
> stmt2
> stmt3
>  if cond2:  stmt4
> stmt5
>  elif cond3:stmt6
> stmt7
>  else:  stmt8
> stmt9
> stmt10
> stmt11
>

This isn't actually legal Python.  Each branch starts with a simple suite and 
then proceeds to try to have a full suite as well.  A block can consist of one 
or the other, but not both.  Additionally, the nested if/elif has the wrong 
indentation and fits into no suite.

The closest legal example I can think of is:

  if cond1 :
 stmt2
 stmt3
 if cond2:
stmt5
 elif cond3:
stmt7
  else:
 stmt9
 stmt10
 stmt11

Which, while ugly, actually makes it much clearer what is going on.

>
>So you can tell what you want, but this code is valid but impossible to
>read and impossible to reindent correctly. So although I personnaly like
>Python, I still don't think meaningful indentation is good.

If your example were actually valid (which it isn't), all it would
demonstrate is that Python should be even stricter about indentation,
since it would have meant that there is still some whitespace which
has no meaning and therefore can be adjusted in meaningless ways by
each programmer, resulting in unreadable junk.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: start a multi-sockets server (a socket/per thread) with different ports but same host

2006-08-12 Thread Jean-Paul Calderone
On 12 Aug 2006 09:00:02 -0700, zxo102 <[EMAIL PROTECTED]> wrote:
>Hi,
>   I am doing a small project using socket server and thread in python.
> This is first time for me to use socket and thread things.
>   Here is my case. I have 20 socket clients.  Each client send a set
>of sensor data per second to a socket server.  The socket server will
>do two things: 1. write data into a file via bsddb; 2. forward the data
>to a GUI written in wxpython.
>   I am thinking the code should work as follow (not sure it is
>feasible)
>  20 threads, each thread takes care of a socket server with a
>different port.
>   I want all socket servers start up and wait for client connection.
>In the attached demo code, It stops at the startup of first socket
>server somewhere in the following two lines and waits for client call:
>

Threads aren't the best way to manage the concurrency present in this
application.  Instead, consider using non-blocking sockets with an
event notification system.  For example, using Twisted, your program
might look something like this:

from twisted.internet import reactor, protocol, defer

class CumulativeEchoProtocol(protocol.Protocol):
def connectionMade(self):
# Stop listening on the port which accepted this connection
self.factory.port.stopListening()

# Set up a list in which to collect the bytes which we receive
self.received = []


def connectionLost(self, reason):
# Notify the main program that this connection has been lost, so
# that it can exit the process when there are no more connections.
self.factory.onConnectionLost.callback(self)

def dataReceived(self, data):
# Accumulate the new data in our list
self.received.append(data)
# And then echo the entire list so far back to the client
        self.transport.write(''.join(self.received))

def allConnectionsLost():
# When all connections have been dropped, stop the reactor so the
# process can exit.
reactor.stop()

def main():
# Set up a list to collect Deferreds in.  When all of these Deferreds
# have had callback() invoked on them, the reactor will be stopped.
completionDeferreds = []
for i in xrange(20):
# Make a new factory for this port
f = protocol.ServerFactory()

# Make a Deferred for this port's connection-lost event and make
# it available to the protocol by way of the factory.
d = defer.Deferred()
f.onConnectionLost = d
completionDeferreds.append(d)
f.protocol = CumulativeEchoProtocol

# Start listening on a particular port number with this factory
port = reactor.listenTCP(2000 + i + 1, f)

# Make the port object available to the protocol as well, so that
# it can be shut down when a connection is made.
f.port = port

# Create a Deferred which will only be called back when all the other
# Deferreds in this list have been called back.
d = defer.DeferredList(completionDeferreds)

# And tell it to stop the reactor when it fires
d.addCallback(lambda result: allConnectionsLost())

# Start the reactor so things can start happening
reactor.run()

if __name__ == '__main__':
main()

Hope this helps,

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: self=pickle.load(file)? (Object loads itself)

2006-08-12 Thread Jean-Paul Calderone
On Sat, 12 Aug 2006 18:36:32 +0200, Anton81 <[EMAIL PROTECTED]> wrote:
>Hi!
>
>it seems that
>
>class Obj:
>def __init__(self):
>f=file("obj.dat")
>self=pickle.load(f)
>...
>
>doesn't work. Can an object load itself with pickle from a file somehow?
>What's an easy solution?

You are trying to implement a constructor (__new__) for the Obj class, but you 
have actually implemented the initializer (__init__).  In order to be able to 
control the actual creation of the instance object, you cannot use the 
initializer, since its purpose is to set up various state on an already created 
instance.  Instead, you may want to use a class method:

class Obj:
def fromPickleFile(cls, fileName):
return pickle.load(file(fileName))
fromPickleFile = classmethod(fromPickleFile)

You can then use this like so:

inst = Obj.fromPickleFile('obj.dat')

Jean-Paul


>
>Anton
>--
>http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: start a multi-sockets server (a socket/per thread) with different ports but same host

2006-08-12 Thread Jean-Paul Calderone
On 12 Aug 2006 10:44:29 -0700, zxo102 <[EMAIL PROTECTED]> wrote:
>Jean-Paul,
>Thanks a lot. The code is working. The python twisted is new to me too.
>Here are my three more questions:
>1. Since the code need to be started in a wxpyhon GUI (either by
>clicking a button or up with the GUI),  do I have to run the code in a
>thread (sorry, I have not tried it yet)?

You can try to use Twisted's wxPython integration support:

http://twistedmatrix.com/projects/core/documentation/howto/choosing-reactor.html#auto15

But note the warnings about how well it is likely to work.  Using a separate
thread might be the best solution.

>2. How can I grab the client data in the code? Can you write two lines
>for that? I really appreciate that.

I'm not sure what you mean.  The data is available in the `received' attribute
of the protocol instance.  Any code which needs to manipulate the data can get
that list and do whatever it likes with it.
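
For example (just a sketch - `received' is the list set up in
connectionMade above):

    def dataReceived(self, data):
        # Accumulate the new data in our list
        self.received.append(data)
        everything = ''.join(self.received)  # all bytes from this client so far
        # ... write `everything' to a file, hand it to the GUI, etc. ...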

>3. After I change
>self.transport.write(''.join(self.data))
>   to
>self.transport.write(''.join(data))
>  and scan all the ports with the following code twice (run twice).
>First round scanning says "succefully connected". But second round
>scanning says "failed". I have to restart your demo code to make it
>work.

I intentionally added code which shuts the server off after the first round
of connections is completed, since that seemed to be what your example
program was doing.  If you don't want this, just remove the shutdown code.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: start a multi-sockets server (a socket/per thread) with different ports but same host

2006-08-13 Thread Jean-Paul Calderone
On 12 Aug 2006 21:59:20 -0700, zxo102 <[EMAIL PROTECTED]> wrote:
>Jean-Paul,
>I just start to learn Twisted. Here is my simple case: I can find
>the data sent by clients in dataReceived but I don't know which
>client/which port the data is from. After I know where the data comes
>from, I can do different things there, for example, write them into
>different files via bsddb.  I am not sure if it is the correct way to
>do it.
>
>
>   def dataReceived(self, data):
>   # Accumulate the new data in our list
>   self.received.append(data)
>   # And then echo the entire list so far back to the client
>   self.transport.write(''.join(data))
>
>   print "> data: ", data
>   print " which Port? : ", self.factory.port  # unforunately it is
>an object here.

Check out self.transport.getHost().  It, too, is an object, with a 'port'
attribute.
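
For example (untested sketch):

    def dataReceived(self, data):
        self.received.append(data)
        # getHost() describes this server's end of the connection; its
        # 'port' attribute is the local port number the data arrived on.
        port = self.transport.getHost().port
        print "port %d received: %r" % (port, data)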

You may also want to glance over the API documentation:

  http://twistedmatrix.com/documents/current/api/

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to tell machines endianism.

2006-08-15 Thread Jean-Paul Calderone
On 15 Aug 2006 10:06:02 -0700, KraftDiner <[EMAIL PROTECTED]> wrote:
>How can you tell if the host processor is a big or little endian
>machine?
>

  sys.byteorder
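
For example (the struct comparison is just a cross-check of the same fact):

  import sys, struct

  print sys.byteorder    # 'little' or 'big'
  # derived by hand: does native order match little-endian order?
  print struct.pack('=I', 1) == struct.pack('<I', 1) and 'little' or 'big'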

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Anyone have a link handy to an RFC 821 compliant email address regex for Python?

2006-08-16 Thread Jean-Paul Calderone
On 16 Aug 2006 16:09:39 -0700, Simon Forman <[EMAIL PROTECTED]> wrote:
>fuzzylollipop wrote:
>> I want to do email address format validations, without turning to ANTLR
>> or pyparsing, anyone know of a regex that is COMPLIANT with RFC 821.
>> Most of the ones I have found from google searches are not really as
>> robust as I need them to be.
>
>Would email.Utils.parseaddr() fit the bill?
>
>http://docs.python.org/lib/module-email.Utils.html#l2h-3944
>

This is for RFC 2822 addresses.

http://divmod.org/trac/browser/sandbox/exarkun/smtp.py

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: socket client server... simple example... not working...

2006-10-04 Thread Jean-Paul Calderone
On 4 Oct 2006 19:31:38 -0700, SpreadTooThin <[EMAIL PROTECTED]> wrote:
>client:
>
>import socket
>s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>s.connect(("192.168.1.101", 8080))
>print 'Connected'
>s.send('ABCD')

Here you didn't check the return value of send to determine if all of the 
string was copied to the kernel buffer to be sent, so you may have only 
succeeded in sending part of 'ABCD'.

>buffer = s.recv(4)

in the above call, 4 is the maximum number of bytes recv will return.  It looks 
as though you are expecting it to return exactly 4 bytes, but in order to get 
that, you will need to check the length of the return value and call recv again 
with a lower limit until the combination of the return values of each call 
gives a total length of 4.
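
For example, a small helper along these lines (untested) handles the recv
side; the socket's sendall() method does the equivalent job for send.

def recv_exactly(sock, n):
    # Keep calling recv until exactly n bytes have arrived, or fail if
    # the connection is closed first.
    pieces = []
    remaining = n
    while remaining > 0:
        piece = sock.recv(remaining)
        if piece == '':
            raise RuntimeError, "socket connection broken"
        pieces.append(piece)
        remaining -= len(piece)
    return ''.join(pieces)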

>print buffer
>s.send('exit')

Again, you didn't check the return value of send.

>
>
>server:
>
>serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>serversocket.bind(("192.168.1.101", 8080))
>serversocket.listen(5)
>print 'Listen'
>(clientsocket, address) = serversocket.accept()
>print 'Accepted'
>flag = True
>while flag:
>   chunk = serversocket.recv(4)

You're calling recv on serversocket instead of on clientsocket.  You're also 
relying on recv to return exactly 4 bytes, which it may not do.

>   if chunk == '':
>   raise RuntimeError, "socket connection broken"
>   elif chunk == 'exit':
>   flag = False
>   else:
>   serversocket.send(chunk)

Another missing check of the return value of send.

>print 'Done'
>
>Server says!
>
>Listen
>Accepted
>Traceback (most recent call last):
>  File "server.py", line 11, in ?
>chunk = serversocket.recv(4)
>socket.error: (57, 'Socket is not connected')
>
>
>Client says:
>Connected
>
>What have I done wrong now!

I recommend switching to Twisted.  Below is a rough Twisted equivalent - a 
guess, really, since the protocol defined above is strange and complex 
(probably unintentionally, due to things you left out, like any form of 
delimiter) and I doubt I really understand the end goal you are working 
towards.  It is untested, so take it minus bugs:

# client.py
from twisted.internet import reactor, protocol

class Client(protocol.Protocol):
buf = ''
def connectionMade(self):
self.transport.write('ABCD')
def dataReceived(self, data):
self.buf += data
if len(self.buf) >= 4:
reactor.stop()

protocol.ClientCreator(reactor, Client).connectTCP('192.168.1.101', 8080)
reactor.run()

# server.py
from twisted.internet import reactor, protocol

class Server(protocol.Protocol):
buf = ''
def dataReceived(self, bytes):
self.buf += bytes
exit = self.buf.find('exit')
if exit != -1:
self.transport.write(self.buf[:exit])
self.buf = self.buf[exit + 4:]
reactor.stop()
else:
self.transport.write(self.buf)
self.buf = ''

f = protocol.ServerFactory()
f.protocol = Server
reactor.listenTCP(8080, f, interface='192.168.1.101')
reactor.run()

Hope this helps,

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: socket client server... simple example... not working...

2006-10-05 Thread Jean-Paul Calderone
On 5 Oct 2006 07:01:50 -0700, SpreadTooThin <[EMAIL PROTECTED]> wrote:
> [snip]
>
>Jean-Paul many thanks for this and your effort.
>but why is it every time I try to do something with 'stock' python I
>need another package?

Maybe you are trying to do things that are too complex :)

>By the time I've finished my project there are like 5 3rd party add-ons
>to be installed.

I don't generally find this to be problematic.

>I know I'm a python newbie... but I'm far from a developer newbie and
>that can be a recipe for
>disaster.

Not every library can be part of the standard library, neither can the
standard library satisfy every possible use-case.  Relying on 3rd party
modules isn't a bad thing.

>The stock socket should work and I think I've missed an
>obvious bug in the code other
>than checking the return status.
>

Well, I did mention one bug other than failure to check return values.
Maybe you missed it, since it was in the middle.  Go back and re-read
my response.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Restoring override of urllib.URLopener.open_https

2006-10-05 Thread Jean-Paul Calderone
On Fri, 6 Oct 2006 02:22:23 GMT, Bakker A <[EMAIL PROTECTED]> wrote:
>In article <[EMAIL PROTECTED]>,
>goyatlah   wrote:
>>
>>I think that you need a superclass above the M2Crypto one, and change
>>the open_https method back to the urllibs one.
>>
>
>I'm not sure I get your suggestion. What the M2Crypto module does is:
>
>   import m2urllib
>
>in its __init__.py, which blatantly does
>
>   from urllib import *
>   URLopener.open_https = open_https
>
>in turn, so there's no subclassing going on, and AFAIK, the original urllib
>code is irreversibly overwritten. Am I right?

Mostly.  However, try importing urllib first, grabbing the open_https function,
then importing M2Crypto, then reversing what M2Crypto did by putting the
original open_https function back onto URLopener.
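
In other words, something along these lines (untested):

import urllib

# grab the stock implementation before M2Crypto replaces it
_original_open_https = urllib.URLopener.open_https

import M2Crypto   # m2urllib rebinds URLopener.open_https as a side effect

# put the stock implementation back
urllib.URLopener.open_https = _original_open_https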

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 342 misunderstanding

2006-10-08 Thread Jean-Paul Calderone
On 8 Oct 2006 12:33:02 -0700, [EMAIL PROTECTED] wrote:
>So I've been reading up on all the new stuff in PEP 342, and trying to
>understand its potential. So I'm starting with a few simple examples to
>see if they work as expected, and find they dont.
>
>I'm basically trying to do the following:
>
>for x in range(10):
>print x*2
>
>but coroutine-style.
>
>My initial try was:
>
>>>> def printrange():
>...     for x in range(10):
>...         x = yield x
>...         print x
>...
>>>> g = printrange()
>>>> for x in g:
>...     g.send(x*2)
>...

Try this instead:

  >>> x = None
  >>> while 1:
  ... if x is None:
  ... send = None
  ... else:
  ... send = x * 2
  ... try:
  ... x = g.send(send)
  ... except StopIteration:
  ... break
  ...
  0
  2
  4
  6
  8
  10
  12
  14
  16
  18


>
>Now, I was expecting that to be 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20.
>
>What am I missing here?

Your code was calling next and send, when it should have only been calling
send.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: can regular ol' python do a php include?

2006-10-10 Thread Jean-Paul Calderone
On Tue, 10 Oct 2006 15:00:08 GMT, John Salerno <[EMAIL PROTECTED]> wrote:
>Fredrik Lundh wrote:
>
>> you don't even need anything from the standard library to inserting output
>> from one function into given text...
>>
>> text = "... my page with %(foo)s markers ..."
>>
>> print text % dict(foo=function())
>
>Wow, thanks. So if I have a file called header.html that contains all my
>header markup, I could just insert this line into my html file? (which I
>suppose would become a py file)
>
>print open('header.html').read()
>
>Not quite as elegant as include('header.html'), but it seems like it
>would work.


def include(filename):
print open(filename).read()

include('header.html')

Behold, the power of a general purpose programming language.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie: trying to twist my head around twisted (and python)

2006-10-11 Thread Jean-Paul Calderone
On Wed, 11 Oct 2006 17:15:09 +0200, Jan Bakuwel <[EMAIL PROTECTED]> wrote:
>Hoi all,
>
>Please see below a small piece of python code that resembles a
>smtpserver using the twisted framework. The code is based on the example
>smtp server discussed in Twisted Network Programming Essentials.
>The unmodified example code can be found on the O'Reilly website:
>(http://examples.oreilly.com/twistedadn/, in ch08 subdir).
>
>I've removed all the code I don't (the original example write an email
>received with SMTP to a Maildir; I'll do something else with it). For
>the sake of the example, the only thing I'll do in eomReceived is print
>whether the message had any attachments or not, then return a "failed"
>if a message had any attachments and a "success" if not.
>
>According to the book, I need to return a "Deferred result" from the
>function eomReceived (see  below).
>
>In eomReceived, I intend to process the email then either return success
> or failure. I've been looking at the twisted way of using twisted :-).
>According to the book "its a little confusing for a start" and I have to
>agree :-)


The return value of eomReceived is used to signal to the SMTP client
whether the message has been accepted.  Regardless of your
application logic, if you are taking responsibility for the message, you
should return a successful result.  If all of your processing is synchronous,
then you simply need to return twisted.internet.defer.succeed(None) at the
end of the function.  If you have asynchronous processing to do (it does not
appear as though you do), you will need to return a Deferred which only fires
once that processing has been completed.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Converting existing module/objects to threads

2006-10-19 Thread Jean-Paul Calderone
On Thu, 19 Oct 2006 14:09:11 -0300, Gabriel Genellina <[EMAIL PROTECTED]> wrote:
>At Thursday 19/10/2006 00:01, [EMAIL PROTECTED] wrote:
>> > Consider using the asyncore module instead of threads.
>>
>>I think that is a good point and I am considering using
>>asyncore/asynchat...  i'm a little confused as to how i can make this
>>model work.  There is no server communication without connection from
>>the client (me), which happens on intervals, not when data is available
>>on a socket or when the socket is available to be written, which is
>>always.  Basically i need to determine how to trigger the asynchat
>>process based on time.  in another application that i write, i'm the
>>server and the chat process happens every time the client wakes
>>up...easy and perfect for asyncore
>>
>>That is a solution i'd like to persue, but am having a hard time
>>getting my head around that as well.
>
>You have to write your own dispatcher (inheriting from async_chat) as any 
>other protocol. You can call asyncore.loop whith count=1 (or 10, but not 
>None, so it returns after a few iterations) inside your *own* loop. Inside 
>your loop, when time comes, call your_dispatcher.push(data) so the channel 
>gets data to be sent. Override collect_incoming_data() to get the response.
>You can keep your pending requests in a priority queue (sorted by time) and 
>check the current time against the top element's time.

You could also use Twisted, which provides time-based primitives in addition
to supporting network multiplexing without threads.
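
For example, a rough sketch of the time-based part - host, port, protocol
and interval are all placeholders:

from twisted.internet import reactor, protocol

class Poll(protocol.Protocol):
    def connectionMade(self):
        self.transport.write('give me the data\r\n')
    def dataReceived(self, data):
        print 'server said:', repr(data)
        self.transport.loseConnection()

def poll():
    # Connect, let Poll do its thing, and schedule the next poll.
    protocol.ClientCreator(reactor, Poll).connectTCP('localhost', 1234)
    reactor.callLater(60, poll)

reactor.callLater(0, poll)
reactor.run()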

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie: trying to twist my head around twisted (and python)

2006-10-20 Thread Jean-Paul Calderone
On Fri, 13 Oct 2006 18:44:35 +0200, Jan Bakuwel <[EMAIL PROTECTED]> wrote:
>Jean-Paul Calderone wrote:
>
>> The return value of eomReceived is used to determine whether to signal to
>> the SMTP client whether the message has been accepted.  Regardless of your
>> application logic, if you are taking responsibility for the message, you
>> should return a successful result.  If all of your processing is
>> synchronous,
>> then you simply need to return twisted.internet.defer.succeed(None) at the
>> end of the function.  If you have asynchronous processing to do (it does
>> not
>> appear as though you do), you will need to return a Deferred which only
>> fires
>> once that processing has been completed.
>
>There's still a bit of mystery in here for me...
>I'll be taking responsibility for the message in my code... but perhaps
>my code encounters a non resolvable error (such as disc full). In that
>case I would like to be able to signal a failure instead of success.
>
>Would the following code do the job "properly"?
>
>def eomReceived(self):
># message is complete, store it
>self.lines.append('') # add a trailing newline
>messageData = '\n'.join(self.lines)
>emailMessage = message_from_string(messageData)
>try:
>processEmail(emailMessage)
>return defer.succeed(None)
>except:
>return defer.fail
>#end eomReceived

Close.  Return defer.fail() and it's basically correct.  Sorry for the
delayed response, python-list is high enough traffic that tracking down
followups is kind of a hassle.  You might want to move over to the Twisted
list for further Twisted questions.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ZODB and Python 2.5

2006-10-20 Thread Jean-Paul Calderone
On Fri, 20 Oct 2006 14:04:46 -0500, Robert Kern <[EMAIL PROTECTED]> wrote:
>Andrew McLean wrote:
>> I'm going to have to delay upgrading to Python 2.5 until all the
>> libraries I use support it. One key library for me is ZODB. I've Googled
>>   and can't find any information on the developers' plans. Does anyone
>> have any information that might help?
>
>Since the Python development team tries hard to maintain backwards
>compatibility, the vast majority of Python packages will automatically support
>the newest release of Python in that they will work just dandy. Developers 
>don't
>really have plans to do that kind of support since it just happens.

Python 2.5 made quite a few changes which were not backwards compatible,
though.  I think quite a few apps will be broken by the Python 2.4 ->
Python 2.5 transition, many of them in relatively subtle ways.  For
example, an app may:

  - have been handling OSError instead of WindowsError;
  - have relied on exceptions being classic classes;
  - be unable to handle NullImporter;
  - define a slightly buggy but previously working __hash__ which
    returns the id() of an object;
  - be unable to handle the unconditional stderr writes the compiler
    does when it encounters `with' used as a variable;
  - have relied on top-level code having a name of "?" rather than the
    new "module";
  - have relied on the atime and mtime fields of a stat structure being
    integers rather than floats.

>
>If you mean something else by "support" (like making use of new language or
>standard library features), then what do you mean?
>
>I would suggest, in order:
>
>1) Look on the relevant mailing list for people talking about using ZODB with
>Python 2.5.
>
>2) Just try it. Install Python 2.5 alongside 2.4, install ZODB, run the test 
>suite.
>

These are pretty good suggestions, though, particularly the latter.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list

