[Twisted-Python] Ldaptor: [PATCH] Extend test driver send_multiResponse() to return deferred and throw errors

2010-09-02 Thread Anton Gyllenberg
The deferred returned by the LDAP client send_multiResponse() method was
previously unused by any code covered by tests, so the replacement method
in the test driver simply returned None. The deferred is now used in
search(), and this change makes the test driver return a deferred as well,
so that the new code path works when run under the test framework.
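
Roughly, the pattern this describes looks like the following self-contained
sketch; FakeClient and the minimal search() here are illustrative stand-ins,
not the actual ldaptor code:

    from twisted.internet import defer

    class FakeClient(object):
        """Stand-in for the LDAP client; only the returned deferred matters."""
        def send_multiResponse(self, op, handler):
            # Simulate a connection-level failure on the wire.
            return defer.fail(RuntimeError("connection lost"))

    def search(client):
        d = defer.Deferred()
        sendD = client.send_multiResponse(object(), lambda msg: None)
        # Without this line the failure is silently dropped and a test
        # exercising the error path just hangs until it times out.
        sendD.addErrback(d.errback)
        return d

    d = search(FakeClient())
    d.addErrback(lambda f: f.trap(RuntimeError))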

To make it possible to test failures in the client send() and
send_multiResponse() methods, the test driver is changed to accept Failure
instances in place of lists of LDAPProtocolResponses.  Doing this causes
the errback on the deferred to be called with this failure.
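
For example, assuming the patched testutil.py below, a test can now do
something like this (the exact assertion is illustrative, mirroring the new
test cases in the patch):

    from twisted.internet import error
    from twisted.python import failure
    from ldaptor.testutil import LDAPClientTestDriver

    # One canned "response set" containing a Failure: the driver's send()
    # then returns defer.fail() instead of defer.succeed().
    driver = LDAPClientTestDriver([
        failure.Failure(error.ConnectionLost())
        ])
    d = driver.send(object())   # any LDAP operation object would do here
    d.addErrback(lambda f: f.trap(error.ConnectionLost))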

The LDAPSyntaxSearch and Bind test cases are each augmented with one test
that exercises the new failure functionality in the client test driver.
Because the search() code until recently did not handle errors in the
send_multiResponse() deferred chain, the new test would simply time out
when run against older code. A shorter timeout of 3 seconds is therefore
set for the LDAPSyntaxSearch test case.
---

Discussion: With this change the old test cases pass and the code path
introduced by my modifications to send_multiResponse() is tested by a
new test case. I am still a bit unsure if I am testing the right thing
and if the original fix is the right thing to do. Any comments
welcome!

Code published on http://github.com/antong/ldaptor/tree/pu


 ldaptor/test/test_ldapsyntax.py |   25 +
 ldaptor/testutil.py             |   18 +-
 2 files changed, 42 insertions(+), 1 deletions(-)

diff --git a/ldaptor/test/test_ldapsyntax.py b/ldaptor/test/test_ldapsyntax.py
index b8bcf53..46be06c 100755
--- a/ldaptor/test/test_ldapsyntax.py
+++ b/ldaptor/test/test_ldapsyntax.py
@@ -7,6 +7,7 @@ from ldaptor import config, testutil, delta
 from ldaptor.protocols.ldap import ldapsyntax, ldaperrors
 from ldaptor.protocols import pureldap, pureber
 from twisted.internet import defer
+from twisted.internet import error
 from twisted.python import failure
 from ldaptor.testutil import LDAPClientTestDriver

@@ -366,6 +367,7 @@ class LDAPSyntaxAttributesModificationOnWire(unittest.TestCase):


 class LDAPSyntaxSearch(unittest.TestCase):
+    timeout = 3
     def testSearch(self):
         """Test searches."""

@@ -641,6 +643,17 @@ class LDAPSyntaxSearch(unittest.TestCase):
         d.addCallbacks(testutil.mustRaise, eb)
         return d

+    def testSearch_err(self):
+        client=LDAPClientTestDriver([
+            failure.Failure(error.ConnectionLost())
+            ])
+        o = ldapsyntax.LDAPEntry(client=client, dn='dc=example,dc=com')
+        d = o.search(filterText='(foo=a)')
+        def eb(fail):
+            fail.trap(error.ConnectionLost)
+        d.addCallbacks(testutil.mustRaise, eb)
+        return d
+
 class LDAPSyntaxDNs(unittest.TestCase):
     def testDNKeyExistenceSuccess(self):
         client = LDAPClientTestDriver()
@@ -1516,3 +1529,15 @@ class Bind(unittest.TestCase):
             fail.trap(ldaperrors.LDAPInvalidCredentials)
         d.addCallbacks(testutil.mustRaise, eb)
         return d
+
+    def test_err(self):
+        client = LDAPClientTestDriver([
+            failure.Failure(error.ConnectionLost())])
+
+        o=ldapsyntax.LDAPEntry(client=client,
+                               dn='cn=foo,dc=example,dc=com')
+        d = defer.maybeDeferred(o.bind, 'whatever')
+        def eb(fail):
+            fail.trap(error.ConnectionLost)
+        d.addCallbacks(testutil.mustRaise, eb)
+        return d
diff --git a/ldaptor/testutil.py b/ldaptor/testutil.py
index 8307cb9..cb25aa3 100644
--- a/ldaptor/testutil.py
+++ b/ldaptor/testutil.py
@@ -1,6 +1,7 @@
 """Utilities for writing Twistedy unit tests and debugging."""

 from twisted.internet import defer
+from twisted.python import failure
 from twisted.trial import unittest
 from twisted.test import proto_helpers
 from ldaptor import config
@@ -36,23 +37,37 @@ class LDAPClientTestDriver:
     messages are stored in self.sent, so you can assert that the sent
     messages are what they are supposed to be.

+    It is also possible to include a Failure instance instead of a list
+    of LDAPProtocolResponses, which will cause the errback to be called
+    with the failure.
+
     """
     def __init__(self, *responses):
         self.sent=[]
         self.responses=list(responses)
         self.connected = None
         self.transport = FakeTransport(self)
+
     def send(self, op):
         self.sent.append(op)
         l = self._response()
         assert len(l) == 1, \
             "got %d responses for a .send()" % len(l)
-        return defer.succeed(l[0])
+        r = l[0]
+        if isinstance(r, failure.Failure):
+            return defer.fail(r)
+        else:
+            return defer.succeed(r)
+
     def send_multiResponse(self, op, handler, *args, **kwargs):
+        d = defer.Deferred()
         self.sent.append(op)
         responses = self._response()
         while responses
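
(The hunk above is cut off in the archive. As a rough sketch only, and not
necessarily what the actual patch does, a deferred-returning
send_multiResponse() in the test driver might continue along these lines,
with the original handler-return-value checks omitted:)

    def send_multiResponse(self, op, handler, *args, **kwargs):
        # Sketch: feed each canned response to the handler; if a Failure
        # is queued, fire the errback of the returned deferred so that
        # error handling in search() can be exercised.
        d = defer.Deferred()
        self.sent.append(op)
        responses = self._response()
        while responses:
            r = responses.pop(0)
            if isinstance(r, failure.Failure):
                d.errback(r)
                return d
            handler(r, *args, **kwargs)
        d.callback(None)
        return d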

Re: [Twisted-Python] multiple workers

2010-09-02 Thread exarkun
On 1 Sep, 10:53 pm, ruslan.usi...@gmail.com wrote:
>Hello
>
>I am trying to write a Twisted-based daemon that works with multiple
>workers, like this:
>
>from twisted.internet import reactor;
>from proxy import FASTCGIServerProxyFactory;
>import os;
>
>reactor.listenUNIX("/tmp/twisted-fcgi.sock", 
>FASTCGIServerProxyFactory());
>
>for i in xrange(3):
>  l_pid = os.fork();
>
>  if(l_pid == 0):
>    break;
>
>reactor.run()
>
>I create 4 workers (by the number of CPU cores). In my tests everything
>works, but when I shut down the daemon I get the following error 3 times
>(because every worker tries to unlink the sock file: /tmp/twisted-fcgi.sock):

Using fork this way isn't supported.  Either fork before you import any 
Twisted modules, or use reactor.spawnProcess to create workers instead. 
If you must share a single listening socket amongst all the workers, you 
might be interested in the ticket for an explicit file descriptor passing 
API.
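
For illustration, a minimal sketch of the first suggestion (fork before any
Twisted import); the worker count and socket paths are made up, and each
worker here listens on its own socket rather than sharing one:

    # Minimal sketch of "fork before you import any Twisted modules".
    # Worker count and socket paths are illustrative; each worker gets
    # its own reactor and its own listening socket.
    import os

    for i in xrange(3):
        if os.fork() == 0:
            break   # child: fall through and run its own reactor

    # Twisted is imported only after the fork, so no reactor state or
    # file descriptors are shared between the processes.
    from twisted.internet import reactor
    from proxy import FASTCGIServerProxyFactory   # from the original post

    reactor.listenUNIX("/tmp/twisted-fcgi-%d.sock" % os.getpid(),
                       FASTCGIServerProxyFactory())
    reactor.run()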

Jean-Paul



Re: [Twisted-Python] multiple workers

2010-09-02 Thread ruslan usifov
Why is it not supported? I want behaviour like nginx (http://nginx.org/),
and I don't understand why I can't implement it through Twisted. It's so
easy: every process has its own set of sockets, and they don't share these
sockets with each other. The "OnConnect" event happens only once, and which
process handles it depends on the operating system (select, epoll, kqueue);
in my case it is distributed roughly round-robin (FreeBSD 7.2-RELEASE-p8).
Where is the unsupported behaviour here?


Re: [Twisted-Python] multiple workers

2010-09-02 Thread exarkun
On 05:26 pm, ruslan.usi...@gmail.com wrote:
>Why it is not supported?

"Why" it is not supported is that no one has decided to implement and 
support it.  If it's interesting behavior for you, then we would 
completely welcome you implementing it, and we'll even maintain the 
support for it once you've done that. :)

If you were asking about what specific implementation details cause it 
not to work now (more of a "how" question, sort of), then the answer to 
that probably varies from reactor to reactor, but it is all about how 
things end up being shared across the multiple processes created by 
fork.

>I want behaviour like nginx (http://nginx.org/), and I don't understand
>why I can't implement it through Twisted. It's so easy: every process has
>its own set of sockets, and they don't share these sockets with each
>other. The "OnConnect" event happens only once, and which process handles
>it depends on the operating system (select, epoll, kqueue); in my case it
>is distributed roughly round-robin (FreeBSD 7.2-RELEASE-p8). Where is the
>unsupported behaviour here?

So, for example, epoll descriptors do survive fork().  However, kqueue 
descriptors don't.  So one necessary change for kqueue reactor to 
support this kind of behavior is to have the reactor somehow 
re-initialize itself after the fork.

Another problem is that certain resources are not simply duplicated by a 
fork().  A specific example is the one you brought up in your earlier 
post.  A unix socket only has one entity corresponding to it in the 
filesystem.  Twisted takes responsibility for cleaning these up, but 
after you fork(), there are two unix sockets and still only one 
filesystem entity.  This confuses one of the processes, since it 
believes it needs to delete the file.  Hardly rocket science to fix, but 
it's a specific case which needs to be handled.
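
To make that concrete, here is a toy illustration with plain sockets (not
Twisted; the path is made up): after fork() there are two processes holding
the socket but only one filesystem entry, so only the creator should unlink
it.

    # Toy illustration of the cleanup issue: two processes after fork(),
    # one filesystem entry, so only one of them may unlink it.
    import os
    import socket

    PATH = "/tmp/demo.sock"
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.bind(PATH)
    s.listen(5)

    creator = os.getpid()
    os.fork()

    # ... both processes could accept() connections on s here ...

    s.close()
    if os.getpid() == creator:
        # Without this guard both processes would try to delete PATH,
        # which is essentially the shutdown error from the first post.
        os.unlink(PATH)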

And I'm sure you'll come across quite a few more specific cases which 
need to be handled.  This might get us back to the "why" a little - 
actually ensuring that everything will work properly when arbitrary 
forks are added is a major challenge.  I don't see any way to do it 
comprehensively, really.  That would leave you with a long, long 
adventure of fixing one little issue at a time for months or even years 
to come.  And each problem would only become evident after it bit you 
somehow.

That's probably why we have a ticket for an explicit file descriptor 
passing API, rather than a ticket for supporting arbitrary fork calls. 
The former is easier to test and be confident in than the latter.
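
For reference, the worker side of that file-descriptor-passing approach
looks roughly like this with the socket-adoption API that landed in Twisted
much later (reactor.adoptStreamPort); the fd number 3 is an assumption about
how the parent handed the socket down:

    # Worker-side sketch using reactor.adoptStreamPort (added to Twisted
    # well after this thread).  Assumes the parent passed an already
    # listening TCP socket to this process as file descriptor 3.
    import socket
    from twisted.internet import reactor
    from twisted.internet.protocol import Factory
    from twisted.protocols.wire import Echo

    factory = Factory()
    factory.protocol = Echo

    # The worker never binds or listens itself; it adopts the inherited
    # socket, so any number of workers can share one listening port.
    reactor.adoptStreamPort(3, socket.AF_INET, factory)
    reactor.run()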

Jean-Paul
