[Twisted-Python] CalendarServer
Hi, sorry for the cross-posting. I'm trying to compile CalendarServer, and it has a script which gets Twisted from Subversion here: svn.twistedmatrix.com/svn/Twisted/branches/dav-take-two-3081-4

Also, if I try to check out the latest SVN trunk of Twisted as described on the page http://svn.twistedmatrix.com/ I get the following error:

~$ svn co svn://svn.twistedmatrix.com/svn/Twisted/trunk Twisted
svn: Can't connect to host 'svn.twistedmatrix.com': connection timed out

Is the Subversion server up? Is there another way to access it? Am I doing something wrong?

Thanks.

Fernando.
[Twisted-Python] Twisted Python vs. "Blocking" Python: Weird performance on small operations.
Hello everyone!

My name is Dirk Moors, and for the past four years I've been involved in developing a cloud computing platform, using Python as the programming language. A year ago I discovered Twisted Python, and it got me very interested, up to the point where I made the decision to convert our platform (in progress) to a Twisted platform. One year later I'm still very enthusiastic about the overall performance and stability, but last week I encountered something I didn't expect: it appeared to be less efficient to run small "atomic" operations in different deferred callbacks than to run those same "atomic" operations together in "blocking" mode. Am I doing something wrong here?

To prove the problem to myself, I created the following example (full source and test code is attached):

-
import struct
from twisted.internet import defer, reactor   # needed by the async versions

def int2binAsync(anInteger):
    def packStruct(i):
        # Packs an integer; the result is 4 bytes
        return struct.pack("i", i)

    d = defer.Deferred()
    d.addCallback(packStruct)
    reactor.callLater(0, d.callback, anInteger)
    return d

def bin2intAsync(aBin):
    def unpackStruct(p):
        # Unpacks a bytestring into an integer
        return struct.unpack("i", p)[0]

    d = defer.Deferred()
    d.addCallback(unpackStruct)
    reactor.callLater(0, d.callback, aBin)
    return d

def int2binSync(anInteger):
    # Packs an integer; the result is 4 bytes
    return struct.pack("i", anInteger)

def bin2intSync(aBin):
    # Unpacks a bytestring into an integer
    return struct.unpack("i", aBin)[0]
-

While running the test code I got the following results (1 run = converting an integer to a byte string, converting that byte string back to an integer, and finally checking whether the last integer is the same as the input integer):

*** Starting Synchronous Benchmarks. (No Twisted => "blocking" code)
-> Synchronous Benchmark (1 runs) Completed in 0.0 seconds.
-> Synchronous Benchmark (10 runs) Completed in 0.0 seconds.
-> Synchronous Benchmark (100 runs) Completed in 0.0 seconds.
-> Synchronous Benchmark (1000 runs) Completed in 0.0034850159 seconds.
-> Synchronous Benchmark (10000 runs) Completed in 0.036408722 seconds.
-> Synchronous Benchmark (100000 runs) Completed in 0.36216077 seconds.
*** Synchronous Benchmarks Completed in 0.406000137329 seconds.

*** Starting Asynchronous Benchmarks. (Twisted => "non-blocking" code)
-> Asynchronous Benchmark (1 runs) Completed in 34.509629 seconds.
-> Asynchronous Benchmark (10 runs) Completed in 34.509905 seconds.
-> Asynchronous Benchmark (100 runs) Completed in 34.513114 seconds.
-> Asynchronous Benchmark (1000 runs) Completed in 34.585657 seconds.
-> Asynchronous Benchmark (10000 runs) Completed in 35.282924 seconds.
-> Asynchronous Benchmark (100000 runs) Completed in 41.492000103 seconds.
*** Asynchronous Benchmarks Completed in 42.1460001469 seconds.

Am I really seeing a factor of 100x?? I really hope I made a huge reasoning error here, but I just can't find it. If my results are correct then I really need to go and check my entire cloud platform for the places where I decided to split functions into atomic operations thinking it would improve performance, when in fact it did the opposite.

I personally suspect that I lose my CPU cycles to the reactor scheduling the deferred callbacks. Would that assumption make any sense?

The part where I need these conversion functions is in marshalling/protocol reading and writing throughout the cloud platform, which implies that these functions will be called constantly, so I need them to be super fast.
I always thought I had to split the entire marshalling process into small atomic (deferred-callback) functions to be efficient, but these figures tell me otherwise. I really hope someone can help me out here.

Thanks in advance,
Best regards,
Dirk Moors

Attachment: twistedbenchmark.py
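That suspicion is easy to test in isolation: the cost is not struct.pack itself but the extra reactor round trip that every callLater(0) adds. A small self-contained sketch (not part of the attached benchmark) that times just those round trips:

import time
from twisted.internet import defer, reactor

def run(n):
    # Fire n no-op deferreds, each scheduled with callLater(0), and time it.
    start = time.time()
    remaining = [n]
    def fired(_):
        remaining[0] -= 1
        if not remaining[0]:
            print "%d callLater(0) round trips took %.3f seconds" % (n, time.time() - start)
            reactor.stop()
    for _ in xrange(n):
        d = defer.Deferred()
        d.addCallback(fired)
        reactor.callLater(0, d.callback, None)

reactor.callWhenRunning(run, 10000)
reactor.run()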
Re: [Twisted-Python] Twisted Python vs. "Blocking" Python: Weird performance on small operations.
Dirk,

Using a Deferred directly, as in your bin2intAsync(), may be somewhat less efficient than the approach described in Recipe 439358: "[Twisted] From blocking functions to deferred functions" (http://code.activestate.com/recipes/439358/). You would get the same effect (asynchronous execution), but potentially more efficiently, by just decorating your synchronous functions:

from twisted.internet.threads import deferToThread
deferred = deferToThread.__get__

@deferred
def int2binAsync(anInteger):
    # Packs an integer; the result is 4 bytes
    return struct.pack("i", anInteger)

@deferred
def bin2intAsync(aBin):
    # Unpacks a bytestring into an integer
    return struct.unpack("i", aBin)[0]

Kind regards,
Valeriy Pogrebitskiy
vpogr...@verizon.net

On Oct 13, 2009, at 9:18 AM, Dirk Moors wrote:
> [...]
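With the decorator above, each call runs the packing in the reactor's thread pool and hands back a Deferred. A minimal usage sketch, assuming the decorated int2binAsync from the message above; the callback and the value 42 are just illustrations, not from the original post:

from twisted.internet import reactor

def printResult(packed):
    # Fires back in the reactor thread once struct.pack has run in the thread pool.
    print repr(packed)
    reactor.stop()

d = int2binAsync(42)
d.addCallback(printResult)
reactor.run()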
Re: [Twisted-Python] Twisted Python vs. "Blocking" Python: Weird performance on small operations.
Hi Dirk,

I took a look at your code sample and got the async benchmark to run with the following values:

*** Starting Asynchronous Benchmarks.
-> Asynchronous Benchmark (1 runs) Completed in 0.000181913375854 seconds.
-> Asynchronous Benchmark (10 runs) Completed in 0.000736951828003 seconds.
-> Asynchronous Benchmark (100 runs) Completed in 0.00641012191772 seconds.
-> Asynchronous Benchmark (1000 runs) Completed in 0.0741751194 seconds.
-> Asynchronous Benchmark (10000 runs) Completed in 0.675071001053 seconds.
-> Asynchronous Benchmark (100000 runs) Completed in 7.29738497734 seconds.
*** Asynchronous Benchmarks Completed in 8.16032314301 seconds.

That, though still quite a bit slower than the synchronous version, is much better than the 40-second mark you were seeing. My modified version simply returns defer.succeed from your async block-compute functions. That is, instead of your initial example:

def int2binAsync(anInteger):
    def packStruct(i):
        # Packs an integer; the result is 4 bytes
        return struct.pack("i", i)

    d = defer.Deferred()
    d.addCallback(packStruct)
    reactor.callLater(0, d.callback, anInteger)
    return d

my version does:

def int2binAsync(anInteger):
    return defer.succeed(struct.pack('i', anInteger))

A few things to note in general, however:

1) Twisted shines for I/O-bound operations - i.e. networking. A compute-intensive process will not necessarily gain any performance from using Twisted, since the Python GIL (a global lock) still exists.

2) If you are doing computations that use a C module (unfortunately struct pre-2.6, I believe, doesn't use a C module), there is a chance that the C module releases the GIL, allowing you to do those computations in a thread. In that case you'd be better off using deferToThread, as suggested earlier.

3) There is some overhead (usually minimal, but it exists) to using Twisted. Instead of computing a bunch of stuff serially and returning your answer as in your sync example, you're wrapping everything up in deferreds and starting a reactor - it's definitely going to be a bit slower than the pure synchronous version for this case.

Hope that makes sense.

Cheers,
Reza

--
Reza Lotun
mobile: +44 (0)7521 310 763
email: rlo...@gmail.com
work: r...@tweetdeck.com
twitter: @rlotun
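For completeness, the unpacking helper can be rewritten in the same style as Reza's int2binAsync; a sketch, not taken from the attached benchmark:

import struct
from twisted.internet import defer

def bin2intAsync(aBin):
    # defer.succeed wraps the already-computed value in a fired Deferred,
    # so there is no extra trip through the reactor for each call.
    return defer.succeed(struct.unpack("i", aBin)[0])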
[Twisted-Python] python-twisted-akonadi and akonadi-gtk
Hi,

Twisted is a framework for event-driven applications. Typically, client-server architectures can be implemented with Twisted. Servers and clients already exist for a long list of protocols and communication devices including HTTP, SSH, and, notably for my purpose, IMAP and UNIX sockets. It provides an event loop, and asynchronous Deferred objects which are similar to KJob objects.
http://twistedmatrix.com/trac/

Akonadi is a cross-platform PIM (personal information management) framework developed as part of the KDE4 platform. The goals of Akonadi include:
* Isolating user applications handling PIM data such as emails, contacts, notes etc. from the protocols used to access or store that data (IMAP, POP, Maildir, Groupware, vCards, etc.).
* Providing a single point of storage (actually a cache) of PIM data, accessible and manipulable by any application written in any language on the target platform.
http://pim.kde.org/akonadi/

Akonadi is designed as a client/server architecture. The server is written in Qt/C++, and we already have one client library for interfacing with the server, written in C++ using the KDE platform. Notifications of changes to data are transmitted over D-Bus, and the actual data is transferred over a local socket (on Unix; on Windows it's a named pipe). The protocol used for communication is IMAP with some non-standard extensions. And so the purpose emerges :).

I have started an Akonadi client library written in Python using twisted-imap, with some extensions on top of it for Akonadi-specific functionality. The code currently lives here:
http://gitorious.org/python-twisted-akonadi

There is the twisted-akonadi library, a gtkAkonadi library containing some high-level classes for PyGtk applications, and a simple email reader and addressbook written in PyGtk. Because Akonadi keeps everything in sync, you can change items in a KDE application, the gtk application, or the django application, and the other two will be instantly updated with the change.

I've blogged about it twice already here:
* http://steveire.wordpress.com/2009/10/09/holy-grail-no-thanks-weve-already-got-one/
* http://steveire.wordpress.com/2009/10/13/cross-platform-akonadi-video/

As you can see, this is only a proof of concept of the project. The aim is to create a library which feels pythonic and natural to use for Twisted users. If you think I've started in the wrong way, or you have ideas for ways this API could be used, please let me know. I've just started with Twisted, so I've probably not found some stuff which would make this task easier. Additionally, if you would like to contribute to the project, that would be very welcome. :)

If you find the ideas here interesting and want to know more, the Akonadi developers are in #akonadi on Freenode and kde-pim at KDE.org, and I am already in #twisted. It isn't quite a Twisted success story yet, but I think it has the potential to become one.

All the best,

Steve.
Re: [Twisted-Python] Twisted-Python Digest, Vol 67, Issue 23
On Oct 13, 2009, at 10:32 AM, Dirk Moors wrote:
> Hello Reza,
>
> I tried the solution you provided and I have to say, that changed a lot!
> You gave me a better understanding of how things work with Twisted, and I really appreciate your response!

Can you show the new code and benchmark results? Sounds like there's an important lesson here...

Thanks,

S
Re: [Twisted-Python] CalendarServer
Fernando Ruza Rodriguez wrote:
> ~$ svn co svn://svn.twistedmatrix.com/svn/Twisted/trunk Twisted
> svn: Can't connect to host 'svn.twistedmatrix.com': connection timed out
>
> Thanks.
>
> Fernando.

I have the same problem at my office: the svn port (3690) is blocked by the firewall.
http://svnbook.red-bean.com/en/1.0/ch06s03.html

mardiros
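If you suspect a firewall, a quick way to check from Python whether the svn:// port is reachable at all; a small sketch, not from the original thread, using the host and port mentioned above:

import socket

def port_open(host, port, timeout=5):
    # True if a plain TCP connection to host:port succeeds within the timeout.
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.close()
        return True
    except socket.error:
        return False

print port_open("svn.twistedmatrix.com", 3690)   # svn:// listens on port 3690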
[Twisted-Python] How to find out if exceptions are being raised in your errBack?
I've been hunting down a problem that I've finally found the cause of, and I'd like to know the Twisted way to catch this "error within the code handling the error" type of error.

Basically, in one branch of the errBack there was a typo. A simple typo that caused an unhandled NameError exception, but only once in a few thousand runs. The exception got caught and "displayed" by Twisted, but it wasn't going anyplace anyone was looking (buried under zillions of lines of logging), and the app continued on as if nothing went wrong.

I've put up a simple app that demonstrates the issue: http://pastebin.com/m59217f60

If you put in an error URL and let it run through, you'll see the error printed out, the exception will occur in the background, and the program just keeps on going. If you then hit Ctrl-C, you can see the traceback showing that Twisted caught the NameError.

What is the best way to handle programming errors like this in deferreds so they don't slip by, unnoticed?

Thanks,

S

(~/twisted_err)# ./errs_away.py
URL: http://www.yahoo.com
line = http://www.yahoo.com
Got data, len == 9490
URL: http://thereisnodomainnamedthis.com
line = http://thereisnodomainnamedthis.com
Error: DNS lookup failed: address 'thereisnodomainnamedthis.com' not found: [Errno 8] nodename nor servname provided, or not known.
URL: http://www.yahoo.com/non-existent-page
line = http://www.yahoo.com/non-existent-page
Error: 400 Bad Request   <== This triggers the code with the bad variable
URL: ^C   <= manually stop the program; then you get to see the traceback:

Unhandled error in Deferred:
Traceback (most recent call last):
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/twisted/web/client.py", line 143, in handleResponse
    self.status, self.message, response)))
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/twisted/web/client.py", line 309, in noPage
    self.deferred.errback(reason)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/twisted/internet/defer.py", line 269, in errback
    self._startRunCallbacks(fail)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/twisted/internet/defer.py", line 312, in _startRunCallbacks
    self._runCallbacks()
--- <exception caught here> ---
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/twisted/internet/defer.py", line 328, in _runCallbacks
    self.result = callback(self.result, *args, **kw)
  File "./errs_away.py", line 15, in printError
    print oops   # variable's not defined...
exceptions.NameError: global name 'oops' is not defined

Thanks,

S
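One common safety net while hunting something like this is to hang one final errback off the Deferred, so that anything raised inside your own error handler still gets reported somewhere loud. A sketch along the lines of the pastebin app: printError matches the handler name visible in the traceback above, but printData and the fetch wrapper are placeholders, not the app's actual code:

from twisted.python import log
from twisted.web.client import getPage

def printData(data):
    # Placeholder success handler.
    print "Got data, len ==", len(data)

def printError(failure):
    # Error handler; imagine a typo in here, as in the real app.
    print "Error:", failure.getErrorMessage()

def fetch(url):
    d = getPage(url)
    d.addCallback(printData)
    d.addErrback(printError)
    # A final errback catches anything printError itself raises, so the
    # mistake shows up immediately instead of deep inside the normal log.
    d.addErrback(log.err)
    return d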
[Twisted-Python] Twisted Performance
Hi,

I am new to Twisted and have been having trouble finding information about Twisted's performance. I have a fairly simple setup where I need to open a bunch of TCP connections that last for varying amounts of time but don't do much. I have tried using threads (got GILed to death) and processes (even worse). Now I am looking at either building a system that starts the connection, sends the info, has the remote end "phone home" when it's done, and then closes the connection, or using something like Twisted.

My socket conversation:
my app -> sends a message that triggers an action on the other end
other end -> receives the message and does the action (can take any amount of time)
other end -> sends the results back to my app

Can Twisted handle up to several hundred connections like this? Is there a better approach? Is there anything I should avoid?

Thanks,
Dan
Re: [Twisted-Python] Twisted Performance
On Oct 13, 2009, at 10:44 PM, Daniel Griffin wrote:
> [...]

Twisted Documentation: Writing Clients

I would suggest deferring worrying at this point. (;-b) Twisted can almost certainly handle it. Do the simplest thing possible, see how it performs, then worry as necessary.

S
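Since the question is essentially "can one reactor juggle a few hundred mostly-idle TCP clients", here is a minimal sketch of that conversation as a Twisted client. The command string, hosts, and port are made-up placeholders, not anything from Dan's setup:

from twisted.internet import reactor
from twisted.internet.protocol import ClientFactory
from twisted.protocols.basic import LineReceiver

class TriggerProtocol(LineReceiver):
    def connectionMade(self):
        # Send the message that triggers the action on the other end.
        self.sendLine("do-something")               # placeholder command

    def lineReceived(self, line):
        # The remote side answers whenever its action finishes.
        print "result from", self.transport.getPeer(), ":", line
        self.transport.loseConnection()

class TriggerFactory(ClientFactory):
    protocol = TriggerProtocol

    def clientConnectionFailed(self, connector, reason):
        print "connection failed:", reason.getErrorMessage()

# A few hundred concurrent connections like this sit comfortably in a single
# reactor; each one is idle until data arrives, costing almost nothing.
for host in ["10.0.0.%d" % i for i in range(1, 4)]:      # placeholder hosts
    reactor.connectTCP(host, 12345, TriggerFactory())    # placeholder port
reactor.run()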
Re: [Twisted-Python] How to find out if exceptions are being raised in your errBack?
On Tue, Oct 13, 2009 at 8:02 PM, Steve Steiner (listsin) <list...@integrateddevcorp.com> wrote:
> I've been hunting down a problem that I've finally found the cause of
> and I'd like to know what's the Twisted way to catch this "error
> within the code handling the error" type of error.

The right way to catch this is to write tests for your code and run them before deploying it to production :). Trial will helpfully fail tests which cause exceptions to be logged, so you don't need to write any special extra test to make sure that nothing is blowing up; just test your error-handling case, and if it blows up you will see it.

> Basically, in one branch of the errBack, there was a typo. A simple
> typo that caused an unhandled NameError exception, but only once in a
> few thousand runs.

If it's a NameError, you also could have used Pyflakes to catch it :).

> The exception got caught and "displayed" by Twisted, but it wasn't
> going anyplace anyone was looking (buried under zillions of lines of
> logging) and the app continues on as if nothing went wrong.

The real lesson here is that you should be paying attention to logged tracebacks. There are many ways to do this. Many operations teams running Twisted servers will trawl the logs with regular expressions. Not my preferred way of doing it, but I'm not really an ops person :).

If you want to handle logged exceptions specially, for example to put them in a separate file or to e-mail them to somebody, consider writing a log observer that checks for the isError key and does something special there. You can find out more about writing log observers here: <http://twistedmatrix.com/projects/core/documentation/howto/logging.html>.

> What is the best way to handle programming errors like this in
> deferreds so they don't slip by, unnoticed?

I'm answering a question you didn't ask, about logged errors, because I think it's the one you meant to ask. The answer to the question you are actually asking here, i.e. "how do I handle errors in an errback", is quite simple: add another errback. This is sort of like asking how to handle exceptions in an 'except:' block in Python. For example, if you want to catch errors from this code:

    try:
        foo()
    except:
        oops()

you could modify it to look like this:

    try:
        foo()
    except:
        try:
            oops()
        except:
            handleOopsOops()

which is what adding another errback is like. But, as I said: I don't think this is what you want, since it will only let you handle unhandled errors in Deferreds (not unhandled errors in, for example, protocols), and you will have to attach your error-handling callbacks everywhere (not to mention trying to guess a sane return value for the error-handler-error-handler).
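A minimal sketch of the log-observer approach described above, assuming you just want logged failures copied into their own file; the file name is arbitrary:

from twisted.python import log

def errorsToFile(path):
    # Returns an observer that appends any logged error/traceback to `path`.
    f = open(path, "a")
    def observer(event):
        if event.get("isError"):
            text = log.textFromEventDict(event)
            if text:
                f.write(text + "\n")
                f.flush()
    return observer

log.addObserver(errorsToFile("errors.log"))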