"Chris Angelico" wrote in message news:CAPTjJmor8dMv2TDtq8RHQgWeSAaZgAmxK9gFth=oojhidwh...@mail.gmail.com...

So really, the question is: Is this complexity buying you enough
performance that it's worthwhile?


Indeed, that is the question.

Actually, in my case it is not quite the question.

Firstly, although it took me a little while to get AsyncCursor working, it does not feel unduly complex; in fact it feels quite lightweight.
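
In outline, the idea is a worker thread that fetches rows in small batches and hands each batch to the event loop, so a consumer can iterate over the rows without blocking the loop for the whole result set. A stripped-down sketch (using sqlite3 here; this is a simplification of the approach, not the class itself, and it has to be instantiated from within a running event loop):

import asyncio
import sqlite3
import threading

class AsyncCursor:
    # Simplified sketch of the approach: a worker thread fetches rows
    # in small batches and hands each batch back to the event loop.

    _DONE = object()  # sentinel marking the end of the result set

    def __init__(self, db_path, sql, batch_size=50):
        self._loop = asyncio.get_running_loop()  # needs a running loop
        self._queue = asyncio.Queue()
        threading.Thread(
            target=self._worker,
            args=(db_path, sql, batch_size),
            daemon=True,
        ).start()

    def _worker(self, db_path, sql, batch_size):
        conn = sqlite3.connect(db_path)
        try:
            cur = conn.execute(sql)
            while True:
                batch = cur.fetchmany(batch_size)
                if not batch:
                    break
                # asyncio.Queue is not thread-safe, so hop back onto
                # the loop thread to enqueue each batch.
                self._loop.call_soon_threadsafe(
                    self._queue.put_nowait, batch)
        finally:
            conn.close()
            self._loop.call_soon_threadsafe(
                self._queue.put_nowait, self._DONE)

    async def rows(self):
        # Async generator: yields one row at a time, awaiting between
        # batches so that other tasks get a turn on the event loop.
        while True:
            batch = await self._queue.get()
            if batch is self._DONE:
                return
            for row in batch:
                yield row

A consuming task then just does 'async for row in cur.rows(): ...'.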

My tests show fairly consistently that my approach is slightly (5-10%) slower than run_in_executor(), so if that were the only issue I would not hesitate to abandon my approach.
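
For reference, the run_in_executor() version I am comparing against is essentially the standard pattern (sketched here with sqlite3; the helper names are mine):

import asyncio
import sqlite3

def blocking_query(db_path, sql):
    # Runs in a worker thread; fetchall() gathers the entire result
    # set before anything is returned to the caller.
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()

async def query(db_path, sql):
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, blocking_query, db_path, sql)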

However, my concern is not to maximise database performance, but to ensure that in an asynchronous environment one task does not block the others from responding. My tests simulate a number of tasks running concurrently, each trying to access the database. Among other measurements, I track the time at which each database access commences.

As I expected, tasks run with 'run_in_executor' execute sequentially, i.e. the next one only starts when the previous one has finished. This is not because the tasks themselves are sequential, but because 'fetchall()' is (I think) a blocking operation. Conversely, with my approach, all the tasks start within a short time of each other.

Because I can process the rows as they are received, each task seems to get a fairer share of the time available. Not to mention that there are very likely to be other, non-database tasks running concurrently, and they should also be more responsive.
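
The measurement itself amounts to something like the following (a cut-down sketch; 'run_query' stands in for either approach, and the relative timestamps are what show whether the accesses interleave or run back to back):

import asyncio
import time

START = time.monotonic()

def stamp(msg):
    # Relative timestamps make the interleaving (or lack of it) obvious.
    print(f"{time.monotonic() - START:7.3f}s  {msg}")

async def db_task(n, run_query):
    stamp(f"task {n}: database access commencing")
    await run_query()
    stamp(f"task {n}: finished")

async def main(run_query, n_tasks=5):
    await asyncio.gather(*(db_task(i, run_query) for i in range(n_tasks)))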

It would be quite difficult to simulate all of this, so I confess that I am relying on gut instinct at the moment.

Frank

