On 11/6/2011 12:04 PM, Dennis Lee Bieber wrote:
On Sun, 6 Nov 2011 11:39:56 +0200, "Frank Millman" <fr...@chagford.com>
declaimed the following in gmane.comp.python.general:


So my analysis of the problem is correct, but my solution is wrong.

Instead of executing fetchall() and returning the connection, I should
retain the connection until I have exhausted the cursor.

That makes a lot of sense.
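
Concretely, something along these lines -- a minimal sketch, assuming
a plain sqlite3 connection; the path, query, and table name are
invented for illustration:

    import sqlite3

    def iter_rows(db_path, query, params=()):
        # Hold the connection for the life of the cursor; giving it
        # back before the cursor is exhausted leaves the cursor
        # dangling mid-iteration.
        conn = sqlite3.connect(db_path)
        try:
            cur = conn.execute(query, params)
            for row in cur:      # exhaust the cursor first...
                yield row
        finally:
            conn.close()         # ...then release the connection

    for row in iter_rows('app.db', 'SELECT * FROM users'):
        print(row)

The finally clause runs once iteration finishes (or the generator is
closed), so the connection outlives every row the cursor produces.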

        Especially if all you are doing is read-only work.

        If you have a connection/cursor doing write operations, you may not
be able to commit those writes until all reading cursors have closed.
(Read the documentation on the SQLite3 locking system -- though the
newest version has added a second type of locking, which may complicate
the matter. Under the original/normal scheme, potential readers wait
"outside" SQLite3 and active readers run "inside" it; when an active
reader's cursor advances to a pending write, it blocks all the
potential readers from entering, but is itself blocked until all the
other active readers have exited.)
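
For reference, the "second type of locking" is presumably write-ahead
logging (WAL, added in SQLite 3.7.0), under which readers see a
snapshot while a single writer appends to the log, so the two no
longer block each other. A minimal sketch of switching it on, with a
hypothetical database file:

    import sqlite3

    conn = sqlite3.connect('app.db')
    # The PRAGMA returns the journal mode actually in effect.
    mode = conn.execute('PRAGMA journal_mode=WAL').fetchone()[0]
    print(mode)    # prints 'wal' if the switch succeeded
    conn.close()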

   Right.  The scarce resource is database locks, not connections.
Especially with SQLite, which has, by necessity, a rather brutal
locking strategy.
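
One practical consequence: hold the write lock for as short a span as
possible. A rough sketch (the table and values are invented):

    import sqlite3

    conn = sqlite3.connect('app.db')
    # Used as a context manager, the connection commits (or rolls
    # back) as soon as the block exits, so the write lock is held
    # only briefly.
    with conn:
        conn.execute('UPDATE prefs SET value=? WHERE key=?',
                     ('dark', 'theme'))
    conn.close()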

   Realize that SQLite is not a high-performance multi-user database.
You use SQLite to store your browser preferences, not your customer
database.

   If you're doing enough transactions from multiple processes that
performance is an issue, you need to move up to MySQL or Postgres.
If almost all transactions are SELECTs, performance may not be
too bad, but if there are INSERT and UPDATE transactions on the
same table, performance will be awful.
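
The contention is easy to picture: every writer serializes on the one
write lock, and sqlite3's timeout argument only controls how long a
blocked writer waits before giving up. A hedged sketch, with an
invented table:

    import sqlite3

    conn = sqlite3.connect('app.db', timeout=10.0)  # wait up to 10 s
    try:
        with conn:
            conn.execute('INSERT INTO log (msg) VALUES (?)',
                         ('hello',))
    except sqlite3.OperationalError as exc:
        # With enough concurrent writers this fires anyway,
        # typically as "database is locked".
        print('writer gave up:', exc)
    finally:
        conn.close()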

                                John Nagle