On Fri, Dec 03, 2004 at 06:06:11AM -0500, Kent Johnson wrote:
> If your data is (or can be) created by an iterator, you can use this recipe 
> to group the data into batches of whatever size you choose and write the 
> individual batches to the db.
> http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/303279

If your data is (or can be) created by an iterator, then you might find it
interesting that *pysqlite2*'s .executemany() works not only on lists, but
also on iterators.

Example:

import pysqlite2.dbapi2 as sqlite

con = sqlite.connect(":memory:")
cu = con.cursor()
cu.execute("create table foo(x, y)")

# A generator function (which returns an iterator)
def gen():
    for i in xrange(5):
        yield (i, 'foo')

cu.executemany("insert into foo(x, y) values (?, ?)", gen())

So, in pysqlite2, the combination of .executemany() and an iterator gives
the best performance: .executemany() reuses the compiled SQL statement (so
the engine only needs to parse it once), and the iterator, used wisely,
keeps memory usage low because you no longer need to construct large lists.
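
To make the memory point concrete, here is a rough sketch (the file name
data.csv and its two-column layout are just made up for illustration): the
generator hands .executemany() one parameter tuple at a time, so the whole
file never has to be loaded into a list.

import csv
import pysqlite2.dbapi2 as sqlite

def rows(filename):
    # Yield one (x, y) parameter tuple per CSV line; only a single
    # row is held in memory at any time. Assumes two columns per line.
    for x, y in csv.reader(open(filename)):
        yield (x, y)

con = sqlite.connect(":memory:")
cu = con.cursor()
cu.execute("create table foo(x, y)")
cu.executemany("insert into foo(x, y) values (?, ?)", rows("data.csv"))
con.commit()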

I hope I don't create too much confusion here ;-)

-- Gerhard

