Peter Otten wrote:
> Ethan Furman wrote:
>> SQLite has a neat feature where if you give it the file-name of
>> ':memory:' the resulting database is in memory and not on disk.  I
>> thought it was a cool feature, but expanded it slightly: any name
>> surrounded by colons results in an in-memory table.
>>
>> I'm looking at the same type of situation with indices, but now I'm
>> wondering if the :name: method is not pythonic and I should use a flag
>> (in_memory=True) when memory storage instead of disk storage is desired.
> For SQLite it seems OK because you make the decision once per database.
> For dbase it'd be once per table, so I would prefer the flag.
So far all feedback is for the flag, so that's what I'll do.
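For comparison, here are the two styles side by side -- a minimal sketch.  The
sqlite3 half is real stdlib behavior; `create_table` and its arguments are
hypothetical stand-ins, not dbf.py's actual signature:

```python
import sqlite3

# Real sqlite3 behavior: the special name ':memory:' puts the
# database in RAM instead of on disk.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE scores (name TEXT, score REAL)')

# Hypothetical flag-based alternative (sketch only, not dbf.py's
# real API): an explicit keyword argument instead of a magic name.
def create_table(name, field_specs, in_memory=False):
    return {'name': name, 'fields': field_specs, 'in_memory': in_memory}

table = create_table('scores', 'name C(25); score N(5,2)', in_memory=True)
```

The flag makes the per-table choice explicit at every call site, which is the
point in favor of it here.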
>> Thoughts?

> Random thoughts:
> - Do you really want your users to work with multiple dbf files? I think
>   I'd rather convert to SQLite, perform the desired operations using sql,
>   then convert back.
Seems like that would be quite a slow-down (although if a user wants to
do that, s/he certainly could).
> - Are names required to manipulate the table? If not you could just omit
>   them to make the table "in-memory".
At one point I had thought to make tables singletons (so only one copy
of /user/bob/scores.dbf) but that hasn't happened and is rather low
priority, so at this point the name is not required for anything besides
initial object creation.
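For the record, the singleton idea could be sketched with a class-level cache
keyed on the absolute path -- entirely hypothetical, since dbf.py does not
currently work this way:

```python
import os

class Table:
    # Sketch of the singleton idea: one Table instance per absolute
    # file name.  Hypothetical -- dbf.py does not do this today.
    _instances = {}

    def __new__(cls, filename):
        key = os.path.abspath(filename)
        if key not in cls._instances:
            inst = super().__new__(cls)
            inst.filename = key
            cls._instances[key] = inst
        return cls._instances[key]

t1 = Table('scores.dbf')
t2 = Table('./scores.dbf')   # same file, so same object
```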
> - How about a connection object that may either correspond to a directory
>   or RAM:
>
>       db = dbf.connect(":memory:")
>       table = db.Table("foo", ...)
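Roughly, I take that suggestion to mean something like the following sketch --
all names and signatures here are hypothetical:

```python
class Connection:
    # Sketch of the suggested connection object: bound either to a
    # directory on disk or to RAM.  Entirely hypothetical API.
    def __init__(self, location):
        self.in_memory = (location == ':memory:')
        self.directory = None if self.in_memory else location

    def Table(self, name):
        # A real implementation would create the table in
        # self.directory, or in RAM when self.in_memory is true.
        return {'name': name, 'in_memory': self.in_memory}

db = Connection(':memory:')
table = db.Table('foo')
```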
dbf.py does not support the DB-API interface, so no connection objects.
Tables are opened directly and dealt with directly.
All interesting thoughts that gave me plenty to think about. Thank you.
~Ethan~
--
http://mail.python.org/mailman/listinfo/python-list