After sparring with it a while, I tweaked the existing job so that it
chunked things into dbm-appropriate sizes to limp through; for the
subsequent job (where I would have used dbm again) I went ahead and
switched to sqlite and had no further issues.
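A sketch of the kind of chunking described above (a reconstruction;
the CHUNK size and the key scheme are assumptions, not Tim's actual
code):

import dbm

CHUNK = 4096  # assumed "dbm-appropriate" size

def put_chunked(db, key, data):
    # Split one oversized value across several keys so no single
    # entry overflows the hash pages.
    pieces = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    db[key + ":n"] = str(len(pieces))
    for i, piece in enumerate(pieces):
        db["%s:%d" % (key, i)] = piece

def get_chunked(db, key):
    n = int(db[key + ":n"])  # int() accepts the bytes dbm returns
    return b"".join(db["%s:%d" % (key, i)] for i in range(n))

with dbm.open("cache", "c") as db:
    put_chunked(db, "big", b"x" * 100000)
    assert get_chunked(db, "big") == b"x" * 100000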
Maybe port to SQLite? I would not choose dbm these days.

Barry

> On 27 Nov 2019, at 01:48, Tim Chase wrote:
>
> Working with the dbm module (using it as a cache), I've gotten the
> following error at least twice now:
>
> HASH: Out of overflow pages. Increase page size
Working with the dbm module (using it as a cache), I've gotten the
following error at least twice now:

HASH: Out of overflow pages. Increase page size
Traceback (most recent call last):
  [snip]
  File ".py", line 83, in get_data
    db[key] = data
_dbm.error: cannot add item to database
Type "help", "copyright", "credits" or "license" for more information.
import dbm
with dbm.open("mydb", 'c') as d:
... d["hello"] = "world"
...
Traceback (most recent call last):
File "", line 1, in
" or "license" for more information.
>>> import dbm
>>> with dbm.open("mydb", 'c') as d:
... d["hello"] = "world"
...
Traceback (most recent call last):
File "", line 1, in
AttributeError: '_dbm.dbm'
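That AttributeError means this interpreter's dbm objects predate
context-manager support (the stdlib added it in Python 3.4). On older
versions, contextlib.closing does the same job; a minimal sketch:

import dbm
from contextlib import closing

# closing() supplies the missing __enter__/__exit__ and calls
# d.close() on the way out.
with closing(dbm.open("mydb", "c")) as d:
    d["hello"] = "world"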
Lacrima wrote:
> I don't understand how I can make a table in DBM database, or a row in
> a table. Or must all data be stored just as key-value pairs?

Maybe you should look at sqlite instead.
> I don't understand how I can make a table in DBM database, or a row in
> a table. Or must all data be stored just as key-value pairs?

Yes, all data for the dbm variants is purely string->string
mapping pairs. Similarly, dictionaries don't natively allow you
to store columns in them...they are just key->value data-stores.
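One common way to fake tables and rows on top of a pure key->value
store is to encode the table name, row id and column name into the
key. A minimal sketch (the naming scheme here is made up, not from the
thread):

import dbm

with dbm.open("library", "c") as db:
    # "books" plays the table, "1" the row id, "title"/"author" the columns.
    db["books:1:title"] = "Dive Into Python"
    db["books:1:author"] = "Mark Pilgrim"

    # Reading a "column" is a plain key lookup; values come back as bytes.
    print(db["books:1:title"].decode())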
Hello!

I want to store some data, using the anydbm module. According to the
docs, the object returned by anydbm.open() supports most of the same
functionality as dictionaries; keys and their corresponding values can
be stored, retrieved, and deleted...

I don't understand how I can make a table in DBM database, or a row in
a table. Or must all data be stored just as key-value pairs?
Opened a ticket for this and attached a patch. (experimental)
http://bugs.python.org/issue5736
"Martin v. Löwis" wrote:
>> I assumed there were some decisions behind this, rather than it's just
>> not implemented yet.
>
> I believe this assumption is wrong - it's really that no code has been
> contributed to do that.
But doesn't the issue at http://bugs.python.org/issue662923 imply that
the
> I assumed there were some decisions behind this, rather than it's just
> not implemented yet.
I believe this assumption is wrong - it's really that no code has been
contributed to do that.
For gdbm, you can also use the firstkey/nextkey methods.
Regards,
Martin
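Martin's firstkey/nextkey suggestion wraps naturally in a generator,
which supplies the missing iterable interface. A sketch (the iterkeys
wrapper is an illustration, shown with Python 3's dbm.gnu, the old
gdbm):

import dbm.gnu  # this module was called gdbm on Python 2

def iterkeys(db):
    # Walk the hash one key at a time, in its internal order,
    # without materializing the whole key list in memory.
    k = db.firstkey()
    while k is not None:
        yield k
        k = db.nextkey(k)

db = dbm.gnu.open("spam.db", "n")
db["key1"] = "ham"
db["key2"] = "egg"
for k in iterkeys(db):
    print(k)  # keys come back in gdbm's internal hash order
db.close()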
Joshua> Why not
Joshua> for key in d.keys():
Joshua>     print key
Joshua> That worked for me.

Time & space. One motivation for using dbm files is to write large
(huge, in fact) mappings to disk. Simply reconstituting the entire set
of keys may consume a lot of memory.
keys() returns a list and my question was not about "how to" but more
like "why"...
I assumed there were some decisions behind this, rather than it's just
not implemented yet.
Best,
Akira Kitada wrote:
> The loop has to be:
> """
> >>> k = d.firstkey()
> >>> while k != None:
> ...     print k
> ...     k = d.nextkey(k)
> key2
> key1
> """

Why not

for key in d.keys():
    print key

That worked for me.

j
Hi,

I was wondering why the *dbm modules in Python don't give us an
iterable interface. Take a look at the example below:

"""
# Python 2.6
>>> import gdbm
>>> d = gdbm.open("spam.db", "n")
>>> d["key1"] = "ham"
>>> d["key2"] = "egg"
"""

The loop has to be:
"""
>>> k = d.firstkey()
>>> while k != None:
...     print k
...     k = d.nextkey(k)
key2
key1
"""
On Wednesday 01 August 2007 16:08, Thomas Jollans wrote:
> Have you considered a directory full of pickle files? (In effect,
> replacing the dbm with the file system) i.e. something like (untested)
>
> class DirShelf(dict):

A very interesting idea. I'll have to see how it compares.
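A guess at the shape of the DirShelf idea, since the message is cut
off: Thomas's (untested) version subclassed dict, while this standalone
sketch just maps each key to one pickle file in a directory.

import os
import pickle

class DirShelf:
    def __init__(self, path):
        self.path = path
        os.makedirs(path, exist_ok=True)

    def _file(self, key):
        # Assumes keys are filesystem-safe (no os.sep and friends).
        return os.path.join(self.path, key)

    def __setitem__(self, key, value):
        with open(self._file(key), "wb") as f:
            pickle.dump(value, f, protocol=2)

    def __getitem__(self, key):
        try:
            with open(self._file(key), "rb") as f:
                return pickle.load(f)
        except FileNotFoundError:
            raise KeyError(key)

    def __delitem__(self, key):
        os.remove(self._file(key))

    def keys(self):
        return os.listdir(self.path)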
[snip] the keys that contain version/file/config information), BUT if I
copy all the data over to a dict and dump the dict to a file using
cPickle, that file is only 49MB. I'm using pickle protocol 2 in both
cases.

Is this expected? Is there really that much overhead to using shelve and
dbm files? Are there any similar solutions that are more space efficient?
I'd use straight pickle.dump, but loading requires pulling the entire
thing into memory, and I don't want to do that.
Hello list,

I have a dbm "database" which needs to be accessed/written by multiple
processes. At the moment I do something like:

@with_lock
def _save(self):
    f = shelve.open(self.session_file, 'c')
    try:
        f[self.sid] = self.
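The poster's with_lock isn't shown; one way such a decorator could work
is an exclusive fcntl lock on a sidecar file, which serializes the call
across processes (Unix-only; everything beyond the with_lock name is an
assumption):

import fcntl
import functools

def with_lock(method):
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        with open(self.session_file + ".lock", "w") as lockf:
            fcntl.flock(lockf, fcntl.LOCK_EX)  # blocks until acquired
            try:
                return method(self, *args, **kwargs)
            finally:
                fcntl.flock(lockf, fcntl.LOCK_UN)
    return wrapper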
I am writing a web application for mod_python that catalogs my home
(book) library. For now, I am using the Python dbm module to store
string representations of mod_python's req.form (using the
mod_python.publisher handler) using unique IDs as keys. In the .db
file, there is a key '
"Paul Rubin" <http://[EMAIL PROTECTED]> wrote:
> "George Sakkis" <[EMAIL PROTECTED]> writes:
> > I'm trying to create a dbm database with around 4.5 million entries
> > but the existing dbm modules (dbhash, gdbm) don't seem to cut
> >
"George Sakkis" <[EMAIL PROTECTED]> writes:
> I'm trying to create a dbm database with around 4.5 million entries
> but the existing dbm modules (dbhash, gdbm) don't seem to cut
> it. What happens is that the more entries are added, the more time
> per ne
I'm trying to create a dbm database with around 4.5 million entries but
the existing dbm modules (dbhash, gdbm) don't seem to cut it. What
happens is that the more entries are added, the more time per new entry
is required, so the complexity seems to be much worse than linear.
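For comparison, a sketch (not from the thread) of the same bulk load
into SQLite, where batching inserts inside transactions keeps the
per-entry cost roughly flat:

import sqlite3

conn = sqlite3.connect("big.db")
conn.execute("CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)")

batch = []
for i in range(4500000):
    batch.append(("key%d" % i, "value%d" % i))  # stand-in data
    if len(batch) == 100000:
        with conn:  # one transaction per 100k-row batch
            conn.executemany("INSERT OR REPLACE INTO kv VALUES (?, ?)", batch)
        batch = []
if batch:
    with conn:
        conn.executemany("INSERT OR REPLACE INTO kv VALUES (?, ?)", batch)
conn.close()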
Hi, thanks for the reply.

Firstly I couldn't find the DBM module for Python 2.3.5, Trustix system
and i386 hardware platform, so I downloaded GNU dbm for Python 2.3.5
and i586 (precisely, python-gdbm-2.3.5-4tr.i586), simply assuming it
could just work.

But trying to install gives me the following
[EMAIL PROTECTED] wrote:
> Can you tell me how do I go about getting the dbm module and install
> it.??
http://www.google.com/search?q=trustix+python+dbm
Hi all,

I am trying to get 'cvs2svn' to work on a Trustix Linux machine.
However, I suspect that no dbm is installed, because running cvs2svn
gives the following error:

ERROR: your installation of Python does not contain a suitable
DBM module -- cvs2svn cannot continue.
See http://
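A quick way to check what cvs2svn is complaining about (a suggestion,
not from the thread) is to ask anydbm which flavour it found; the
pure-Python dumbdbm fallback is the one cvs2svn considers unsuitable:

# Python 2.3-era module names
import anydbm
print anydbm._defaultmod.__name__  # want 'dbhash' or 'gdbm', not 'dumbdbm'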
[EMAIL PROTECTED] wrote:
> I'm creating a persistent index of a large 63GB file
> containing millions of pieces of data. For this I would
> naturally use one of python's dbm modules. But which is the
> best to use?

BDB4, but consider using sqlite - it's really simple.
I'm creating a persistent index of a large 63GB file
containing millions of pieces of data. For this I would
naturally use one of python's dbm modules. But which is the
best to use?

The index would be created with something like this:

fh=open('file_to_index')
db=dbhash.ope
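The snippet is cut off; a guess at how it might continue (the index
filename and the key choice are assumptions): record each line's byte
offset keyed by its first field, so a later lookup can seek() straight
to the data.

import dbhash  # Python 2; dbm.ndbm / dbm.gnu on Python 3

fh = open('file_to_index')
db = dbhash.open('file_to_index.idx', 'n')

offset = 0
for line in fh:
    fields = line.split(None, 1)
    if fields:  # skip blank lines
        db[fields[0]] = str(offset)
    offset += len(line)

db.close()
fh.close()

# Later: fh.seek(int(db[key])) lands right at the record.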