Hi all,
I'm working on a project in Python 2.7. I have a few large objects that I want
to save for later use, so that I can load them whole from a file instead of
creating them anew every time. It is critical that they be transportable
between platforms. Problem is, when
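
For reference, this is roughly the save/load pattern I have in mind (just a
minimal sketch; the file name and the placeholder object are made up):

import pickle

# Placeholder standing in for one of the large objects.
my_large_object = {'labels': ['L1', 'L2'], 'weights': [0.4, 0.6]}

# Save: open the file in binary mode and use protocol 2, the highest
# pickle protocol that both Python 2.7 and Python 3.x understand.
with open('big_object.pkl', 'wb') as f:
    pickle.dump(my_large_object, f, protocol=2)

# Load it back later, again in binary mode.
with open('big_object.pkl', 'rb') as f:
    restored = pickle.load(f)

print(restored == my_large_object)  # True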
You're probably right in general, for me the 3.3 and 2.7 pickles definitely
don't work the same:
3.3:
>>> type(pickle.dumps(1))
<class 'bytes'>

2.7:
>>> type(pickle.dumps(1, pickle.HIGHEST_PROTOCOL))
<type 'str'>
As you can see, in 2.7 when I try to dump something, I get a useless string.
Look what I get when I dump an N
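
(For what it's worth, on 2.7 that str really is the pickled data, since Python
2's str type is a byte string, and it round-trips fine. A quick check with the
same toy value as above:)

import pickle

data = pickle.dumps(1, pickle.HIGHEST_PROTOCOL)
print(type(data))           # <type 'str'> on 2.7: a byte string, not text
print(repr(data))           # the raw pickle bytes
print(pickle.loads(data))   # 1, so the original value comes back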
I see. In that case, all I have to do is make sure NLTK is available when I
load the pickled objects. That pretty much solves my problem. Thanks!
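
(The reason: a pickled instance stores only a reference to its class, as a
module name plus class name, together with the instance's own state; the class
itself must be importable when you unpickle. A tiny illustration with a
throwaway class defined here in __main__:)

import pickle
import pickletools

class Thing(object):
    def __init__(self):
        self.count = 42

data = pickle.dumps(Thing(), pickle.HIGHEST_PROTOCOL)

# The disassembly shows a GLOBAL (or STACK_GLOBAL) opcode naming
# '__main__ Thing'; unpickling imports that module and looks the class up,
# which is why NLTK has to be installed when an NLTK object is loaded.
pickletools.dis(data)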
So does that mean pickle never saves the object's values, only how it was
created?
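
(Not quite: pickle does save the instance's state, i.e. its attribute values;
what it does not save is the class's code, which has to be importable at load
time. A small sketch:)

import pickle

class Counter(object):
    def __init__(self):
        self.total = 0

c = Counter()
c.total = 99    # change the instance after it was created

data = pickle.dumps(c, pickle.HIGHEST_PROTOCOL)
restored = pickle.loads(data)
print(restored.total)   # 99, so the attribute values were saved

# What is not in the pickle is the Counter class itself; loading this data
# in another program only works if Counter can be imported there.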
Say I have a large object that requires a lot of time to train on
I am using the nltk.classify.MaxEntClassifier. This object has a set of labels
and a set of probabilities, P(label | features), which it updates given
training data. So for example, if you tell this object that the label L appears
60% of the time with the feature F, then P(L | F) = 0.6.
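
A rough sketch of that kind of setup with a toy feature set (hypothetical; the
real training data is much larger, and I'm assuming NLTK's built-in 'iis'
trainer here so the sketch stays self-contained):

import pickle
from nltk.classify import MaxentClassifier

# Toy training data: (featureset, label) pairs. The label L appears 60% of
# the time with the feature F, mirroring the example above.
train_set = [
    ({'F': True}, 'L'),
    ({'F': True}, 'L'),
    ({'F': True}, 'L'),
    ({'F': True}, 'M'),
    ({'F': True}, 'M'),
]

# Training is the slow part, so it should only happen once.
classifier = MaxentClassifier.train(train_set, algorithm='iis', max_iter=10)
print(classifier.prob_classify({'F': True}).prob('L'))   # roughly 0.6

# Pickle the trained classifier for later sessions; protocol 2 is readable
# by both Python 2.7 and 3.x.
with open('maxent.pkl', 'wb') as f:
    pickle.dump(classifier, f, protocol=2)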
The
Yeah, right. I didn't think about that. I'll check the source to see how the
data is stored.
Thanks for helping sort it all out.