Hello,
The problem I have relates to writing algorithmic code that can
handle types with a given API, but where some of the required
functionality is implemented as library functions rather than methods.
Specifically, I have code that uses numpy arrays, but I want to adapt it
to use sparse matrices as well.
Andrew Robert wrote:
> Because I was lazy..
>
> The checksum_compare came from something else I wrote that had special
> logging and e-mailer calls in it.
>
> Should have ripped the reference to caller and file name out..
Aaaahh the subtle joys of cut-and-paste programming... :-D
(I've done it too.)
"EP" <[EMAIL PROTECTED]> writes:
> Given that I am looking for matches of all files against all other
> files (of similar length) is there a better bet than using re.search?
> The initial applic
EP wrote:
> Hi,
>
> I'm a bit green in this area and wonder to what extent there may be
> some existing Python tools (or if I have to scratch my head real hard
> for an appropriate algorithm...). I'd hate to build an inferior
> solution to one that someone has painstakingly built before me.
>
> I have some files which may ha
"EP" <[EMAIL PROTECTED]> writes:
> Given that I am looking for matches of all files against all other
> files (of similar length) is there a better bet than using re.search?
> The initial application concerns files in the 1,000's, and I could use
> a good solution for a number of files in the 100,000's.
If you want to avoid an O(n^2) algorithm, you may need to compute a
signature for each file. You can then use the signatures as dictionary
keys (with the file names as values) to group candidate matches, and
later weed out the few false collisions produced by the possibly
approximate signature.
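A rough sketch of that approach, assuming a cheap approximate signature
built from the file size plus a hash of the first and last few KB (the
chunk size and the use of MD5 are arbitrary choices here, not
requirements):

```python
import hashlib
import os
from collections import defaultdict

def signature(path, chunk=4096):
    """Cheap approximate signature: (size, hash of first+last chunk).
    Two different files can share a signature, so groups still need a
    full comparison afterwards."""
    size = os.path.getsize(path)
    h = hashlib.md5()
    with open(path, 'rb') as f:
        h.update(f.read(chunk))
        if size > chunk:
            f.seek(-chunk, os.SEEK_END)  # also hash the tail
            h.update(f.read(chunk))
    return (size, h.hexdigest())

def group_candidates(paths):
    """Bucket files by signature; only buckets with more than one
    file can contain real matches."""
    groups = defaultdict(list)
    for p in paths:
        groups[signature(p)].append(p)
    return {sig: names for sig, names in groups.items()
            if len(names) > 1}
```

This keeps the expensive byte-for-byte comparison confined to the
(usually small) buckets of files that share a signature, instead of
comparing every file against every other.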
Tuvas wrote:
> How exactly do you do that? Just to get some kind of an idea, perhaps
> you could share bits of code? Thanks!
Did you check out the ctypes web site before asking? See
http://starship.python.net/crew/theller/ctypes/ and at least read
through the helpful tutorial before asking questions.
I am writing a program that mimics a program written in C, but using
superior Python techniques. The C program calls a function from a
non-open-source library; I only know that it sends it a command,
LINUX_CAN_Open(), as it does for a few others as well. Is there a way I
can call this function from Python without writing my own C code?