JanC:
> In most "modern" Pascal dialects the overflow checks can be (locally)
> enabled or disabled with compiler directives in the source code,
I think that was possible in somewhat older versions of Pascal-like
languages too (like old Delphi versions, and maybe Turbo Pascal too).
>so the "spee
mattia:
> Now, some ideas (apart from the double loop to aggregate each element of
> l1 with each element of l2):
>>> from itertools import product
>>> list(product([1,2,3], [4,5]))
[(1, 4), (1, 5), (2, 4), (2, 5), (3, 4), (3, 5)]
Bye,
bearophile
Kottiyath:
> How do we decide whether a level of complexity is Ok or not?
I don't understand your question, but here are better ways to do what
you do:
>>> a = {'a': 2, 'c': 4, 'b': 3}
>>> for k, v in a.iteritems():
... a[k] = v + 1
...
>>> a
{'a': 3, 'c': 5, 'b': 4}
>>> b = dict((k, v+1) for k, v in a.iteritems())
Carl Banks:
> The slow performance is most likely due to the poor performance of
> Python 3's IO, which is caused by [...]
My suggestion for the Original Poster is just to try using Python 2.x,
if possible :-)
Bye,
bearophile
CinnamonDonkey:
>what makes something a package?
If you don't know what a package is, then maybe you don't need
packages.
In your project is it possible to avoid using packages and just use
modules in the same directory?
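For reference, in Python 2.x a package is just a directory containing
an __init__.py file (which may be empty); mypack and module_a below
are only example names:

myproject/
    mypack/
        __init__.py
        module_a.py

Then, with myproject/ on the path, "from mypack import module_a" works.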
Bye,
bearophile
CinnamonDonkey:
> It is neither constructive nor educational.
>
> It's a bit like saying "If you don't know what a function is, then
> maybe you don't need it. ... have you tried having a single block of
> code?"
>
> The point of people coming to these forums is to LEARN and share
> knowledge. Perh
Peter Waller:
> Is there any better way to attach code?
This is a widely used place (but read the "contract"/disclaimer
first):
http://code.activestate.com/recipes/langs/python/
Bye,
bearophile
srinivasan srinivas:
> For ex: to check list 'A' is empty or not..
Empty collections are "false":
if somelist:
... # somelist isn't empty
else:
... # somelist is empty
Bye,
bearophile
An interval map maybe?
http://code.activestate.com/recipes/457411/
A programmer has to know the names of many data structures :-)
Bye,
bearophile
Kent:
> Now I just deal with my little application exactly in Java style:
> package: gui, service, dao, entity, util
If those things are made of a small enough number of sub-things, and
such sub-things are small enough, then you may use a single module for
each of those Java packages (or even fewer modules).
Apollo:
> my question is how to use 'heapq' to extract the biggest item from the heap?
> is it possible?
This wrapper allows you to give a key function:
http://code.activestate.com/recipes/502295/
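For the simple case there are also standard-library options (a small
sketch, not the recipe above):

import heapq

items = [5, 2, 9, 1]
print heapq.nlargest(1, items)[0]   # prints 9

# Or simulate a max-heap by pushing negated values:
heap = [-x for x in items]
heapq.heapify(heap)
print -heapq.heappop(heap)          # prints 9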
Bye,
bearophile
Lada Kugis:
> (you have 1 apple, you start counting from 1 ...<
To little children I now show how to count starting from zero: apple
number zero, apple number one, etc, and they find it natural
enough :-)
Bye,
bearophile
Ross:
> How should I go about starting this problem...I'm feel like this is a
> really simple problem, but I'm having writer's/coder's block. Can you
> guys help?
There are refined ways to design a program, but this sounds like a
simple and small one, so you probably don't need anything very formal.
Here is an informal list, in random order, of things that I may like
to add to or remove from Python 3.x+.
The things I list here don't come from fifty hours of thinking, and
they may often be wrong. But I use Python 2.x often enough, so such
things aren't totally random either.
To remove: ma
grkunt...:
> If I am writing in Python, since it is dynamically, but strongly
> typed, I really should check that each parameter is of the expected
> type, or at least can respond to the method I plan on calling ("duck"
> typing). Every call should be wrapped in a try/except statement to
> prevent
activescott:
> BTW: I decided to go with 'scottsappengineutil'.
scottsappengineutil is hard to read and understand. The name split
with underscores is more readable:
scott_appengine_util
Or just:
app_engine_util
Bye,
bearophile
Hyunchul Kim:
> The following script does exactly what I want but I want to improve the speed.
This may be a bit faster, especially if sequences are long (code
untested):
import re
from collections import deque
def scanner1(deque=deque):
    result_seq = deque()
    cp_regular_expression = re.compile
bearophile:
> cp_regular_expression = re.compile("^a complex regular expression here$")
> for line in file(inputfile):
>     if cp_regular_expression.match(line) and result_seq:
Sorry, you can replace that with:
cp_regular_expression = re.compile("^a complex regular expression
h
Ravi:
> Which is a better approach.
> My personal view is that I should create a module with functions.
When in doubt, use the simplest solution that works well enough. In
this case, module functions are simple and probably enough.
But there can be a situation where you want to keep functions eve
Paul McGuire:
>xrange is not really intended for "in" testing,<
Let's add the semantics of a good and fast "in" test to xrange (and to
the range of Python 3). It hurts no one and allows a natural idiom
(especially when you have a stride and don't want to re-invent the
logic of skipping absent numbers).
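The membership test itself is constant time; a minimal sketch of the
logic, assuming a positive step and an integer x (in_xrange is just an
example name):

def in_xrange(x, start, stop, step=1):
    # Equivalent to: x in xrange(start, stop, step), but O(1).
    # Assumes step > 0 and x is an integer.
    return start <= x < stop and (x - start) % step == 0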
zaheer.ag...:
> I am asking for free advice. The program is not very complex, it is
> around 500 lines with most of the code being reused,
500 lines is not a small Python program :-)
If you don't want to show it, then you can write another program, a
smaller one, for the purpose of letting people review it.
Emmanuel Surleau:
> On an unrelated note, it would be *really* nice to have a length property on
> strings. Even Java has that!
Once you have written a good amount of Python code you can understand
that a len() function, which calls the __len__ method of objects, is
better. It allows you to write:
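For example something like the following (my guess at the kind of
example that was meant), where len is passed around as a plain
function:

>>> words = ['a', 'bc', 'def']
>>> map(len, words)
[1, 2, 3]
>>> sorted(words, key=len)
['a', 'bc', 'def']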
per:
> in other words i want the list of random numbers to be arbitrarily
> different (which is why i am using rand()) but as different from other
> tuples in the list as possible.
This is more or less the problem of packing n equal spheres in a cube.
There is a lot of literature on this. You can
casevh:
> Testing 2 digits. This primarily measures the overhead for call GMP
> via an extension module.
> ...
Thank you for adding some actual data to the whole discussion :-)
If you perform similar benchmarks with Java's BigIntegers you will see
how much slower they are compared to the Python ones.
MRAB:
> I think I might have cracked it:
> ...
> print n, sums
Nice.
If you don't want to use dynamic programming, then add a @memoize
decoration before the function, using for example my one:
http://code.activestate.com/recipes/466320/
And you will see an interesting speed increase, even if
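For reference, a minimal sketch of such a memoize decorator (the
linked recipe is more complete):

from functools import wraps

def memoize(func):
    cache = {}
    @wraps(func)
    def wrapper(*args):
        # Only hashable positional arguments are supported here.
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper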
Esmail:
> oh, I forgot to mention that each list may contain duplicates.
Comparing the sorted lists is a possible O(n ln n) solution:
a.sort()
b.sort()
a == b
Another solution is to use frequency dicts, O(n):
from collections import defaultdict
d1 = defaultdict(int)
for el in a:
    d1[el] += 1
d
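A complete version of that frequency-dict comparison might look like
this (a sketch; same_items is just an example name):

from collections import defaultdict

def same_items(a, b):
    # Compare element frequencies; O(n) overall.
    d1 = defaultdict(int)
    for el in a:
        d1[el] += 1
    d2 = defaultdict(int)
    for el in b:
        d2[el] += 1
    return d1 == d2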
Arnaud Delobelle:
> Thanks to the power of negative numbers, you only need one dict:
>
> d = defaultdict(int)
> for x in a:
>     d[x] += 1
> for x in b:
>     d[x] -= 1
> # a and b are equal if d[x]==0 for all x in d:
> not any(d.itervalues())
Very nice, I'll keep this for future use.
Someday I'l
Ciprian Dorin, Craciun:
> Python way:
> -
> def eq (a, b) :
>     return a == b
>
> def compare (a, b, comp = eq) :
>     if len (a) != len (b) :
>         return False
>     for i in xrange (len (a)) :
>         if not comp (a[i], b[i]) :
>             return False
>     return True
That'
Paul Rubin:
> Arnaud Delobelle:
> > Do you mean imap(comp, a, b)?
>
> Oh yes, I forgot you can do that. Thanks.
That works and is nice and readable:
import operator
from itertools import imap
def equal_sequences(a, b, comp=operator.eq):
    """
    a and b must have __len__
    >>> equal_sequ
You can also use quite a bit less code, but this is less efficient:
from itertools import izip_longest
def equal_items(iter1, iter2, key=lambda x: x):
    class Guard(object): pass
    try:
        for x, y in izip_longest(iter1, iter2, fillvalue=Guard()):
            if key(x) != key(y):
                return False
    except TypeError
Peter Otten:
> [...] I think Raymond Hettinger posted
> an implementation of this idea recently, but I can't find it at the moment.
> [...]
> class Grab:
>     def __init__(self, value):
>         self.search_value = value
>     def __hash__(self):
>         return hash(self.search_value)
>     def
Arnaud Delobelle:
> You don't want to silence TypeErrors that may arise from key() when
> x or y is not a Guard, as it could hide bugs in key(). So I would write
> something like this:
>
> def equal_items(iter1, iter2, key=lambda x: x, _fill = object()):
>     for x, y in izip_longest(iter1, i
Some idioms are so common that I think they deserve to be written in C
and added to the itertools module.
1) leniter(iterator)
It returns the length of a given iterator, consuming it, without
creating a list. I have discussed this twice in the past.
Like itertools.izip_longest, don't use it with infinite
Arnaud Delobelle:
> Some people would write it as:
>
> def leniter(iterable):
>     if hasattr(iterable, '__len__'):
>         return len(iterable)
>     return sum(1 for _ in iterable)
That's slower than my version.
> > def xpairwise(iterable):
> >     return izip(iterable, islice(iterable,
On Apr 28, 2:54 pm, forrest yang wrote:
> i try to load a big file into a dict, which is about 9,000,000 lines,
> something like
> 1 2 3 4
> 2 2 3 4
> 3 4 5 6
>
> code
> for line in open(file):
>     arr = line.strip().split('\t')
>     dict[line.split(None, 1)[0]] = arr
>
> but, the dict is really slow
Sion Arrowsmith:
> The keys aren't integers, though, they're strings.
You are right, sorry. I need to add an int() there.
Bye,
bearophile
Dinesh:
> If you store a large number of integers (keys and values) in a
> dictionary, do the Python internals perform integer compression to
> save memory and enhance performance? Thanks
Define what you mean by "integer compression", please. It has several
meanings according to the context. For
dineshv:
> Yes, "integer compression" as in Unary, Golomb, and there are a few
> other schemes.
OK. Currently Python doesn't use Golomb or similar compression
schemes.
But in Python3 all integers are multi-precision ones (I don't know yet
what's bad with the design of Python2.6 integers), and a
Zealalot, there are probably several ways to do that, but a simple one
is the following (not tested):
def function2(self, passed_function=None):
    if passed_function is None:
        passed_function = self.doNothing
    ...
Bye,
bearophile
Esmail:
> Is there a Python construct to allow me to do something like
> this:
> for i in range(-10.5, 10.5, 0.1):
Sometimes I use an improved version of this:
http://code.activestate.com/recipes/66472/
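A minimal sketch of the idea (the linked recipe is more careful about
accumulated floating-point error):

def frange(start, stop, step=1.0):
    # Naive float range: rounding errors accumulate over many steps.
    while start < stop:
        yield start
        start += step

# usage: for x in frange(-10.5, 10.5, 0.1): ...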
Bye,
bearophile
dineshv:
> Thanks for that about Python3. My integers range from 0 to 9,999,999
> and I have loads of them. Do you think Python3 will help?
Nope.
Bye,
bearophile
mikefromvt:
> I am very very unfamiliar with Python and need to update a Python
> script. What I need to do is to replace three variables (already
> defined in the script) within a string. The present script correctly
> replaces two of the three variables. I am unable to add a third
> variable.
Sometimes I rename recursive functions, or I duplicate & modify them,
and they stop working because inside them there's one or more copies
of their old name.
This happens to me more than once a year.
So I have written this:
from inspect import getframeinfo, currentframe
def SOMEVERYUGLYNAME(n
Arnaud Delobelle:
> >>> def bindfunc(f):
> ...     def boundf(*args, **kwargs):
> ...         return f(boundf, *args, **kwargs)
> ...     return boundf
> ...
> >>> @bindfunc
> ... def fac(self, n):
> ...     return 1 if n <= 1 else n * self(n - 1)
> ...
> >>> fac(5)
> 120
This is cute, now I have two n
Matthias Gallé:
> My problem is to replace all occurrences of a sublist with a new element.
> Example:
> Given ['a','c','a','c','c','g','a','c'] I want to replace all
> occurrences of ['a','c'] by 6 (result [6,6,'c','g',6]).
There are several ways to solve this problem. Representing a string as
a
Matthias Gallé:
>the int that can replace a sublist can be > 255,<
You didn't specify your integer ranges.
Probably there are many other solutions for your problem, but you have
to give more information: the typical array size, the typical range of
the numbers, how important total memory
Steve Howell:
>two methods with almost identical names, where one function is the public
>interface and then another method that does most of the recursion.<
Thanks to Guido & Walter, both Python and D support nested functions,
so in such situations I put the recursive function inside the "public
interface" function.
Arnaud Delobelle:
> def fac(n):
>     def rec(n, acc):
>         if n <= 1:
>             return acc
>         else:
>             return rec(n - 1, n*acc)
>     return rec(n, 1)
Right, that's another way to partially solve the problem I was talking
about. (Unfortunately the performance in Python
Aahz:
> When have you ever had a binary tree a thousand levels deep?
Yesterday.
>Consider how big 2**1000 is...<
You are thinking just about complete binary trees.
But consider that a topology like a singly linked list (every node has
one child, and the nodes are chained) is still a true binary tree.
Carl Banks:
>1. Singly-linked lists can and should be handled with iteration.<
I was talking about a binary tree with list-like topology, of course.
>All recursion does is make what you're doing a lot less readable for
>almost all programmers.<
I can't agree. If the data structure is recursiv
John O'Hagan:
> li=['a', 'c', 'a', 'c', 'c', 'g', 'a', 'c']
> for i in range(len(li)):
> if li[i:i + 2] == ['a', 'c']:
> li[i:i + 2] = ['6']
Oh well, it seems I have made a mistake.
Another solution then:
>>> 'acaccgac'.replace("ac", chr(6))
'\x06\x06cg\x06'
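If the result must become a list again, the string can be mapped back
(a sketch, valid only while every remaining item is a single
character):

>>> s = 'acaccgac'.replace('ac', chr(6))
>>> [6 if c == chr(6) else c for c in s]
[6, 6, 'c', 'g', 6]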
Bye,
bearophile
wolfram.hinde...:
> It is easy to change all references of the function name, except for
> those in the function body itself? That needs some explanation.
I can answer this. If I have a recursive function, I may want to
create a similar function, so I copy and paste it, to later modify the
copied
Aaron Brady:
> >>> def auto( f ):
> ...     def _inner( *ar, **kw ):
> ...         return f( g, *ar, **kw )
> ...     g = _inner
> ...     return g
Looks nice, I'll try the following variant to see if it's usable:
def thisfunc(fun):
    """Decorator to inject a default name of a
    funct
I appreciate the tables "Infinite Iterators" and "Iterators
terminating on the shortest input sequence" at the top of the
itertools module, they are quite handy. I'd like to see similar
summary tables at the top of other docs pages too (such pages are
often quite long), for example the collections
Francis Carr:
I don't know who you are talking to, but I can give you a few answers
anyway.
>collections of multiply-recursive functions (which get used very frequently --
>by no means is it an uncommon situation, as you suggest in your initial post),<
They may be frequent in Scheme (because it's
Terry Reedy:
bearophile:
> > Well, I'd like the keyword arguments passed in function calls to
> > use OrderedDicts then... :-)
[...]
> It would require a sufficiently fast C implementation.
Right.
Such a dict is usually small, so if people want it ordered, it may be
better to just use an array o
Looking for this, Kevin D. Smith?
http://code.activestate.com/recipes/502295/
Bye,
bearophile
Martin Vilcans:
> Nice with a new language designed for high performance. It seems
> like a direct competitor to D, i.e. a
> high-level language with low-level abilities.
> The Python-like syntax is a good idea.
There is Delight too:
http://delight.sourceforge.net/
But I agree,
noydb:
> I have not worked with the %.
> Can you provide a snippet of your idea in code form?
Then it's a very good moment to learn to use it:
http://en.wikipedia.org/wiki/Modulo_operator
>>> 10 % 3
1
>>> 10 % 20
10
>>> -10 % 3
2
>>> -10 % -3
-1
>Something like that<
Implement your first v
godshorse, you may use the "shortestPaths" method of this graph class
of mine:
http://sourceforge.net/projects/pynetwork/
(It uses the same Dijkstra code by Eppstein).
(Once you have all distances from a node to the other ones, it's not
too difficult to find the tree you talk about).
Also se
flam...@gmail.com:
> I am wondering if it's possible to get the return value of a method
> *without* calling it using introspection?
Python is dynamically typed, so you can create a function like this:
>>> foo = lambda x: "x" if x else 1
>>> foo(1)
'x'
>>> foo(0)
1
The return type of foo() chang
rump...@web.de:
> Eventually the "rope" data structure (that the compiler uses heavily)
> will become a proper part of the library:
Ropes are a complex data structure that has some downsides too.
Python tries to keep its implementation simple; this avoids a lot of
troubles (and is one of the
On the other hand, good programming practice generally suggests you
write functions that have a constant return type. And in most programs
most functions are like this. This is why ShedSkin can indeed infer
the return type of functions in "well behaved" programs. To do this
ShedSkin uses a quite
Piet van Oostrum:
> You may not have seen it, but Fortran and Algol 60 belong to that
> category.
I see. It seems my ignorance is unbounded, even for the things I like.
I am very sorry.
Bye,
bearophile
kk:
>I am sure I am missing something here.<
This instruction creates a new "diky" dict at every iteration:
diky = {chr(a): a}
What you want is to add items to the same dict, which you later
output. The normal way to do it is:
diky[chr(a)] = a
Your fixed code:
def values(x):
    diky = {}
    for i
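A complete fixed version along those lines might be (a sketch; the
loop bounds are guesses):

def values(n):
    diky = {}
    for i in xrange(n):   # assumes 0 <= i < 256, so chr() is valid
        diky[chr(i)] = i
    return diky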
Gediminas Kregzde:
> The map function is about 5 times slower than a for loop when
> using huge amounts of data.
> It is needed to perform some operations, not to return data.
Then you are using map() for the wrong purpose: map()'s purpose is to
build a list of things. Use a for loop.
Bye,
bearophile
Jeremy Martin, nowadays a parallel-for can be useful, and in future
I'll try to introduce similar things in D too, but syntax isn't
enough. You need a way to run things in parallel, but Python has the
GIL.
To implement a good parallel-for, your language may also need more
immutable data structures (t
yadin, understanding what you want is probably 10 times harder than
writing down the code :-)
> I have a a table, from where I can extract a column.
You can extract it? Or do you want to extract it? Or do you want to
process it? Etc.
> I wanna go down through that column made of numbers
> examin
yadin:
> How can I build up a program that tells me that this sequence
> 128706
> 128707
> 128708
> is repeated somewhere in the column, and how can i know where?
Can such patterns nest? That is, can you have a repeated pattern made
of an already seen pattern plus something else?
If yo
Sumitava Mukherjee:
>I need to randomly sample from a list where all choices have weights attached
>to them.<
Like this?
http://code.activestate.com/recipes/498229/
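A minimal sketch of one standard way to do it (not the recipe itself;
weighted_choice is just an example name):

import random

def weighted_choice(pairs):
    # pairs: sequence of (item, weight) tuples, weights > 0.
    total = sum(w for _, w in pairs)
    r = random.uniform(0, total)
    for item, w in pairs:
        r -= w
        if r <= 0:
            return item
    return pairs[-1][0]   # guard against float rounding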
Bye,
bearophile
Terry Reedy:
> >>> a,b,*rest = list(range(10))
> >>> a,b,rest
> (0, 1, [2, 3, 4, 5, 6, 7, 8, 9])
> >>> a,*rest,b = 'abcdefgh'
> >>> a,rest,b
> ('a', ['b', 'c', 'd', 'e', 'f', 'g'], 'h')
For the next few years I generally suggest specifying the Python
version too (whether it's 2.x or 3.x).
This is P
Marius Retegan:
>
> parameters1
> key1 value1
> key2 value2
> end
>
> parameters2
> key1 value1
> key2 value2
> end
>
> So I want to create two dictionaries parameters1={key1:value1,
> key2:value2} and the same for parameters2.
I have wasted some time trying to create a regex f
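A plain line-based parser may be simpler than a regex here; a rough
sketch (parse_blocks is a hypothetical name):

def parse_blocks(text):
    # Turns "name / key value ... / end" blocks into a dict of dicts.
    # Assumes no key inside a block is named 'end'.
    blocks = {}
    current = None
    for line in text.splitlines():
        words = line.split()
        if not words:
            continue
        if words[0] == 'end':
            current = None
        elif current is None:
            current = blocks.setdefault(words[0], {})
        else:
            current[words[0]] = ' '.join(words[1:])
    return blocks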
Jared.S., even if a regex doesn't look like a program, it's like a
small program written in a strange language. And you have to test and
comment your programs.
So I suggest you program in a tidier way and add unit tests (doctests
may suffice here) to your regexes; you can also use the verbose mode
of regexes.
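For example, re.VERBOSE lets you lay out and comment a regex like a
small program (a generic sketch, not the poster's actual pattern):

import re

date_re = re.compile(r"""
    (?P<year>\d{4})    # four-digit year
    -
    (?P<month>\d{2})   # two-digit month
""", re.VERBOSE)

assert date_re.match('2009-05')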
This may be interesting for the developers of Python's random module,
"SIMD-oriented Fast Mersenne Twister (SFMT): twice faster than
Mersenne Twister":
http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/
One function may be useful to generate integers (randint, randrange,
choice, shuffle, etc), t
dave, a few general comments on your code:
- Instead of using a comment that explains the meaning of a function,
put such things into docstrings.
- Your names can be improved: instead of f you can use file_name or
something like that; instead of convert_file you can use a name that
denotes that the c
dave:
>Can you have doctests on random functions?
Yes, you can add doctests to methods, functions, classes, module
docstrings, and in external text files.
Bye,
bearophile
Helmut Jarausch:
> I'd ask in comp.compression where the specialists are listening and who are
> very helpful.
Asking in comp.compression is a good starting point.
My suggestions (sorry if they look a bit unsorted): it depends on what
language you want to use, how much you want to compress the st
bearophile:
> So you need to store only this 11 byte long string to be able to
> decompress it.
Note that maybe there is a header that contains changing things, like
the length of the compressed text, etc.
Bye,
bearophile
John Salerno:
> What does everyone think about this?
Example 2 builds a list that is then thrown away. It's just a waste of
memory (and time).
Bye,
bearophile
Ben Finney:
>In Python, the philosophy "we're all consenting adults here" applies.<
Michael Foord:
> They will use whatever they find, whether it is the best way to
> achieve a goal or not. Once they start using it they will expect us to
> maintain it - and us telling them it wasn't intended to be
I V:
> You might instead want to
>wrap the lambdas in an object that will do the comparison you want:
This looks very nice. I haven't tried it yet, but if it works well
then it may deserve to be stored in the cookbook, or better, it may
become the built-in behavior of hashing functions.
Bye,
bearophile
This may have some bugs left, but it looks a bit better:
from inspect import getargspec

class HashableFunction(object):
    """Class that can be used to wrap functions, to allow their
    hashing, for example to create a set of unique functions.

    >>> func_strings = ['x', 'x+1', 'x+2', 'x']
Dennis Lee Bieber, the ghost:
> I'd have to wonder why so many recursive calls?
Why not? Maybe the algorithm is written in a recursive style. A
language is good if it allows you to use that style too.
On modern CPUs 5 levels don't look like that many levels.
Bye,
bearophile
[EMAIL PROTECTED]:
Do you mean something like this? (Notice the many formatting
differences; use formatting similar to this one in your code.)
coords = []
for i in xrange(1, 5):
    for j in xrange(1, 5):
        for k in xrange(1, 2):
            coords.append( (i, j, k) )
coords *= 10
print
kj:
> I have some functions
> that require a very long docstring to document, and somehow I find
> it a bit disconcerting to stick a few screenfuls of text between
> the top line of a function definition and its body.
You may put the main function(s) documentation in the docstring of the
module, a
Nader:
> d = {('a' : 1), ('b' : 3), ('c' : 2),('d' : 3),('e' : 1),('f' : 4)}
> I will something as :
> d.keys(where their values are the same)
That's magic.
> With this statement I can get two lists for this example:
> l1= ['a','e']
> l2=['b','d']
> Would somebody tell me how I can do it?
You c
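One way (a sketch of where that answer was presumably going): invert
the dict, grouping keys by value:

from collections import defaultdict

d = {'a': 1, 'b': 3, 'c': 2, 'd': 3, 'e': 1, 'f': 4}
groups = defaultdict(list)
for k, v in d.iteritems():
    groups[v].append(k)
# groups[1] -> ['a', 'e'], groups[3] -> ['b', 'd']
# (order inside each list depends on dict iteration order)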
Andrea Gavana:
> Maybe. But I remember a nice quote made in the past by Roger Binns (4
> years ago):
> """
> The other thing I failed to mention is that the wxPython API isn't very
> Pythonic. (This doesn't matter to people like me who are used to GUI
> programming - the wxPython API is very much
Oh, very good, better late than never.
This is my pure Python version; it performs get, set and del
operations too in O(1):
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/498195
Working with C such a data structure becomes much faster, because it
can use true pointers.
Then another data str
Martin v. L.:
> For this API, I think it's important to make some performance guarantees.
I may appreciate them for all Python collections :-)
> It seems fairly difficult to make byindex O(1), and
> simultaneously also make insertion/deletion better than O(n).
It may be possible to make both of
Martin v. L.:
> http://wiki.python.org/moin/TimeComplexity
Thank you. I think that's not a list of guarantees, but a list of how
things are now in CPython.
> If so, what's the advantage of using that method over d.items[n]?
I think I have lost the thread here, sorry. So I explain again what I
Kirk Strauser:
> Hint: recursion. Your general algorithm will be something like:
Another solution is to use a better (different) language that has
built-in pattern matching, or allows you to create it.
Bye,
bearophile
Martin v. L.:
> However, I think the PEP (author) is misguided in assuming that
> making byindex() a method of odict, you get better performance than
> directly doing .items()[n] - which, as you say, you won't.
In Python 2.5 .items()[n] creates a whole list, and then takes one
item from that list.
A
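A lazy alternative that avoids building the whole list (a sketch;
nth_item is a hypothetical name):

from itertools import islice

def nth_item(d, n):
    # n-th (key, value) pair without materializing the full list
    # (.next() keeps it Python 2.5 compatible).
    return islice(d.iteritems(), n, None).next()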
dbpoko...:
> Why keep the normal dict operations at the same speed? There is a
> substantial cost this entails.
I presume we can now create a list of possible odict usages: despite
everyone using it for different purposes, we may find some main groups
of usage. I use odict
dbpoko...:
> Which should be 12 bytes on a 32-bit machine. I thought the space for
> growth factor for dicts was about 12% but it is really 100%.
(Please ignore the trailing ".2" in the number in my last post; such
precision is silly.)
My memory value comes from experiments; I have created a little
Duncan Booth:
> What do you get if you change the output to exclude the integers from
> the memory calculation so you are only looking at the dictionary
> elements themselves? e.g.
The results:
318512 (kbytes)
712124 (kbytes)
20.1529344 (bytes)
Bye,
bearophile
Knut Saua Mathiesen:
> Any help? :p
My quickest suggestion is to try ShedSkin: it may help you produce a
fast enough extension.
If ShedSkin doesn't compile it, its author (Mark) may be quite willing
to help.
Bye,
bearophile
[EMAIL PROTECTED]:
> My non-directed graph will have about 63,000 nodes
> and and probably close to 500,000 edges.
That's large, but not very large by today's standards. Today very
large graphs probably have many millions of nodes...
You have to try, but I think any Python graph lib may be fit for y
[EMAIL PROTECTED]:
> I believe Python 3k will (when out of beta) will have a speed
> similar to what it has currently in 2.5, possibly with speed ups
> in some locations.
Python 3 uses unicode strings and multiprecision integers by default,
so a little slowdown is possible.
Michele Simionato:
>
Kris Kennaway:
> I am trying to parse a bit-stream file format (bzip2) that does not have
> byte-aligned record boundaries, so I need to do efficient matching of
> bit substrings at arbitrary bit offsets.
> Is there a package that can do this?
You may take a look at Hachoir or some other modules:
eliben:
> Python's pack/unpack don't have the binary format for some reason, so
> custom solutions have to be developed. One suggested in the ASPN
> cookbook is:http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/111286
> However, it is very general and thus inefficient.
Try mine, it may be fa