Re: Concurrent writes to the same file
Dave Angel wrote: > On 07/11/2013 12:57 AM, Jason Friedman wrote: >> Other than using a database, what are my options for allowing two processes >> to edit the same file at the same time? When I say same time, I can accept >> delays. I considered lock files, but I cannot conceive of how I avoid race >> conditions. >> > > In general, no. That's what a database is for. > > Now, you presumably have some reason to avoid database, but in its stead > you have to specify some other limitations. To start with, what do you > mean by "the same time"? If each process can modify the entire file, > then there's no point in one process reading the file at all until it > has the lock. So the mechanism would be >1) wait till you can acquire the lock >2) open the file, read it, modify it, flush and close >3) release the lock > > To come up with an appropriate lock, it'd be nice to start by specifying > the entire environment. Which Python, which OS? Are the two processes > on the same CPU, what's the file system, and is it locally mounted? > > > > You can use a separate server process to do file I/O, taking multiple inputs from clients. Like syslogd. -- http://mail.python.org/mailman/listinfo/python-list
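A minimal sketch of the lock/modify/release cycle described above, using an advisory fcntl lock on Linux (the file name and the modify step are placeholders; all cooperating processes must use the same locking call):

```
import fcntl

def update_file(path, modify):
    """Acquire an exclusive advisory lock, rewrite the file, release the lock."""
    with open(path, 'r+') as f:
        fcntl.flock(f, fcntl.LOCK_EX)      # 1) wait until we can acquire the lock
        try:
            data = modify(f.read())        # 2) read and modify the contents
            f.seek(0)
            f.truncate()
            f.write(data)
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)  # 3) release the lock
```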
Re: [OT] Simulation Results Managment
moo...@yahoo.co.uk wrote: > Hi, > This is a general question, loosely related to python since it will be the > implementation language. I would like some suggestions as to manage simulation > results data from my ASIC design. > > For my design, > - I have a number of simulations testcases (TEST_XX_YY_ZZ), and within each of > these test cases we have: > - a number of properties (P_AA_BB_CC) > - For each property, the following information is given > - Property name (P_NAME) > - Number of times it was checked (within the testcase) N_CHECKED > - Number of times if failed (within the testcase) N_FAILED > - A simulation runs a testcase with a set of parameters. > - Simple example, SLOW_CLOCK, FAST_CLOCK, etc > - For the design, I will run regression every night (at least), so I will have > results from multiple timestamps We have < 1000 TESTCASES, and < 1000 > PROPERTIES. > > At the moment, I have a script that extracts property information from > simulation logfile, and provides single PASS/FAIL and all logfiles stored in a > directory structure with timestamps/testnames and other parameters embedded in > paths > > I would like to be easily look at (visualize) the data and answer the > questions - When did this property last fail, and how many times was it > checked - Is this property checked in this test case. > > Initial question: How to organize the data within python? > For a single testcase, I could use a dict. Key P_NAME, data in N_CHECKED, > N_FAILED I then have to store multiple instances of testcase based on date > (and simulation parameters. > > Any comments, suggestions? > Thanks, > Steven One small suggestion, I used to store test conditions and results in log files, and then write parsers to read the results. The formats kept changing (add more conditions/results!) and maintenance was a pain. Now, in addition to a text log file, I write a file in pickle format containing a dict of all test conditions and results. Much more convenient. -- http://mail.python.org/mailman/listinfo/python-list
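A minimal sketch of the pickled-results idea described above (the field names are made up):

```
import pickle

results = {
    'testcase': 'TEST_01_02_03',
    'params': {'clock': 'FAST_CLOCK'},
    'properties': {'P_NAME_A': {'n_checked': 12, 'n_failed': 0}},
}

with open('results.pkl', 'wb') as f:    # written next to the text log
    pickle.dump(results, f)

with open('results.pkl', 'rb') as f:    # later, in a post-processing script
    results = pickle.load(f)
```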
Re: [OT] Simulation Results Managment
Dieter Maurer wrote: > moo...@yahoo.co.uk writes: >> ... >> Does pickle have any advantages over json/yaml? > > It can store and retrieve almost any Python object with almost no effort. > > Up to you whether you see it as an advantage to be able to store > objects rather than (almost) pure data with a rather limited type set. > > > Of course, "pickle" is a proprietary Python format. Not so easy to > decode it with something else than Python. In addition, when > you store objects, the retrieving application must know the classes > of those objects -- and its knowledge should not be too different > from how those classes looked when the objects have been stored. > > > I like very much to work with objects (rather than with pure data). > Therefore, I use "pickle" when I know that the storing and retrieving > applications all use Python. I use pure (and restricted) data formats > when non Python applications come into play. Typically what I want to do is post-process (e.g. plot) results using python scripts, so using pickle is great for that. -- http://mail.python.org/mailman/listinfo/python-list
equiv of perl regexp grammar?
I noticed this and thought it looked interesting: http://search.cpan.org/~dconway/Regexp- Grammars-1.021/lib/Regexp/Grammars.pm#DESCRIPTION I'm wondering if python has something equivalent? -- http://mail.python.org/mailman/listinfo/python-list
A little morning puzzle
I have a list of dictionaries. They all have the same keys. I want to find the set of keys where all the dictionaries have the same values. Suggestions? -- http://mail.python.org/mailman/listinfo/python-list
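One straightforward way to do it (a sketch with made-up data):

```
dicts = [
    {'a': 1, 'b': 2, 'c': 3},
    {'a': 1, 'b': 9, 'c': 3},
    {'a': 1, 'b': 7, 'c': 3},
]

first = dicts[0]
same = {k for k in first if all(d[k] == first[k] for d in dicts[1:])}
print(same)   # {'a', 'c'}
```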
howto handle nested for
I know this should be a fairly basic question, but I'm drawing a blank. I have code that looks like: for s0 in xrange (n_syms): for s1 in xrange (n_syms): for s2 in xrange (n_syms): for s3 in xrange (n_syms): for s4 in range (n_syms): for s5 in range (n_syms): Now I need the level of nesting to vary dynamically. (e.g., maybe I need to add for s6 in range (n_syms)) Smells like a candidate for recursion. Also sounds like a use for yield. Any suggestions? -- http://mail.python.org/mailman/listinfo/python-list
Re: howto handle nested for
Neal Becker wrote: > I know this should be a fairly basic question, but I'm drawing a blank. > > I have code that looks like: > > for s0 in xrange (n_syms): > for s1 in xrange (n_syms): > for s2 in xrange (n_syms): > for s3 in xrange (n_syms): > for s4 in range (n_syms): > for s5 in range (n_syms): > > Now I need the level of nesting to vary dynamically. (e.g., maybe I need to > add > for s6 in range (n_syms)) > > Smells like a candidate for recursion. Also sounds like a use for yield. Any > suggestions? Thanks for the suggestions: I found itertools.product is just great for this. -- http://mail.python.org/mailman/listinfo/python-list
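For reference, a sketch of what the itertools.product version looks like, with the nesting depth as a runtime value (process() is a placeholder for the loop body):

```
import itertools

n_syms = 4
depth = 6    # was six hard-coded loops; now just a number

for syms in itertools.product(range(n_syms), repeat=depth):
    # syms is a tuple (s0, s1, ..., s_{depth-1})
    process(syms)
```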
serialization and versioning
I wonder if there is a recommended approach to handle this issue. Suppose objects of a class C are serialized using python standard pickling. Later, suppose class C is changed, perhaps by adding a data member and a new constructor argument. It would seem the pickling protocol does not directly provide for this - but is there a recommended method? I could imagine that a class could include a class __version__ property that might be useful - although I would further expect that it would not have been defined in the original version of class C (but only as an afterthought when it became necessary). -- http://mail.python.org/mailman/listinfo/python-list
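A sketch of the __version__ idea combined with __setstate__ (the attribute names are placeholders; old pickles that predate __version__ are treated as version 1):

```
class C:
    __version__ = 2

    def __init__(self, x, extra=0):
        self.x = x
        self.extra = extra                      # data member added in version 2

    def __getstate__(self):
        state = dict(self.__dict__)
        state['__version__'] = self.__version__
        return state

    def __setstate__(self, state):
        version = state.pop('__version__', 1)   # old pickles stored no version
        if version < 2:
            state.setdefault('extra', 0)        # supply a default for the new member
        self.__dict__.update(state)
```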
Re: serialization and versioning
Etienne Robillard wrote: > On Fri, 12 Oct 2012 06:42:03 -0400 > Neal Becker wrote: > >> I wonder if there is a recommended approach to handle this issue. >> >> Suppose objects of a class C are serialized using python standard pickling. >> Later, suppose class C is changed, perhaps by adding a data member and a new >> constructor argument. >> >> It would see the pickling protocol does not directly provide for this - but >> is there a recommended method? >> >> I could imagine that a class could include a class __version__ property that >> might be useful - although I would further expect that it would not have been >> defined in the original version of class C (but only as an afterthought when >> it became necessary). >> >> -- >> http://mail.python.org/mailman/listinfo/python-list > > i guess a easy answer is to say to try python 3.3 but how would this translate > in python (2) code ? So are you saying python 3.3 has such a feature? Where is it described? -- http://mail.python.org/mailman/listinfo/python-list
simple string format question
Is there a way to specify to format I want a floating point written with no more than e.g., 2 digits after the decimal? I tried {:.2f}, but then I get all floats written with 2 digits, even if they are 0: 2.35 << yes, that's what I want 2.00 << no, I want just 2 or 2. -- http://mail.python.org/mailman/listinfo/python-list
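One way to get "at most two digits after the decimal point" is to round first and then use the general-purpose {:g} format, which drops trailing zeros (a sketch):

```
for x in (2.349, 2.0, 0.5):
    print('{:g}'.format(round(x, 2)))   # -> 2.35, 2, 0.5
```

Note that {:g} switches to exponential notation for very large or very small values.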
Re: how to insert random error in a programming
Debashish Saha wrote: > how to insert random error in a programming? Apparently, giving it to Microsoft will work. -- http://mail.python.org/mailman/listinfo/python-list
Re: Immutability and Python
rusi wrote: > On Oct 29, 8:20 pm, andrea crotti wrote: > >> Any comments about this? What do you prefer and why? > > Im not sure how what the 'prefer' is about -- your specific num > wrapper or is it about the general question of choosing mutable or > immutable types? > > If the latter I would suggest you read > http://en.wikipedia.org/wiki/Alexander_Stepanov#Criticism_of_OOP > > [And remember that Stepanov is the author of C++ STL, he is arguably > as important in the C++ world as Stroustrup] The usual calls for immutability are not related to OO. They have to do with optimization, and specifically with parallel processing. -- http://mail.python.org/mailman/listinfo/python-list
Re: ANNOUNCE: Thesaurus - a recursive dictionary subclass using attributes
Did you intend to give anyone permission to use the code? I see only a copyright notice, but no permissions. -- http://mail.python.org/mailman/listinfo/python-list
Re: ANN: PyDTLS
A bit OT, but the widespread use of RFC 6347 could have a big impact on my work. I wonder if it's likely to see widespread use? What are likely/possible use cases? Thanks. -- http://mail.python.org/mailman/listinfo/python-list
surprising result all (generator) (bug??)
I was just bitten by this unexpected behavior: In [24]: all ([i > 0 for i in xrange (10)]) Out[24]: False In [25]: all (i > 0 for i in xrange (10)) Out[25]: True -- http://mail.python.org/mailman/listinfo/python-list
Re: surprising result all (generator) (bug??)
Mark Dickinson wrote: > On Jan 31, 6:40 am, Neal Becker wrote: >> I was just bitten by this unexpected behavior: >> >> In [24]: all ([i > 0 for i in xrange (10)]) >> Out[24]: False >> >> In [25]: all (i > 0 for i in xrange (10)) >> Out[25]: True > > What does: > >>>> import numpy >>>> all is numpy.all > > give you? > > -- > Mark In [31]: all is numpy.all Out[31]: True Excellent detective work, Mark! But it still is unexpected, at least to me. -- http://mail.python.org/mailman/listinfo/python-list
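The builtin had been shadowed by numpy.all (e.g. via a star import). numpy.all() does not consume a generator element by element the way the builtin does; the generator object itself is treated as a single truthy value, so the result is True regardless of the values it would produce. A small demonstration (a sketch):

```
import numpy as np

gen = (i > 0 for i in range(10))
print(np.all(gen))                         # True  -- the generator object is truthy
print(all(i > 0 for i in range(10)))       # False -- the builtin consumes the values
print(np.all([i > 0 for i in range(10)]))  # False -- a real list behaves as expected
```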
Re: [Perl Golf] Round 1
Heiko Wundram wrote: > Am 05.02.2012 12:49, schrieb Alec Taylor: >> Solve this problem using as few lines of code as possible[1]. > > Pardon me, but where's "the problem"? If your intention is to propose "a > challenge", say so, and state the associated problem clearly. > But this really misses the point. Python is not about coming up with some clever, cryptic, one-liner to solve some problem. It's about clear code. If you want clever, cryptic, one-liners, stick with Perl. -- http://mail.python.org/mailman/listinfo/python-list
pickle/unpickle class which has changed
What happens if I pickle a class, and later unpickle it where the class now has added some new attributes? -- http://mail.python.org/mailman/listinfo/python-list
Re: pickle/unpickle class which has changed
Peter Otten wrote: > Steven D'Aprano wrote: > >> On Tue, 06 Mar 2012 07:34:34 -0500, Neal Becker wrote: >> >>> What happens if I pickle a class, and later unpickle it where the class >>> now has added some new attributes? >> >> Why don't you try it? >> >> py> import pickle >> py> class C: >> ... a = 23 >> ... >> py> c = C() >> py> pickled = pickle.dumps(c) >> py> C.b = 42 # add a new class attribute >> py> d = pickle.loads(pickled) >> py> d.a >> 23 >> py> d.b >> 42 >> >> >> Unless you mean something different from this, adding attributes to the >> class is perfectly fine. >> >> But... why are you dynamically adding attributes to the class? Isn't that >> rather unusual? > > The way I understand the problem is that an apparently backwards-compatible > change like adding a third dimension to a point with an obvious default > breaks when you restore an "old" instance in a script with the "new" > implementation: > >>>> import pickle >>>> class P(object): > ... def __init__(self, x, y): > ... self.x = x > ... self.y = y > ... def r2(self): > ... return self.x*self.x + self.y*self.y > ... >>>> p = P(2, 3) >>>> p.r2() > 13 >>>> s = pickle.dumps(p) >>>> class P(object): > ... def __init__(self, x, y, z=0): > ... self.x = x > ... self.y = y > ... self.z = z > ... def r2(self): > ... return self.x*self.x + self.y*self.y + self.z*self.z > ... >>>> p = P(2, 3) >>>> p.r2() > 13 >>>> pickle.loads(s).r2() > Traceback (most recent call last): > File "", line 1, in > File "", line 7, in r2 > AttributeError: 'P' object has no attribute 'z' > > By default pickle doesn't invoke __init__() and updates __dict__ directly. > As pointed out in my previous post one way to fix the problem is to > implement a __setstate__() method: > >>>> class P(object): > ... def __init__(self, x, y, z=0): > ... self.x = x > ... self.y = y > ... self.z = z > ... def r2(self): > ... return self.x*self.x + self.y*self.y + self.z*self.z > ... def __setstate__(self, state): > ... self.__dict__["z"] = 42 # stupid default > ... self.__dict__.update(state) > ... >>>> pickle.loads(s).r2() > 1777 > > This keeps working with pickles of the new implementation of P: > >>>> q = P(3, 4, 5) >>>> pickle.loads(pickle.dumps(q)).r2() > 50 So if in my new class definition there are now some new attributes, and if I did not add a __setstate__ to set the new attributes, I guess then when unpickled the instance of the class will simply lack those attributes? -- http://mail.python.org/mailman/listinfo/python-list
cython + scons + c++
Is there a version of cython.py, pyext.py that will work with c++? I asked this question some time ago, but never got an answer. I tried the following code, but it doesn't work correctly. If the commented lines are uncommented, the gcc command is totally mangled. Although it did build my 1 test extension OK, I didn't use any libstdc++ - I suspect it won't link correctly in general because it doesn't seem to treat the code as c++ (treats it as c code). cyenv = Environment(PYEXT_USE_DISTUTILS=True) cyenv.Tool("pyext") cyenv.Tool("cython") import numpy cyenv.Append(PYEXTINCPATH=[numpy.get_include()]) cyenv.Replace(CYTHONFLAGS=['--cplus']) #cyenv.Replace(CXXFILESUFFIX='.cpp') #cyenv.Replace(CYTHONCFILESUFFIX='.cpp') -- http://mail.python.org/mailman/listinfo/python-list
argparse ConfigureAction problem
I've been using argparse with ConfigureAction (which is shown below). But it doesn't play well with positional arguments. For example:

./plot_stuff2.py --plot stuff1 stuff2
[...]
plot_stuff2.py: error: argument --plot/--with-plot/--enable-plot/--no-plot/--without-plot/--disable-plot: invalid boolean value: 'stuff1'

Problem is --plot takes an optional argument, and so the positional arg is assumed to be the arg to --plot. Not sure how to fix this. Here is the parser code:

parser = argparse.ArgumentParser()
[...]
parser.add_argument ('--plot', action=ConfigureAction, default=False)
parser.add_argument ('files', nargs='*')
opt = parser.parse_args(cmdline[1:])

Here is ConfigureAction:
-
import argparse
import re

def boolean(string):
    string = string.lower()
    if string in ['0', 'f', 'false', 'no', 'off']:
        return False
    elif string in ['1', 't', 'true', 'yes', 'on']:
        return True
    else:
        raise ValueError()

class ConfigureAction(argparse.Action):
    def __init__(self, option_strings, dest, default=None, required=False,
                 help=None, metavar=None,
                 positive_prefixes=['--', '--with-', '--enable-'],
                 negative_prefixes=['--no-', '--without-', '--disable-']):
        strings = []
        self.positive_strings = set()
        self.negative_strings = set()
        for string in option_strings:
            assert re.match(r'--[A-z]+', string)
            suffix = string[2:]
            for positive_prefix in positive_prefixes:
                self.positive_strings.add(positive_prefix + suffix)
                strings.append(positive_prefix + suffix)
            for negative_prefix in negative_prefixes:
                self.negative_strings.add(negative_prefix + suffix)
                strings.append(negative_prefix + suffix)
        super(ConfigureAction, self).__init__(
            option_strings=strings,
            dest=dest,
            nargs='?',
            const=None,
            default=default,
            type=boolean,
            choices=None,
            required=required,
            help=help,
            metavar=metavar)

    def __call__(self, parser, namespace, value, option_string=None):
        if value is None:
            value = option_string in self.positive_strings
        elif option_string in self.negative_strings:
            value = not value
        setattr(namespace, self.dest, value)

-- http://mail.python.org/mailman/listinfo/python-list
set PYTHONPATH for a directory?
I'm testing some software I'm building against an alternative version of a library. So I have an alternative library in directory L. Then I have in an unrelated directory, the test software, which I need to use the library version from directory L. One approach is to set PYTHONPATH whenever I run this test software. Any suggestion on a more foolproof approach? -- http://mail.python.org/mailman/listinfo/python-list
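One common alternative is to pin the path at the top of the test scripts themselves, so it works no matter how they are launched (a sketch; the path and module name are placeholders):

```
import os
import sys

sys.path.insert(0, os.path.abspath('/path/to/L'))   # the alternative library directory
import thelib                                       # now resolves to the copy in L
```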
Re: Good data structure for finding date intervals including a given date
Probably boost ITL (Interval Template Library) would serve as a good example. I noticed recently someone created an interface for python. -- http://mail.python.org/mailman/listinfo/python-list
Re: usenet reading
Jon Clements wrote: > Hi All, > > Normally use Google Groups but it's becoming absolutely frustrating - not only > has the interface changed to be frankly impractical, the posts are somewhat > random of what appears, is posted and whatnot. (Ironically posted from GG) > > Is there a server out there where I can get my news groups? I use to be with > an ISP that hosted usenet servers, but alas, it's no longer around... > > Only really interested in Python groups and C++. > > Any advice appreciated, > > Jon. Somewhat unrelated - any good news reader for Android? -- http://mail.python.org/mailman/listinfo/python-list
mode for file created by open
If a new file is created by open ('xxx', 'w') How can I control the file permission bits? Is my only choice to use chmod after opening, or use os.open? Wouldn't this be a good thing to have as a keyword for open? Too bad what python calls 'mode' is like what posix open calls 'flags', and what posix open calls 'mode' is what should go to chmod. -- http://mail.python.org/mailman/listinfo/python-list
Re: mode for file created by open
Cameron Simpson wrote: > On 08Jun2012 14:36, Neal Becker wrote: > | If a new file is created by open ('xxx', 'w') > | > | How can I control the file permission bits? Is my only choice to use chmod > | after opening, or use os.open? > | > | Wouldn't this be a good thing to have as a keyword for open? Too bad what > | python calls 'mode' is like what posix open calls 'flags', and what posix > | open calls 'mode' is what should go to chmod. > > Well, it does honour the umask, and will call the OS open with 0666 > mode so you'll get 0666-umask mode bits in the new file (if it is new). > > Last time I called os.open was to pass a mode of 0 (raceproof lockfile). > > I would advocate (untested): > > fd = os.open(...) > os.fchmod(fd, new_mode) > fp = os.fdopen(fd) > > If you need to constrain access in a raceless fashion (specificly, no > ealy window of _extra_ access) pass a restrictive mode to os.open and > open it up with fchmod. > > Cheers, Doesn't anyone else think it would be a good addition to open to specify a file creation mode? Like posix open? Avoid all these nasty workarounds? -- http://mail.python.org/mailman/listinfo/python-list
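Fleshing out the os.open() approach quoted above into a runnable sketch (the path, permission bits and data are placeholders); this sets the permission bits at creation time rather than chmod-ing afterwards:

```
import os

# 0o600 is still masked by the process umask, like any POSIX open()
fd = os.open('xxx', os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
with os.fdopen(fd, 'w') as fp:
    fp.write('data\n')
```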
Re: mode for file created by open
Terry Reedy wrote: > On 6/9/2012 10:08 AM, Devin Jeanpierre wrote: >> On Sat, Jun 9, 2012 at 7:42 AM, Neal Becker wrote: >>> Doesn't anyone else think it would be a good addition to open to specify a >>> file >>> creation mode? Like posix open? Avoid all these nasty workarounds? >> >> I do, although I'm hesitant, because this only applies when mode == >> 'w', and open has a large and growing list of parameters. > > The buffer parameter (I believe it is) also does not always apply. > > The original open builtin was a thin wrapper around old C's stdio.open. > Open no longer has that constraint. After more discussion here, someone > could open a tracker issue with a specific proposal. Keep in mind that > 'mode' is already a parameter name for the mode of opening, as opposed > to the permission mode for subsequent users. > I haven't seen the current code - I'd guess it just uses posix open. So I would guess it wouldn't be difficult to add the creation mode argument. How about call it cr_mode? -- http://mail.python.org/mailman/listinfo/python-list
module name vs '.'
Am I correct that a module could never come from a file path with a '.' in the name? -- http://mail.python.org/mailman/listinfo/python-list
Re: module name vs '.'
I meant a module src.directory contains __init__.py neal.py becker.py from src.directory import neal On Mon, Jun 18, 2012 at 9:44 AM, Dave Angel wrote: > On 06/18/2012 09:19 AM, Neal Becker wrote: > > Am I correct that a module could never come from a file path with a '.' > in the > > name? > > > > No. > > Simple example: Create a directory called src.directory > In that directory, create two files > > ::neal.py:: > import becker > print becker.__file__ > print becker.hello() > > > ::becker.py:: > def hello(): >print "Inside hello" >return "returning" > > > Then run neal.py, from that directory; > > > davea@think:~/temppython/src.directory$ python neal.py > /mnt/data/davea/temppython/src.directory/becker.pyc > Inside hello > returning > davea@think:~/temppython/src.directory$ > > Observe the results of printing __file__ > > Other approaches include putting a directory path containing a period > into sys.path > > > > -- > > DaveA > > -- http://mail.python.org/mailman/listinfo/python-list
writable iterators?
AFAICT, the python iterator concept only supports readable iterators, not write. Is this true? for example: for e in sequence: do something that reads e e = blah # will do nothing I believe this is not a limitation on the for loop, but a limitation on the python iterator concept. Is this correct? -- http://mail.python.org/mailman/listinfo/python-list
Re: writable iterators?
Steven D'Aprano wrote: > On Wed, 22 Jun 2011 15:28:23 -0400, Neal Becker wrote: > >> AFAICT, the python iterator concept only supports readable iterators, >> not write. Is this true? >> >> for example: >> >> for e in sequence: >> do something that reads e >> e = blah # will do nothing >> >> I believe this is not a limitation on the for loop, but a limitation on >> the python iterator concept. Is this correct? > > Have you tried it? "e = blah" certainly does not "do nothing", regardless > of whether you are in a for loop or not. It binds the name e to the value > blah. > Yes, I understand that e = blah just rebinds e. I did not mean this as an example of working code. I meant to say, does Python have any idiom that allows iteration over a sequence such that the elements can be assigned? ... > * iterators are lazy sequences, and cannot be changed because there's > nothing to change (they don't store their values anywhere, but calculate > them one by one on demand and then immediately forget that value); > > * immutable sequences, like tuples, are immutable and cannot be changed > because that's what immutable means; > > * mutable sequences like lists can be changed. The standard idiom for > that is to use enumerate: > > for i, e in enumerate(seq): > seq[i] = e + 42 > > AFAIK, the above is the only python idiom that allows iteration over a sequence such that you can write to the sequence. And THAT is the problem. In many cases, indexing is much less efficient than iteration. -- http://mail.python.org/mailman/listinfo/python-list
Re: writable iterators?
Ian Kelly wrote: > On Wed, Jun 22, 2011 at 3:54 PM, Steven D'Aprano > wrote: >> Fortunately, that's not how it works, and far from being a "limitation", >> it would be *disastrous* if iterables worked that way. I can't imagine >> how many bugs would occur from people reassigning to the loop variable, >> forgetting that it had a side-effect of also reassigning to the iterable. >> Fortunately, Python is not that badly designed. > > The example syntax is a non-starter, but there's nothing wrong with > the basic idea. The STL of C++ uses output iterators and a quick > Google search doesn't turn up any "harmful"-style rants about those. > > Of course, there are a couple of major differences between C++ > iterators and Python iterators. FIrst, C++ iterators have an explicit > dereference step, which keeps the iterator variable separate from the > value that it accesses and also provides a possible target for > assignment. You could say that next(iterator) is the corresponding > dereference step in Python, but it is not accessible in a for loop and > it does not provide an assignment target in any case. > > Second, C++ iterators separate out the dereference step from the > iterator advancement step. In Python, both next(iterator) and > generator.send() are expected to advance the iterator, which would be > problematic for creating an iterator that does both input and output. > > I don't think that output iterators would be a "disaster" in Python, > but I also don't see a clean way to add them to the existing iterator > protocol. > >> If you want to change the source iterable, you have to explicitly do so. >> Whether you can or not depends on the source: >> >> * iterators are lazy sequences, and cannot be changed because there's >> nothing to change (they don't store their values anywhere, but calculate >> them one by one on demand and then immediately forget that value); > > No, an iterator is an object that allows traversal over a collection > in a manner independent of the implementation of that collection. In > many instances, especially in Python and similar languages, the > "collection" is abstracted to an operation over another collection, or > even to the results of a serial computation where there is no actual > "collection" in memory. > > Iterators are not lazy sequences, because they do not behave like > sequences. You can't index them, you can't reiterate them, you can't > get their length (and before you point out that there are ways of > doing each of these things -- yes, but none of those ways use > sequence-like syntax). For true lazy sequences, consider the concept > of streams and promises in the functional languages. > > In any case, the desired behavior of an output iterator on a source > iterator is clear enough to me. If the source iterator is also an > output iterator, then it propagates the write to it. If the source > iterator is not an output iterator, then it raises a TypeError. > >> * mutable sequences like lists can be changed. The standard idiom for >> that is to use enumerate: >> >> for i, e in enumerate(seq): >> seq[i] = e + 42 > > Unless the underlying collection is a dict, in which case I need to do: > > for k, v in d.items(): > d[k] = v + 42 > > Or a file: > > for line in f: > # I'm not even sure whether this actually works. > f.seek(-len(line)) > f.write(line.upper()) > > As I said above, iterators are supposed to provide > implementation-independent traversal over a collection. For writing, > enumerate fails in this regard. 
While python may not have output iterators, interestingly numpy has just added this capability. It is part of nditer. So, this may suggest a syntax. There have been a number of responses to my question that suggest using indexing (maybe with enumerate). Once again, this is not suitable for many data structures. C++ and the STL teach that iteration is often far more efficient than indexing. Think of a linked list. Even for a dense multi-dim array, index calculations are much slower than iteration. I believe the lack of output iterators is a deficiency in the python iterator concept. -- http://mail.python.org/mailman/listinfo/python-list
Re: writable iterators?
Chris Torek wrote: > In article I wrote, in part: >>Another possible syntax: >> >>for item in container with key: >> >>which translates roughly to "bind both key and item to the value >>for lists, but bind key to the key and value for the value for >>dictionary-ish items". Then ... the OP would write, e.g.: >> >>for elem in sequence with index: >>... >>sequence[index] = newvalue >> >>which of course calls the usual container.__setitem__. In this >>case the "new protocol" is to have iterators define a function >>that returns not just the next value in the sequence, but also >>an appropriate "key" argument to __setitem__. For lists, this >>is just the index; for dictionaries, it is the key; for other >>containers, it is whatever they use for their keys. > > I note I seem to have switched halfway through thinking about > this from "value" to "index" for lists, and not written that. :-) > > Here's a sample of a simple generator that does the trick for > list, buffer, and dict: > > def indexed_seq(seq): > """ > produce a pair > > such that seq[key_or_index] is initially; you can > write on seq[key_or_index] to set a new value while this > operates. Note that we don't allow tuple and string here > since they are not writeable. > """ > if isinstance(seq, (list, buffer)): > for i, v in enumerate(seq): > yield i, v > elif isinstance(seq, dict): > for k in seq: > yield k, seq[k] > else: > raise TypeError("don't know how to index %s" % type(seq)) > > which shows that there is no need for a new syntax. (Turning the > above into an iterator, and handling container classes that have > an __iter__ callable that produces an iterator that defines an > appropriate index-and-value-getter, is left as an exercise. :-) ) Here is what numpy nditer does: for item in np.nditer(u, [], ['readwrite'], order='C'): ... item[...] = 10 Notice that the slice syntax is used to 'dereference' the iterator. This seems like reasonably pythonic syntax, to my eye. -- http://mail.python.org/mailman/listinfo/python-list
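A self-contained version of the nditer snippet above, as a sketch (the array u is made up):

```
import numpy as np

u = np.arange(6, dtype=float).reshape(2, 3)

for item in np.nditer(u, [], ['readwrite'], order='C'):
    item[...] = item * 2   # the ellipsis assignment writes back into u

print(u)                   # every element of u has been doubled in place
```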
'Use-Once' Variables and Linear Objects
I thought this was an interesting article http://www.pipeline.com/~hbaker1/Use1Var.html -- http://mail.python.org/mailman/listinfo/python-list
argparse, tell if arg was defaulted
Is there any way to tell if an arg value was defaulted vs. set on command line? -- http://mail.python.org/mailman/listinfo/python-list
Re: argparse, tell if arg was defaulted
Robert Kern wrote: > On 3/15/11 9:54 AM, Neal Becker wrote: >> Is there any way to tell if an arg value was defaulted vs. set on command >> line? > > No. If you need to determine that, don't set a default value in the > add_argument() method. Then just check for None and replace it with the > default value and do whatever other processing for the case where the user > does not specify that argument. > > parser.add_argument('-f', '--foo', help="the foo argument [default: bar]") > > args = parser.parse_args() > if args.foo is None: > args.foo = 'bar' > print 'I'm warning you that you did not specify a --foo argument.' > print 'Using default=bar.' >

Not a completely silly use case, actually. What I need here is a combined command line / config file parser. Here is my current idea:
-

parser = OptionParser()
parser.add_option ('--opt1', default=default1)

(opt,args) = parser.parse_args()

import json, sys

for arg in args:
    print 'arg:', arg
    d = json.load(open (arg, 'r'))
    parser.set_defaults (**d)

(opt,args) = parser.parse_args()
---

parse_args() is called 2 times. First time is just to find the non-option args, which are assumed to be the name(s) of config file(s) to read. This is used to set_defaults. Then run parse_args() again.
-- http://mail.python.org/mailman/listinfo/python-list
Re: argparse, tell if arg was defaulted
Robert Kern wrote: > On 3/15/11 12:46 PM, Neal Becker wrote: >> Robert Kern wrote: >> >>> On 3/15/11 9:54 AM, Neal Becker wrote: >>>> Is there any way to tell if an arg value was defaulted vs. set on command >>>> line? >>> >>> No. If you need to determine that, don't set a default value in the >>> add_argument() method. Then just check for None and replace it with the >>> default value and do whatever other processing for the case where the user >>> does not specify that argument. >>> >>> parser.add_argument('-f', '--foo', help="the foo argument [default: bar]") >>> >>> args = parser.parse_args() >>> if args.foo is None: >>> args.foo = 'bar' >>> print 'I'm warning you that you did not specify a --foo argument.' >>> print 'Using default=bar.' >>> >> >> Not a completely silly use case, actually. What I need here is a combined >> command line / config file parser. >> >> Here is my current idea: >> - >> >> parser = OptionParser() >> parser.add_option ('--opt1', default=default1) >> >> (opt,args) = parser.parse_args() >> >> import json, sys >> >> for arg in args: >> print 'arg:', arg >> d = json.load(open (arg, 'r')) >> parser.set_defaults (**d) >> >> (opt,args) = parser.parse_args() >> --- >> >> parse_args() is called 2 times. First time is just to find the non-option >> args, >> which are assumed to be the name(s) of config file(s) to read. This is used >> to >> set_defaults. Then run parse_args() again. > > I think that would work fine for most cases. Just be careful with the argument > types that may consume resources. E.g. type=argparse.FileType(). > > You could also make a secondary parser that just extracts the config-file > argument: > > [~] > |25> import argparse > > [~] > |26> config_parser = argparse.ArgumentParser(add_help=False) > > [~] > |27> config_parser.add_argument('-c', '--config', action='append') > _AppendAction(option_strings=['-c', '--config'], dest='config', nargs=None, > const=None, default=None, type=None, choices=None, help=None, metavar=None) > > [~] > |28> parser = argparse.ArgumentParser() > > [~] > |29> parser.add_argument('-c', '--config', action='append') # For the --help > string. > _AppendAction(option_strings=['-c', '--config'], dest='config', nargs=None, > const=None, default=None, type=None, choices=None, help=None, metavar=None) > > [~] > |30> parser.add_argument('-o', '--output') > _StoreAction(option_strings=['-o', '--output'], dest='output', nargs=None, > const=None, default=None, type=None, choices=None, help=None, metavar=None) > > [~] > |31> parser.add_argument('other', nargs='*') > _StoreAction(option_strings=[], dest='other', nargs='*', const=None, > default=None, type=None, choices=None, help=None, metavar=None) > > [~] > |32> argv = ['-c', 'config-file.json', '-o', 'output.txt', 'other', > |'arguments'] > > [~] > |33> known, unknown = config_parser.parse_known_args(argv) > > [~] > |34> known > Namespace(config=['config-file.json']) > > [~] > |35> unknown > ['-o', 'output.txt', 'other', 'arguments'] > > [~] > |36> for cf in known.config: > ...> # Load d from file. > ...> parser.set_defaults(**d) > ...> > > [~] > |37> parser.parse_args(unknown) > Namespace(config=None, other=['other', 'arguments'], output='output.txt') > > nice! -- http://mail.python.org/mailman/listinfo/python-list
argparse csv + choices
I'm trying to combine 'choices' with a comma-separated list of options, so I could do e.g., --cheat=a,b

parser.add_argument ('--cheat', choices=('a','b','c'), type=lambda x: x.split(','), default=[])

test.py --cheat a
error: argument --cheat: invalid choice: ['a'] (choose from 'a', 'b', 'c')

The validation of choice is failing, because the type function returns a list, not an item. Suggestions?
-- http://mail.python.org/mailman/listinfo/python-list
Re: argparse csv + choices
Robert Kern wrote: > On 3/30/11 10:32 AM, Neal Becker wrote: >> I'm trying to combine 'choices' with a comma-seperated list of options, so I >> could do e.g., >> >> --cheat=a,b >> >> parser.add_argument ('--cheat', choices=('a','b','c'), type=lambda x: >> x.split(','), default=[]) >> >> test.py --cheat a >> error: argument --cheat: invalid choice: ['a'] (choose from 'a', 'b', 'c') >> >> The validation of choice is failing, because parse returns a list, not an >> item. Suggestions? > > Do the validation in the type function. > > > import argparse > > class ChoiceList(object): > def __init__(self, choices): > self.choices = choices > > def __repr__(self): > return '%s(%r)' % (type(self).__name__, self.choices) > > def __call__(self, csv): > args = csv.split(',') > remainder = sorted(set(args) - set(self.choices)) > if remainder: > raise ValueError("invalid choices: %r (choose from %r)" % > (remainder, self.choices)) > return args > > > parser = argparse.ArgumentParser() > parser.add_argument('--cheat', type=ChoiceList(['a','b','c']), default=[]) > print parser.parse_args(['--cheat=a,b']) > parser.parse_args(['--cheat=a,b,d']) > Excellent! Thanks! -- http://mail.python.org/mailman/listinfo/python-list
Re: python ioctl
Nitish Sharma wrote: > Hi PyPpl, > For my current project I have a kernel device driver and a user-space > application. This user-space application is already provided to me, and > written in python. I have to extend this application with some addition > features, which involves communicating with kernel device driver through > ioctl() interface. > I am fairly new with Python and not able to grok how to provide "op" in > ioctl syntax - fcntl.ioctl (fd, op[, arg[, mutate_flag]]). Operations > supported by device driver, through ioctl, are of the form: IOCTL_SET_MSG > _IOR(MAGIC_NUMBER, 0, char*). > It'd be great if some help can be provided about how to "encode" these > operations in python to implement the desired functionality. > > Regards > Nitish

Here's some of my stuff. Specific to my device, but maybe you get some ideas.

eioctl.py

from ctypes import *

libc = CDLL ('/lib/libc.so.6')
#print libc.ioctl

def set_ioctl_argtype (arg_type):
    libc.ioctl.argtypes = (c_int, c_int, arg_type)

IOC_WRITE = 0x1

_IOC_NRBITS= 8
_IOC_TYPEBITS= 8
_IOC_SIZEBITS= 14
_IOC_DIRBITS= 2

_IOC_NRSHIFT= 0
_IOC_TYPESHIFT= (_IOC_NRSHIFT+_IOC_NRBITS)
_IOC_SIZESHIFT= (_IOC_TYPESHIFT+_IOC_TYPEBITS)
_IOC_DIRSHIFT= (_IOC_SIZESHIFT+_IOC_SIZEBITS)

def IOC (dir, type, nr, size):
    return (((dir) << _IOC_DIRSHIFT) | \
            ((type) << _IOC_TYPESHIFT) | \
            ((nr) << _IOC_NRSHIFT) | \
            ((size) << _IOC_SIZESHIFT))

def ioctl (fd, request, args):
    return libc.ioctl (fd, request, args)

--
example of usage:

# Enable byte swap in driver
from eioctl import IOC, IOC_WRITE

EOS_IOC_MAGIC = 0xF4
request = IOC(IOC_WRITE, EOS_IOC_MAGIC, 1, struct.calcsize ('i'))
err = fcntl.ioctl(eos_fd, request, 1)

-- http://mail.python.org/mailman/listinfo/python-list
Re: Get the IP address of WIFI interface
Far.Runner wrote: > Hi python experts: > There are two network interfaces on my laptop: one is 100M Ethernet > interface, the other is wifi interface, both are connected and has an ip > address. > The question is: How to get the ip address of the wifi interface in a python > script without parsing the output of a shell command like "ipconfig" or > "ifconfig"? > > OS: Windows or Linux > > F.R

Here's some useful snippets for linux (they need csv, fcntl, socket and struct imported):

import csv
import fcntl
import socket
import struct

def get_default_if():
    f = open('/proc/net/route')
    for i in csv.DictReader(f, delimiter="\t"):
        if long(i['Destination'], 16) == 0:   # the default route has destination 0
            return i['Iface']
    return None

def get_ip_address(ifname):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    return socket.inet_ntoa(fcntl.ioctl(
        s.fileno(),
        0x8915,  # SIOCGIFADDR
        struct.pack('256s', ifname[:15])
    )[20:24])
-- http://mail.python.org/mailman/listinfo/python-list
cPickle -> invalid signature
What does it mean when cPickle.load says: RuntimeError: invalid signature Is binary format not portable? -- http://mail.python.org/mailman/listinfo/python-list
Re: cPickle -> invalid signature
Gabriel Genellina wrote: > En Tue, 17 May 2011 08:41:41 -0300, Neal Becker > escribió: > >> What does it mean when cPickle.load says: >> RuntimeError: invalid signature >> >> Is binary format not portable? > > Are you sure that's the actual error message? > I cannot find such message anywhere in the sources. > The pickle format is quite portable, even cross-version. As a generic > answer, make sure you open the file in binary mode, both when writing and > reading. > Yes, that's the message. Part of what is pickled is a numpy array. I am writing on a 32-bit linux system and reading on a 64-bit system. Reading on the 64-bit system is no problem. Maybe the message comes from numpy's unpickling? -- http://mail.python.org/mailman/listinfo/python-list
when is filter test applied?
In the following code (python3):

for rb in filter (lambda b : b in some_seq, seq):
    ... some code that might modify some_seq

I'm assuming that the test 'b in some_seq' is applied late, at the start of each iteration (but it doesn't seem to be working that way in my real code), so that if 'some_seq' is modified during a previous iteration the test is correctly performed on the latest version of 'some_seq' at the start of each iteration. Is this correct, and is this guaranteed?
-- https://mail.python.org/mailman/listinfo/python-list
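For what it's worth, a small self-contained test (made-up data) shows the lazy behaviour: filter() is an iterator, so the lambda runs only when the next element is requested, against the current state of some_seq:

```
seq = [1, 2, 3, 4]
some_seq = {1, 2, 3, 4}

for rb in filter(lambda b: b in some_seq, seq):
    print(rb)
    some_seq.discard(rb + 1)   # mutate during iteration

# prints 1 and 3: 2 had already been removed when it was tested,
# and 4 was removed while the loop body ran for rb == 3
```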
Re: when is filter test applied?
I'm not certain that it isn't behaving as expected - my code is quite complicated. On Tue, Oct 3, 2017 at 11:35 AM Paul Moore wrote: > My intuition is that the lambda creates a closure that captures the > value of some_seq. If that value is mutable, and "modify some_seq" > means "mutate the value", then I'd expect each element of seq to be > tested against the value of some_seq that is current at the time the > test occurs, i.e. when the entry is generated from the filter. > > You say that doesn't happen, so my intuition (and yours) seems to be > wrong. Can you provide a reproducible test case? I'd be inclined to > run that through dis.dis to see what bytecode was produced. > > Paul > > On 3 October 2017 at 16:08, Neal Becker wrote: > > In the following code (python3): > > > > for rb in filter (lambda b : b in some_seq, seq): > > ... some code that might modify some_seq > > > > I'm assuming that the test 'b in some_seq' is applied late, at the start > of > > each iteration (but it doesn't seem to be working that way in my real > code), > > so that if 'some_seq' is modified during a previous iteration the test is > > correctly performed on the latest version of 'some_seq' at the start of > each > > iteration. Is this correct, and is this guaranteed? > > > > > > -- > > https://mail.python.org/mailman/listinfo/python-list > -- https://mail.python.org/mailman/listinfo/python-list
f-string syntax deficiency?
The following f-string does not parse and gives syntax error on 3.11.3: f'thruput/{"user" if opt.return else "cell"} vs. elevation\n' However this expression, which is similar does parse correctly: f'thruput/{"user" if True else "cell"} vs. elevation\n' I don't see any workaround. Parenthesizing doesn't help: f'thruput/{"user" if (opt.return) else "cell"} vs. elevation\n' also gives a syntax error -- https://mail.python.org/mailman/listinfo/python-list
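The trouble here seems to be that `return` is a reserved keyword, so `opt.return` is a syntax error in any expression, not only inside f-strings. One workaround (a sketch with a made-up `opt` object) is getattr:

```
class Opt:            # stand-in for the real options object
    pass

opt = Opt()
setattr(opt, "return", False)   # an attribute named "return" is only reachable via setattr/getattr

line = f'thruput/{"user" if getattr(opt, "return") else "cell"} vs. elevation\n'
print(line)                     # thruput/cell vs. elevation
```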
Re: Recommendation for drawing graphs and creating tables, saving as PDF
Jan Erik Moström wrote: > I'm doing something that I've never done before and need some advise for > suitable libraries. > > I want to > > a) create diagrams similar to this one > https://www.dropbox.com/s/kyh7rxbcogvecs1/graph.png?dl=0 (but with more > nodes) and save them as PDFs or some format that can easily be converted > to PDFs > > b) generate documents that contains text, lists, and tables with some > styling. Here my idea was to save the info as markdown and create PDFs > from those files, but if there is some other tools that gives me better > control over the tables I'm interested in knowing about them. > > I looked around around but could only find two types of libraries for a) > libraries for creating histograms, bar charts, etc, b) very basic > drawing tools that requires me to figure out the layout etc. I would > prefer a library that would allow me to state "connect A to B", "connect > C to B", "connect B to D", and the library would do the whole layout. > > The closest I've found it to use markdown and mermaid or graphviz but > ... PDFs (perhaps I should just forget about PDFs, then it should be > enough to send people to a web page) > > (and yes, I could obviously use LaTeX ...) > > = jem Like this? https://pypi.org/project/blockdiag/ -- https://mail.python.org/mailman/listinfo/python-list
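The graphviz route mentioned in the original question also gives the "connect A to B and let it lay itself out" workflow and can write PDF directly; a minimal sketch using the graphviz Python package (assumes both the package and the Graphviz binaries are installed):

```
from graphviz import Digraph

g = Digraph('flow', format='pdf')
g.edge('A', 'B')   # "connect A to B"; the layout is computed automatically
g.edge('C', 'B')
g.edge('B', 'D')
g.render('flow')   # writes the DOT source and flow.pdf next to it
```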
best way to ensure './' is at beginning of sys.path?
I want to make sure any modules I build in the current directory override any others. To do this, I'd like sys.path to always have './' at the beginning. What's the best way to ensure this is always true whenever I run python3? -- https://mail.python.org/mailman/listinfo/python-list
Re: best way to ensure './' is at beginning of sys.path?
Neal Becker wrote: > I want to make sure any modules I build in the current directory overide > any > others. To do this, I'd like sys.path to always have './' at the > beginning. > > What's the best way to ensure this is always true whenever I run python3?

Sorry if I was unclear, let me try to describe the problem more precisely. I have a library of modules I have written using boost::python. They are all in a directory under my home directory called 'sigproc'. In ~/.local/lib/python3.5/site-packages, I have

---
sigproc.pth
/home/nbecker
/home/nbecker/sigproc
---

The reason I have 2 here is so I could use either import modA or import sigproc.modA although I almost always just use import modA. Now I have started experimenting with porting to pybind11 to replace boost::python. I am working in a directory called pybind11-test. I built modules there, with the same names as ones in sigproc. What I observed, I believe, is that when I try in that directory, import modA it imported the old one in sigproc, not the new one in "./". This behavior I found surprising. I examined sys.path, and found it did not contain "./". Then I prepended "./" to sys.path and found import modA appeared to correctly import the module in the current directory. I think I want this behavior always, and was asking how to ensure it. Thanks.
-- https://mail.python.org/mailman/listinfo/python-list
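One way to make this stick for every python3 run, without touching each script, is a usercustomize module (imported automatically by the site machinery when user site-packages is enabled); a sketch, assuming the same ~/.local layout as above:

```
# ~/.local/lib/python3.5/site-packages/usercustomize.py
import os
import sys

# Put the current working directory ahead of the entries added by sigproc.pth,
# so a freshly built module in "./" wins over the installed copy.
sys.path.insert(0, os.getcwd())
```

Exporting PYTHONPATH=. in the shell has a similar effect, since PYTHONPATH entries come before the site-packages entries on sys.path.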
profile guided optimization of loadable python modules?
Has anyone tried to optimize shared libraries (for loadable python modules) using gcc with profile guided optimization? Is it possible? Thanks, Neal -- https://mail.python.org/mailman/listinfo/python-list
Re: clever exit of nested loops
Christian Gollwitzer wrote: > Am 26.09.18 um 12:28 schrieb Bart: >> On 26/09/2018 10:10, Peter Otten wrote: >>> class Break(Exception): >>> pass >>> >>> try: >>> for i in range(10): >>> print(f'i: {i}') >>> for j in range(10): >>> print(f'\tj: {j}') >>> for k in range(10): >>> print(f'\t\tk: {k}') >>> >>> if condition(i, j, k): >>> raise Break >>> except Break: >>> pass >>> >> >> For all such 'solutions', the words 'sledgehammer' and 'nut' spring to >> mind. >> >> Remember the requirement is very simple, to 'break out of a nested loop' >> (and usually this will be to break out of the outermost loop). What >> you're looking is a statement which is a minor variation on 'break'. > > Which is exactly what it does. "raise Break" is a minor variation on > "break". > >> Not >> to have to exercise your imagination in devising the most convoluted >> code possible. > > To the contrary, I do think this solution looks not "convoluted" but > rather clear. Also, in Python some other "exceptions" are used for a > similar purpose - for example "StopIteration" to signal that an iterator > is exhausted. One might consider to call these "signals" instead of > "exceptions", because there is nothing exceptional, apart from the > control flow. > > Christian > > I've done the same before myself (exit from nested blocks to a containing block using exception), but it does violate the principle "Exceptions should be used for exceptional conditions). -- https://mail.python.org/mailman/listinfo/python-list
I'd like to add -march=native to my pip builds
I'd like to add -march=native to my pip builds. How can I do this? -- https://mail.python.org/mailman/listinfo/python-list
Re: I'd like to add -march=native to my pip builds
Stefan Behnel wrote: > CFLAGS="-O3 -march=native" pip install --no-use-wheel Thanks, not bad. But no way to put this in a config file so I don't have to remember it, I guess? -- https://mail.python.org/mailman/listinfo/python-list
Just-in-Time Static Type Checking for Dynamic Languages
I saw this article, which might interest some of you. It discusses application to ruby, but perhaps might have ideas useful for python. https://arxiv.org/abs/1604.03641 -- https://mail.python.org/mailman/listinfo/python-list
pickle and module versioning
I find pickle really handy for saving results from my (simulation) experiments. But recently I realized there is an issue. Reading the saved results requires loading the pickle, which in turn will load any referenced modules. Problem is, what if the modules have changed? For example, I just re-implemented a python module in C++, in a not quite compatible way. AFAIK, my only choice to not break my setup is to choose a different name for the new module. Has anyone else run into this issue and have any ideas? I can imagine perhaps some kind of module versioning could be used (although haven't really thought through the details). Thanks, Neal -- https://mail.python.org/mailman/listinfo/python-list
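One technique that helps with the renamed-module case is a custom Unpickler whose find_class() redirects old module/class names to their new homes, so old result files keep loading; a sketch with made-up module names:

```
import pickle

class RenamingUnpickler(pickle.Unpickler):
    # Map (module, class) names recorded in old pickles to their current location.
    renames = {
        ('old_sim_module', 'Result'): ('new_sim_module', 'Result'),
    }

    def find_class(self, module, name):
        module, name = self.renames.get((module, name), (module, name))
        return super().find_class(module, name)

def load_results(path):
    with open(path, 'rb') as f:
        return RenamingUnpickler(f).load()
```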
Re: How to force the path of a lib ?
dieter wrote: > Vincent Vande Vyvre writes: >> I am working on a python3 binding of a C++ lib. This lib is installed >> in my system but the latest version of this lib introduce several >> incompatibilities. So I need to update my python binding. >> >> I'm working into a virtual environment (py370_venv) python-3.7.0 is >> installed into ./localpythons/lib/python3.7 >> >> So, the paths are: >> # python-3.7.0 >> ~/./localpythons/lib/python3.7/ >> # my binding python -> libexiv2 >> ~/./localpythons/lib/python3.7/site-packages/pyexiv2/*.py >> ~/./localpythons/lib/python3.7/site- packages/pyexiv2/libexiv2python.cpython-37m-x86_64-linux-gnu.so >> >> # and the latest version of libexiv2 >> ~/CPython/py370_venv/lib/libexiv2.so.0.27.0 >> >> All theses path are in the sys.path >> >> Now I test my binding: > import pyexiv2 >> Traceback (most recent call last): >> File "", line 1, in >> File >> "/home/vincent/CPython/py370_venv/lib/python3.7/site- packages/py3exiv2-0.1.0-py3.7-linux-x86_64.egg/pyexiv2/__init__.py", >> line 60, in >> import libexiv2python >> ImportError: >> /home/vincent/CPython/py370_venv/lib/python3.7/site- packages/py3exiv2-0.1.0-py3.7-linux-x86_64.egg/libexiv2python.cpython-37m- x86_64-linux-gnu.so: >> undefined symbol: _ZN5Exiv27DataBufC1ERKNS_10DataBufRefE > >> >> Checking the libexiv2.so the symbol exists >> ~/CPython/py370_venv/lib$ objdump -T libexiv2.so.0.27.0 >> >> 0012c8d0 gDF .text000f Base >> _ZN5Exiv27DataBufC1ERKNS_10DataBufRefE >> >> >> But it is not present into my old libexiv2 system, so I presume python >> use /usr/lib/x86_64-linux-gnu/libexiv2.so.14.0.0 (The old 0.25) instead >> of ~/CPython/py370_venv/lib/libexiv2.so.0.27.0 (The latest 0.27) >> >> How can I solve that ? > > To load external C/C++ shared objects, the dynamic lickage loader > (ldd) is used. "ldd" does not look at Pthon's "sys.path". > Unless configured differently, it looks at standard places > (such as "/usr/lib/x86_64-linux-gnu"). > > You have several options to tell "ldd" where to look for > shared objects: > > * use the envvar "LD_LIBRARY_PATH" >This is a "path variable" similar to the shell's "PATH", >telling the dynamic loader in which directories (before >the standard ones) to look for shared objects > > * use special linker options (when you link your Python >extension shared object) to tell where dependent shared >object can be found. > To follow up on that last point, look up --rpath and related. -- https://mail.python.org/mailman/listinfo/python-list
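To follow up the --rpath suggestion from the build side: distutils/setuptools extensions accept runtime_library_dirs, which bakes an rpath into the compiled module so the dynamic loader looks in the venv's lib directory first. A sketch (the source file and library names are placeholders, not the real py3exiv2 setup):

```
from setuptools import setup, Extension

libexiv2python = Extension(
    'libexiv2python',
    sources=['src/wrapper.cpp'],                                      # placeholder
    libraries=['exiv2'],
    library_dirs=['/home/vincent/CPython/py370_venv/lib'],
    runtime_library_dirs=['/home/vincent/CPython/py370_venv/lib'],    # becomes an rpath entry
)

setup(name='py3exiv2', ext_modules=[libexiv2python])
```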
exit 2 levels of if/else and execute common code
I have code with structure:
```
if cond1:
    [some code]
    if cond2:  # where cond2 depends on the above
        [some code]
        [more code]
    else:
        [do xxyy]
else:
    [do the same xxyy as above]
```

So what's the best style to handle this? As coded, it violates DRY. Try/except could be used with a custom exception, but that seems a bit heavy handed. Suggestions?
-- https://mail.python.org/mailman/listinfo/python-list
Re: exit 2 levels of if/else and execute common code
Rhodri James wrote: > On 11/02/2019 15:25, Neal Becker wrote: >> I have code with structure: >> ``` >> if cond1: >>[some code] >>if cond2: #where cond2 depends on the above [some code] >> [ more code] >> >>else: >> [ do xxyy ] >> else: >>[ do the same xxyy as above ] >> ``` >> >> So what's the best style to handle this? As coded, it violates DRY. >> Try/except could be used with a custom exception, but that seems a bit >> heavy >> handed. Suggestions? > > If it's trivial, ignore DRY. That's making work for the sake of making > work in such a situation. > > If it isn't trivial, is there any reason not to put the common code in a > function? > Well the common code is 2 lines. -- https://mail.python.org/mailman/listinfo/python-list
Re: exit 2 levels of if/else and execute common code
Chris Angelico wrote: > On Tue, Feb 12, 2019 at 2:27 AM Neal Becker wrote: >> >> I have code with structure: >> ``` >> if cond1: >> [some code] >> if cond2: #where cond2 depends on the above [some code] >> [ more code] >> >> else: >> [ do xxyy ] >> else: >> [ do the same xxyy as above ] >> ``` >> >> So what's the best style to handle this? As coded, it violates DRY. >> Try/except could be used with a custom exception, but that seems a bit >> heavy >> handed. Suggestions? > > One common way to do this is to toss a "return" after the cond2 block. > Means this has to be the end of a function, but that's usually not > hard. Or, as Rhodri suggested, refactor xxyy into a function, which > you then call twice. > > ChrisA Not bad, but turns out it would be the same return statement for both the normal return path (cond1 and cond2 satisfied) as well as the abnormal return, so not really much of an improvement. -- https://mail.python.org/mailman/listinfo/python-list
Re: exit 2 levels of if/else and execute common code
Chris Angelico wrote: > On Tue, Feb 12, 2019 at 3:21 AM Neal Becker wrote: >> >> Chris Angelico wrote: >> >> > On Tue, Feb 12, 2019 at 2:27 AM Neal Becker >> > wrote: >> >> >> >> I have code with structure: >> >> ``` >> >> if cond1: >> >> [some code] >> >> if cond2: #where cond2 depends on the above [some code] >> >> [ more code] >> >> >> >> else: >> >> [ do xxyy ] >> >> else: >> >> [ do the same xxyy as above ] >> >> ``` >> >> >> >> So what's the best style to handle this? As coded, it violates DRY. >> >> Try/except could be used with a custom exception, but that seems a bit >> >> heavy >> >> handed. Suggestions? >> > >> > One common way to do this is to toss a "return" after the cond2 block. >> > Means this has to be the end of a function, but that's usually not >> > hard. Or, as Rhodri suggested, refactor xxyy into a function, which >> > you then call twice. >> > >> > ChrisA >> >> Not bad, but turns out it would be the same return statement for both the >> normal return path (cond1 and cond2 satisfied) as well as the abnormal >> return, so not really much of an improvement. > > Not sure what you mean there. The result would be something like this: > > def frobnicate(): > if cond1: > do_stuff() > if cond2: > do_more_stuff() > return > do_other_stuff() > > ChrisA sorry, I left out the return: if cond1: [some code] if cond2: #where cond2 depends on the above [some code] [ more code] else: [ do xxyy ] else: [ do the same xxyy as above ] return a, b, c So if we return normally, or return via some other path, the return statement is the same, and would be duplicated. -- https://mail.python.org/mailman/listinfo/python-list
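For reference, the return-based version being discussed, written out as a runnable sketch with made-up placeholder logic: the xxyy block appears once, reached from either else branch, and only the trivial return line is repeated:

```
def process(cond1, cond2_from, xs):
    if cond1:
        xs = [x * 2 for x in xs]       # [some code]
        if cond2_from(xs):             # cond2 depends on the result above
            total = sum(xs)            # [more code]
            return xs, total, True     # normal path
    xs = []                            # [do xxyy] -- reached from either else branch
    return xs, 0, False                # same return shape as the normal path

print(process(True, lambda xs: len(xs) > 2, [1, 2, 3]))   # ([2, 4, 6], 12, True)
print(process(False, lambda xs: True, [1, 2, 3]))         # ([], 0, False)
```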
@staticmethod, backward compatibility?
How can I write code to take advantage of new decorator syntax, while allowing backward compatibility? I almost want a preprocessor. #if PYTHON_VERSION >= 2.4 @staticmethod ... Since python < 2.4 will just choke on @staticmethod, how can I do this? -- http://mail.python.org/mailman/listinfo/python-list
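Since the decorator syntax is only sugar, the usual backward-compatible spelling is to call staticmethod() directly, which parses (and works) on interpreters older than 2.4 as well; a sketch:

```
class Tools:
    def double(x):
        return x * 2
    # Same effect as @staticmethod above the def, but valid syntax on Python < 2.4
    double = staticmethod(double)

assert Tools.double(21) == 42
```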
Compile fails on x86_64
In file included from scipy/base/src/multiarraymodule.c:44: scipy/base/src/arrayobject.c: In function 'array_frominterface': scipy/base/src/arrayobject.c:5151: warning: passing argument 3 of 'PyArray_New' from incompatible pointer type error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -mtune=nocona -D_GNU_SOURCE -fPIC -O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -mtune=nocona -fPIC -Ibuild/src/scipy/base/src -Iscipy/base/include -Ibuild/src/scipy/base -Iscipy/base/src -I/usr/include/python2.4 -c scipy/base/src/multiarraymodule.c -o build/temp.linux-x86_64-2.4/scipy/base/src/multiarraymodule.o" failed with exit status 1 error: Bad exit status from /var/tmp/rpm/rpm-tmp.96024 (%build) -- http://mail.python.org/mailman/listinfo/python-list
compile fails on x86_64 (more)
In file included from scipy/base/src/multiarraymodule.c:44: scipy/base/src/arrayobject.c:41: error: conflicting types for 'PyArray_PyIntAsIntp' build/src/scipy/base/__multiarray_api.h:147: error: previous declaration of 'PyArray_PyIntAsIntp' was here -- http://mail.python.org/mailman/listinfo/python-list
Can module access global from __main__?
Suppose I have a main program, e.g., A.py. In A.py we have:

X = 2
import B

Now B is a module B.py. In B, how can we access the value of X? -- http://mail.python.org/mailman/listinfo/python-list
Re: Can module access global from __main__?
Everything you said is absolutely correct. I was being lazy. I had a main program in a module, and wanted to reorganize it, putting most of it into a new module. Being python, it actually only took a small effort to fix this properly, so that in B.py, what were global variables are now passed as arguments to class constructors and functions. Still curious about the answer. If I know that I am imported from __main__, then I can access X as sys.modules['__main__'].X. In general, I don't know how to determine who is importing me. -- http://mail.python.org/mailman/listinfo/python-list
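For completeness, the running script is always registered in sys.modules under the name "__main__", so a module can reach those globals without knowing who imported it. A minimal sketch of what B.py could do (show_x is a made-up helper name):

```
# B.py -- a sketch; reach the top-level script's globals via the __main__ module
import __main__

def show_x():
    # equivalently: import sys; sys.modules['__main__'].X
    print(__main__.X)
```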
1-liner to iterate over infinite sequence of integers?
I can do this with a generator:

def integers():
    x = 1
    while (True):
        yield x
        x += 1

for i in integers():
    ...

Is there a more elegant/concise way? -- http://mail.python.org/mailman/listinfo/python-list
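The standard library already provides this iterator: itertools.count yields consecutive integers without end, so the loop collapses to one line.

```
from itertools import count

for i in count(1):        # 1, 2, 3, ... without end
    print(i)
    if i >= 5:            # whatever real stopping condition applies
        break
```

count(1) starts at 1 to match the generator above; count() with no argument starts at 0.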
Re: shared library search path
Stefan Arentz wrote: > > Hi. I've wrapped a C++ class with Boost.Python and that works great. But, > I am now packaging my application so that it can be distributed. The > structure is basically this: > > .../bin/foo.py > .../lib/foo.so > .../lib/bar.py > > In foo.py I do the following: > > sys.path.append(os.path.dirname(sys.path[0]) + '/lib') > > and this allows foo.py to import bar. Great. > > But, the foo.so cannot be imported. The import only succeeds if I place > foo.so next to foo.py in the bin directory. > > I searched through the 2.4.2 documentation on python.org but I can't find > a proper explanation on how the shared library loader works. > > Does anyone understand what it going on here? > > S. > No, but here is the code I use: import sys sys.path.append ('../wrap') -- http://mail.python.org/mailman/listinfo/python-list
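For anyone hitting this later: compiled extensions are found through the same sys.path search as .py files, so appending the lib directory is the right idea. Two things may be worth checking, though: compute the lib path from the script's own location rather than from sys.path[0], and watch for a name clash between bin/foo.py and lib/foo.so, since the script's own directory sits first on sys.path and "import foo" can then pick up the script instead of the extension. A sketch under those assumptions:

```
# bin/foo.py -- a sketch of the path setup
import os
import sys

here = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(os.path.dirname(here), 'lib'))

import bar    # lib/bar.py; extensions in lib/ are searched the same way,
              # provided their names don't collide with scripts in bin/
```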
Re: efficient 'tail' implementation
[EMAIL PROTECTED] wrote: > hi > > I have a file which is very large eg over 200Mb , and i am going to use > python to code a "tail" > command to get the last few lines of the file. What is a good algorithm > for this type of task in python for very big files? > Initially, i thought of reading everything into an array from the file > and just get the last few elements (lines) but since it's a very big > file, don't think is efficient. > thanks > You should look at pyinotify. I assume we're talking linux here. -- http://mail.python.org/mailman/listinfo/python-list
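If the goal is a one-shot tail of an existing file (rather than watching it grow, which is where pyinotify fits), the usual approach is to seek near the end and read backwards in blocks instead of slurping all 200 MB. A rough sketch:

```
def tail(path, n=10, block=4096):
    """Return the last n lines of a possibly huge file."""
    with open(path, 'rb') as f:
        f.seek(0, 2)                              # jump to the end
        data = b''
        # walk backwards one block at a time until enough newlines are seen
        while data.count(b'\n') <= n and f.tell() > 0:
            step = min(block, f.tell())
            f.seek(-step, 1)
            data = f.read(step) + data
            f.seek(-step, 1)
        return data.splitlines()[-n:]

print(tail('/var/log/messages', 5))               # path is only an example
```

At most a few blocks are read no matter how large the file is.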
Re: Recommendations for CVS systems
[EMAIL PROTECTED] wrote: > I was wondering if anyone could make recomendations/comments about CVS > systems, their experiences and what perhaps the strengths of each. > > Currently we have 2 developers but expect to grow to perhaps 5. > > Most of the developement is Python, but some C, Javascript, HTML, etc. > > The IDE what have been using/experimenting with are drPython and > eclipse with PyDev. > > For a python newsgroup, you are required to consider mercurial. It's not ready for production use yet, but is making rapid progress, and many (including myself) are using it. -- http://mail.python.org/mailman/listinfo/python-list
python optimization
I use CPython. I'm accustomed (from C++/gcc) to a style of coding that is highly readable, making the assumption that the compiler will do good things to optimize the code despite the style in which it's written. For example, I assume loop-invariant constants are hoisted out of loops. In general, an entity is defined as close to the point of usage as possible. I don't know to what extent these kinds of optimizations are available to CPython. For example, are constant calculations removed from loops? How about functions? Is there a significant cost to putting a function def inside a loop rather than outside? -- http://mail.python.org/mailman/listinfo/python-list
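To make the question concrete: CPython performs essentially no loop-invariant hoisting, and a def inside a loop builds a fresh function object on every pass, so the difference is easy to measure with timeit. A quick sketch, not a rigorous benchmark:

```
import timeit

def def_inside(data):
    total = 0
    for x in data:
        def scale(v):            # a new function object on every iteration
            return 3 * v
        total += scale(x)
    return total

def def_outside(data):
    def scale(v):                # created once
        return 3 * v
    total = 0
    for x in data:
        total += scale(x)
    return total

data = list(range(10000))
print(timeit.timeit(lambda: def_inside(data), number=100))
print(timeit.timeit(lambda: def_outside(data), number=100))
```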
Re: python optimization
Reinhold Birkenfeld wrote: > David Wilson wrote: >> For the most part, CPython performs few optimisations by itself. You >> may be interested in psyco, which performs several heavy optimisations >> on running Python code. >> >> http://psyco.sf.net/ >> I might be, if it supported x86_64, but AFAICT, it doesn't. -- http://mail.python.org/mailman/listinfo/python-list
RE: [Python-Dev] python optimization
One possible way to improve the situation, if we really believe python cannot easily support such optimizations because the code is too "dynamic", is to allow manual annotation of functions. For example, gcc has allowed such annotations using __attribute__ for quite a while. This would allow the programmer to specify that a variable is constant, or that a function is pure (having no side effects). -- http://mail.python.org/mailman/listinfo/python-list
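Nothing like this exists in CPython today, but to illustrate the kind of annotation being proposed, a decorator could simply record the promise on the function object for a hypothetical optimizer to consult. Purely illustrative; the names are made up:

```
def pure(func):
    """Record a 'no side effects' promise on the function object."""
    func.is_pure = True           # a hypothetical optimizer could key on this
    return func

@pure
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

print(clamp(15, 0, 10), getattr(clamp, 'is_pure', False))   # 10 True
```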
Re: Python:C++ interfacing. Tool selection recommendations
[EMAIL PROTECTED] wrote: > Hi, > > I am embedding Python with a C++ app and need to provide the Python > world with access to objects & data with the C++ world. > > I am aware or SWIG, BOOST, SIP. Are there more? > > I welcome comments of the pros/cons of each and recommendations on when > it appropriate to select one over the others. > boost::python is alien technology. It is amazingly powerful. Once you learn how to use it, it's wonderful, but unless you are comfortable with modern C++ you may find the learning curve steep. -- http://mail.python.org/mailman/listinfo/python-list
unusual exponential formatting puzzle
Like a puzzle? I need to interface python output to some strange old program. It wants to see numbers formatted as: e.g.: 0.23456789E01 That is, the leading digit is always 0, instead of the first significant digit. It is fixed width. I can almost get it with '% 16.9E', but not quite. My solution is to print to a string with the '% 16.9E' format, then parse it with re to pick off the pieces and fix it up. Pretty ugly. Any better ideas? -- http://mail.python.org/mailman/listinfo/python-list
Re: unusual exponential formatting puzzle
[EMAIL PROTECTED] wrote: > > [EMAIL PROTECTED] wrote: >> Neal Becker wrote: >> > Like a puzzle? I need to interface python output to some strange old >> > program. It wants to see numbers formatted as: >> > >> > e.g.: 0.23456789E01 >> > >> > That is, the leading digit is always 0, instead of the first >> > significant >> > digit. It is fixed width. I can almost get it with '% 16.9E', but not >> > quite. >> > >> > My solution is to print to a string with the '% 16.9E' format, then >> > parse it >> > with re to pick off the pieces and fix it up. Pretty ugly. Any better >> > ideas? >> >> If you have gmpy available... >> >> >>> import gmpy >> >> ...and your floats are mpf's... >> >> >>> s = gmpy.pi(64) >> >>> s >> mpf('3.14159265358979323846e0',64) >> >> ...you can use the fdigits function >> >> >>> t = gmpy.fdigits(s,10,8,0,0,2) >> >> ...to create a seperate digit string and exponent... >> >> >>> print t >> ('31415927', 1, 64) >> >> ...which can then be printed in the desired format. >> >> >>> print "0.%sE%02d" % (t[0],t[1]) >> 0.31415927E01 > > Unless your numbers are negative. > >>>> print "0.%sE%02d" % (t[0],t[1]) > 0.-31415927E03 > > Drat. Needs work. > > > > And does the format permit large negative exponents (2 digits + sign)? > I think the abs (exponent) < 10 for now >>>> print "0.%sE%02d" % (t[0],t[1]) > 0.31415927E-13 > -- http://mail.python.org/mailman/listinfo/python-list
Re: unusual exponential formatting puzzle
Paul Rubin wrote: > Neal Becker <[EMAIL PROTECTED]> writes: >> Like a puzzle? I need to interface python output to some strange old >> program. It wants to see numbers formatted as: >> >> e.g.: 0.23456789E01 > > Yeah, that was normal with FORTRAN. > >> My solution is to print to a string with the '% 16.9E' format, then >> parse it with re to pick off the pieces and fix it up. Pretty ugly. >> Any better ideas? > > That's probably the simplest. Actually, I found a good solution using the new decimal module:

import decimal

def Format(x):
    """Produce strange exponential format with leading 0"""
    s = '%.9E' % x
    d = decimal.Decimal(s)
    (sign, digits, exp) = d.as_tuple()
    s = ''
    if (sign == 0):
        s += ' '
    else:
        s += '-'
    s += '0.'
    e = len(digits) + exp
    for x in digits:
        s += str(x)
    s += 'E'
    s += '%+03d' % e
    return s

-- http://mail.python.org/mailman/listinfo/python-list
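For anyone pasting the function in, a quick check of what it produces (assuming the import decimal above):

```
print(Format(2.3456789))      # ' 0.2345678900E+01'
print(Format(-0.023456789))   # '-0.2345678900E-01'
```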
redirect stdout
I'd like to build a module that would redirect stdout to send it to a logging module. I want to be able to use a python module that expects to print results using "print" or "sys.stdout.write()" and, without modifying that module, redirect its stdout to a logger which will send the messages via datagrams to a server. Any ideas? -- http://mail.python.org/mailman/listinfo/python-list
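A minimal sketch of the usual approach: anything with a write() method can stand in for sys.stdout, so a tiny adapter can push third-party print output into the logging package, and logging.handlers.DatagramHandler handles the UDP leg. The host and port below are placeholders:

```
import logging
import logging.handlers
import sys

class StdoutToLogger(object):
    """File-like shim that forwards writes to a logger."""
    def __init__(self, logger, level=logging.INFO):
        self.logger = logger
        self.level = level
    def write(self, msg):
        msg = msg.rstrip()
        if msg:                          # skip the bare newlines print emits
            self.logger.log(self.level, msg)
    def flush(self):                     # some libraries expect this to exist
        pass

log = logging.getLogger('captured')
log.setLevel(logging.INFO)
log.addHandler(logging.handlers.DatagramHandler('loghost.example', 9021))

sys.stdout = StdoutToLogger(log)
print("this line now travels through the logging module")
sys.stdout = sys.__stdout__              # restore when done
```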
Oh look, another language (ceylon)
http://ceylon-lang.org/documentation/1.0/introduction/ -- https://mail.python.org/mailman/listinfo/python-list
argparse feature request
I use argparse all the time and find it serves my needs well. One thing I'd like to see: in the help message, I'd like to automatically add the default values. For example, here's one of my programs:

python3 test_freq3.py --help
usage: test_freq3.py [-h] [--size SIZE] [--esnodB ESNODB] [--tau TAU]
                     [--trials TRIALS] [--training TRAINING] [--sps SPS]
                     [--si SI] [--alpha ALPHA] [--range RANGE]
                     [--dfunc {gradient,delay}]
                     [--mod {gaussian,qpsk,8psk,16apsk,32apsk,32dlr,64apsk,256apsk}]
                     [--sym-freq-err SYM_FREQ_ERR] [--calibrate [CALIBRATE]]

optional arguments:
  -h, --help            show this help message and exit
  --size SIZE
  --esnodB ESNODB, -e ESNODB
  --tau TAU, -t TAU
  --trials TRIALS
  --training TRAINING
  --sps SPS
  --si SI
  --alpha ALPHA
  --range RANGE
  --dfunc {gradient,delay}
  --mod {gaussian,qpsk,8psk,16apsk,32apsk,32dlr,64apsk,256apsk}
  --sym-freq-err SYM_FREQ_ERR
  --calibrate [CALIBRATE], --with-calibrate [CALIBRATE], --enable-calibrate [CALIBRATE], --no-calibrate [CALIBRATE], --without-calibrate [CALIBRATE], --disable-calibrate [CALIBRATE]

What I'd like to see is:

  --size SIZE [2000]    <<< the default value is displayed

-- https://mail.python.org/mailman/listinfo/python-list
Re: argparse feature request
Robert Kern wrote: > On 2013-11-22 14:56, Neal Becker wrote: >> I use arparse all the time and find it serves my needs well. One thing I'd >> like >> to see. In the help message, I'd like to automatically add the default >> values. >> >> For example, here's one of my programs: >> >> python3 test_freq3.py --help >> usage: test_freq3.py [-h] [--size SIZE] [--esnodB ESNODB] [--tau TAU] >> [--trials TRIALS] >> [--training TRAINING] [--sps SPS] [--si SI] [--alpha >> [ALPHA] --range RANGE] [--dfunc {gradient,delay}] >> [--mod >> {gaussian,qpsk,8psk,16apsk,32apsk,32dlr,64apsk,256apsk}] >> [--sym-freq-err SYM_FREQ_ERR] [--calibrate [CALIBRATE]] >> >> optional arguments: >>-h, --helpshow this help message and exit >>--size SIZE >>--esnodB ESNODB, -e ESNODB >>--tau TAU, -t TAU >>--trials TRIALS >>--training TRAINING >>--sps SPS >>--si SI >>--alpha ALPHA >>--range RANGE >>--dfunc {gradient,delay} >>--mod {gaussian,qpsk,8psk,16apsk,32apsk,32dlr,64apsk,256apsk} >>--sym-freq-err SYM_FREQ_ERR >>--calibrate [CALIBRATE], --with-calibrate [CALIBRATE], --enable-calibrate >> [CALIBRATE], --no-calibrate [CALIBRATE], --without-calibrate [CALIBRATE], -- >> disable-calibrate [CALIBRATE] >> >> What I'd like to see is: >> >> --size SIZE [2000] <<< the default value is displayed > > Use formatter_class=argparse.ArgumentDefaultsHelpFormatter > > http://docs.python.org/2/library/argparse#argparse.ArgumentDefaultsHelpFormatter > > E.g. > > [git/mpstack]$ cat print_stacks.py > ... > def main(): > import argparse > parser = argparse.ArgumentParser( > formatter_class=argparse.ArgumentDefaultsHelpFormatter) > parser.add_argument('-p', '--percent', action='store_true', help='Show > percentages.') > parser.add_argument('file', help='The sample file.') > ... > > [git/mpstack]$ python print_stacks.py -h > usage: print_stacks.py [-h] [-p] file > > positional arguments: >file The sample file. > > optional arguments: >-h, --help show this help message and exit >-p, --percent Show percentages. (default: False) > Thanks! Almost perfect. Problem is, I don't usually bother to add help='help me' options. It seems that ArgumentDefaultsHelpFormatter doesn't do anything unless help='blah' option is used. Not sure what to do about that. Since python test_freq3.py -h produces useful messages without my adding help=... everywhere, it'd be nice if ArgumentDefaultsHelpFormatter would work here. -- https://mail.python.org/mailman/listinfo/python-list
Re: argparse feature request
Robert Kern wrote: > On 2013-11-22 16:52, Neal Becker wrote: >> Robert Kern wrote: >> >>> On 2013-11-22 14:56, Neal Becker wrote: >>>> I use arparse all the time and find it serves my needs well. One thing I'd >>>> like >>>> to see. In the help message, I'd like to automatically add the default >>>> values. > >>>> What I'd like to see is: >>>> >>>> --size SIZE [2000] <<< the default value is displayed >>> >>> Use formatter_class=argparse.ArgumentDefaultsHelpFormatter >>> >>> >>> http://docs.python.org/2/library/argparse#argparse.ArgumentDefaultsHelpFormatter > >> Thanks! Almost perfect. Problem is, I don't usually bother to add >> help='help >> me' options. It seems that ArgumentDefaultsHelpFormatter doesn't do anything >> unless help='blah' option is used. Not sure what to do about that. Since >> python test_freq3.py -h >> produces useful messages without my adding help=... everywhere, it'd be nice >> if ArgumentDefaultsHelpFormatter would work here. > > Patches are welcome, I am sure. Implement a HelpFormatter that does what you > want. _format_action() is where the relevant logic is. Try something like > this, and modify to suit. > > > class BeckerDefaultFormatter(argparse.ArgumentDefaultsHelpFormatter): > > def _format_action(self, action): > monkeypatched = False > if action.default is not None and action.help is None: > # Trick the default _format_action() method into writing out > # the defaults. > action.help = ' ' > monkeypatched = True > formatted = super(BeckerDefaultFormatter, > self)._format_action(action) if monkeypatched: > action.help = None > return formatted > Thanks! Seems to work great. It gave reasonable output for both case where I include help=... and also without. I have no idea how that above code works, but I guess as long as it works... -- https://mail.python.org/mailman/listinfo/python-list
proposal: bring nonlocal to py2.x
py3 includes a fairly compelling feature: the nonlocal keyword. But backward compatibility is lost. It would be very helpful if this were available on py2.x. -- https://mail.python.org/mailman/listinfo/python-list
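Until then, the usual 2.x workaround is to mutate a container (or a function attribute) instead of rebinding the outer name, since only rebinding needs nonlocal. A small sketch:

```
def make_counter():
    state = {'count': 0}          # py3 would write: count = 0, plus "nonlocal count"
    def bump():
        state['count'] += 1       # mutation, not rebinding, so no nonlocal needed
        return state['count']
    return bump

c = make_counter()
print(c(), c(), c())              # 1 2 3
```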
object() can't have attributes
I'm a bit surprised that an object() can't have attributes:

In [30]: o = object()

In [31]: o.x = 2
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
 in ()
----> 1 o.x = 2

AttributeError: 'object' object has no attribute 'x'

Sometimes I want to collect attributes on an object. Usually I would make an empty class for this, but it seems unnecessarily verbose to do this. So I thought, why not just use an object? But no, an instance of object apparently can't have an attribute. Is this intentional? -- https://mail.python.org/mailman/listinfo/python-list
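For the record, object() instances have no __dict__, which is why the assignment fails. The stdlib later grew a ready-made attribute bag, types.SimpleNamespace (Python 3.3+); before that, the one-line empty class really is the idiom:

```
from types import SimpleNamespace   # Python 3.3+

o = SimpleNamespace()
o.x = 2
print(o.x, o)                        # 2 namespace(x=2)

# on older Pythons the empty class does the job
class Bag(object):
    pass

b = Bag()
b.x = 2
```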
context managers inline?
Is there a way to ensure resource cleanup with a construct such as:

x = load(open('my file', 'rb'))

That is, is there a way to ensure this file gets closed? -- https://mail.python.org/mailman/listinfo/python-list
Re: context managers inline?
sohcahto...@gmail.com wrote: > On Thursday, March 10, 2016 at 10:33:47 AM UTC-8, Neal Becker wrote: >> Is there a way to ensure resource cleanup with a construct such as: >> >> x = load (open ('my file', 'rb)) >> >> Is there a way to ensure this file gets closed? > > with open('my file', 'rb') as f: > x = load(f) But not in a 1-line, composable manner? -- https://mail.python.org/mailman/listinfo/python-list
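One way to keep the call site to a single composable line is to fold the with-block into a helper; closing is still guaranteed. A sketch, where pickle.load stands in for whatever load() is in the original:

```
import pickle

def load_path(path, loader=pickle.load):
    with open(path, 'rb') as f:      # closed even if loader() raises
        return loader(f)

x = load_path('my file')
```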
Is there a more elegant way to spell this?
Is there a more elegant way to spell this? for x in [_ for _ in seq if some_predicate]: -- -- Those who don't understand recursion are doomed to repeat it -- https://mail.python.org/mailman/listinfo/python-list
Re: Is there a more elegant way to spell this?
Jussi Piitulainen wrote: > Neal Becker writes: > >> Is there a more elegant way to spell this? >> >> for x in [_ for _ in seq if some_predicate]: > > If you mean some_predicate(_), then possibly this. > > for x in filter(some_predicate, seq): >handle(x) > I like this best, except probably even better: for x in ifilter (some_predicate, seq): -- https://mail.python.org/mailman/listinfo/python-list
Re: Is there a more elegant way to spell this?
Jussi Piitulainen wrote: > Neal Becker writes: > >> Is there a more elegant way to spell this? >> >> for x in [_ for _ in seq if some_predicate]: > > If you mean some_predicate(_), then possibly this. > > for x in filter(some_predicate, seq): >handle(x) > > If you mean literally some_predicate, then perhaps this. > > if some_predicate: >for x in seq: > handle(x) > > Unless you also have in mind an interesting arrangement where > some_predicate might change during the loop, like this. > > for x in [_ for _ in seq if some_predicate]: > ... > some_predicate = fubar(x) > ... > > Then I have nothing to say. To clarify, I meant some_predicate(_), and then ifilter looks like a nice solution. -- -- Those who don't understand recursion are doomed to repeat it -- https://mail.python.org/mailman/listinfo/python-list
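A generator expression does the same thing lazily without reaching for itertools, and reads almost the same as the original. A tiny self-contained example (seq, some_predicate and handle are stand-ins for the names in the thread):

```
seq = range(10)
some_predicate = lambda n: n % 2 == 0
handle = print

for x in (item for item in seq if some_predicate(item)):
    handle(x)                        # 0 2 4 6 8
```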
basic generator question
I have an object that expects to call a callable to get a value:

class obj:
    def __init__(self, gen):
        self.gen = gen
    def __call__(self):
        return self.gen()

Now I want gen to be a callable that repeats N times. I'm thinking, this sounds perfect for yield:

class rpt:
    def __init__(self, value, rpt):
        self.value = value
        self.rpt = rpt
    def __call__(self):
        for i in range(self.rpt):
            yield self.value

so I would do:

my_rpt_obj = obj(rpt('hello', 5))

to repeat 'hello' 5 times (for example). But this doesn't work. when obj calls self.gen(), that returns a generator, not the next value. How can I make this work? I can't change the interface of the existing class obj, which expects a callable to get the next value. -- -- Those who don't understand recursion are doomed to repeat it -- https://mail.python.org/mailman/listinfo/python-list
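For the archives, the missing piece is that obj wants the generator's "give me the next value" operation, not the generator factory itself; functools.partial(next, generator) is exactly that callable. A sketch, with rpt rewritten as a plain generator function and obj unchanged from above:

```
import functools

class obj:
    def __init__(self, gen):
        self.gen = gen
    def __call__(self):
        return self.gen()

def rpt(value, n):
    for _ in range(n):
        yield value

gen = rpt('hello', 5)                         # build the generator once
my_rpt_obj = obj(functools.partial(next, gen))
print([my_rpt_obj() for _ in range(5)])       # ['hello', 'hello', 'hello', 'hello', 'hello']
# a sixth call would raise StopIteration once the generator is exhausted
```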
help with pypeg2?
Trying out pypeg2. The below grammar is recursive. A 'Gen' is an ident followed by parenthesized args. args is a csl of alphanum or Gen. The tests 'p' and 'p2' are fine, but 'p3' fails
SyntaxError: expecting u')'

from __future__ import unicode_literals, print_function
from pypeg2 import *

ident = re.compile(r'[a-z]+')
alphanum = re.compile(r'[a-z0-9]+')
num = re.compile(r'[0-9]+')

class args(List):
    grammar = maybe_some(csl([alphanum, Gen]))

class Gen(List):
    grammar = attr('type', ident), '(', attr('args', args), ')'

p = parse('abc,123', args)
p2 = parse('abc(123,456)', Gen)
p3 = parse('abc(123,def(456))', Gen)

-- -- Those who don't understand recursion are doomed to repeat it -- https://mail.python.org/mailman/listinfo/python-list
Re: help with pypeg2?
Ian Kelly wrote: > On Fri, Feb 6, 2015 at 7:55 AM, Neal Becker wrote: >> Trying out pypeg2. The below grammar is recursive. A 'Gen' is an ident >> followed by parenthesized args. args is a csl of alphanum or Gen. >> >> The tests 'p' and 'p2' are fine, but 'p3' fails >> SyntaxError: expecting u')' >> >> >> from __future__ import unicode_literals, print_function >> from pypeg2 import * >> >> ident = re.compile (r'[a-z]+') >> alphanum = re.compile (r'[a-z0-9]+') >> num = re.compile (r'[0-9]+') >> >> class args (List): >> grammar = maybe_some ( csl ([alphanum, Gen])) > > I'm not familiar with pypeg2, but should this use optional instead of > maybe_some? The csl function already produces a list, so the result of > maybe_some on that would be one or more consecutive lists. > > Also, it looks from the docs like it should just be "csi(alphanum, > Gen)" (no list in the arguments). It didn't work without [list..] (even trivial examples did not work without it). I don't know if it's broken; I thought csl (*args) should mean a comma-sep-list of _any_ of the *args, but it doesn't work. -- -- Those who don't understand recursion are doomed to repeat it -- https://mail.python.org/mailman/listinfo/python-list
line_profiler: what am I doing wrong?
I inserted

@profile
def run(...)

into a module-level global function called 'run'. Something is very wrong here.
1. profile results were written before anything even ran
2. profile is not defined?

kernprof -l ./test_unframed.py --lots --of --args ...

Wrote profile results to test_unframed.py.lprof
Traceback (most recent call last):
  File "/home/nbecker/.local/bin/kernprof", line 9, in <module>
    load_entry_point('line-profiler==1.0', 'console_scripts', 'kernprof')()
  File "/home/nbecker/.local/lib/python2.7/site-packages/kernprof.py", line 221, in main
    execfile(script_file, ns, ns)
  File "./test_unframed.py", line 721, in <module>
    @profile
NameError: name 'profile' is not defined
-- https://mail.python.org/mailman/listinfo/python-list
Re: line_profiler: what am I doing wrong?
Ethan Furman wrote: > On 02/10/2015 04:06 PM, Neal Becker wrote: >> I inserted >> @profile >> def run(...) >> >> into a module-level global function called 'run'. Something is very wrong >> here. 1. profile results were written before anything even ran >> 2. profile is not defined? >> >> kernprof -l ./test_unframed.py --lots --of --args ... >> >> Wrote profile results to test_unframed.py.lprof >> Traceback (most recent call last): >> File "/home/nbecker/.local/bin/kernprof", line 9, in >> load_entry_point('line-profiler==1.0', 'console_scripts', 'kernprof')() >> File "/home/nbecker/.local/lib/python2.7/site-packages/kernprof.py", line >> 221, >> in main >> execfile(script_file, ns, ns) >> File "./test_unframed.py", line 721, in >> @profile >> NameError: name 'profile' is not defined > > I'm going to guess that writing the profile results is in a try/finally -- so > first you see the results being written, then the exception that triggered. > > -- > ~Ethan~ I believe you are suggesting the apparent out-of-order is due to try/finally, but kernprof is supposed to inject 'profile' into the global namespace, so @profile should be defined - I don't know why it isn't working. -- -- Those who don't understand recursion are doomed to repeat it -- https://mail.python.org/mailman/listinfo/python-list
Re: line_profiler: what am I doing wrong?
Steven D'Aprano wrote: > Neal Becker wrote: > >> I inserted >> @profile >> def run(...) >> >> into a module-level global function called 'run'. Something is very wrong >> here. 1. profile results were written before anything even ran >> 2. profile is not defined? > > Well, is it defined? Where does it come from? > > If you defined it yourself, it needs to be defined before you can use it. > This won't work: > > > @profile > def run(...) > > def profile(func): ... > > > Swap the order of profile and run and it should work. (Give or take any > additional bugs in your code.) > > > If you've imported it from an external module, how did you import it? > > > import some_module > > @some_module.profile > def run(...) > > > should work. So will this: > > > from some_module import profile > > @profile > def run(...) > > > But this won't: > > > import some_module > > @profile > def run(...) > > > and will fail with NameError, exactly as you are experiencing. > > > > To quote from https://pypi.python.org/pypi/line_profiler/ $ kernprof -l script_to_profile.py kernprof will create an instance of LineProfiler and insert it into the __builtins__ namespace with the name profile. It has been written to be used as a decorator, so in your script, you decorate the functions you want to profile with @profile. @profile def slow_function(a, b, c): ... I've used it before (maybe 1 year ago), don't know why it isn't working now. -- -- Those who don't understand recursion are doomed to repeat it -- https://mail.python.org/mailman/listinfo/python-list
Re: line_profiler: what am I doing wrong?
Robert Kern wrote: > On 2015-02-11 01:17, Steven D'Aprano wrote: >> Neal Becker wrote: >> >> >>> To quote from https://pypi.python.org/pypi/line_profiler/ >>> >>> $ kernprof -l script_to_profile.py >>> kernprof will create an instance of LineProfiler and insert it into the >>> __builtins__ namespace with the name profile. >> >> Ewww What a Ruby-esque interface, that makes me sad :-( > > This is not a production library. It's a development tool designed to > help developers shorten the cycle time for investigating these kinds of > issues. Well, *a* developer; i.e. me. If it helps anyone else, yahtzee! > >> And what if you >> have your own profile global name? > > Then you can pull it out from __builtin__ with a different name and use that > other name. > >> And *wrong* too. `__builtins__` is a private CPython implementation detail. >> The way to monkey-patch the built-ins in Python 2 is to inject the object >> into `__builtin__` (no s), or `builtins` in Python 3. > > And indeed that is how it is implemented. Referring to that namespace as the > "`__builtins__` namespace" isn't *wrong*. It may mislead you into thinking > I've implemented it one particular way, if you are desperate to find a nit to > pick. > >> Seeing as >> line_profiler is written in C, perhaps the author (Robert Kern) doesn't >> care about supporting Jython or IronPython, but there may be Python >> implementations (PyPy perhaps?) which can run C code but don't have >> __builtins__. > > Indeed, I do not care about any of them. PyPy does not implement CPython's > tracing API: > > https://bitbucket.org/pypy/pypy/src/2b2163d65ee437646194a1ceb2a3153db24c5f7e/pypy/module/cpyext/stubs.py?at=default#cl-1286 > Hi Robert, any idea why line_profiler is not working? I've used it fine in the past. -- -- Those who don't understand recursion are doomed to repeat it -- https://mail.python.org/mailman/listinfo/python-list
Re: line_profiler: what am I doing wrong?
Robert Kern wrote: > @profile > def run(): > pass > > run() No, this doesn't work either. Same failure

kernprof -l test_prof.py
Wrote profile results to test_prof.py.lprof
Traceback (most recent call last):
  File "/home/nbecker/.local/bin/kernprof", line 9, in <module>
    load_entry_point('line-profiler==1.0', 'console_scripts', 'kernprof')()
  File "/home/nbecker/.local/lib/python2.7/site-packages/kernprof.py", line 221, in main
    execfile(script_file, ns, ns)
  File "test_prof.py", line 1, in <module>
    @profile
NameError: name 'profile' is not defined
-- -- Those who don't understand recursion are doomed to repeat it -- https://mail.python.org/mailman/listinfo/python-list
Re: line_profiler: what am I doing wrong?
Robert Kern wrote: > On 2015-02-13 13:35, Neal Becker wrote: >> Robert Kern wrote: >> >>> @profile >>> def run(): >>> pass >>> >>> run() >> >> No, this doesn't work either. Same failure >> >> kernprof -l test_prof.py >> Wrote profile results to test_prof.py.lprof >> Traceback (most recent call last): >>File "/home/nbecker/.local/bin/kernprof", line 9, in >> load_entry_point('line-profiler==1.0', 'console_scripts', 'kernprof')() >>File "/home/nbecker/.local/lib/python2.7/site-packages/kernprof.py", line >>221, >> in main >> execfile(script_file, ns, ns) >>File "test_prof.py", line 1, in >> @profile >> NameError: name 'profile' is not defined > > Ah, do you have the package `future` installed? > > https://github.com/rkern/line_profiler/issues/12 > Yes, I do. What do you suggest as a workaround? -- -- Those who don't understand recursion are doomed to repeat it -- https://mail.python.org/mailman/listinfo/python-list
Re: How security holes happen
Charles R Harris Wrote in message: > ___ > NumPy-Discussion mailing list > numpy-discuss...@scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > IMO the lesson here is never to write in low-level C. Use modern languages with well-designed exception handling. -- Android NewsGroup Reader http://www.piaohong.tk/newsgroup -- https://mail.python.org/mailman/listinfo/python-list
Re: gdb unable to read python frame information
dieter wrote: > Wesley writes: > >> I wanna use gdb to attach my running python scripts. >> Successfully import libpython in gdb, but seems all py operations failed to >> read python information. >> >> Here is the snippet: >> (gdb) python >>>import libpython >>>end >> (gdb) py-bt >> #3 (unable to read python frame information) >> #5 (unable to read python frame information) > > The simplest possible interpretation would be that your > Python lacks debugging symbols. That often happens with > system installed Python installations (which usually are stripped > to the bare minimal symbol set - as "normal" users do not need > debugging). > > Try with a Python that you have generated yourself. You probably need to install the python-debuginfo package -- https://mail.python.org/mailman/listinfo/python-list