Re: mysterious buggy behavior
On 8 Jan 2005 20:40:37 -0800, [EMAIL PROTECTED] (Sean McIlroy) wrote:

>def newGame():
>    BOARD = [blank]*9
>    for x in topButtons+midButtons+botButtons: x['text'] = ''

Do you know that "BOARD" here is a local variable and has nothing
to do with the global BOARD ? You can change that by doing either

    BOARD[:] = [blank]*9

or

    global BOARD
    BOARD = [blank]*9

HTH
Andrea
--
http://mail.python.org/mailman/listinfo/python-list
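A quick runnable sketch of the difference (with `blank` replaced by an empty string and the button loop dropped, since those names belong to the original program):

```python
BOARD = ['x'] * 9

def new_game_wrong():
    # the assignment creates a *local* BOARD; the global one is untouched
    BOARD = [''] * 9

def new_game_right():
    # slice assignment mutates the existing global list in place,
    # so no local name is created
    BOARD[:] = [''] * 9

new_game_wrong()
print(BOARD[0])   # still 'x'
new_game_right()
print(BOARD[0])   # now ''
```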
Re: Speed revisited
On 9 Jan 2005 12:39:32 -0800, "John Machin" <[EMAIL PROTECTED]> wrote:

>Tip 1: Once you have data in memory, don't move it, move a pointer or
>index over the parts you are inspecting.
>
>Tip 2: Develop an abhorrence of deleting data.

I have to admit that I also found it strange that deleting the
first element from a list is not O(1) in Python. My wild guess was
that the extra addition and normalization required to have
insertion in amortized O(1) and deletion in O(1) at both ends of a
random access sequence would have a basically negligible cost for
normal access (given the overhead that is already present in
Python). But I'm sure this idea is too obvious not to have been
proposed, so there must be reasons for refusing it (maybe the cost
to pay for random access, once measured, was found to be far from
negligible, or the extra memory overhead per list - one int to
remember where the live data starts - was also going to be a
problem).

Andrea
Re: Speed revisited
On 9 Jan 2005 16:03:34 -0800, "John Machin" <[EMAIL PROTECTED]> wrote:

>My wild guess: Not a common use case. Double-ended queue is a special
>purpose structure.
>
>Note that the OP could have implemented the 3-tape update simulation
>efficiently by reading backwards i.e. del alist[-1]

Note that I was just trying to say that it's not obvious that list
insertion at the first element is O(n), because there are "less
naive" implementations that can do better. For a lower level
language O(n) is probably what 99% of programmers would indeed
expect, but for a VHLL like Python this is IMO not the case. I
remember that a few years ago, working with PowerBuilder (a RAD
environment for client-server applications), to my great surprise
I found that even adding at the end of a list was O(n) in that
language... where is the line ? After all "smart" reallocation is
still a tradeoff (extra "wasted" space traded for diminished
copying)...

Andrea
Re: Speed revisited
On Mon, 10 Jan 2005 17:52:42 +0100, Bulba! <[EMAIL PROTECTED]> wrote:

>I don't see why should deleting element from a list be O(n), while
>saying L[0]='spam' when L[0] previously were, say, 's', not have the
>O(n) cost, if a list in Python is just an array containing the
>objects itself?
>
>Why should JUST deletion have an O(n) cost?

Because after deletion L[1] moved to L[0], L[2] moved to L[1],
L[3] moved to L[2] and so on. To delete the first element you have
to move n-1 pointers and this is where O(n) comes from. When you
reassign any element there is no need to move the others around,
so that's why you have O(1) complexity.

With a data structure slightly more complex than an array you can
have random access in O(1), deletion of elements in O(1) at *both
ends* and insertion in amortized O(1) at *both ends*. This data
structure is called a double-ended queue (nickname "deque") and is
available in Python.

The decision was that for the basic list object the overhead added
by deques for element access (it's still O(1), but a bit more
complex than just bare pointer arithmetic) and, I guess, the hassle
of changing a lot of working code and breaking compatibility with
extensions manipulating lists directly (no idea if such a thing
exists) was not worth the gain. The gain would have been that those
who don't know what O(n) means and use lists for long FIFOs would
get fast programs anyway, without understanding why. With the
current solution they just have to use deques instead of lists.
After thinking about it for a while I agree that this is a
reasonable choice. The gain would anyway IMO be very little,
because if a programmer doesn't understand what O(n) is, then the
probability that any reasonably complex program he writes is going
to be fast is zero anyway... time would just be wasted somewhere
else for no reason.

Andrea
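For completeness, the deque mentioned above lives in the collections module (since Python 2.4); a tiny sketch of the FIFO use case:

```python
from collections import deque

fifo = deque([1, 2, 3, 4])
first = fifo.popleft()   # O(1), unlike del somelist[0] which is O(n)
fifo.append(5)           # appending at the right end is O(1) too
print(first)             # 1
print(list(fifo))        # [2, 3, 4, 5]
```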
Re: references/addrresses in imperative languages
On Sun, 19 Jun 2005 22:25:13 -0500, Terry Hancock <[EMAIL PROTECTED]> wrote:

>> PS is there any difference between
>> t=t+[li]
>> t.append(li)
>
>No, but

Yes, a big one. In the first you're creating a new list and binding
the name t to it, in the second you're extending a list by adding
one more element at the end. To see the difference:

>>> a = [1,2,3]
>>> b = a
>>> a = a + [4]
>>> print a
[1, 2, 3, 4]
>>> print b
[1, 2, 3]
>>>
>>> a = [1,2,3]
>>> b = a
>>> a.append(4)
>>> print a
[1, 2, 3, 4]
>>> print b
[1, 2, 3, 4]
>>>

Andrea
Re: references/addrresses in imperative languages
On 20 Jun 2005 23:30:40 -0700, "Xah Lee" <[EMAIL PROTECTED]> wrote:

>Dear Andrea Griffini,
>
>Thanks for explaning this tricky underneath stuff.

Actually it's the very logical consequence of the most basic rule
about Python. Variables are just pointers to values; so every time
you assign to a variable you're always changing just that pointer,
you're not touching the object pointed to by the variable. Even
when x is pointing to an integer, with x=x+1 you are computing a
new integer (x+1) and making x point to this new one instead of
the old one. You are NOT touching the old integer. Surely this is
different from C/C++/Java, but it's IMO anything but tricky or
underneath.

Andrea
Re: Proposal: reducing self.x=x; self.y=y; self.z=z boilerplate code
On Sat, 2 Jul 2005 03:04:09 -0700 (PDT), "Ralf W. Grosse-Kunstleve"
<[EMAIL PROTECTED]> wrote:

>Hi fellow Python coders,
>
>I often find myself writing::
>
>    class grouping:
>
>        def __init__(self, x, y, z):
>            self.x = x
>            self.y = y
>            self.z = z
>            # real code, finally
>
>This becomes a serious nuisance in complex applications with long
>argument lists, especially if long variable names are essential for
>managing the complexity. Therefore I propose that Python includes
>built-in support for reducing the ``self.x=x`` clutter.

With some help from new-style classes you can get more than just
removing the "self.x = x" clutter. I'm not an expert on these
low-level Python tricks, but you can download from
http://www.gripho.it/objs.py a small example that allows you to
write

    class MyClass(Object):
        x = Float(default = 0.0, max = 1E20)
        y = Float(min = 1.0)

and in addition to redundancy removal you also get parameter
checking. You can also have an __init__ method that gets called
with the attributes already set up.

HTH
Andrea
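The objs.py module isn't reproduced here, but the general mechanism behind this kind of checked attribute is the descriptor protocol. A hypothetical minimal sketch of the idea (my own names and code, not the actual objs.py; `__set_name__` needs Python 3.6+, earlier versions would use a metaclass):

```python
class Float(object):
    """A checked float attribute with optional default and bounds.
    Purely illustrative reimplementation of the idea."""
    def __init__(self, default=None, min=None, max=None):
        self.default = default
        self.min = min
        self.max = max

    def __set_name__(self, owner, name):
        # remember where to stash the per-instance value
        self.slot = '_' + name

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        return getattr(obj, self.slot, self.default)

    def __set__(self, obj, value):
        value = float(value)
        if self.min is not None and value < self.min:
            raise ValueError("value below minimum")
        if self.max is not None and value > self.max:
            raise ValueError("value above maximum")
        setattr(obj, self.slot, value)

class MyClass(object):
    x = Float(default=0.0, max=1E20)
    y = Float(min=1.0)
```

With this, `MyClass().x` returns 0.0 until assigned, and assigning `y = 0.5` raises ValueError because of the `min` bound.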
Re: Folding in vim
On Sun, 3 Jul 2005 22:42:17 -0500, Terry Hancock <[EMAIL PROTECTED]> wrote:

>It seems to be that it isn't robust against files
>with lots of mixed tabs and spaces.

My suggestion is:

- never ever use tabs; tabs were nice when they had a de-facto
  meaning (tabbing to the next 8-space boundary); nowadays they're
  just noise, as the meaning depends on the phase of the moon.
  Making tabs mean anything had the pretty obvious implication of
  making tabs mean nothing.

- stick to 4-space indent

In the past I've even run into editors that damaged my Python
sources because they were indented with two spaces (I'm used to an
indent size of 2 when working in C/C++). With Python IMO 4 spaces
is perfectly adequate anyway; once I tried it I never had the
temptation of looking back.

Andrea
Re: Wheel-reinvention with Python
On Sun, 31 Jul 2005 02:23:39 -0700, Robert Kern <[EMAIL PROTECTED]> wrote:

>Like PyGUI, more or less?
>
>http://www.cosc.canterbury.ac.nz/~greg/python_gui/

We ended up using (py)Qt, and it's a nice library, but to my eyes
it's quite un-pythonic. In many cases there are convoluted
solutions that seem to me good ideas for a C++ library, but that
just do not make any sense in Python, where the problems they
solve simply do not exist.

My impression of PyGUI is that it would be (would have been?) a
nice plug for a hole in the Python offering; unfortunately I also
get the clear impression that the authors don't (didn't?) actually
want it to be used in the real world.

Andrea
Re: Help with generators outside of loops.
David Eppstein wrote:

> In article <[EMAIL PROTECTED]>,
>  "Robert Brewer" <[EMAIL PROTECTED]> wrote:
>
>> But I'm guessing that you can't index into a generator as if it
>> is a list.
>>
>>     row = obj.ExecSQLQuery(sql, args).next()
>
> I've made it a policy in my own code to always surround explicit
> calls to next() with try ... except StopIteration ... guards.
> Otherwise if you don't guard the call and you get an unexpected
> exception from the next(), within a call chain that includes a
> for-loop over another generator, then that other for-loop will
> terminate without any error messages and the cause of its
> termination can be very difficult to track down.

Isn't the handling of StopIteration confined to the very moment of
calling .next() ? This was what I expected... and from a simple
test it also looks like what is happening...

>>> for x in xrange(10):
        if x == 8:
            raise StopIteration()
        print x

0
1
2
3
4
5
6
7

Traceback (most recent call last):
  File "", line 3, in -toplevel-
    raise StopIteration()
StopIteration

i.e. the loop didn't stop silently

Andrea
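The guard Eppstein describes would look roughly like this (a sketch with my own names; the point is that an empty result surfaces as a clear error instead of a StopIteration that could silently stop an enclosing for-loop):

```python
def first_item(iterable):
    # guard the explicit next() call so exhaustion becomes an
    # ordinary, visible exception
    it = iter(iterable)
    try:
        return next(it)
    except StopIteration:
        raise ValueError("no items in iterable")

print(first_item([10, 20]))   # 10
```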
Re: Arrays? (Or lists if you prefer)
Neil Cerutti wrote:

> >>> b =[range(2), range(2)]

I often happened to use

    b = [[0] * N for i in xrange(N)]

an approach that can also scale up in dimensions; for example for
a cubic NxNxN matrix:

    b = [[[0] * N for i in xrange(N)] for j in xrange(N)]

Andrea
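One pitfall worth noting with this idiom: multiplying the outer list instead of using the comprehension produces N references to the *same* inner list, which is rarely what you want:

```python
N = 3

bad = [[0] * N] * N                   # three references to ONE inner list
bad[0][0] = 1
print(bad)                            # [[1, 0, 0], [1, 0, 0], [1, 0, 0]]

good = [[0] * N for i in range(N)]    # three distinct inner lists
good[0][0] = 1
print(good)                           # [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
```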
Re: Where do nested functions live?
Fredrik Lundh wrote:

> Ben Finney wrote:
>
>> If you want something that can be called *and* define its attributes,
>> you want something more complex than the default function type. Define
>> a class that has a '__call__' attribute, make an instance of that, and
>> you'll be able to access attributes and call it like a function.
>
> I turned Steven's question and portions of the answers into a Python FAQ
> entry:
>
> http://effbot.org/pyfaq/where-do-nested-functions-live.htm
>
> Hope none of the contributors mind.

I'd add that while in some respects "def x" is like an assignment
to x ...

>>> def f():
        global g
        def g():
            return "Yoo!"

>>> f()
>>> g()
'Yoo!'

in some other respects (unfortunately) it's not a regular
assignment

>>> x = object()
>>> def x.g():
SyntaxError: invalid syntax
>>>

Andrea
Re: Name bindings for inner functions.
[EMAIL PROTECTED] wrote:

> The following code:
>
> def functions():
>     l=list()
>     for i in range(5):
>         def inner():
>             return i
>         l.append(inner)
>     return l
>
> print [f() for f in functions()]
>
> returns [4,4,4,4,4], rather than the hoped for [0,1,2,3,4]. I presume
> this is something to do with the variable i getting re-bound every time
> we go through the loop, or something, but I'm not sure how to fix this.

The problem is that "i" inside the function is indeed the same
variable for all the functions (the one you're using for looping).
If you want a different variable for each function you can use the
somewhat ugly but idiomatic

    def functions():
        l=list()
        for i in range(5):
            def inner(i=i):
                return i
            l.append(inner)
        return l

This way every function will have its own "i" variable, initialized
with the value of the loop variable when the "def" statement is
executed.

Andrea
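Since Python 2.5 one can also freeze the loop value with functools.partial instead of the default-argument trick; whether this is nicer is a matter of taste (sketch):

```python
import functools

def functions():
    def inner(i):
        return i
    # partial binds the *current* value of i for each function
    return [functools.partial(inner, i) for i in range(5)]

print([f() for f in functions()])   # [0, 1, 2, 3, 4]
```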
Re: Unicode/ascii encoding nightmare
John Machin wrote:

> The fact that C3 and C2 are both present, plus the fact that one
> non-ASCII byte has morphoploded into 4 bytes indicate a double whammy.

Indeed...

>>> x = u"fødselsdag"
>>> x.encode('utf-8').decode('iso-8859-1').encode('utf-8')
'f\xc3\x83\xc2\xb8dselsdag'

Andrea
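For the record, this particular double whammy can be undone by reversing each step, right to left (Python 3 syntax shown for the bytes literal; it works here because every character produced by the bogus iso-8859-1 decode can be encoded back to iso-8859-1):

```python
damaged = b'f\xc3\x83\xc2\xb8dselsdag'
# undo: utf-8 encode <- iso-8859-1 decode <- utf-8 encode
repaired = damaged.decode('utf-8').encode('iso-8859-1').decode('utf-8')
print(repaired)   # fødselsdag
```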
Re: Unicode/ascii encoding nightmare
John Machin wrote:

> Indeed yourself. What does the above mean ?
> Have you ever considered reading posts in
> chronological order, or reading all posts in a thread?

I do not think people read posts in chronological order; it simply
doesn't make sense. I also don't think many read threads
completely, but only until the issue is clear or boredom kicks in.

Your nice "double whammy" post was enough to clarify what happened
to the OP; I just wanted to make a bit more explicit what you
meant. My poor English also made me understand that you were just
"suspecting" such an error, so I verified and posted the result.
That your "suspicion" was a sarcastic remark could only become
clear when reading the timewise earlier reply, which however
happened to be lower in the thread tree in my newsreader; a fact
that pushed it into the "not worth reading" area.

> It might help
> you avoid writing posts with non-zero information content.

Why should I *avoid* writing posts with *non-zero* information
content ? Double whammy on negation, or still my poor English
kicking in ? :-)

Suppose you hadn't posted the double whammy message, and suppose
someone else had posted my message seven minutes later than your
other post. I suppose that in this case the message would have
been zero-content noise (and not the precious pearl of wisdom it
is because it comes from you).

> Cheers,
> John

Andrea
Re: merits of Lisp vs Python
Alex Mizrahi wrote:
...
> so we can see PyDict access. moreover, it's inlined, since it's very
> performance-critical function.
> but even inlined PyDict access is not fast at all. ma_lookup is a long and
> hairy function containing the loop.

I once had a crazy idea about the lookup speed problem: can't the
lookup result be cached in the bytecode ? I am thinking of
something like saving a naked pointer to the value together with a
timestamp of the dictionary. By timestamp I mean an integer (maybe
64 bit) that is incremented and stamped into the dictionary every
time the dictionary is modified; this counter can be shared among
all dictionaries. The use of a naked pointer would IMO be safe
because to invalidate the object you would also need to touch the
dictionary. Using this approach the lookup for a constant string
could be

    if (bytecode_timestamp == dict->timestamp) {
        // just use the stored result
    } else {
        // do standard lookup and store
        // result and dict->timestamp
    }

I'd expect this to be a big win for a lot of lookups, as the
problem with Python speed is the *potential* dynamism... hopefully
people don't keep changing what math.sin is during execution, so
the vast majority of lookups at module level will find the
timestamp valid. This invalidation is not "optimal", as changing
math.sin would also invalidate any lookup on math, but IMO a lot
of lookups happen in *fixed* dictionaries and the overhead of
checking the cached result first should be small.

What it would break is code that actually dynamically changes the
string being looked up in the dictionary in the bytecode, but I
hope those places are few if they exist at all.

Is this worth investigating, or has it already been
suggested/tried ?

Andrea
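The mechanics of the scheme can be mocked up in pure Python (names and structure are mine, purely illustrative; the real proposal is of course C inside the interpreter, and a complete version would also have to wrap update(), setdefault() and friends):

```python
class VersionedDict(dict):
    """A dict that stamps itself from a shared counter on every change."""
    _counter = 0

    def _touch(self):
        VersionedDict._counter += 1
        self.timestamp = VersionedDict._counter

    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)
        self._touch()

    def __setitem__(self, key, value):
        dict.__setitem__(self, key, value)
        self._touch()

    def __delitem__(self, key):
        dict.__delitem__(self, key)
        self._touch()

class CachedLookup(object):
    """One cached (timestamp, value) pair for a fixed key."""
    def __init__(self, key):
        self.key = key
        self.stamp = None              # no valid cache yet

    def get(self, d):
        if self.stamp == d.timestamp:
            return self.value          # hit: no dictionary search
        self.value = d.get(self.key)   # miss: real lookup, then cache
        self.stamp = d.timestamp
        return self.value

ns = VersionedDict(sin="fast-sin")
lookup = CachedLookup("sin")
print(lookup.get(ns))   # real lookup
print(lookup.get(ns))   # served from the cache
```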
Lookup caching
Hello, I implemented that crazy idea and it seems to be working...
in its current hacked state it can still pass the test suite
(excluding the tests that don't like self-generated output on
stdout from python) and the stats after the quicktest are IMO
impressive:

    LOAD_GLOBAL   = 13666473
    globals miss  =    58988
    builtins      =  8246184
    builtins miss =    32001

LOAD_GLOBAL is the total number of times the pseudocode instruction
was executed. "globals miss" is the number of times the actual
lookup on globals had to be performed. Note that if the lookup
wasn't done because the name was known to be absent from globals,
it's still considered a cache hit, not a miss. "builtins" is the
total number of times the builtins dict had to be searched (because
the name was not in globals); "builtins miss" is the number of real
builtin searches that were performed (in the other cases the lookup
cache for builtins found the answer - either positive or negative).

To me it seems a promising idea; I've still to clean up the code
and make serious speed tests (the "make test" itself seems a lot
more "fluid", but it could be just self-hypnotization :-D ).
The LOAD_GLOBAL code is actually simpler because I resorted to a
regular non-inlined lookup in case of a cache miss. There's no
reason to do that however... Also the same approach could be used
for other lookups that get the name from co->co_names.

Andrea
Re: Lookup caching
Gabriel Genellina wrote:

> At Saturday 9/12/2006 23:04, Andrea Griffini wrote:
>
>> I implemented that crazy idea and seems working... in its
>> current hacked state can still pass the test suite (exluding
>
> What crazy idea? And what is this supposed to do?

The idea is to avoid looking up constants several times in
dictionaries that didn't change (e.g. builtins). Reading a bit
about other optimization proposals I didn't find a similar one, so
I decided to invest some spare time in it. The idea is:

1) Add a "timestamp" to dictionaries, so that when a dictionary is
   changed the timestamp gets updated.

2) Store a cached lookup for constants; the cached lookup is
   stored as a timestamp value and a naked pointer to the result.
   The algorithm for the lookup of a given constant is:

    if (<cached timestamp> == d->timestamp) {
        x = <cached value>;
    } else {
        x = PyDict_GetItem(d, key);
        <cached timestamp> = d->timestamp;
        <cached value> = x;
    }

Using a naked pointer is safe because it will be used only if the
dictionary wasn't touched, hence the value is surely still alive.

The original place I thought of storing the cached lookup was the
bytecode; however after reading the Python sources I resorted
instead to a dedicated space inside the code object. The code for
LOAD_GLOBAL uses something like

    if (co->co_cachedtstamps[oparg] == d->timestamp) ...

i.e. I used an array indexed by the index of the co_name used for
lookups.

The patched code is currently working; however I found that while
the hit/miss ratios are impressive (as I expected), the speedup is
simply absent. Moreover there is no difference at all between
paying for the timestamp handling and NOT using the cached
lookups, or instead paying AND using the cached lookups (!).
Absurdly, Python on my PC runs faster if, in addition to the
cached lookup code, I also leave in place the hit/miss statistics
(a few static int increments and a static atexit-ed output
function). Also it made a lot of difference where the timestamp
was placed inside the dictobject structure...
In addition to the not-so-impressive results (my patched Python
now is just a bit *slower* than the original one :-D) there is
also another complication. LOAD_GLOBAL actually needs TWO lookups,
so I used two cached results (one for globals, one for builtins).
The ideal solution in this case however would IMO be to have two
timestamps and one cached value instead... (if neither dict was
touched since the last lookup then the result will be the cached
one). The complication is that a lot of lookups are done by
LOAD_ATTR instead, and thinking about the ideal solution for
new-style classes made my brain explode (mro, descriptors and
stuff...). It would be simple to do something for classic classes,
but would that be worth it (I mean... aren't those going to
disappear ?). Probably something can be done with caches for
LOAD_ATTR on modules (to speed up a bit things like math.sin or
mod1.mod2.mod3.func). Any suggestion is welcome...

Andrea
Re: Lookup caching
MRAB wrote:
...
> What are you using for the timestamp? Are you calling a function to
> read a timer?

For the timestamp I used a static variable; to update the
timestamp for a dictionary I used

    d->timestamp = ++global_dict_timestamp;

I'm using a single counter for all dicts, so that when doing the
check for cached value validity I'm checking at the same time that
the dict is the same dict that was used and that it wasn't touched
since the lookup result was stored.

Using this approach and tweaking the LOAD_GLOBAL double lookup to
use a "two timestamps, one value" cache I got an 8%-10% speedup
(depending on the compiler options when building Python) in a real
application of mine, and about 5% in a small test. I've yet to try
more complex applications (the biggest real application I have
however requires a lot of external modules, so testing the speed
gain with that will require a lot more work, or just downgrading
the Python version to 2.4).

Also I'm using a 32-bit int for the timestamp... I wonder if I
should listen to the paranoid in my head that is crying for 64
instead.

Andrea
Re: Inconsistency in dictionary behaviour: dict(dict) not calling __setitem__
Mitja Trampus wrote:
...
> At least, I know it surprised me when I first met this behavior. Or is
> my reasoning incorrect?

Why doesn't len() call iteritems() ? :-)

Kidding apart: for example it would be ok for __setitem__ to call
either an internal "insert_new_item" or "update_existing_item"
depending on whether the key is already present in the dictionary.
In this case I suppose you agree it would make a lot of sense to
go directly for "insert_new_item" in the constructor from a dict,
instead of calling the public __setitem__... The key point is that
you're not authorized to assume that constructing a dictionary
from a dictionary will use __setitem__, unless this is explicitly
stated in the interface.

...
> What I find an even nastier surprise is that dict.update behaves this
> way as well:
...
> The docstring is, at best, misguiding on this particular point:
>
> >>> print d.update.__doc__
> D.update(E, **F) -> None. Update D from E and F: for k in E: D[k] = E[k]
> (if E has keys else: for (k, v) in E: D[k] = v) then: for k in F: D[k] =
> F[k]

I cannot understand that docstring at all. The explanation in the
manual however just talks about "updating", with no reference to
assignments. The manual for 2.3 instead was using a code example,
and I'd say that would qualify as binding the implementation to
actual calls to __setitem__. This kind of error (i.e.
over-specifying by providing actual code that implies specific
side effects) was also present in the C++ standard, and in at
least one case an implementation would have had to be very
inefficient to comply on the issue (fortunately that is not what
happened; the standard was "fixed" instead). If there is a bug in
this case, it is IMO a docstring bug.

Andrea
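A small demonstration of the behavior under discussion: in CPython, both constructing a dict subclass from another mapping and dict.update bypass an overridden __setitem__ (counting the calls instead of printing):

```python
class CountingDict(dict):
    """Counts how many times __setitem__ is actually invoked."""
    def __init__(self, *args, **kwargs):
        self.sets = 0
        dict.__init__(self, *args, **kwargs)   # does NOT call __setitem__

    def __setitem__(self, key, value):
        self.sets += 1
        dict.__setitem__(self, key, value)

d = CountingDict()
d['a'] = 1            # goes through __setitem__
d.update({'b': 2})    # does NOT go through __setitem__
print(d.sets)         # 1
print(sorted(d))      # ['a', 'b']
```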
Re: Pythonic style involves lots of lightweight classes (for me)
metaperl wrote:

> The above program started out as a list of dictionaries, but I
> like the current approach much better.

There is even a common idiom for this...

    class Record(object):
        def __init__(self, **kwargs):
            self.__dict__.update(kwargs)

This way you can use

    user = Record(name="Andrea Griffini",
                  email="[EMAIL PROTECTED]")

and then access the fields using the user.name syntax

HTH
Andrea
Re: Why less emphasis on private data?
Paul Rubin wrote:

> Yes I've had plenty of
> pointer related bugs in C programs that don't happen in GC'd
> languages, so GC in that sense saves my ass all the time.

My experience is different: I never suffered a lot from leaking or
dangling pointers in C++ programs; and on the opposite side I
didn't expect that fighting object leaks in complex Python
applications would be that difficult (I've heard of Zope
applications that just gave up and resorted to the "reboot every
now and then" solution). With a GC, if you just don't plan
ownership and disposal carefully and everything works as expected,
then you're saving some thinking and code; but if something goes
wrong then you're totally busted. The GC "leaky abstraction"
requires you to be lucky for it to work well, but unfortunately
IMO as code complexity increases one is never lucky enough.

Andrea
Re: Why less emphasis on private data?
Bruno Desthuilliers wrote:

>> ... and on
>> the opposite I didn't expect that fighting with object
>> leaking in complex python applications was that difficult
>> (I've heard of zope applications that just gave up and
>> resorted to the "reboot every now and then" solution).
>
> Zope is a special case here, since it relies on an object database...

Just to clarify my post... I found out by being punched in the
nose myself what it means to have a complex Python application
that suffers from object leaks; it's not something I only read
about in Zope programs. But why would Zope applications be a
special case ?

Andrea
Re: Why less emphasis on private data?
Steven D'Aprano wrote:

> That is total and utter nonsense and displays the most appalling
> misunderstanding of probability, not to mention a shocking lack of common
> sense.

While I agree that the programming job itself is not a program,
and hence "consider any possibility" simply doesn't make any
sense, I can find a bit of truth in the general idea that *in
programs* it is dangerous to be deceived by probability. When
talking about correctness (which should be the main concern), for
a programmer "almost never" means "yes" and "almost always" means
"no" (probability of course kicks in, for example, with
efficiency).

Like I said, however, this reasoning doesn't work well when
applied to the programming process itself (which is not a
program... as programmers are not CPUs, no matter what bigots of
software engineering approaches are hoping for). Private variables
are about the programming process, not the program itself; and in
my experience the added value of the C++ private machinery is very
low (and the added cost not invisible). When working in C++ I much
prefer using all-public abstract interfaces and module-level
all-public concrete class definitions (the so-called "compiler
firewall" idiom).

Another thing on the same "line of thought" as private members
(things that should "help programmers") but for which I never ever
saw *anything but costs* is the broken idea of "const correctness"
in C++. Unfortunately that is not something that can be avoided
completely in C++, as it is rooted in the core of the language.

Andrea
Re: Multiple assignment and the expression on the right side
While I think that the paragraph is correct, there is still IMO
indeed a (low) risk of such a misunderstanding. The problem is
that "the statement executes" can IMO easily be understood as "the
statements execute" (especially if your background includes only
languages with no multiple assignment), and the word "single" is
also frequently used in phrases like "every single time", where it
can indeed denote a context of plurality. Adding "just" or "only"
would IMO be a great improvement, by focusing the attention on the
key point; it would also IMO be better to use "rightmost" instead
of "right-hand" (in this case even saving a char ;-) )

Just two foreign cents
Re: Augmented assignment
I think it heavily depends on what "x" is. If x is bound to a
mutable object, x=x+1 and x+=1 can not only have different speed
but can indeed do two very unrelated things (the former probably
binding x to a new object, the latter probably modifying the same
object). For example consider what happens with lists and [1]
instead of 1...

>>> s = []
>>> t = s
>>> t = t + [1]
>>> t
[1]
>>> s
[]
>>> s2 = []
>>> t2 = s2
>>> t2 += [1]
>>> t2
[1]
>>> s2
[1]
>>>

Also, if x is not a single name but a more convoluted expression,
with += that expression is evaluated once, and even in this case
there can be differences, not only in speed.
Re: How can I find the remainder when dividing 2 integers
Writing a while loop with ++x to increment the index was the first
mistake I made with Python. "++x" unfortunately is valid; it's not
a single operator but a double "unary plus".

Andrea
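Seen concretely: the expression is accepted but does nothing to x, which is why the mistake goes unnoticed.

```python
x = 3
print(++x)   # 3: parsed as +(+x), two unary plus operators
print(--x)   # 3: parsed as -(-x)
print(x)     # 3: x itself was never changed
```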
Re: Is Python a Zen language?
I think that the classification has some meaning, even if of
course any language has different shades of both sides. I'd say
that with Python it's difficult to choose one of the two
categories, because it's good both as a practical language and as
a mind-opener language. IMO another language that would be hard to
classify is COBOL ... but for other reasons :-)

Andrea
Re: empty lists vs empty generators
On 2 May 2005 21:49:33 -0700, "Michele Simionato" <[EMAIL PROTECTED]> wrote:

>Starting from Python 2.4 we have tee in the itertools
>module, so you can define the following:
>
>from itertools import tee
>
>def is_empty(it):
>    it_copy = tee(it)[1]
>    try:
>        it_copy.next()
>    except StopIteration:
>        return True
>    else:
>        return False
>
>It works with generic iterables too.

Are you sure this is going to do the right thing ? It seems to me
it would drop the first element of "it"... (the yielded element
entered the tee twins, but already got out of "it"). I would say
that unless you use the other tee twin (instead of the original
iterator) after calling is_empty, that code won't work... Am I
correct, or does "tee" instead use black magic to just peek at the
yielded value without starting a continuation ?

Andrea
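Indeed tee does not peek: the probed element is pulled out of the underlying iterator and is only preserved in the tee twins' shared buffer. A variant that returns a replacement iterator along with the answer avoids the dropped element (a sketch, using the modern next() builtin):

```python
from itertools import tee

def is_empty(iterable):
    """Return (empty, replacement).  The caller must continue with
    'replacement': the original iterator has been advanced by the probe."""
    keep, probe = tee(iterable)
    try:
        next(probe)        # consumes from the underlying iterator,
    except StopIteration:  # but the element stays buffered for 'keep'
        return True, keep
    return False, keep

it = iter([1, 2, 3])
empty, it = is_empty(it)   # rebind to the returned twin
print(empty)               # False
print(list(it))            # [1, 2, 3] - nothing was lost
```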
Re: What are OOP's Jargons and Complexities?
On Wed, 01 Jun 2005 16:07:58 +0200, Matthias Buelow <[EMAIL PROTECTED]> wrote:

>With a few relaxations and extensions, you can get a surprisingly useful
>language out of the rigid Pascal, as evidenced by Turbo Pascal, one of
>the most popular (and practical) programming languages in the late 80ies
>/ start of the 90ies.

It was not a language. It was a product in the hands of a single
company. The difference is that a product can die at the snap of a
marketroid's fingers, no matter how nice or how widespread it is.

Andrea
Re: What are OOP's Jargons and Complexities?
On Wed, 01 Jun 2005 23:25:00 +0200, Matthias Buelow <[EMAIL PROTECTED]> wrote:

>Of course it is a language, just not a standardized one (if you include
>Borland's extensions that make it practical).

The history of "runtime error 200" and its handling by Borland is
a clear example of what I mean by a product. You are of course
free to call even Microsoft Access a language (and make long term
investments in it) if you want.

Andrea
Re: What are OOP's Jargons and Complexities?
On Sun, 05 Jun 2005 16:30:18 +0200, Matthias Buelow <[EMAIL PROTECTED]> wrote:

>Quite embarrassing, but it's a runtime bug and got nothing to do with
>the language per se. And it certainly manifests itself after the
>hey-days of Turbo Pascal (when Borland seems to have lost interest in
>maintaining it.)

The point is not the bug, of course, but how Borland handled it.
It appeared when the user community of Borland Pascal was well
alive and kicking, but Borland didn't invest even 5 seconds in the
issue. The users had to fix the library themselves (possible
because at that time with Borland Pascal you were getting the
whole source code of the library; but note that it was a 100%
genuine bug due to misprogramming, and fixing it even on a dead
product would have been a nice move from Borland). The user
community went even further: since so many executables had been
written with Borland Pascal, a special tool for binary patching
executables was built (actually a few of them; being unofficial,
it wasn't that simple to get to know that such a tool existed, so
different people independently resorted to the same solution).

Andrea
Re: time consuming loops over lists
On Tue, 07 Jun 2005 18:13:01 +0200, "Diez B. Roggisch" <[EMAIL PROTECTED]> wrote: >Another optimization im too lazy now would be to do sort of a "tree >search" of data[i] in rngs - as the ranges are ordered, you could find >the proper one in log_2(len(rngs)) instead of len(rngs)/2. I don't see a "break" so why the "/2" ? Also IIUC the ranges are more than just ordered... they're all of equal width and computed by

    for i in xrange(no_of_bins+1):
        rngs[i] = dmin + (rng*i)

so my guess is that instead of searching with

    for j in xrange(len(rngs)-1):
        if rngs[j] <= data[i] < rngs[j+1]:
            ...

one could just do

    j = int((data[i] - dmin)/rng)

With this change the code gets roughly twice as fast. Andrea -- http://mail.python.org/mailman/listinfo/python-list
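The replacement of the linear scan with a direct index computation can be sketched end to end; the variable names follow the thread, but the sample data below is made up for illustration:

```python
# Equal-width histogram bins: the linear scan over the bin edges can
# be replaced by a direct index computation, as the post suggests.
no_of_bins = 10
data = [3.7, 12.0, 42.0, 55.5, 99.9]          # made-up sample values
dmin, dmax = min(data), max(data)
rng = (dmax - dmin) / no_of_bins              # width of one bin

# the thread's equally spaced edges
rngs = [dmin + rng * i for i in range(no_of_bins + 1)]

def bin_scan(x):
    # original approach: linear search through the edges
    for j in range(len(rngs) - 1):
        if rngs[j] <= x < rngs[j + 1]:
            return j
    return no_of_bins - 1                     # x == dmax: last bin

def bin_direct(x):
    # direct computation: the edges are equally spaced, so the bin
    # index is one subtraction, one division and a truncation
    return min(int((x - dmin) / rng), no_of_bins - 1)

assert all(bin_scan(x) == bin_direct(x) for x in data)
```

The `min(..., no_of_bins - 1)` clamp handles the one edge case where `x == dmax` would otherwise index one past the last bin.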
Re: programmnig advise needed
On 7 Jun 2005 12:14:45 -0700, [EMAIL PROTECTED] wrote: >I am writing a Python program that needs to read XML files and contruct >a tree object from the XML file (using wxTree). Supposing your XML file has a single top-level node (so that it's a legal XML file) then the following code should be able to read it...

#
from elementtree import ElementTree as ET

class Node:
    def __init__(self, xmlnode):
        self.xmlnode = xmlnode
        self.children = []

def loadtree(f):
    root = ET.parse(f).getroot()
    nodes = {}
    for x in root:
        if x.tag.startswith("element"):
            nodes[x.tag] = Node(x)
    for x in root.findall("association"):
        name = x.find("Name").text
        parent = x.find("Parent").text
        nodes[parent].children.append(nodes.pop(name))
    assert len(nodes) == 1
    return nodes.popitem()[1]
##

The idea is to create a node with an empty list of logical children and then, for every association, remove the child node from the global pool (a dict indexed by node name) and place it in its parent. I assumed that the nodes are in random order, but that the associations are sorted bottom-up in the tree. If this is not the case then you should keep TWO dicts, removing the children from only one of them when you process an association, and looking up the parent in the *other* dict, which is not changed during the processing of associations. The dict from which you are removing the children will allow you to detect logical errors in the file: a node having two parents (you won't find that node in the dict the second time) or the absence of a single root (you won't end up with a single element in the dict after processing all associations). HTH Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: time consuming loops over lists
On Tue, 07 Jun 2005 23:38:29 +0200, "Diez B. Roggisch" <[EMAIL PROTECTED]> wrote: >> I don't see a "break" so why the "/2" ? also IIUC the > >That was the assumption of an equal distribution of the data. In >O-notationn this would be O(n) of course. It was a joke... the issue is that there was no break statement :-) i.e. your code kept searching even after finding the proper range! Actually that break (with 10 bins) is not terribly important because the cost of the comparison is small compared to the cost of append. The timings I got are:

    your code                   1.26 sec
    adding break                0.98 sec
    direct index computation    0.56 sec

10 bins are so few that with just low-level speedups (i.e. precomputing a list of ranges and str(j)) the code that does a linear scan requires just 0.60 seconds. Hand-optimizing the direct computation code, the execution time gets down to 0.3 seconds; the inner loop I used is:

    for i, x in enumerate(data):
        j = int((x - dmin)/rng)
        tkns[i] = tks + js[j]

with data = range(20, 123120). Andrea -- http://mail.python.org/mailman/listinfo/python-list
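The break-vs-no-break-vs-direct comparison can be reproduced in spirit with `timeit`; absolute numbers depend on the machine (the thread's figures won't match exactly), so only the equality of the three results is checked here:

```python
import timeit

# Three ways to assign 10,000 values to 10 equal-width bins:
# a linear scan without break, with break, and direct computation.
dmin, rng, nbins = 0.0, 10.0, 10
rngs = [dmin + rng * i for i in range(nbins + 1)]
data = [k * 0.01 for k in range(10000)]          # 0.0 .. 99.99

def scan_no_break():
    out = []
    for x in data:
        for j in range(nbins):
            if rngs[j] <= x < rngs[j + 1]:
                out.append(j)                    # keeps scanning!
    return out

def scan_with_break():
    out = []
    for x in data:
        for j in range(nbins):
            if rngs[j] <= x < rngs[j + 1]:
                out.append(j)
                break                            # stop at the match
    return out

def direct():
    return [int((x - dmin) / rng) for x in data]

# All three agree on every input value.
assert scan_no_break() == scan_with_break() == direct()

# Timing each with e.g. timeit.timeit(fn, number=10) shows the same
# ranking the post reports: no break > with break > direct.
t_direct = timeit.timeit(direct, number=10)
t_scan = timeit.timeit(scan_no_break, number=10)
```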
Re: Abstract and concrete syntax
On Thu, 09 Jun 2005 03:32:12 +0200, David Baelde <[EMAIL PROTECTED]> wrote: >I tried python, and do like it. Easy to learn and read This is a key point. How easy it is to *read* is considered more important than how easy it is to *write*. Re-read the absence of a ternary operator and the limitations of lambda in python in this light (about lambdas, note that you can define a named function in a local scope if you need it; those are full-blown functions and not just a single expression). HTH Andrea -- http://mail.python.org/mailman/listinfo/python-list
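The named-local-function alternative to lambda can be made concrete with a small sketch (the example itself is made up, not from the post):

```python
def make_scaler(factor):
    # A lambda is limited to a single expression; a named function
    # defined in the local scope is a full-blown function: it can use
    # statements, loops and local variables, and it still closes over
    # `factor` exactly as a lambda would.
    def scale(values):
        result = []
        for v in values:
            result.append(v * factor)
        return result
    return scale

double = make_scaler(2)
assert double([1, 2, 3]) == [2, 4, 6]
```

The named version also reads better at the call site, which is the point the post is making about read-ease over write-ease.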
Re: What is different with Python ?
On Sat, 11 Jun 2005 21:52:57 -0400, Peter Hansen <[EMAIL PROTECTED]> wrote: >I think new CS students have more than enough to learn with their >*first* language without having to discover the trials and tribulations >of memory management (or those other things that Python hides so well). I'm not sure that postponing learning what memory is, what a pointer is and other "bare metal" problems is a good idea. Those concepts are not "more complex" at all; they're just more *concrete* than the abstract concept of "variable". The human mind works best moving from the concrete to the abstract: we first learn counting, and only later we learn rings (or even set theory). Unless you think a programmer can live happily without understanding concrete issues, IMO it is best to learn concrete facts first, and only later abstractions. I think that for a programmer skipping the understanding of the implementation is just impossible: if you don't understand how a computer works you're going to write pretty silly programs. Note that I'm not saying that one should understand every possible implementation down to the bit (that's of course nonsense), but there should be no room for "magic" in a computer for a professional programmer. Also concrete->abstract shows a clear path; starting in the middle and looking both up (to higher abstractions) and down (to the implementation details) is IMO much more confusing. Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: What is different with Python ?
On Sun, 12 Jun 2005 20:22:28 -0400, Roy Smith <[EMAIL PROTECTED]> wrote: >How far down do you have to go? What makes bytes of memory, data busses, >and CPUs the right level of abstraction? They're things that IMO can be genuinely accepted as "obvious". Even "counting" is not the lowest level in mathematics... there is the philosophy-of-mathematics direction. From "counting" you can go "up" in the construction direction (rationals, reals, functions, continuity and the whole analysis area), building on the counting concept, or you can go "down", asking yourself what counting really means, what you mean by a "proof", what a "set" really is. However "counting" is naturally considered obvious by our minds, and you can build a whole life on it without the need to look at lower levels and without getting bitten too badly by that simplification. Also, lower than memory and data buses there is of course more stuff (in our universe it looks like there is *always* more stuff no matter where you look :-) ), but I would say that's more about electronics than computer science. >Why shouldn't first-year CS students study "how a computer works" at the >level of individual logic gates? After all, if you don't know how gates >work, things like address bus decoders, ALUs, register files, and the like >are all just magic (which you claim there is no room for). It's magic if I'm curious but you can't answer my questions. It's magic if I have to memorize because I'm not *allowed* to understand. It's not magic if I can (and naturally do) just ignore it because I can accept it. It's not magic if I don't have questions because it's "obvious" enough for me. >> Also concrete->abstract shows a clear path; starting >> in the middle and looking both up (to higher >> abstractions) and down (to the implementation >> details) is IMO much more confusing. 
> >At some point, you need to draw a line in the sand (so to speak) and say, >"I understand everything down to *here* and can do cool stuff with that >knowledge. Below that, I'm willing to take on faith". I suspect you would >agree that's true, even if we don't agree just where the line should be >drawn. You seem to feel that the level of abstraction exposed by a >language like C is the right level. I'm not convinced you need to go that >far down. I'm certainly not convinced you need to start there. I think that if you don't understand memory, addresses, allocation and deallocation, or (roughly) how a hard disk works and what the difference between hard disks and RAM is, then you're going to be a horrible programmer. There's no way you will remember which container operations are O(n), which O(1) and which O(log(n)) unless you roughly understand how the containers work. If those are magic formulas you'll just forget them, and you'll end up writing code that is thousands of times slower than necessary. If you don't understand *why* "C" needs malloc then you'll forget about deallocating objects. Andrea -- http://mail.python.org/mailman/listinfo/python-list
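The point about container costs can be made concrete without any C at all; a sketch in Python (timings vary by machine, but the gap is always enormous):

```python
import timeit

# Same question, very different cost: membership in a list is O(n)
# (scan element by element), membership in a set is O(1) on average
# (hash straight to a bucket).
n = 100_000
as_list = list(range(n))
as_set = set(as_list)

lookup = n - 1                      # worst case for the list scan
assert (lookup in as_list) == (lookup in as_set) == True

t_list = timeit.timeit(lambda: lookup in as_list, number=100)
t_set = timeit.timeit(lambda: lookup in as_set, number=100)

# On any normal machine the set lookup is orders of magnitude faster.
assert t_set < t_list
```

Someone who knows, even roughly, that a list is a contiguous array and a set is a hash table doesn't need to memorize this; it falls out of the mental model.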
Re: What is different with Python ?
On Sun, 12 Jun 2005 19:53:29 -0500, Mike Meyer <[EMAIL PROTECTED]> wrote: >Andrea Griffini <[EMAIL PROTECTED]> writes: >> On Sat, 11 Jun 2005 21:52:57 -0400, Peter Hansen <[EMAIL PROTECTED]> >> wrote: >> Also concrete->abstract shows a clear path; starting >> in the middle and looking both up (to higher >> abstractions) and down (to the implementation >> details) is IMO much more confusing. > >So you're arguing that a CS major should start by learning electronics >fundamentals, how gates work, and how to design hardware(*)? Because >that's what the concrete level *really* is. Start anywhere above that, >and you wind up needing to look both ways. Not really. Long ago I drew a line that starts at software. I think you can be a reasonable programmer even without the knowledge of how to design hardware. I do not think you can be a reasonable programmer if you never saw assembler. >Admittedly, at some level the details simply stop mattering. But where >that level is depends on what level you're working on. Writing Python, >I really don't need to understand the behavior of hardware >gates. Writing horizontal microcode, I'm totally f*cked if I don't >understand the behavior of hardware gates. But you had better understand how, more or less, your computer or language works, otherwise your code will be needlessly a thousand times slower and will require a thousand times more memory than necessary. Look at a recent thread where someone was asking why python was so slow (and the code contained stuff like "if x in range(low, high):" in an inner loop that was itself pointless). >In short, you're going to start in the middle. I've got "bad" news for you. You're always in the middle :-D. Apparently this is a constant in our universe. Even counting (i.e. 1, 2, 3, ...) is not the "start" of math (you can go to "lower" levels). Actually I think this is a "nice" property of our universe, but discussing it would bring the discussion a bit OT. 
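The `x in range(low, high)` inner-loop mistake can be sketched; in the Python 2 of this thread, `range` built a real list on every test, which `list(range(...))` mimics below (in modern Python 3 `range` membership is cheap for ints, so the list() is needed to reproduce the old cost):

```python
low, high = 10, 100_000

def in_range_slow(x):
    # what the quoted inner loop effectively did in Python 2:
    # allocate a list of (high - low) ints, then scan it linearly
    return x in list(range(low, high))

def in_range_fast(x):
    # the idiomatic test: two comparisons, no allocation at all
    return low <= x < high

# Same answers at a tiny fraction of the cost.
for x in (5, 10, 50_000, 99_999, 100_000):
    assert in_range_slow(x) == in_range_fast(x)
```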
>Is it really justified to confuse them all >by introducing what are really extraneous details early on? I simply say that you will not be able to avoid introducing them. If they're going to write software, those are not "details" that you'll be able to hide behind a nice and perfect virtual world (this is much less true of bus cycles... at least for many programmers). But if you need to introduce them, then IMO it is way better to do it *first*, because that is the way our brain works. You cannot build on loosely placed bricks. >You've stated your opinion. Personally, I agree with Abelson, Sussman >and Sussman, whose text "The Structure and Interpretation of Computer >Programs" was the standard text at one of the premiere engineering >schools in the world, and is widely regarded as a classic in the >field: they decided to start with the abstract, and deal with concrete >issues - like assignment(!) later. Sure. I know that many think that starting from higher levels is better. However no explanation is given of *why* this should work better, and I have not even seen objective studies about how this approach pays off. This is of course not a field that I've investigated a lot. What I know is that every single competent programmer I know (not many... just *EVERY SINGLE ONE*) started by firmly placing concrete concepts first, and then moved on to higher abstractions (for example structured programming, OOP, functional languages ...). Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: What is different with Python ?
On Sun, 12 Jun 2005 21:52:12 -0400, Peter Hansen <[EMAIL PROTECTED]> wrote: >I'm curious how you learned to program. An HP RPN calculator, later a TI-57. Later an Apple ][. With the Apple ][, after about one afternoon spent typing in a BASIC program from a magazine, I gave up on BASIC and started with 6502 assembler ("call -151" was always how I started my computer sessions). >What path worked for you, and do you think it was >a wrong approach, or the right one? I was fourteen, with no instructor, when home computers in my city could be counted on the fingers of one hand. Having an instructor, I suppose, would have made me go incredibly faster. Knowing the English language better at that time would also have made my life a lot easier. I think that anyway it was the right approach in terms of "path", though not the (minimal energy) approach in terms of method. Surely a lower-energy one in the long run compared to those that started with BASIC and never looked at lower levels. >In my case, I started with BASIC. Good old BASIC, with no memory >management to worry about, no pointers, no "concrete" details, just FOR >loops and variables and lots of PRINT statements. That's good as an appetizer. >A while (some months) later I stumbled across some assembly language and >-- typing it into the computer like a monkey, with no idea what I was >dealing with -- began learning about some of the more concrete aspects >of computers. That is IMO a very good starting point. Basically it was the same one I used. >This worked very well in my case, and I strongly doubt I would have >stayed interested in an approach that started with talk of memory >addressing, bits and bytes, registers and opcodes and such. I think that getting interested in *programming* is important... it's like building with LEGOs, but at a logical level. However that is just to get interest... and a few months with BASIC is IMO probably too much. 
But after you have a target (making computers do what you want) then you have to start placing solid bricks, and that IMO is assembler. Note that I think that any simple assembler is OK... even if you'll end up using a different processor when working in C, it will be roughly ok. But I see a difference between those that never (really) saw assembler and those that did. >I won't say that I'm certain about any of this, but I have a very strong >suspicion that the *best* first step in learning programming is a >program very much like the following, which I'm pretty sure was mine: > >10 FOR A=1 TO 10: PRINT"Peter is great!": END Just as a motivation. After that, *FORGETTING* it (the FOR, and the NEXT you missed) is IMO perfectly ok. >More importantly by far, *I made the computer do something*. Yes, I agree. But starting from BASIC and never looking lower is quite a different idea. Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: What is different with Python ?
On Mon, 13 Jun 2005 09:22:55 +0200, Andreas Kostyrka <[EMAIL PROTECTED]> wrote: >Yep. Probably. Without a basic understanding of hardware design, one cannot >many of todays artifacts: Like longer pipelines and what does this >mean to the relative performance of different solutions. I think that pipeline stalls, CPU/FPU parallel computations and cache access optimization are the lowest level I ever had to swim in (it was when I was working in the videogame industry, on software 3D rendering with early Pentiums). Something simpler but somewhat similar was writing to floppy disks on the Apple ][, where there was no timer at all in the computer excluding the CPU clock, and the code for writing was required to output a new nibble to the write latch exactly every 40 CPU cycles (on the Apple ][ the CPU was doing everything, including controlling the stepper motor for disk seeks). However I do not think that going this low (which is still IMO just a bit below assembler and still quite a bit higher than HW design) is very common for programmers. >Or how does one explain that a "stupid and slow" algorithm can be in >effect faster than a "clever and fast" algorithm, without explaining >how a cache works. And what kinds of caches there are. (I've seen >documented cases where a stupid search was faster because all hot data >fit into the L1 cache of the CPU, while more clever algorithms where >slower). Caching is indeed very important, and sometimes the difference is huge. I think anyway that it's probably something confined to a few cases (processing big quantities of data with simple algorithms, e.g. pixel processing). It's also a field where, if you care about the details, the specific architecture plays an important role, and anything you learned about say the Pentium III could be completely pointless on the Pentium 4. Except for general locality rules, I would say that everything else should be checked only if necessary and on a case-by-case basis. 
I'm way too timid an investor to throw in neurons on such volatile knowledge. >Or you get perfect abstract designs, that are horrible when >implemented. The current trend is that you don't even need to do a clear design. Just draw some bubbles and arrows on a whiteboard with a marker, throw in some buzzwords and presto! you have basically completed the new killer app. Real design and implementation are minutiae for bozos. Even the mighty python is incredibly less productive than PowerPoint ;-) >Yes. But for example to understand the memory behaviour of Python >understanding C + malloc + OS APIs involved is helpful. This is a key issue. If you have the basics firmly placed, most of what follows will be obvious. If someone tells you that inserting an element at the beginning of an array is O(n) in the number of elements then you think "uh... ok, sounds reasonable"; if they say that it's amortized O(1) instead then you say "wow..." and after some thinking "ok, I think I understand how it could be done", and in both cases you'll remember it. It's a clear *concrete* fact that I think just cannot be forgotten. If O(1) and O(n) and how dynamic arrays could possibly be implemented are just black magic and a few words in a text for you, then IMO you'll never be able to remember what they imply, and sooner or later you'll do something really, really stupid about it in your programs. Andrea -- http://mail.python.org/mailman/listinfo/python-list
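The "amortized O(1)" claim about dynamic arrays can actually be watched from inside Python; `sys.getsizeof` exposes CPython's over-allocation (the exact growth steps are a CPython implementation detail and vary between versions, so only the rough shape is checked):

```python
import sys

# Append 64 elements one at a time and record the list's allocated
# size after each append.  The size jumps in steps, not on every
# append: most appends reuse spare capacity, which is exactly what
# makes append amortized O(1).
lst = []
sizes = []
for _ in range(64):
    lst.append(None)
    sizes.append(sys.getsizeof(lst))

# Count how often a reallocation (a size jump) actually happened.
growths = sum(1 for a, b in zip(sizes, sizes[1:]) if b > a)

# Far fewer reallocations than appends.
assert 1 <= growths < len(sizes) // 2
```

Seeing the staircase once makes "amortized O(1)" a concrete fact rather than a magic formula, which is the post's point.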
Re: What is different with Python ?
On Mon, 13 Jun 2005 13:35:00 +0200, Peter Maas <[EMAIL PROTECTED]> wrote: >I think Peter is right. Proceeding top-down is the natural way of >learning. It depends on whether you want to build or investigate. For building, top-down is the wrong approach (basically because there's no top). Top-down is however great for *explaining* what you already built or know. >(first learn about plants, then proceed to cells, molecules, >atoms and elementary particles). This is investigating. Programming is more similar to building instead (with very few exceptions). CS is not like physics or chemistry or biology, where you're given a result (the world) and you're looking for the unknown laws. In programming *we* are building the world. This is a huge fundamental difference! >If you learn a computer language you have to know about variables, >of course. There are no user-defined variables in assembler. Registers of a CPU or of a programmable calculator are easier to understand because they're objectively simpler concepts. Even things like locality of scope will be appreciated and understood better once you have tried to live with just a global scope for a while. >The concepts of memory, data and addresses can easily be demonstrated >in high level languages including python e.g. by using a large string >as a memory model. Proceeding to bare metal will follow driven by >curiosity. Hehehe... a large python string is a nice idea for modelling memory. This shows clearly what I mean: without a firm understanding of the basics you can make pretty huge and stupid mistakes (hint: strings are immutable in python... ever wondered what that fancy word means?) Andrea -- http://mail.python.org/mailman/listinfo/python-list
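The immutability trap the post hints at can be shown directly, along with the mutable type that would actually work as a toy memory model (a sketch, not from the thread):

```python
# A str looks like a byte buffer, but "poking" it fails: it's immutable.
mem_str = "\x00" * 16
poked = True
try:
    mem_str[3] = "\xff"          # TypeError: str doesn't support item assignment
except TypeError:
    poked = False
assert poked is False

# A bytearray is the mutable buffer that does the job: peek and poke
# in the old home-computer sense.
mem = bytearray(16)              # 16 bytes, all zero
mem[3] = 0xFF                    # poke
assert mem[3] == 0xFF            # peek
assert mem[0] == 0               # the rest is untouched
```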
Re: What is different with Python ?
On Mon, 13 Jun 2005 01:54:53 -0500, Mike Meyer <[EMAIL PROTECTED]> wrote: >Andrea Griffini <[EMAIL PROTECTED]> writes: >>>In short, you're going to start in the middle. >> >> I've got "bad" news for you. You're always in the >> middle :-D. > >That's what I just said. Yeah. I should stop replying before breakfast. >I disagree. If you're going to make competent programmers of them, >they need to know the *cost* of those details, but not necessarily the >actual details themselves. It's enough to know that malloc may lead to >a context switch; you don't need to know how malloc actually works. Unless those words have a real meaning for you, you'll forget them... I've seen this a jillion times with C++. Unless you really understand how an std::vector is implemented, you'll end up doing stupid things like looping while erasing the first element. Actually I cannot blame someone for forgetting that insertion at the beginning is O(n) and at the end is amortized O(1) if s/he never understood how a vector is implemented and was told to just learn those two little facts. Those little facts are obvious and can easily be remembered only if you have a conceptual model where they fit. If they're just random notions then the very day after the C++ exam you'll forget everything. >That's the way *your* brain works. I'd not agree that mine works that >way. Then again, proving either statement is an interesting >proposition. Are you genuinely saying that abelian groups are easier to understand than relative integers? >The explanation has been stated a number of times: because you're >letting them worry about learning how to program, before they worry >about learning how to evaluate the cost of a particular >construct. Especially since the latter depends on implementation >details, which are liable to have to be relearned for every different >platform. You'll get programmers that do not understand how their programs work. 
This unavoidably will be a show stopper when their programs don't work (and it's when, not if...). >I don't normally ask how people learned to program, but I will observe >that most of the CS courses I've been involved with put aside concrete >issues - like memory management - until later in the course, when it >was taught as part of an OS internals course. The exception would be >those who were learning programming as part of an engineering (but not >software engineering) curriculum. The least readable code examples >almost uniformly came from the latter group. I suppose that over there anyone caught reading TAOCP is slammed in jail... Placing memory allocation in the "OS internals" course is very funny. Let's hope you're just joking. Andrea -- http://mail.python.org/mailman/listinfo/python-list
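The std::vector point maps straight onto a Python list; a sketch of the same "erase the first element in a loop" mistake and the structure designed to avoid it (only behavior is asserted here, the cost difference is the O(n^2)-vs-O(n) argument from the post):

```python
from collections import deque

items = list(range(1000))

def drain_front_list(v):
    # the anti-pattern: every `del v[0]` shifts all remaining
    # elements left, O(n) each time, O(n^2) for the whole loop
    v = list(v)                  # work on a copy
    out = []
    while v:
        out.append(v[0])
        del v[0]
    return out

def drain_front_deque(v):
    # deque is built for O(1) pops at both ends
    q = deque(v)
    out = []
    while q:
        out.append(q.popleft())
    return out

assert drain_front_list(items) == drain_front_deque(items) == items
```

Someone who has once pictured a list as a contiguous array doesn't need to memorize which version is slow; it follows from the model, which is exactly the conceptual-model argument above.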
Re: What is different with Python ?
On Mon, 13 Jun 2005 22:23:39 +0200, Bruno Desthuilliers <[EMAIL PROTECTED]> wrote: >Being familiar with >fondamental *programming* concepts like vars, branching, looping and >functions proved to be helpful when learning C, since I only had then to >focus on pointers and memory management. If you're a good programmer (no idea, I don't know you and you avoided the issue) then I think you wasted a lot of energy and neurons learning that way. Even high-level scripting languages are quite far from a perfect virtualization, and either the code you wrote in them was terrible *OR* you were able to memorize an impressive quantity of black magic details (or you were just incredibly lucky ;-) ). Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: What is different with Python ?
On Mon, 13 Jun 2005 21:33:50 -0500, Mike Meyer <[EMAIL PROTECTED]> wrote: >But this same logic applies to why you want to teach abstract things >before concrete things. Since you like concrete examples, let's look >at a simple one: > > a = b + c > ... >In a very >few languages (BCPL being one), this means exactly one thing. But >until you know the underlying architecture, you still can't say how >many operations it is. That's exactly why

    mov eax, a
    add eax, b
    mov c, eax

or, even more concrete and like what I learned first,

    lda $300
    clc
    adc $301
    sta $302

is simpler to understand. Yes... for some time I even worked with the computer in machine language, without using a symbolic assembler; I unfortunately paid a price for it and now I have a few neurons burnt memorizing irrelevant details, like that the above code is (IIRC) AD 00 03 18 6D 01 03 8D 02 03... but I think it wasn't a complete waste of energy. Writing programs in assembler takes longer exactly because the language is *simpler*. Assembler has less implicit semantics because it's closer to the limited brain of our stupid silicon friend. Programming in assembler also really teaches you (deeply, to your soul) who the terrible "undefined behaviour" monster you'll meet when programming in C is. >Anything beyond the abstract statement "a gets the result >of adding b to c" is wasted on them. But saying for example that del v[0] just "removes the first element from v", you will end up with programs that do that in a stupid way; actually you can easily get unusable programs, and programmers that go around saying "python is slow" for that reason. >It's true that in some cases, it's easier to remember the >implementation details and work out the cost than to >remember the cost directly. I'm saying something different, i.e. that unless you understand (you have at least a rough picture, you don't really need all the details... 
but there must be no "magic" in it) how the standard C++ library is implemented, there is no way you can remember all the quite important implications for your program. It's just IMO impossible to memorize such a big quantity of unrelated quirks. Things like for example big O, but also undefined-behaviour risks, like having iterators invalidated when you add an element to a vector. >> Are you genuinely saying that abelian groups are >> easier to understand than relative integers ? > >Yup. Then again, my formal training is as a mathematician. I *like* >working in the problem space - with the abstact. I tend to design >top-down. The problem with designing top-down is that when building (for example applications) there is no top. I found this very simple and very powerful rationalization of my gut feeling about building complex systems in Meyer's "Object Oriented Software Construction", and it's one with which I completely agree. Top-down is a nice way for *explaining* what you already know, or for *RE*-writing, not for creating or for learning. IMO no one can really think that teaching abelian groups to kids first, and only later introducing them to relative numbers, is the correct path. The human brain simply doesn't work like that. You are saying this, but I think here it's more your love of discussion than really what you think. >The same is true of programmers who started with concrete details on a >different platform - unless they relearn those details for that >platform. No. This is another very important key point. Humans are quite smart at finding general rules from details; you don't have to burn your hand on every possible fire. Unfortunately sometimes there is the OPPOSITE problem... we infer general rules that do not apply from just too few observations. 
>The critical things a good programmer knows about those >concrete details is which ones are platform specific and which aren't, >and how to go about learning those details when they go to a new >platform. I never observed this problem. Did you really? It is so little of a problem that Knuth, for example, decided to use an assembly language for a processor that doesn't even exist (!). >If you confuse the issue by teaching the concrete details at the same >time as you're teaching programming, you get people who can't make >that distinction. Such people regularly show up with horrid Python >code because they were used to the details for C, or Java, or >whatever. Writing C code in python is indeed a problem that is present. But I think this is a minor price to pay. Also it's something that with time and experience will be fixed. >> I suppose that over there who is caught reading >> TAOCP is slammed in jail ... > >Those taught the concrete method would never have been exposed to >anything so abstract. Hmmm; TAOCP is The Art Of Computer Programming; what is the abstract part of it? The code presented is only MIX assembler. There are math prerequisites for a few parts, but I think no one could call it "abstract". Andrea -- http://mail.python.org/mailman/listinfo/python-list
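The "a gets the result of adding b to c" statement can be opened one level down without even leaving Python: the `dis` module shows the load/add/store operations the virtual machine executes, the same shape as the 6502 listing earlier in the thread (opcode names vary across CPython versions, so only their general shape is checked):

```python
import dis

def add(b, c):
    a = b + c
    return a

# Collect the operation names the CPython virtual machine runs.
ops = [ins.opname for ins in dis.get_instructions(add)]

# Loads, an addition, a store: load/add/store, one abstraction level
# above lda/adc/sta.
assert any("LOAD" in op for op in ops)
assert any("BINARY" in op or "ADD" in op for op in ops)   # BINARY_ADD / BINARY_OP
assert any("STORE" in op for op in ops)
```

Running `dis.dis(add)` prints the full listing, which makes a nice bridge between the abstract statement and the concrete operations without requiring a real assembler.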
Re: What is different with Python ?
On Mon, 13 Jun 2005 22:19:19 -0500, D H <[EMAIL PROTECTED]> wrote: >The best race driver doesn't necessarily know the most about their car's >engine. The best baseball pitcher isn't the one who should be teaching >a class in physics and aerodynamics. Yes, both can improve their >abilities by learning about the fundamentals of engines, aerodynamics, >etc., but they aren't "bad" at what they do if they do not know the >underlying principles operating. And when you have a problem writing your software, who is your mechanic? Whom do you call on the phone for help? Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: What is different with Python ?
On Tue, 14 Jun 2005 04:18:06 GMT, Andrew Dalke <[EMAIL PROTECTED]> wrote: >In programming you're often given a result ("an inventory >management system") and you're looking for a solution which >combines models of how people, computers, and the given domain work. Yes, at this higher level I agree. But not about how a computer works. One thing is applied math, another thing is math itself. When you're trying to find the solution to a problem it's often the fine art of compromise. >Science also has its purely observational domains. I agree that "applied CS" is one of them (I mean the art of helping people by using computers). But not the language, or explaining how computers work. I know that looking at the art of installing (or uninstalling!) Windows applications it seems that this is a completely irrational world where no rules indeed exist... but this is just an illusion; there are clear rules behind it and, believe it or not, we know *all* of them. Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: What is different with Python ?
On 14 Jun 2005 00:37:00 -0700, "Michele Simionato" <[EMAIL PROTECTED]> wrote: >It looks like you do not have a background in Physics research. >We *do* build the world! ;) > > Michele Simionato Wow... I always get surprises from physics. For example I thought that no one could drop the falsifiability requirement for a theory in an experimental science... I mean that I always agreed with the logical principle that unless you can tell me an experiment whose result could refute your theory, you're not saying anything really interesting. In other words, if there is no means by which the theory could be proved wrong by an experiment, then that theory is just babbling without any added content. A friend of mine however told me that this principle, which I thought was fundamental for talking about science, has indeed been sacrificed to get unification. I was told that in physics there are current theories for which there is no hypothetical experiment that could prove them wrong... (superstrings maybe? it was a name like that but I don't really remember). To me this looks like saying, e.g., that objects are moved around by invisible beings with long beards and tennis shoes, and that those spirits like to move them under apparent laws we know because they're having a lot of fun fooling us. However every now and then they move things a bit differently, just to watch our surprised faces while we try to see where the problem is in our measuring instruments. My temptation is to react to this dropping of such a logical requirement with a good laugh... what could be the result of a theory that rejects basic logic? On second thought, however, laughing at strange physics theories is not a good idea. Especially if you live in Hiroshima. Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: What is different with Python ?
On Tue, 14 Jun 2005 16:40:42 -0500, Mike Meyer <[EMAIL PROTECTED]> wrote: >Um, you didn't do the translation right. Whoops. So you know assembler; there's no other possibility, as it's such a complex language that unless someone already knows it (and for the specific architecture), what I wrote is pure line noise. You studied it after Python, I suppose. >> or, even more concrete and like what I learned first >> >> lda $300 >> clc >> adc $301 >> sta $302 >> >> is simpler to understand. > >No, it isn't - because you have to worry about more details. In assembler the details are simply more explicit. Unfortunately with computers you just cannot avoid the details, otherwise your programs will suck badly. When I write in a high level language, or even a very high level one, the details are understood even if I'm not writing them down. After a while a programmer can even push them to a subconscious level, so that e.g. just looking at O(N^2) code that could easily be rewritten as O(N) or O(1) makes a little bell ring in your brain saying "this is ugly". But you cannot know if something is O(1), O(N) or O(N^2) unless you know some details. If you don't like details then programming is just not the right field. In math, when I write down the derivative of a complex function it doesn't mean I don't know the definition of a derivative in terms of limits, or the conditions that must be met to be able to write it down. Yet I'm not writing them every time (sometimes I'll write them when they're not obvious, but when they're obvious it doesn't mean I'm not considering them or, worse, that I don't know or understand them). If you don't really understand what a derivative is, and when it makes sense and when it doesn't, your equations risk being good only for after-dinner pub jokes. >In particular, when programming in an HLL the compiler will take care of >allocating storage for the variables. In assembler, the programmer has >to deal with it. 
>These extra details make the code more complicated. Just more explicit. So explicit that it can become boring. After a while certain operations are so clearly understood that you can write a program to do them, to save some time (and to keep from getting bored). That's what HLLs are for... to save you from doing, not to save you from understanding. What the HLL is doing for you is something you don't spell out in detail, but that you'd better not accept without any criticism or even comprehension, because, and this is another very important point, *YOU* will be responsible for the final result, and the final result will depend a lot (almost totally, actually) on what you call details. To make another example, programming without the faintest idea of what's happening is not really different from using those "wizards" that generate a plethora of code you do not understand. When the wizard also takes *responsibility* for that code we may discuss it again, but until then, if you don't understand what the wizard does and you just accept its code, then you're not going to go very far. To restate it: if you don't understand why it works, there is just no possibility at all that you'll understand why it doesn't work. Think that "a = b + c" computes the sum of two real numbers and your program will fail (expecting, how foolish, that adding 0.1 ten times gives 1.0) and you'll spend some time wondering why the plane crashed... your code was "correct" after all. >For instance, whitesmith had a z80 assembler that let you write: > > a = b + c > >and it would generate the proper instructions via direct >translation. To use that I have to understand which registers will be affected and how ugly (i.e. inefficient) the code could get. Programming in assembler using such a high level feature without knowing those little details would be just suicidal. 
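The floating point trap mentioned above (expecting that adding 0.1 ten times gives 1.0) is easy to demonstrate; a minimal Python sketch:

```python
# Binary floating point cannot represent 0.1 exactly, so the
# "obvious" identity (ten additions of 0.1 equal 1.0) breaks down.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)          # 0.9999999999999999
print(total == 1.0)   # False

# When exact decimal arithmetic is what the problem actually
# requires, the standard library's decimal module is one way out:
from decimal import Decimal

exact = sum(Decimal("0.1") for _ in range(10))
print(exact == 1)     # True
```

The point is not that Python is broken: any language using binary floating point behaves this way; it's exactly the kind of "detail" that has to be understood rather than skimmed over.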
>Also, you can only claim that being closer to the chip is "simpler" >because you haven't dealt with a sufficiently complicated chip yet. True, one should start with something reasonable. I started with the 6502 and just loved its simplicity. Now at work we have boards based on the TMS320 DSP and, believe me, that assembler gives new meaning to the word ugly. >> But saying for example that >> >> del v[0] >> >> just "removes the first element from v" you will end up >> with programs that do that in a stupid way, actually you >> can easily get unusable programs, and programmers that >> go around saying "python is slow" for that reason. > >That's an implementation detail. It's true in Python, but isn't >necessarily true in other languages. Yeah. And you must know which is which. Otherwise you'll write programs that just do not give the expected result (because the user killed them first). >Yes, good programmers need to know that information - or, >as I said before, they need to know that they need to know >that information, and where to get it. I think that a *decent* programmer must understand if the
Re: What is different with Python ?
On Wed, 15 Jun 2005 10:27:19 +0100, James <[EMAIL PROTECTED]> wrote: >If you're thinking of things like superstrings, loop quantum gravity >and other "theories of everything" then your friend has gotten >confused somewhere. More likely I was the one who didn't understand. Reading what Wikipedia says about it, I now understand the situation better. From a philosophical point of view, though, it looks like there is indeed a problem. Is a merely theoretical - but infeasible - falsifying experiment sufficient? I think I triggered my friend by telling him that I found on the web a discussion about a certain book of theoretical physics that some considered just a joke made up by piling up physics buzzwords, and others a real theory. Finding that even this was a non-obvious question amused me, and he told me that advanced physics now has this kind of problem: current theories are so complex and so impossible to check that it's even questionable whether they're breaking logic and the scientific method, becoming instead a matter of faith and opinion. Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: What is different with Python ?
On Tue, 14 Jun 2005 12:49:27 +0200, Peter Maas <[EMAIL PROTECTED]> wrote: > > Depends if you wanna build or investigate. > >Learning is investigating. Yeah, after thinking about this phrase I have to agree. Sometimes learning is investigating, sometimes it's building. Since I discovered programming I've spent most of the time just building, but I started on quite concrete ground (assembler). Sometimes, for example when I recently dedicated some time to functional languages, it's more investigating. Functional languages are not something I've used extensively, I've no real experience with them... and at least at the application level, for me it's pure magic. I mean that I can understand how the language itself is implemented; what is amazing for me is how you can build a program for a computer - an incredibly complex state-based machine - with a language where state is not your guide but your enemy. >Don't nail me down on that stupid string, I know it's immutable but >didn't think about it when answering your post. Take replacement instead. Forgive me, I couldn't resist :-)... it was such a juicy hit. But this very fact shows that when programming you cannot just skim over the details. You can only avoid using the conscious mind to check the details if you are so confident with them that you can leave important checks (like the actual practicality of a solution) to your subconscious mind. That strings in Python are immutable is surely just a detail, and it's implementation specific, but that doesn't mean it's something you can just ignore. If you use Python this is a *fundamental* property. That deleting the first element of a list in Python is a slow operation is also a detail, and very implementation specific, but ignore it and your programs will be just horrible. Often when I think about a problem I find myself saying "oh... and for that I know I'll find a good solution" without actually working one out. I *know* I can solve that problem decently because I've been around there. 
I don't need to check, because my experience tells me that getting in that direction is not going to be a problem, so I can first check where I want to go from there. Without this experience, and without conscious or subconscious attention to details, you'll find yourself saying "oh... and we could go over there" while pointing at the sun, and then blaming your "technicians" because there are a few little "details" you didn't consider. Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: What is different with Python ?
On Thu, 16 Jun 2005 10:30:04 -0400, "Jeffrey Maitland" <[EMAIL PROTECTED]> wrote: >Also I think the fact that you think your were diteriating just goes to show >how dedicated you are to detail, and making sure you give the right advice >or ask the right question. [totally-OT] Not really, unfortunately. I found not long ago that I had used the very same word eight times in two consecutive sentences, plus similar words and words with similar endings. Re-reading that phrase the day after, it seemed like something had gotten stuck in my brain while I was writing it. Sure, the idea was more or less there, and IMO clear enough to be understood, but the form and the choice of words seemed incredibly poor. I was curious about this strange fact and I checked other text I was writing in that period. What really scared me is that this word repetition seemed a quite evident problem, *both* in Italian (my native language) and in English. Googling for old posts I found, however, that years ago my English was even worse than it is now... but this repetition was not present (not that evident, that is). Needless to say I spent some hours googling for information about this kind of word repetition problem :D Anyway, after some time things got better. I always thought of our intellect as something "superior" to this world made of fragile bones and stinking flesh. However I realized that there's probably no real magic in it... knowing there are pills to make you happy is sort of shocking from a philosophical point of view :-) If you see me walking around with an exoskeleton and a happy face it will mean I tried the chemical approach ;) (don't try to understand this phrase: either you know what I mean - and you like Dilbert strips - or it can't make sense). Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: What is different with Python ?
On Thu, 16 Jun 2005 07:36:18 -0400, Roy Smith <[EMAIL PROTECTED]> wrote: >Andrea Griffini <[EMAIL PROTECTED]> wrote: >> That strings in python are immutable it's surely >> just a detail, and it's implementation specific, >> but this doesn't means it's not something you can >> ignore for a while. > >I disagree. It is indeed something you can ignore for a while. The first >program you teach somebody to write is going to be: > >print "Hello, world" I mean that the fact that strings are immutable is one key aspect that cannot be worked around. Python is this way, and in this very fact it differs from e.g. C++. The ripple effect that this very little "detail" can have is not local. There are designs based on strings that just do not make sense in Python because of this fact. It's not something you can "fix" later... if you need mutability you must simply not use strings for that (and this can have a serious impact on the source code). Of course there are programs in which whether strings are immutable or not is irrelevant. But if you don't know the implications (e.g. how "is" works for strings in Python) and you still don't run into problems, it's just pure luck. The normal reaction I observed is that when they find a problem the result is a "Python is buggy" idea. >It would be a mistake to mention now that "Hello, world" is an immutable >object. That's just not important at this point in the learning process. >Eventually, you're going to have to introduce the concept of immutability. >That point may not be much beyond lesson 2 or so, but it doesn't have to be >lesson 1. I must agree *if* you're teaching Python first. I also *completely* agree if you're doing this just to whet the appetite. What I don't agree with is that starting from this level and going up is a good approach (with loosely placed bricks you'll just not be able to hold up the construction). 
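A small sketch of the two "details" in question: strings cannot be changed in place, and `is` checks identity rather than equality, with the sharing of immutable objects left entirely to the implementation:

```python
s = "hello"
try:
    s[0] = "H"            # strings are immutable: no in-place update
except TypeError as e:
    print("TypeError:", e)

# "Changing" a string really means building a new object:
t = "H" + s[1:]
print(t)                  # Hello

# Identity vs equality: equal strings may or may not be the same
# object, depending on interning (a CPython implementation detail).
a = "".join(["py", "thon"])
b = "python"
print(a == b)             # True
print(a is b)             # implementation-dependent; never rely on it
```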
To be able to build on it you'll have to memorize, without any rationalization, too many "details" that just do not make sense if you start from an ideal Python world. I must also note that as a fourteen-year-old I found the idea of programming a computer terribly interesting, even if the only things I could do were, for example, turning pixels (blocks?) on and off on a screen with a resolution of 40x50. Nowadays, unless you show them an antialiased texture-mapped 3D floating torus with their name and face on it in live video, they'll probably prefer exchanging stupid messages on their mobile phones instead. Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: What is different with Python ?
On 17 Jun 2005 01:25:29 -0700, "Michele Simionato" <[EMAIL PROTECTED]> wrote: >I don't think anything significant changed in the percentages. Then why start from print "Hello world", which can't be explained (or, better said, can't be *really* understood) without introducing a huge amount of magic, and not from a simple 8-bit CPU instead? What are the pluses of the start-from-high-level approach? If it's to avoid boredom, I don't agree, as assembler is anything but boring (when you start), or at least that's what *I* experienced. If it's about the time it will take to get a rotating 3D torus with live video on it, I know for sure that most of the programmers I know who started from high level will probably *never* reach that point. Surely if you start, say, from pull-down menus they'll be able to do pull-down menus. And IMO there are good chances they'll stay there for a lifetime. So is Python a good first programming language? IMO not at all if you wanna become a programmer; it hides too much, and that hidden stuff will bite back badly. Unless you know what is behind Python it will be almost impossible for you to remember and avoid all the traps. But if you need to know what is behind it, then it's better to learn that stuff first, because it's more concrete and simpler from a logical point of view; the constructions are complex but (because) the bricks are simpler. But it probably all boils down to what a programmer is. Is C++ a good first programming language? BWHAHAHAHAHAHAHAHA :D But apparently some guru I greatly respect thinks so (I'm not kidding, http://www.spellen.org/youcandoit/). Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: What is different with Python ?
On 17 Jun 2005 05:30:25 -0700, "Michele Simionato" <[EMAIL PROTECTED]> wrote: >I fail to see the relationship between your reply and my original >message. >I was complaining about the illusion that in the old time people were >more >interested in programming than now. Instead your reply is about low >level >languages being more suitable for beginners than high level languages. >I don't see the connection. I've been told in the past that one reason why it's good to start from high-level languages is that you can do more with less. In other words, I've been told that showing a nice image and maybe some music is more interesting than just making a LED blink. But if this is not the case (because just 1% are interested in those things no matter what), then why start from high level at all? I would say (indeed I would *hope*) that 1% is a low estimate, but probably I'm wrong, as others with more experience than me in teaching agree with you. Having more experience than me in teaching programming is a very easy shot... I never taught anyone except myself. About the 1%: I have two brothers, and one of them got hooked on programming before me... the other never got interested in computers and now he's just a basic (no macros) MS Office user. So in my case it was about 66%, and it all started with a programmable pocket RPN calculator... but there were no teachers involved; maybe this is a big difference. Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: What is different with Python ?
On Fri, 17 Jun 2005 08:40:47 -0400, Peter Hansen <[EMAIL PROTECTED]> wrote: >And the fact that he's teaching C++ instead of just C seems to go >against your own theories anyway... (though I realize you weren't >necessarily putting him forth as a support for your position). He strongly advocates starting from high level; comp.lang.c++.moderated is where I first posted on this issue. While I think that Python is not a good first language, C++ is probably the *worst* first language I can think of. C++ has so many traps, asymmetries and ugly parts (many for backward compatibility) that I would say one should try to put logic aside when learning it and just read the facts; in many aspects C++ is the way it is for historical reasons or inexplicable accidents: IMO there's simply no way someone can deduce those using logic, no matter how smart s/he is. C++ IMO must be learned by reading... thinking is pointless and in a few places even dangerous. Also, given the C/C++ philosophy that "the programmer always knows perfectly well what he is doing", experimenting is basically impossible; trial and error doesn't work because in C++ there are no errors; you have undefined behaviour daemons instead of runtime error angels. Add to the picture the quality of compile-time error messages from the primitive template technology, and even compile-time errors often look like riddles; if you forget a "const" you don't get "const expected"... you get two screens full of insults pointing you into the middle of a system header. Thinking of some of its bad parts, it's quite shocking that C++ is good for anything, but indeed it does work, and can be better than C. I think C++ can be a great tool (if you understand how it works, i.e. if it holds no magic at all for you) or your worst nightmare (if you do not understand how it works). I think that using C++ as the first language for someone learning programming is absurd. Francis thinks otherwise. 
Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: What is different with Python ?
On 17 Jun 2005 06:35:58 -0700, "Michele Simionato" <[EMAIL PROTECTED]> wrote: >Claudio Grondi: ... >>From my >>overall experience I infer, that it is not only possible >>but has sometimes even better chances for success, >>because one is not overloaded with the ballast of deep >>understanding which can not only be useful but also >>hinder from fast progress. > >FWIW, this is also my experience. Why hinder ? Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: What is different with Python ?
On 17 Jun 2005 21:10:37 -0700, "Michele Simionato" <[EMAIL PROTECTED]> wrote: >Andrea Griffini wrote: >> Why hinder ? > ... >To be able to content himself with a shallow knowledge >is a useful skill ;) Ah! ... I agree. Currently, for example, my knowledge of Zope is pretty close to 0.00%, but I'm using it and I'm happy with it. I did what I was asked to do, and it took way less time than hand-writing the CGI stuff it would otherwise have required. Every single time I have to touch those scripts I have to open the Zope book to get the correct method names. But I'd never dare to call myself a Zope developer... with it I'm just at the "hello world" stage, even if I accomplished what would otherwise require a lot of CGI expertise. But I remember once running into a problem: there was a file of about 80MB uploaded into the Zope database that I wasn't able to extract. I was simply helpless: the download always stopped around 40MB without any error message. I wandered on IRC for a day, finding only other people who were better than me (that's easy) but not good enough to help me. In the end someone gave me the right suggestion: I just installed a local Zope on my PC, copied the database file, extracted the file from the local instance and, don't ask me why, it worked. This very kind of problem solving (just trying stupid things without understanding until you get something that looks like it works) is what I hate *MOST*. That's one reason I hate Windows installation/maintenance; it's not an exact science, it's more like try and see what happens. With programming, that is something that IMO doesn't pay in the long run. I'm sure that someone who really knows Zope would have been able to get that file out in a minute, maybe by doing exactly what I did. But knowing why! And that is a big difference. Indeed, when talking about whether learning "C" can hinder or help learning "C++", I remember thinking that to learn "C++" *superficially*, learning "C" first is surely pointless or can even hinder. 
But to learn "C++" deeply (with all its quirks) I think that learning "C" first helps. So maybe this better explains my position: if you wanna become a "real" programmer, one that really has things under control, then learning a simple assembler first is the main path (ok, maybe even a language like C can be a reasonable start, but even in such a low-level language there are already many things that are easier to understand if you really started from bytes). However, to be able to do just useful stuff with a computer you don't need to start that low; you can start from Python (or, why not, even Dreamweaver). Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: What is different with Python ?
On 18 Jun 2005 00:26:04 -0700, "Michele Simionato" <[EMAIL PROTECTED]> wrote: >Your position reminds me of this: > >http://www.pbm.com/~lindahl/real.programmers.html Yeah, but as I said I didn't use a TRS-80, but an Apple ][. But the years were those ;-) Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: exceptions considered harmful
On Fri, 17 Jun 2005 20:00:39 -0400, Roy Smith <[EMAIL PROTECTED]> wrote: >This sounds like a very C++ view of the world. In Python, for example, >exceptions are much more light weight and perfectly routine. The problem with exceptions is coping with partially updated state. Suppose you call a complex computation routine (say a boolean operation between winged-edge data structures representing the NURBS boundaries of two solids) and you get back a "ZeroDivision" exception... how good is the data structure now? Either you have some way to easily *guarantee* coherence or you're doomed. Allowing the user to continue without being sure about what is in memory is not going to be that helpful; the result could be that instead of losing the last ten minutes you're going to waste the last month of the user's work (because the user will save the corrupted data and will notice problems only much later). If however you can restore the situation by a rollback or by loading a preimage, or you know that the operation works by only *reading* the two solids and creating a new one, *and* you know that it's not a problem to drop a broken solid data structure, then I think it's OK to swallow an exception. IMO either you know exactly what caused the exception and how the state was influenced, or you must have thick logical walls protecting you and preventing the problem from propagating (for example an RDBMS rollback facility). With Python it's sort of easy to get rollback for class instances if the code to protect doesn't play strange tricks. Even C++ is powerful enough to allow that. With C++ care must be taken to avoid memory leaks or other resource-related problems, while in Python this is rarely an issue; but the real issue with exceptions is partial state update, and in this Python is no different. If you really have to check every place for possible exceptions, then the exception machinery is not such a big win compared to return codes. 
IMO exceptions are nice when there are many raises and few try/except blocks in the code; but then either you have state protection, or swallowing an exception is taboo. Andrea -- http://mail.python.org/mailman/listinfo/python-list
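The "rollback for class instances" mentioned above can be sketched by snapshotting an instance's `__dict__` around the risky operation. This is a toy illustration: `Solid` and `with_rollback` are made-up names, and it assumes the object's state lives entirely in deep-copyable instance attributes (no `__slots__`, no state shared with other objects):

```python
import copy

class Solid:
    def __init__(self):
        self.faces = []
        self.edges = []

    def risky_boolean_op(self, other):
        # Partially update state, then fail: exactly the
        # "partial update" problem discussed above.
        self.faces.append("new-face")
        raise ZeroDivisionError("numerical trouble half-way through")

def with_rollback(obj, operation, *args):
    # Snapshot the instance state; restore it if the operation raises,
    # so the caller never sees a half-updated object.
    snapshot = copy.deepcopy(obj.__dict__)
    try:
        return operation(*args)
    except Exception:
        obj.__dict__.clear()
        obj.__dict__.update(snapshot)
        raise

a, b = Solid(), Solid()
try:
    with_rollback(a, a.risky_boolean_op, b)
except ZeroDivisionError:
    pass
print(a.faces)   # [] -- the partial update was rolled back
```

With this kind of wall in place, swallowing the re-raised exception at a higher level is safe, because the state is known to be coherent either way.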
Re: Loop until condition is true
On Sat, 18 Jun 2005 13:35:16 -, Grant Edwards <[EMAIL PROTECTED]> wrote: >AFAICT, the main use for do/while in C is when you want to >define a block of code with local variables as a macro: When my job was squeezing the most out of the CPU (videogame industry) I remember that the asm code generated by while (sz-- > 0) { /* do some stuff */ } was indeed worse than do { /* do some stuff */ } while (--sz); because of the initial "empty-loop" test (conditional jumps were bad, and forward conditional jumps were worse). So where at least one iteration was guaranteed, the do-while loop was the better choice. Also I've been told there were compilers where, using for or while loops, the generated code was L1: je L2 ... jmp L1 L2: while for the do-while loop it would have been L1: ... jne L1 i.e. the code was better *for each iteration* (one conditional jump instead of one conditional jump plus one unconditional jump). I think compilers have gotten better since then, even if I don't think they're yet smart enough to infer the "at least one iteration guaranteed" property often enough to avoid the initial test. Andrea -- http://mail.python.org/mailman/listinfo/python-list
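Python, incidentally, has no do/while at all; the usual spelling of a bottom-tested loop whose body must run at least once is `while True` with a `break`, which mirrors the `do { ... } while (--sz);` shape discussed above (the `drain` function here is hypothetical, just to show the pattern):

```python
def drain(sz):
    """Run the body at least once, testing the condition at the bottom."""
    iterations = 0
    while True:
        # ... do some stuff ...
        iterations += 1
        sz -= 1
        if not sz > 0:   # bottom test: the do/while condition
            break
    return iterations

print(drain(3))   # 3
print(drain(0))   # 1 -- the body still ran once, as in C's do/while
```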