On Sun, 13 Sep 2015 02:17 am, ru...@yahoo.com wrote:

> Having programmed in C in the past,
Well, there's your problem. Like BASIC before it, anyone who has learned C
is mentally crippled for life as a programmer *wink*

> the model of Python I eventually developed is very much (I think, haven't
> read the whole thread) like Random832's. I think of boxes (objects) with
> slots containing "pointers" that "point" to other boxes. Even today when
> dealing with complex Python data structures, I draw boxes and arrows to
> help me understand them and think of the arrows as "pointers".

If you're going to abuse terminology, why don't you call the boxes
"floats", since they "float around in memory", or some other story? After
all, the JVM and .NET runtimes can and will move the boxes around as
needed, which is sort of floating around. Then you can say that all Python
objects are floats.

Of course, what *you* mean by "float" is not what everyone else means by
"float". But you've already made it clear that you're happy to use your own
special meaning of "pointer" that disagrees with the standard computer
science meaning, so what's the difference?

> Frankly, I feel a little insulted by people who presume that having
> learned what a pointer is in C, that my brain is so rigid that I must
> necessarily think that pointer means exactly what pointer means in C
> forever after.

You C programmers, you always think it's about C *wink*

C is not the only, or even the first, language to have standardised on a
meaning for "pointer" in computer science. Pascal had pointers long before
C, and I'm sure Pascal wasn't the first either. "Pointer" is a standard
primitive data type across dozens of languages: it's an abstract data type
holding the memory address of a variable (either a named, statically
allocated variable or, more often, an anonymous, dynamically allocated
variable). As such, it requires that variables have a fixed address: if the
variable can move, the pointer will no longer point to it.
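For what it's worth, the boxes-and-arrows picture itself is easy to
demonstrate in a few lines of Python (the names here are made up purely for
illustration): assignment copies the arrow, never the box.

```python
# Names are arrows, objects are boxes; assignment copies the arrow.
a = [1, 2, 3]    # "a" points at a list object
b = a            # "b" points at the *same* box, not a copy of it
b.append(4)      # mutate through one arrow...
print(a)         # [1, 2, 3, 4] -- ...and the other arrow sees it
print(a is b)    # True: one box, two arrows
c = list(a)      # an explicit copy makes a new box
print(a == c, a is c)   # True False: equal contents, distinct objects
```

Whether you call those arrows "pointers" or "references" is exactly the
terminology dispute in question; the behaviour is the same either way.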
If you want to use "pointer" to refer to something else, the onus is on you
to make it clear that you're using it in a non-standard way.

Some day, most programmers will be using nothing but dynamic languages
which lack pointers-the-data-type, and the term will lose its baggage and
can be safely used as a generic English term for "a thing which points".
The tiny minority of systems programmers writing device drivers and kernel
code in Rust (C having been long since relegated to the wastebin of
history -- well, that's my fantasy and I'm sticking to it) will learn that,
outside of their own niche, "pointer" does not have the technical meaning
that they are used to, and everyone else will be as blissfully unaware of
said technical meaning as the average programmer today is of continuations
and futures. But this is not that day.

> FYI (the general you), I am capable of extracting the general principle
> and applying it to Python. Not just capable, but using the concept from C
> made understanding Python faster than pretending that somehow Python has
> some magical and unique way of structuring data that's brand new.

It's not magical or unique. It is shared by many other languages, such as
Ruby, Lua, Java and Javascript.

-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list