Mastering Python... Best Resources?
I know the Python syntax pretty well. I know a lot of the libraries and tools. When I see professional Python programmers' code, I am often blown away by it. I realized that even though I know the language, I know nothing about using it effectively. I would like to start using Python more in my professional career. Where can I find resources that will take my skills to the next level? I would prefer to watch a streaming video series, if possible. I've read quite a few books about Python. They cover a lot of topics, but none of them covered common conventions or hacks. I mean, I got good at C++ reading books by Scott Meyers, who concentrated on common idioms, things to avoid, the proper way to do things, etc. Right now, I am at that point where I know how to write just about anything in the language. However, I still have that hesitation that comes from not being sure what the right way is. -- http://mail.python.org/mailman/listinfo/python-list
Re: Mastering Python... Best Resources?
On Aug 26, 8:44 am, Chris Angelico wrote:
> On Fri, Aug 26, 2011 at 10:33 PM, Travis Parks wrote:
> > I know the Python syntax pretty well. I know a lot of the libraries
> > and tools. When I see professional Python programmers' code, I am
> > often blown away by it. I realized that even though I know the
> > language, I know nothing about using it effectively.
>
> I would say that there are three aspects to using Python effectively:
>
> 1) Understanding the syntax, which you've mastered.
> 2) Understanding the philosophy.
> 3) Knowing algorithms.
>
> The second is more or less what you're asking for, but the language-independent third may be more useful to you. This is correct Python syntax (#1), and decently Pythonic style (#2), but a hopelessly flawed algorithm (#3):
>
> def fib(x):
>     return fib(x-1) + fib(x-2) if x>2 else 1
>
> Or:
>
> def fib(x):
>     if x<3: return 1
>     return fib(x-1) + fib(x-2)
>
> Both versions are clean and easy to read, but neither would be what I'd call brilliant code.
>
> You can get books on algorithms from all sorts of places, and with a very few exceptions, everything you learn will apply to Python and also to every other language you use.
>
> ChrisA

Well, I think I am going more for #2. I know about things like data structures and algorithms... in your case, memoization. Here is a good example of what I am talking about. Someone took the time to write quicksort in a single line of code:

def qsortr(list): return [] if list==[] else qsortr([x for x in list[1:] if x < list[0]]) + [list[0]] + qsortr([x for x in list[1:] if x >= list[0]])

I would never even think to use list comprehensions and slicing like that. I would write this code the same way I'd write it in C++/C#. I'm aware that writing code like the above example is probably bad practice (and that the implementation here has some major inefficiencies), but it is the "mentality" that goes into it. I haven't gotten to the point where I can truly use the language features to their full advantage. I haven't seen enough "tricks" to be effective. I feel like there is so much of the language I am not utilizing because I'm still thinking in terms of a less powerful language. I was hoping to find a series that would familiarize me with how real Python programmers get things done. -- http://mail.python.org/mailman/listinfo/python-list
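The flaw Chris points out and the memoization Travis mentions combine into a simple fix; a minimal sketch with a plain dict cache, so it runs on both the 2.x and 3.x versions discussed in this thread (the cache name is illustrative, not from the thread):

    _fib_cache = {1: 1, 2: 1}

    def fib(n):
        # Each value is computed once and then looked up, so the call tree
        # collapses from exponential to linear.
        if n not in _fib_cache:
            _fib_cache[n] = fib(n - 1) + fib(n - 2)
        return _fib_cache[n]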
Re: Mastering Python... Best Resources?
On Aug 26, 9:28 am, Chris Angelico wrote: > On Fri, Aug 26, 2011 at 10:58 PM, Travis Parks wrote: > > I haven't gotten to the point where I can truly use the language > > features to my full advantage. I haven't seen enough "tricks" to be > > effective. I feel like there is so much of the language I am not > > utilizing because I'm still thinking in terms of a less powerful > > language. I was hoping to find a series that would familiarize me with > > how real Python programmers get things done. > > Ah! Then I recommend poking around with the standard library. No > guarantees that it's ALL good code, but it probably will be. In any > case, it sounds like you're well able to evaluate code in your own > head and recognize the good from the ugly. > > In the source distribution (I'm looking at the latest straight from > hg, but presumably it's the same everywhere), there's a whole lot of > .py files in ./Lib - there's sure to be some good examples in there > somewhere. > > ChrisA > > I've been thinking about going through the docs on the main website. The cool thing is that it has links to the actual lib files. I was checking out string.py yesterday. I was searching all over YouTube for good videos of some kind. Google has an intro course, but it didn't really do much for me. Microsoft has a series called 'Going Deep' that occasionally runs something super in-depth. The videos on C++ and the STL are really excellent. I was hoping someone had taken the time to create a similar series for Python. I can't help but remember my one professor in college, who really made pointers, bitwise arithmetic and low-level OS operations make sense. He explained to us a lot about how the STL worked and showed us tons of C++/STL hacks. I probably learned more in the 2 years I had classes with him than I have in all the time I've programmed. To get that type of insight into another language, like Python, would be the ultimate gift for someone like me. Personally, I am tired of working in languages that don't strongly support functional paradigms. -- http://mail.python.org/mailman/listinfo/python-list
Re: Mastering Python... Best Resources?
On Aug 26, 11:12 am, Roy Smith wrote:
> In article <2309ec4b-e9a3-4330-9983-1c621ac16...@ea4g2000vbb.googlegroups.com>,
> Travis Parks wrote:
> > I know the Python syntax pretty well. I know a lot of the libraries
> > and tools. When I see professional Python programmers' code, I am
> > often blown away by it. I realized that even though I know the
> > language, I know nothing about using it effectively.
>
> In a sense, I'm in the same boat as you. I've been using Python since before the 2.0 series, and I tend to think of the language in much the same way as I did back then. Which is to say I don't use the language, as it currently exists, as effectively as I might. Here's some things I suggest you look at:
>
> Iterators. This is such a powerful concept. When I started with the language, iterators largely meant the difference between range() and xrange(). Now we've got a whole ecosystem which has grown up around them (x + " comprehension" for x in {'list', 'dictionary', 'set'}), not to mention generators and generator expressions. And the itertools library.
>
> Decorators. Another powerful concept. We use these in our web servers for all sorts of cool things. Adding caching. Imposing prerequisites on route calls. I still don't think of using these immediately, but I do see the notational convenience they provide for many things.
>
> Context Managers. One of the (very few) things that I always found lacking in Python compared to C++ was deterministic object destruction. Context managers give you this. I'm still exploring all the neat things you can do with them.
>
> The full range of containers. I started with lists, tuples, and dictionaries. Now we've got sets, frozensets, named tuples, deques, Counters, defaultdicts (I love those), heaps, and I'm sure a few others I've missed. Lists and dicts are such well-designed containers, you can do almost anything with just those two, but all the other new ones often make things quicker, simpler, and more obvious.
>
> The profiler. Most people obsess about performance early on and don't realize that most of their guesses about what's fast and what's slow are probably wrong. Learn to use the profiler and understand what it's telling you.
>
> Unittest. Testing is, in general, a neglected practice in most software development shops, and that's a shame. Python has some really good capabilities to support testing which you should get familiar with. Unittest is just one of them. There's also doctest, nose, and a bunch of other contributed modules. Look at them all, learn at least one of them well, and use it for everything you write.
>
> > I've read quite a few books about Python. They cover a lot of topics,
> > but none of them covered common conventions or hacks. I mean, I got
> > good at C++ reading books by Scott Meyers, who concentrated on common
> > idioms, things to avoid, the proper way to do things, etc.
>
> Ugh. The problem with Meyers's books is that they are needed in the first place. C++ is such a horribly complicated language, you really can't use it without making a serious study of it. There's too many gotchas that you MUST know to avoid disaster with even the most basic programs.
>
> Python isn't that way. You can learn a small, basic subset of the language and get a lot done. You may not be doing things the most effective way, but you're also not going to be looking at memory corruption because you didn't understand the details of object lifetimes or how type promotion, function overloading, and implicit temporary object construction all interact.

Thanks for the input. I had been writing my Compass project (http://compass.codeplex.com) in Pythonese. I was planning on implementing a lot of the features of MS' LINQ in Python iterators, too. I am surprised that there aren't a ton of Python libraries for general purpose algorithms. "yield" is one of my favorite keywords. :-)

I will take a look at decorators especially. I see them being used for properties and other coolness. I started playing with unittest the other day (unittest.main(exit=False) took me a while to find). I will look at the containers, too. I have been trying to push tuple syntax support in C# for years now. Named tuples are so useful.

I agree that C++ is too complicated. Bjarne should have cared less about backward compatibility with C and fixed some of the issues with it. He should have also made some of the defaults more intuitive - lik
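To make two of Roy's suggestions concrete, here is a small, illustrative sketch of a caching decorator and a defaultdict; the names are mine, chosen only for the example:

    import collections
    import functools

    def cached(func):
        # Decorator that remembers results by argument tuple.
        results = {}
        @functools.wraps(func)
        def wrapper(*args):
            if args not in results:
                results[args] = func(*args)
            return results[args]
        return wrapper

    @cached
    def expensive(x):
        return x * x  # stands in for a slow computation

    # defaultdict: group words by first letter without checking for missing keys
    groups = collections.defaultdict(list)
    for word in ["apple", "avocado", "banana"]:
        groups[word[0]].append(word)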
Checking Signature of Function Parameter
I am trying to write an algorithms library in Python. Most of the functions will accept functions as parameters. For instance, there is a function called any:

def any(source, predicate):
    for item in source:
        if predicate(item):
            return True
    return False

There are some things I want to make sure of. 1) I want to make sure that source is iterable. 2) More importantly, I want to make sure that predicate is callable, accepting a single value and returning a bool. This is what I have so far:

if source is None:
    raise ValueError("")
if not isinstance(source, collections.Iterable):
    raise TypeError("")
if not callable(predicate):
    raise TypeError("")

The idea here is to check for issues up front. In some of the algorithms, I will be using iterators, so bad arguments might not result in a runtime error until long after the calls are made. For instance, I might implement a filter method like this:

def where(source, predicate):
    for item in source:
        if predicate(item):
            yield item

Here, an error will be delayed until the first item is pulled from the source. Of course, I realize that functions don't really have return types; the good thing is that virtually everything evaluates to a boolean, so I am more concerned with the number of parameters. Finally, can I use decorators to automatically perform these checks instead of hogging the top of all my methods? -- http://mail.python.org/mailman/listinfo/python-list
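A minimal sketch of the decorator idea the post ends with; the decorator name and error messages are illustrative, and collections.Iterable matches the era of the thread (it now lives in collections.abc):

    import collections
    import functools

    def checked(func):
        # Validates the (source, predicate) pair before the wrapped function
        # runs, even when that function is a lazy generator like where().
        @functools.wraps(func)
        def wrapper(source, predicate, *args, **kwargs):
            if source is None:
                raise ValueError("source cannot be None")
            if not isinstance(source, collections.Iterable):
                raise TypeError("source must be iterable")
            if not callable(predicate):
                raise TypeError("predicate must be callable")
            return func(source, predicate, *args, **kwargs)
        return wrapper

    @checked
    def where(source, predicate):
        for item in source:
            if predicate(item):
                yield item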
Re: Checking Signature of Function Parameter
On Aug 28, 5:31 pm, Chris Angelico wrote: > On Mon, Aug 29, 2011 at 7:20 AM, Travis Parks wrote: > > > if source is None: raise ValueError("") > > if not isinstanceof(source, collections.iterable): raise TypeError("") > > if not callable(predicate): raise TypeError("") > > Easier: Just ignore the possibilities of failure and carry on with > your code. If the source isn't iterable, you'll get an error raised by > the for loop. If the predicate's not callable, you'll get an error > raised when you try to call it. The only consideration you might need > to deal with is that the predicate's not callable, and only if you're > worried that consuming something from your source would be a problem > (which it won't be with the normal iterables - strings, lists, etc, > etc). Otherwise, just let the exceptions be raised! > > ChrisA I guess my concern is mostly with the delayed exceptions. It is hard to find the source of an error when it doesn't happen immediately. I am writing this library so all of the calls can be chained together (composed). If this nesting gets really deep, finding the source is hard to do, even with a good debugger. Maybe I should give up on it, like you said. I am still familiarizing myself with the paradigm. I want to make sure I am developing code that is consistent with the industry standards. -- http://mail.python.org/mailman/listinfo/python-list
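A tiny illustration of the delayed failure being discussed, using the where function from the earlier post (the bad argument here is arbitrary):

    def where(source, predicate):
        for item in source:
            if predicate(item):
                yield item

    gen = where(42, callable)   # no error yet; the generator body has not started
    next(gen)                   # TypeError: 'int' object is not iterable, raised only here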
Re: Checking Signature of Function Parameter
On Aug 29, 2:30 am, Nobody wrote:
> On Sun, 28 Aug 2011 14:20:11 -0700, Travis Parks wrote:
> > More importantly, I want to make sure that
> > predicate is callable, accepting a thing, returning a bool.
>
> The "callable" part is do-able, the rest isn't.
>
> The predicate may accept an arbitrary set of arguments via the "*args" and/or "**kwargs" syntax, and pass these on to some other function. Exactly *which* function may be the result of an arbitrarily complex expression. Or it may not even call another function, but just use the arbitrary set of arguments in an arbitrarily complex manner.
>
> IOW, determining in advance what will or won't work is actually impossible.

Thanks for everyone's input. I decided that I will put some basic checks up front, like "is it None", "is it Iterable" and "is it callable". Other than that, I am letting things slide. Asking for forgiveness is always easier anyway.

Just so everyone knows, I am defining these methods inside a class called IterableExtender:

class IterableExtender(collections.Iterable): ...

I wanted to allow for calls like this:

extend(range(0, 1000)).map(lambda x: x * x).where(lambda x: x % 2 == 0).first(lambda x: x % 7 == 0)

It allows me to compose method calls similarly to LINQ in C#. I think this looks better than:

first(where(map(range(0, 1000), lambda x: x * x), lambda x: x % 2 == 0), lambda x: x % 7 == 0)

Internally to the class, there are "private" static methods taking raw inputs and performing no checks. The public instance methods are responsible for checking input arguments and wrapping results. Eventually, I will start working on algorithms that work on MutableSequences, but for now I am just doing Iterables. This is turning out to be a great learning experience. -- http://mail.python.org/mailman/listinfo/python-list
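A rough sketch of the wrapper described above, using the method names from the post; the implementation details are mine, and collections.Iterable would be collections.abc.Iterable on current Python:

    import collections

    class IterableExtender(collections.Iterable):
        # Wraps an iterable so query methods can be chained.
        def __init__(self, iterable):
            self._iterable = iterable

        def __iter__(self):
            return iter(self._iterable)

        def map(self, mapper):
            return IterableExtender(mapper(x) for x in self._iterable)

        def where(self, predicate):
            return IterableExtender(x for x in self._iterable if predicate(x))

        def first(self, predicate):
            for x in self._iterable:
                if predicate(x):
                    return x
            raise ValueError("no matching item")

    def extend(iterable):
        return IterableExtender(iterable)

    # The chained call from the post:
    result = extend(range(0, 1000)).map(lambda x: x * x).where(lambda x: x % 2 == 0).first(lambda x: x % 7 == 0)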
Re: Checking Signature of Function Parameter
On Aug 29, 1:42 pm, Ian Kelly wrote:
> On Mon, Aug 29, 2011 at 10:45 AM, Travis Parks wrote:
> > I wanted to allow for calls like this:
> >
> > extend(range(0, 1000)).map(lambda x: x * x).where(lambda x: x % 2 == 0).first(lambda x: x % 7 == 0)
> >
> > It allows me to compose method calls similarly to LINQ in C#. I think this looks better than:
> >
> > first(where(map(range(0, 1000), lambda x: x * x), lambda x: x % 2 == 0), lambda x: x % 7 == 0)
>
> FWIW, I would be inclined to write that in Python like this:
>
> def first(iterable):
>     try:
>         return next(iter(iterable))
>     except StopIteration:
>         raise ValueError("iterable was empty")
>
> squares = (x * x for x in range(0, 1000))
> first(x for x in squares if x % 14 == 0)

Python's comprehensions make many of the methods I am writing unnecessary, which probably explains why no one's really bothered to write a library like this before. The only problem above is that you either get deeply nested calls like first(where(map(range(..., complex comprehensions, or code broken into steps. Even my approach has problems, such as the overhead of carrying an invisible wrapper around.

> It does a bit too much to comfortably be a one-liner no matter which way you write it, so I split it into two.
>
> Cheers,
> Ian

Yeah. I have already seen a lot of better ways of writing my code based solely on your example. I didn't know about iter as a built-in function; I have been calling __iter__ directly. I also need to think more about whether methods like "where" and "map" are going to be beneficial. The good thing is that someone will be able to use my wrapper in any context where an Iterable can be used. It will allow someone to switch between styles on the fly. I'm still not convinced that this library is going to be very "pythony". I wrote a post a few days ago about how I know the syntax and libraries fairly well, but I don't have the "philosophy". I haven't seen a lot of tricks and I am never sure what the "norm" is in Python. I am sure if an experienced Python programmer looked at my code, they'd immediately know I was missing a few things. -- http://mail.python.org/mailman/listinfo/python-list
Handling 2.7 and 3.0 Versions of Dict
I am writing a simple algorithms library that I want to work for both Python 2.7 and 3.x. I am writing some functions, like distinct, which work with dictionaries under the hood. The problem I ran into is that I am calling itervalues or values depending on which version of the language I am working in. Here is the code I wrote to overcome it:

import sys

def getDictValuesFoo():
    if sys.version_info < (3,):
        return dict.itervalues
    else:
        return dict.values

getValues = getDictValuesFoo()

def distinct(iterable, keySelector = (lambda x: x)):
    lookup = {}
    for item in iterable:
        key = keySelector(item)
        if key not in lookup:
            lookup[key] = item
    return getValues(lookup)

I was surprised to learn that getValues CANNOT be called as if it were a member of dict. I figured it was more efficient to determine what getValues was once, rather than every time it was needed. First, how can I make getValues "private" _and_ have it evaluated only once? Secondly, will the body of the distinct method be evaluated immediately? How can I delay building the dict until the first value is requested?

I noticed that hashing is a lot different in Python than it is in .NET languages. .NET supports custom "equality comparers" that can override a type's Equals and GetHashCode functions. This is nice when you can't change the class you are hashing. That is why I am using a key selector in my code here. Is there a better way of overriding the default hashing of a type without actually modifying its definition? I figured requesting a key was the easiest way. -- http://mail.python.org/mailman/listinfo/python-list
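On delaying the work until the first value is requested: one option, a sketch rather than anything from the thread, is to make distinct itself a generator, which also sidesteps the itervalues/values split entirely:

    def distinct(iterable, keySelector=lambda x: x):
        # Nothing runs until the caller starts iterating; keys are remembered
        # in a set so only the first item for each key is yielded.
        seen = set()
        for item in iterable:
            key = keySelector(item)
            if key not in seen:
                seen.add(key)
                yield item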
Closures and Partial Function Application
I was a little disappointed the other day when I realized that closures were read-only. I like to use closures quite a bit. Can someone explain why this limitation exists? Secondly, since I can cheat by wrapping the thing being closure-ified, how can I write a simple wrapper (a decorator) that has all the same members as the wrapped thing and forwards them to the underlying object?

I also like partial function application. What is the easiest way of achieving this in Python? Would it look something like this:

def foo(x, y):
    return x + y

xFoo = lambda y: foo(10, y)

-- http://mail.python.org/mailman/listinfo/python-list
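For the partial-application question, the standard library already provides functools.partial; a short illustration:

    import functools

    def foo(x, y):
        return x + y

    xFoo = functools.partial(foo, 10)  # binds x=10, leaving y free
    print(xFoo(5))                     # 15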
Re: Closures and Partial Function Application
On Aug 31, 1:18 pm, Chris Rebert wrote:
> On Wed, Aug 31, 2011 at 9:45 AM, Travis Parks wrote:
> > I was a little disappointed the other day when I realized that
> > closures were read-only. I like to use closures quite a bit.
>
> Assuming I'm intuiting your question correctly, then you're incorrect; they are "read/write". You just need a `nonlocal` declaration for the variables in question. See http://www.python.org/dev/peps/pep-3104/ and http://docs.python.org/release/3.1.3/reference/simple_stmts.html#nonl... for details.
>
> Cheers,
> Chris

Cool. So I just need to put "nonlocal" in front of the variable name. -- http://mail.python.org/mailman/listinfo/python-list
Re: Closures and Partial Function Application
On Aug 31, 1:51 pm, Travis Parks wrote:
> On Aug 31, 1:18 pm, Chris Rebert wrote:
> > On Wed, Aug 31, 2011 at 9:45 AM, Travis Parks wrote:
> > > I was a little disappointed the other day when I realized that
> > > closures were read-only. I like to use closures quite a bit.
> >
> > Assuming I'm intuiting your question correctly, then you're incorrect; they are "read/write". You just need a `nonlocal` declaration for the variables in question. See http://www.python.org/dev/peps/pep-3104/ and http://docs.python.org/release/3.1.3/reference/simple_stmts.html#nonl... for details.
> >
> > Cheers,
> > Chris
>
> Cool. So I just need to put "nonlocal" in front of the variable name.

Am I doing something wrong, here? nonlocal isn't registering. Which version did this get incorporated in? -- http://mail.python.org/mailman/listinfo/python-list
Re: Closures and Partial Function Application
On Aug 31, 2:18 pm, Ian Kelly wrote: > On Wed, Aug 31, 2011 at 12:02 PM, Travis Parks wrote: > > Am I doing something wrong, here? nonlocal isn't registering. Which > > version did this get incorporated? > > 3.0 Ah, okay. It would be really useful for unit testing. Unfortunately, I want to make the code I am writing compatible with 2.x and 3.x. I will just deal with it until 3.x takes over. Glad to know Guido sees the importance. -- http://mail.python.org/mailman/listinfo/python-list
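For reference, a minimal example of the nonlocal form discussed in this sub-thread (Python 3.0 and later only, as noted above):

    def make_counter():
        count = 0
        def increment():
            nonlocal count  # rebind the enclosing variable instead of creating a local
            count += 1
            return count
        return increment

    counter = make_counter()
    counter()  # 1
    counter()  # 2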
Re: Closures and Partial Function Application
On Aug 31, 2:03 pm, "bruno.desthuilli...@gmail.com" wrote:
> On Aug 31, 18:45, Travis Parks wrote:
> > I was a little disappointed the other day when I realized that
> > closures were read-only. I like to use closures quite a bit.
>
> They are not _strictly_ read only, but Python being first and foremost an OO language, it's usually way simpler to use OO instead of closures when you start needing such features.

I like to use OO for large-scale architecture and leave functional paradigms for implementation details. Writing an entire class for wrapping an int seems excessive, especially if that code is limited to a small scope. I agree, though, that there is a time and a place for everything. -- http://mail.python.org/mailman/listinfo/python-list
Re: Handling 2.7 and 3.0 Versions of Dict
On Aug 31, 7:37 pm, Gregory Ewing wrote:
> Ian Kelly wrote:
> > if sys.version_info < (3,):
> >     getDictValues = dict.itervalues
> > else:
> >     getDictValues = dict.values
> >
> > (which is basically what the OP was doing in the first place).
>
> And which he seemed to think didn't work for some reason, but it seems fine as far as I can tell:
>
> Python 2.7 (r27:82500, Oct 15 2010, 21:14:33)
> [GCC 4.2.1 (Apple Inc. build 5664)] on darwin
> Type "help", "copyright", "credits" or "license" for more information.
> >>> gv = dict.itervalues
> >>> d = {1:'a', 2:'b'}
> >>> gv(d)
>
> % python3.1
> Python 3.1.2 (r312:79147, Mar 2 2011, 17:43:12)
> [GCC 4.2.1 (Apple Inc. build 5664)] on darwin
> Type "help", "copyright", "credits" or "license" for more information.
> >>> gv = dict.values
> >>> d = {1:'a', 2:'b'}
> >>> gv(d)
> dict_values(['a', 'b'])
>
> --
> Greg

My problem was that I didn't understand the scoping rules. It is still strange to me that the getValues variable is still in scope outside the if/else branches. -- http://mail.python.org/mailman/listinfo/python-list
Algorithms Library - Asking for Pointers
Hello: I am working on an algorithms library. It provides LINQ-like functionality for Python iterators. Eventually, I plan on having features that work against sequences and mappings. I have the code up at http://code.google.com/p/py-compass. This is my first project in Python, so I'd like some feedback. I want to know if I am following conventions (overall style and quality of code). Thanks, Travis Parks -- http://mail.python.org/mailman/listinfo/python-list
Re: Handling 2.7 and 3.0 Versions of Dict
On Sep 2, 12:36 pm, "Gabriel Genellina" wrote:
> On Wed, 31 Aug 2011 22:28:09 -0300, Travis Parks wrote:
>
> > On Aug 31, 7:37 pm, Gregory Ewing wrote:
> >> Ian Kelly wrote:
> >> > if sys.version_info < (3,):
> >> >     getDictValues = dict.itervalues
> >> > else:
> >> >     getDictValues = dict.values
> >>
> >> > (which is basically what the OP was doing in the first place).
> >
> > My problem was that I didn't understand the scoping rules. It is still
> > strange to me that the getValues variable is still in scope outside
> > the if/else branches.
>
> Those if/else are at global scope. An 'if' statement does not introduce a new scope; so getDictValues, despite being "indented", is defined at global scope, and may be used anywhere in the module.
>
> --
> Gabriel Genellina

Does that mean the rules would be different inside a function? -- http://mail.python.org/mailman/listinfo/python-list
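An illustration of Gabriel's point and of the closing question (not from the thread): an if/else never introduces a scope, so at module level the bound name is a module global, while inside a function the same pattern binds a name local to that function:

    import sys

    # Module scope: whichever branch runs binds a module-level name,
    # visible anywhere in the module.
    if sys.version_info < (3,):
        getDictValues = dict.itervalues
    else:
        getDictValues = dict.values

    def pick():
        # Function scope: still no new scope from the if/else, but the name
        # is local to pick() and invisible outside it.
        if sys.version_info < (3,):
            values = dict.itervalues
        else:
            values = dict.values
        return values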
Re: Algorithms Library - Asking for Pointers
On Sep 2, 4:09 pm, Ian Kelly wrote:
> On Fri, Sep 2, 2011 at 10:59 AM, Travis Parks wrote:
> > Hello:
> >
> > I am working on an algorithms library. It provides LINQ-like
> > functionality for Python iterators. Eventually, I plan on having
> > features that work against sequences and mappings.
> >
> > I have the code up at http://code.google.com/p/py-compass.
> >
> > This is my first project in Python, so I'd like some feedback. I want
> > to know if I am following conventions (overall style and quality of code).
>
> Sure, here are my comments.
>
> In the "forever" and "__forever" functions, your use of the term "generator" is confusing. "__forever" is a generator function, because it has a yield statement. Its argument, called "generator", appears to be a callable, not a generator or even necessarily a generator function. Also, note that __forever(lambda: value) is functionally equivalent to the more efficient itertools.repeat(value).
>
> The staticmethod __next(iterator) accesses the class it is defined in, which suggests that it might be better written as a classmethod __next(cls, iterator).
>
> Each of the LINQ-style methods is divided into two parts: the public method that contains the docstring and some argument checks, and a private staticmethod that contains the implementation. I'm not certain what the purpose of that is. If it's to facilitate overriding the implementation in subclasses, then you need to change the names of the private methods to start with only one _ character so that they won't be mangled by the compiler.
>
> The comments before each method that only contain the name of the immediately following method are redundant.
>
> aggregate: the default aggregator is unintuitive to me. I would make it a required field and add a separate method called sum that calls aggregate with the operator.add aggregator. Also, the implementation doesn't look correct. Instead of passing in each item to the aggregator, you're passing in the number of items seen so far? The LINQ Aggregate method is basically reduce, so rather than reinvent the wheel I would suggest this:
>
> # MISSING is a unique object solely defined to represent missing arguments.
> # Unlike None we can safely assume it will never be passed as actual data.
> MISSING = object()
>
> def aggregate(self, aggregator, seed=MISSING):
>     if seed is self.MISSING:
>         return reduce(aggregator, self._iterable)
>     else:
>         return reduce(aggregator, self._iterable, seed)
>
> Note for compatibility that in Python 3 the reduce function has been demoted from a builtin to a member of the functools module.
>
> any: the name of this method could cause some confusion with the "any" builtin that does something a bit different.
>
> compare: the loop would be more DRY as a for loop:
>
> def __compare(first, second, comparison):
>     for firstval, secondval in itertools.izip_longest(first, second, fillvalue=self.MISSING):
>         if firstval is self.MISSING:
>             return -1
>         elif secondval is self.MISSING:
>             return 1
>         else:
>             result = comparison(firstval, secondval)
>             if result != 0:
>                 return result
>     return 0
>
> concatenate: again, no need to reinvent the wheel. This should be more efficient:
>
> def concatenate(self, other):
>     return extend(itertools.chain(self.__iterable, other))
>
> equals: could be just "return self.compare(other, comparison) == 0"
>
> __last: the loop could be a for loop:
>
> # assume we're looking at the last item and try moving to the next
> item = result.Value
> for item in iterator: pass
> return item
>
> lastOrDefault: there's a lot of repeated logic here. This could just be:
>
> def lastOrDefault(self, default=None):
>     try:
>         return self.last()
>     except ValueError:
>         return default
>
> map / forEach: .NET has to separate these into separate methods due to static typing. It seems a bit silly to have both of them in Python. Also, map would be more efficient as "return itertools.imap(mapper, self.__iterable)"
>
> max / min: it would be more efficient to use the builtin:
>
> def max(self, key):
>     return max(self.__iterable, key=key)
>
> If somebody really needs to pass a comparison function instead of a key function, they can use functools.cmp_to_key.
>
> randomSamples: a more canonical way to pass the RNG would be to pass an instance o
Re: Algorithms Library - Asking for Pointers
On Sep 3, 12:35 am, Chris Torek wrote:
> In article <18fe4afd-569b-4580-a629-50f6c7482...@c29g2000yqd.googlegroups.com>
> Travis Parks wrote:
> > [Someone] commented that the itertools algorithms will perform
> > faster than the hand-written ones. Are these algorithms optimized
> > internally?
>
> They are written in C, so avoid a lot of CPython interpreter overhead. Mileage in Jython, etc., may vary...
> --
> In-Real-Life: Chris Torek, Wind River Systems
> Intel require I note that my opinions are not those of WRS or Intel
> Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
> email: gmail (figure it out) http://web.torek.net/torek/index.html

I thought I would point out that many of the itertools functions change between the 2.x and 3.x versions. Since 2.7 is supposed to be the last 2.x release, I suppose I will wait until 3.2 becomes the norm before I incorporate some of these changes. In the meantime, I will start working on algorithms that work against Sequences.

I think a really important lesson is that Python really doesn't need an algorithms library the way many other languages do. A lot of the common algorithms are supported by the syntax itself. All my library did was allow for easier function composition. -- http://mail.python.org/mailman/listinfo/python-list
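To see the C-implementation advantage Chris describes, a quick, illustrative timing comparison (not from the thread) between a hand-written generator and its itertools equivalent:

    import itertools
    import timeit

    data = [list(range(100))] * 100

    def chain_by_hand(lists):
        for lst in lists:
            for item in lst:
                yield item

    # Both produce the same flattened list; the itertools version typically
    # wins because its looping happens in C rather than in bytecode.
    print(timeit.timeit(lambda: list(chain_by_hand(data)), number=1000))
    print(timeit.timeit(lambda: list(itertools.chain.from_iterable(data)), number=1000))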
Python ORMs Supporting POPOs and Substituting Layers in Django
Hello:

A new guy showed up at work a few weeks ago and has started talking about replacing a 6 month old project, written in ASP.NET MVC, with an open source solution that can handle massive scaling. I think his primary concern is the "potential" need for massive web farms in the future. In order to prevent high licensing costs, I think he wants to move everything to open source technologies, such as the LAMP stack. I also don't think he truly understands what ASP.NET MVC is and thinks it is the older WebForms.

I have been researching open source MVC frameworks and came across Django. It looks like an awesome tool, but I am willing to look at others. I have experience in Python (and enough in PHP to want to avoid it, and absolutely none in Ruby), so I think it would be a good language to develop in.

I was wondering if there are any ORMs for Python that use POPOs (plain old Python objects). There is a lot of business logic in my system, so I want to keep my data objects simple and stupid. I want the ORM to be responsible for detecting changes to objects after I send them back to the data layer (rather than during business layer execution). Additionally, being a stateless environment, tracking objects' states isn't very useful anyway.

Honestly, I doubt this guy is going to get his wish. The people paying for the application aren't going to be willing to throw 6 months of work down the drain. Nevertheless, I want to have plenty of research under my belt before being asked what my thoughts are. He was talking about using the Zend Framework with PHP, but I want to avoid that if possible. Django seems like one of the best MVC solutions in the Python arena. I would be willing to replace Django's ORM solution with something else, especially if it supported POPOs. I could even map all of the non-POPOs to POPOs if I needed to, I guess.

Finally, I wanted to ask whether anyone has tried having Django call out to Python 3 routines. I am okay using Python 2.7 in Django, if I can have the controllers call business logic implemented in Python 3, accepting POPOs from the data layer. Django would really just be a coordinator: grab data from the Django ORM, convert results into POPOs, load up the Python 3 module with business logic, passing POPOs, returning POPOs, and then converting those to view models. I'm sweating just thinking about it. My guess is that there would be a severe penalty for crossing process boundaries... but any insights would be appreciated.

Thanks, Travis Parks -- http://mail.python.org/mailman/listinfo/python-list
Re: Python ORMs Supporting POPOs and Substituting Layers in Django
On Nov 5, 4:11 pm, Travis Parks wrote: > Hello: > > A new guy showed up at work a few weeks ago and has started talking > about replacing a 6 month old project, written in ASP.NET MVC, with an > open source solution that can handle massive scaling. I think his > primary concern is the "potential" need for massive web farms in the > future. In order to prevent high licensing costs, I think he wants to > move everything to open source technologies, such as the LAMP stack. I > also don't think he truly understands what ASP.NET MVC is and thinks > it is the older WebForms. > > I have been researching open source MVC frameworks and came across > Django. It looks like an awesome tool, but I am willing to look at > others. I have experience in Python (and enough in PHP to want to > avoid it and absolutely none in Ruby) so I think it would be a good > language to develop in. > > I was wondering if there were any ORMs for Python that used POPOs > (plain old Python objects). There is a lot of business logic in my > system, and so I want to keep my data objects simple and stupid. I > want the ORM to be responsible for detecting changes to objects after > I send them back to the data layer (rather than during business layer > execution). Additionally, being a stateless environment, tracking > objects' states isn't very useful anyway. > > Honestly, I doubt this guy is going to get his wish. The people paying > for the application aren't going to be willing to throw 6 months of > work down the drain. Never the less, I want to have plenty of research > under my belt before being asked what my thoughts are. He was talking > about using the Zend Framework with PHP, but I want to avoid that if > possible. Django seems like one of the best MVC solutions in the > Python arena. I would be willing to replace Django's ORM solution with > something else, especially if it supported POPOs. I could even map all > of the non-POPOs to POPOs if I needed to, I guess. > > Finally, I wanted to ask whether anyone has tried having Django call > out to Python 3 routines. I am okay using Python 2.7 in Django, if I > can have the controllers call business logic implemented in Python 3, > accepting POPOs from the data layer. Django would really just be a > coordinator: grab data from Django ORM, convert results into POPOs, > load up Python 3 module with business logic, passing POPOs, returning > POPOs and then converting those to view models. I'm sweating just > thinking about it. My guess is that there would be a severe penalty > for crossing process boundaries... but any insights would be > appreciated. > > Thanks, > Travis Parks Which web frameworks have people here used and which have they found to be: scalable, RAD compatible, performant, stable and/or providing good community support? I am really trying to get as much feedback as I can, to help form an unbiased opinion in case I need to make a recommendation... or fight an all out battle. -- http://mail.python.org/mailman/listinfo/python-list
Re: Python ORMs Supporting POPOs and Substituting Layers in Django
On Nov 7, 12:44 pm, John Gordon wrote:
> In John Gordon writes:
> > In <415d875d-bc6d-4e69-bcf8-39754b450...@n18g2000vbv.googlegroups.com>
> > Travis Parks writes:
> > > Which web frameworks have people here used and which have they found
> > > to be: scalable, RAD compatible, performant, stable and/or providing
> > > good community support? I am really trying to get as much feedback as
> > I've used Django and it seems to be a very nice framework. However I've
> > only done one project so I haven't delved too deeply.
>
> You are probably looking for more detail than "It's a nice framework" :-)
>
> The database model in Django is powerful; it allows you to do queries in native Python code without delving into backend SQL stuff.
>
> I don't know how scalable/performant the database model is, as the one project I worked on didn't deal with a ton of data. (But I'd be surprised if it had poor performance.)
>
> The URL dispatcher provides a very nice and logical way to associate a given URL with a given method call.
>
> Community support is excellent.
>
> --
> John Gordon A is for Amy, who fell down the stairs
> gor...@panix.com B is for Basil, assaulted by bears
> -- Edward Gorey, "The Gashlycrumb Tinies"

I started the battle today. The "new guy" was trying to sell me on CodeIgniter. I haven't looked at it, but it is PHP, so I really want to avoid it. The good thing is that all of his "friends" have been telling him to get into Python. I have been trying to convince him that PHP isn't cut out for background services and is mostly a front-end language. Python is much more geared towards hardcore data processing. Why write the system in two languages?

I have been spending a lot of time looking at the Pyramid project: the next generation of the Pylons project. It looks powerful, but it seems to be a lot more complex than Django. -- http://mail.python.org/mailman/listinfo/python-list
xmlrpclib date times and a trailing Z
I am trying to connect to Marchex's call tracking software using xmlrpclib. I was able to get some code working, but I ran into a problem dealing with transferring datetimes.

When I construct an xmlrpclib.ServerProxy, I am setting the use_datetime flag to indicate that I want values converted automatically to and from datetime.datetime objects. I have a working version that doesn't use this flag, where I have to convert from the xmlrpclib.DateTime type to the datetime.datetime type manually, via string parsing.

The thing is, Marchex's API returns dates with a trailing Z after the time component. I did some research and this is supposed to be an indicator that UTC was used. However, it doesn't seem like xmlrpclib likes it very much. It looks like it is using this code internally:

time.strptime(data, "%Y%m%dT%H:%M:%S")

This code doesn't look like it handles time zones at all. I guess my question is: is there a way to tell xmlrpclib to include time zones when parsing datetimes?

Thanks, Travis Parks -- http://mail.python.org/mailman/listinfo/python-list
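A sketch of the manual conversion mentioned above, assuming the yyyy-MM-ddThh:mm:ssZ format the post attributes to Marchex (the function name is mine):

    import datetime

    def parse_marchex(value):
        # e.g. "2011-11-11T19:20:30Z" -> datetime(2011, 11, 11, 19, 20, 30);
        # the trailing Z is stripped and the result is understood to be UTC.
        text = str(value)
        if text.endswith("Z"):
            text = text[:-1]
        return datetime.datetime.strptime(text, "%Y-%m-%dT%H:%M:%S")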
Re: Python ORMs Supporting POPOs and Substituting Layers in Django
On Nov 8, 12:09 am, Lie Ryan wrote: > On 11/08/2011 01:21 PM, Travis Parks wrote: > > > > > > > On Nov 7, 12:44 pm, John Gordon wrote: > >> In John Gordon writes: > > >>> In<415d875d-bc6d-4e69-bcf8-39754b450...@n18g2000vbv.googlegroups.com> > >>> Travis Parks writes: > >>>> Which web frameworks have people here used and which have they found > >>>> to be: scalable, RAD compatible, performant, stable and/or providing > >>>> good community support? I am really trying to get as much feedback as > >>> I've used Django and it seems to be a very nice framework. However I've > >>> only done one project so I haven't delved too deeply. > > >> You are probably looking for more detail than "It's a nice framework" :-) > > >> The database model in Django is powerful; it allows you to do queries in > >> native Python code without delving into backend SQL stuff. > > >> I don't know how scalable/performant the database model is, as the one > >> project I worked on didn't deal with a ton of data. (But I'd be surprised > >> if it had poor performance.) > > >> The URL dispatcher provides a very nice and logical way to associate a > >> given URL with a given method call. > > >> Community support is excellent. > > >> -- > >> John Gordon A is for Amy, who fell down the stairs > >> gor...@panix.com B is for Basil, assaulted by bears > >> -- Edward Gorey, "The Gashlycrumb Tinies" > > > I started the battle today. The "new guy" was trying to sell me on > > CodeIgnitor. I haven't looked at it, but it is PHP, so I really want > > to avoid it. The good thing is that all of his "friends" have been > > telling him to get into Python. I have been trying to convince him > > that PHP isn't cut out for background services and is mostly a front- > > end language. Python is much more geared towards hardcore data > > processing. Why write the system in two languages? > > > I have been spending a lot of time looking at the Pyramid project: the > > next generation of the Pylons project. It looks powerful, but it seems > > to be a lot more complex than Django. > > CodeIgniter is a very fine framework, however it builds on top of a > shitty excuse of a language called PHP. > > I've found that Django has a much better debugging tools; when a Django > page produces an exception, it would always produce a useful error page. > I haven't been able to do the same in CodeIgniter (nor in any PHP > framework I've used, I'm starting to think it's a language limitation); > often when you have errors, PHP would just silently return empty or > partial pages even with all the debugging flags on. > > IMO, Python has a much nicer choice of built-in data structure for data > processing. Python has a much more mature object-orientation, e.g. I > prefer writing l.append(x) rather than array_push(l, x). I think these > qualities are what makes you think Python is much, much more suitable > for data processing than PHP; and I wholesomely agree. > > Database abstraction-wise, Django's ORM wins hands down against > CodeIgniter's ActiveRecord. CodeIgniter's ActiveRecord is basically just > a thin wrapper that abstracts the perks of various database engine. > Django's ORM is a full blown ORM, it handles foreign key relationships > in OO way. The only disadvantage of Django's ORM is that since it's > written in Python, if you need to write a program working on the same > database that doesn't use Django nor Python, then you'll have a problem > since you'll have to duplicate the foreign key relationships. 
> > With all the bashing of PHP, PHP do have a few advantages. PHP and > CodeIgniter is much easier to set up and running than Django; and the > ability to create a .php file and have it running without having to > write the routing file is sometimes a bliss. And PHP are often used as > their own templating language; in contrast with Django which uses a > separate templating language. Having a full blown language as your > templating language can be a double-edged sword, but it is useful > nevertheless for experimental work. > > IMO, while it is easier to get up and running in PHP, in the long run > Python is much better in almost any other aspects.

The good thing is that I got the new guy to convert his thinking towards Python. He did a little research
Re: xmlrpclib date times and a trailing Z
On Nov 11, 7:20 pm, Travis Parks wrote:
> I am trying to connect to Marchex's call tracking software using xmlrpclib. I was able to get some code working, but I ran into a problem dealing with transferring datetimes.
>
> When I construct an xmlrpclib.ServerProxy, I am setting the use_datetime flag to indicate that I want values converted automatically to and from datetime.datetime objects.
>
> I have a working version that doesn't use this flag, where I have to convert from the xmlrpclib.DateTime type to the datetime.datetime type manually, via string parsing.
>
> The thing is, Marchex's API returns dates with a trailing Z, after the time component. I did some research and this is supposed to be an indicator that UTC was used. However, it doesn't seem like xmlrpclib likes it very much.
>
> It looks like it is using this code internally: time.strptime(data, "%Y%m%dT%H:%M:%S")
>
> This code doesn't look like it handles time zones at all. I guess my question is: is there a way to tell xmlrpclib to include time zones when parsing datetimes?
>
> Thanks,
> Travis Parks

I did some chatting on IRC and it seems that the date/time format is not very well defined in the XML-RPC spec. So, basically, Marchex is using a format that the XML-RPC library doesn't support. Strangely, Marchex supports incoming dates with the format yyyyMMddThhmmss. It just spits dates back out with yyyy-MM-ddThh:mm:ssZ. The ISO 8601 standard seems to be used a lot, so it is surprising the library doesn't at least try multiple formats. I find it strange that the library, in light of the fact that date formats aren't standardized, doesn't provide the ability to configure this. I also find it strange that the library doesn't incorporate Basic Authentication using urllib2, but instead rolls its own method of putting username:password@ before the netloc. I wish Python's libraries acted more like an integrated framework than just unrelated libraries. I suppose I am spoiled from years of working with all-in-one frameworks managed by a single group. That is not the way C/C++ works or how Linux works. The power generated by using a conglomeration of unrelated libraries is indisputable, even if it can be a productivity killer and just plain confusing. -- http://mail.python.org/mailman/listinfo/python-list
Using the Python Interpreter as a Reference
Hello: I am currently working on designing a new programming language. It is a compiled language, but I still want to use Python as a reference. Python has a lot of similarities to my language, such as indentation for code blocks, lambdas, and non-locals, and my language will partially support dynamic programming. Can anyone point me to a good introduction to the files found in the source code? I have been poking around the source code for a little bit and there is a lot there. So, I was hoping someone could point me to the "good parts". I am also wondering whether some of the code was generated, because I see state transition tables, which I doubt someone built by hand. Any help would be greatly appreciated. It will be cool to see how the interpreter works internally. I am still wondering whether designing the language (going on 4 months now) will be harder than implementing it. Thanks, Travis Parks -- http://mail.python.org/mailman/listinfo/python-list
Re: Using the Python Interpreter as a Reference
On Nov 21, 12:44 am, Steven D'Aprano wrote:
> On Mon, 21 Nov 2011 13:33:21 +1100, Chris Angelico wrote:
> > What's your language's "special feature"? I like to keep track of
> > languages using a "slug" - a simple one-sentence (or less) statement of
> > when it's right to use this language above others. For example, Python
> > is optimized for 'rapid deployment'.
>
> "Python will save the world"
>
> http://proyectojuanchacon.blogspot.com/2010/07/saving-world-with-pyth...
>
> --
> Steven

The language, pseudo-name Unit, will be a low-level language capable of replacing C in most contexts. However, it will have integrated functional programming features (tail recursion optimization, tuples, currying, closures, function objects, etc.) and dynamic features (prototypal inheritance and late binding). It is a hybrid between C#, C++, F#, Python and JavaScript. The hope is that you won't pay for features you don't use, so it will run well on embedded devices as well as on desktops - that's to be seen. I'm no master compiler builder, here.

The functional code is pretty basic:

let multiply = function x y: return x * y # automatic generic arguments (integer here)
let double = multiply _ 2 # short-hand currying - inlined if possible
let doubled = [|0..10|].Apply(double) # double zero through 10

The dynamic code is pretty simple too:

dynamic Prototype = function value: self.Value = value # simulated ctor
Prototype.Double = function: self.Value * 2 # self refers to instance
new Prototype(5).Double() # 10
new Prototype(6).Double() # 12

dynamic x = 5 # five wrapped with a bag
x.Double = function: self * 2
x.Double() # 10
dynamic y = 6
y.Double = x.Double # member sharing
y.Double() # 12

The language also sports OOP features like those found in Java or C#: single inheritance; multiple interface inheritance; sealed, virtual and abstract types and members; explicit inheritance; extension methods and namespaces.

The coolest feature will be its generics-oriented function signatures. By default everything is generic. You apply constraints to parameters, rather than specific types. For instance:

let Average = function values:
    where values is ICountable IIterable
    assert values.Count > 0 "The values list cannot be empty." throws ArgumentException
    returns Float64
    let sum = 0
    for value in values:
        sum += value
    return sum / values.Count # floating point division

As you can see, the function headers can be larger than the bodies themselves. They support type constraints, assertions (argument checking), exception enumeration, default parameters and return type information. All of them can be left out if the type of arguments can be inferred. This will not be an overnight project. :-) -- http://mail.python.org/mailman/listinfo/python-list
Re: Using the Python Interpreter as a Reference
On Nov 22, 1:37 pm, Alan Meyer wrote:
> On 11/20/2011 7:46 PM, Travis Parks wrote:
> > Hello:
> > I am currently working on designing a new programming language. ...
>
> I have great respect for people who take on projects like this.
>
> Your chances of popularizing the language are small. There must be thousands of projects like this for every one that gets adopted by other people. However your chances of learning a great deal are large, including many things that you'll be able to apply to programs and projects that, at first glance, wouldn't appear to benefit from this kind of experience. If you get it working you'll have an impressive item to add to your resume.
>
> I suspect that you'll also have a lot of fun.
>
> Good luck with it.
>
> Alan

I've been learning a lot and having tons of fun just designing the language. First, I get to think about all of the language features that I find useful. Then I get to learn a little bit about how they work internally. For instance, functions are first-class citizens in Unit, supporting closures. To make that happen meant wrapping such functions inside of types and silently elevating local variables to reference-counted pointers. Or, I realized that in order to support default arguments, I would have to silently wrap parameters in types that were either set or not set. That way calls to the default command could simply be replaced by an if statement. It was a really subtle implementation detail.

It is also fun thinking about what makes sense. For instance, Unit will support calling methods with named arguments. Originally, I thought about using the '=' operator:

Foo(name="bob" age=64)

but then I realized that the equals sign could be confused with assignment. Those types of syntactic conflicts occur quite often and lead to a lot of rethinking. Ultimately, somewhat good ideas get replaced with much better ideas. I had been contemplating Unit for months before the final look and feel of the language came into view. It isn't what I started out imagining, but I think it turned out better than I had originally planned. Recently, I rethought how functions looked, since the headers were too long:

alias Predicate = function (value: & readonly T) throws() returns(Boolean)

let Any = public function (values: & readonly IIterable) (?predicate: Predicate)
    throws() # ArgumentNullException inherits from UncheckedException
    returns(Boolean): # this can be on one line
    default predicate = (function value: true)
    assert predicate != null "The predicate cannot be null." ArgumentNullException
    for value in values:
        if predicate(value):
            return true
    return false

Most of the time, throws clauses, returns clauses and parameter type constraints can be left off. Plus, now they can all appear on one line. Assertions and default statements now appear in the body. Assertions now optionally take a message and the exception type to throw. So, yeah, this has been an awesome project so far. I have dozens of documents and I have been keeping up on a blog. I've even started implementing a simple recursive descent parser just to make sure the syntax doesn't conflict. Now it will be a matter of formally defining a grammar and implementing the backend of the compiler... which I've never done before. I have been thinking about compiling into a language like C++ or C instead of assembler for my first time through. -- http://mail.python.org/mailman/listinfo/python-list
Re: Using the Python Interpreter as a Reference
On Nov 26, 1:53 pm, Rick Johnson wrote: > On Nov 20, 6:46 pm, Travis Parks wrote: > > > Hello: > > > I am currently working on designing a new programming language. It is > > a compiled language, but I still want to use Python as a reference. > > Python has a lot of similarities to my language, such as indentation > > for code blocks, > > I hope you meant to say "*forced* indention for code blocks"! "Forced" > being the key word here. What about tabs over spaces, have you decided > the worth of one over the other or are you going to repeat Guido's > folly? > > And please, i love Python, but the language is a bit asymmetrical. Do > try to bring some symmetry to this new language. You can learn a lot > from GvR's triumphs, however, you can learn even more from his follys.

Personally, I find a lot of good things in Python. I think tabs are out-of-date. Even the MAKE community wishes that the need for tabs would go away and many implementations have done just that. I have been seriously debating about whether to force a specific number of spaces, such as the classic 4, but I am not sure yet. Sometimes, 2 or even 8 spaces are appropriate (although I'm not sure when).

I have always found the standard library for Python to be disjoint. That can be really beneficial where it keeps the learning curve down and the size of the standard modules down. At the same time, it means re-learning whenever you use a new module.

My language combines generators and collection initializers, instead of creating a whole new syntax for comprehensions.

[| for i in 0..10: for j in 0.10: yield return i * j |]

Lambdas and functions are the same thing in my language, so no need for a special keyword. I also distinguish between initialization and assignment via the let keyword. Also, non-locals do not need to be defined explicitly, since the scoping rules in Unit are far more "anal".

In reality though, it takes a certain level of arrogance to assume that any language will turn out without bumps. It is like I was told in college long ago, "Only the smallest programs are bug free." I think the same thing could be said for a language. The only language without flaws would be so small that it would be useless.

I love these types of discussions though, because they help me to be aware. When designing a language, it is extremely helpful to hear what language features have led to problems. For instance, C#'s foreach loops internally reuse a variable, which translates to something like this:

using (IEnumerator<T> enumerator = enumerable.GetEnumerator())
{
    T current;
    while (enumerator.MoveNext())
    {
        current = enumerator.Current;
        // inner loop code goes here
    }
}

Since the same variable is reused, threads referencing the loop variable work against whatever value is currently in the variable, rather than the value when the thread was created. Most of the time, this means every thread works against the same value, which isn't the expected outcome. Moving the variable inside the loop _may_ help, but it would probably be optimized back out of the loop by the compiler. With the growth of threaded applications, these types of stack-based optimizations may come to an end. That is why it is important for a next-gen language to have a smarter stack - one that is context sensitive. In Unit, the stack grows and shrinks like a dynamic array, at each scope, rather than at the beginning and end of each function. Sure, there's a slight cost in performance, but a boost in consistency.
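Python has a closely related pitfall to the C# foreach capture described above: a closure created in a loop captures the variable, not the value it held on that iteration, so every closure ends up seeing the final value. A small illustration, with the usual default-argument workaround:

callbacks = [lambda: i for i in range(3)]
print([f() for f in callbacks])        # [2, 2, 2] - every closure reads the last value of i

fixed = [lambda i=i: i for i in range(3)]
print([f() for f in fixed])            # [0, 1, 2] - the default argument binds the value now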
If a programmer really wants the performance, they can move the variable out of the loop themselves. In fact, there are a lot of features in Unit that will come with overhead, such as default arguments, non-locals, function objects, etc. However, the game plan is to avoid the overhead if it isn't used. Some things, such as exception handling, will be hard to provide without overhead. My belief is that, provided a tool, most developers will use it and accept the slight runtime overhead.

I think everyone has an idea about what would make for the perfect language. I am always willing to entertain ideas. I have pulled from many sources: C#, Java, Python, JavaScript, F#, Lisp and more. The hope is to provide as much expression with as much consistency as possible. Just the other day I spent 2 hours trying to determine how to create a null pointer (yeah, it took that long).

let pi = null as shared * Integer32   # null is always a pointer

Originally, I wanted 'as' to be a safe conversion. However, I decided to make use of the 'try' keyword to mean a safe conversion.

let nd = try base as shared * Derived
let d = if nd.Succeeded: nd.Value else: null
# or, shorthand
let i = try Integer32.Parse("123") else 0

Of course, the last line could cost performance-wise. For that reason, Unit
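The try ... else 0 fallback above corresponds roughly to the try/except-with-default pattern in Python. A small sketch; parse_int_or is just an illustrative name, not an existing function:

def parse_int_or(text, default=0):
    # Return the default instead of raising when the input cannot be parsed.
    try:
        return int(text)
    except ValueError:
        return default

print(parse_int_or("123"))   # 123
print(parse_int_or("abc"))   # 0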
Re: Using the Python Interpreter as a Reference
On Nov 27, 6:55 pm, Steven D'Aprano wrote: > On Sun, 27 Nov 2011 14:21:01 -0800, Travis Parks wrote: > > Personally, I find a lot of good things in Python. I thinking tabs are > > out-of-date. Even the MAKE community wishes that the need for tabs would > > go away and many implementations have done just that. > > Tabs have every theoretical advantage and only one practical > disadvantage: the common toolsets used by Unix programmers are crap in > their handling of tabs, and instead of fixing the toolsets, they blame > the tabs. > > The use of spaces as indentation is a clear case of a technically worse > solution winning over a better solution. > > > I have been > > seriously debating about whether to force a specific number of spaces, > > such as the classic 4, but I am not sure yet. Some times, 2 or even 8 > > spaces is appropriate (although I'm not sure when). > > Why on earth should your language dictate the width of an indentation? I > can understand that you might care that indents are consistent within a > single source code unit (a file?), but anything more than that is just > obnoxious. > > > I have always found the standard library for Python to be disjoint. That > > can be really beneficial where it keeps the learning curve down and the > > size of the standard modules down. At the same time, it means > > re-learning whenever you use a new module. > > I know what disjoint means, but I don't understand what you think it > means for a software library to be disjoint. I don't understand the rest > of the paragraph. > > > My language combines generators and collection initializers, instead of > > creating a whole new syntax for comprehensions. > > > [| for i in 0..10: for j in 0.10: yield return i * j |] > > Are we supposed to intuit what that means? > > Is | a token, or are the delimiters [| and |] ? > > Is there a difference between iterating over 0..10 and iterating over > what looks like a float 0.10? > > What is "yield return"? > > > Lambdas and functions are the same thing in my language, so no need for > > a special keyword. > > That does not follow. Lambdas and def functions are the same thing in > Python, but Python requires a special keyword. > > > I also distinguish between initialization and > > assignment via the let keyword. > > What does this mean? I can guess, but I might guess wrong. > > > Also, non-locals do not need to be > > defined explicitly, since the scoping rules in Unit are far more "anal". > > What does this mean? I can't even guess what you consider more anal > scoping rules. > > > In reality though, it takes a certain level of arrogance to assume that > > any language will turn out without bumps. It is like I was told in > > college long ago, "Only the smallest programs are bug free." I think the > > same thing could be said for a language. The only language without flaws > > would be so small that it would be useless. > > I'm pretty sure that being so small that it is useless would count as a > flaw. > > What does it mean to say that a language is "small"? > > A Turing Machine is a pretty small language, with only a few > instructions: step forward, step backwards, erase a cell, write a cell, > branch on the state of the cell. And yet anything that can be computed, > anything at all, can be computed by a Turning Machine: a Turing Machine > can do anything you can do in C, Lisp, Fortran, Python, Java... and very > probably anything you can (mentally) do, full stop. So what does that > mean about "small" languages? 
> > On the other hand, take Epigram, a functional programming language: > > http://en.wikipedia.org/wiki/Epigram_(programming_language) > > It is *less* powerful than a Turing Machine, despite being far more > complex. Similarly languages like regular expressions, finite automata > and context-free grammers are more complex, "bigger", possibly with > dozens or hundreds of instructions, and yet less powerful. Likewise for > spreadsheets without cycles. > > Forth is much smaller than Java, but I would say that Forth is much, much > more powerful in some sense than Java. You could write a Java compiler in > Forth more easily than you could write a Forth compiler in Java. > > -- > Steven > > Yes. I was mostly rambling. More explanation would have meant more typing. Languages that use type inference heavily typically find unique ways of indicating literals, including numbers and collections. In Unit, [||] indicates fixed length arrays, [] is for dynamic arrays, {} is for sets and unordered dictionaries
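For comparison, Python distinguishes its own collection literals mostly by delimiters and content rather than by dedicated markers:

fixed = (0, 1, 2)            # tuple: fixed-length, immutable
dynamic = [0, 1, 2]          # list: growable array
unique = {0, 1, 2}           # set literal
mapping = {"a": 1, "b": 2}   # dict literal: same braces, but key: value pairs
empty_set = set()            # {} on its own is an empty dict, not a set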
Re: Using the Python Interpreter as a Reference
On Nov 28, 2:32 pm, Ian Kelly wrote: > On Sun, Nov 27, 2011 at 4:55 PM, Steven D'Aprano > > wrote: > >> My language combines generators and collection initializers, instead of > >> creating a whole new syntax for comprehensions. > > >> [| for i in 0..10: for j in 0.10: yield return i * j |] > > > Are we supposed to intuit what that means? > > > Is | a token, or are the delimiters [| and |] ? > > > Is there a difference between iterating over 0..10 and iterating over > > what looks like a float 0.10? > > > What is "yield return"? > > I would assume that "yield return" is borrowed from C#, where it is > basically equivalent to Python's yield statement. The advantage of > using two keywords like that is that you can compare the statements > "yield return foo" and "yield break", which is a bit clearer than > comparing the equivalent "yield foo" and "return". > > Having to type out "yield return" in every comprehension seems a bit > painful to me, but I can understand the approach: what is shown above > is a full generator, not a single "generator expression" like we use > in Python, so the statement keywords can't be omitted. It's trading > off convenience for expressiveness (a bad trade-off IMO -- complex > generators should be named, not anonymous). > > >> Lambdas and functions are the same thing in my language, so no need for > >> a special keyword. > > > That does not follow. Lambdas and def functions are the same thing in > > Python, but Python requires a special keyword. > > I think the implication is that Unit has only one syntax for creating > functions, which is lambda-style. In any case, why does Python > require a special keyword? def is only used in a statement context, > and lambda is only used in an expression context. Why not use the > same keyword for both? I think the answer is historical: def came > first, and when anonymous functions were added it didn't make sense to > use the keyword "def" for them, because "def" implies a name being > defined. > > Cheers, > Ian

Most languages I have worked with have a "lambda" syntax and a function syntax. It has always been a historical artifact. Languages start out avoiding functional features and then eventually adopt them. It seems that eventually, convenient higher-order functions become a must-have (see most standard algorithm packages). It is a conflict between old C-style programming and the need for functional code. As soon as functions can be assigned to variables, the code starts looking oddly like JavaScript.

-- http://mail.python.org/mailman/listinfo/python-list
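Ian's distinction between a full generator and a generator expression is easy to see in Python itself. The two forms below produce the same values; the keyword-free expression form is the one that Unit's [| ... yield return ... |] block would be replacing:

def products():
    # Full generator function: statements are allowed, so yield is written out.
    for i in range(10):
        for j in range(10):
            yield i * j

pairs = (i * j for i in range(10) for j in range(10))   # generator expression

print(list(products()) == list(pairs))   # True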
Re: Using the Python Interpreter as a Reference
On Nov 28, 3:40 pm, Gregory Ewing wrote: > Travis Parks wrote: > > I thinking tabs are > > out-of-date. Even the MAKE community wishes that the need for tabs > > would go away > > The situation with make is a bit different, because it > *requires* tabs in certain places -- spaces won't do. > Python lets you choose which to use as long as you don't > mix them up, and I like it that way. > > > let Parse = public static method (value: String) > > throws(FormatException UnderflowException OverflowException) > > Checked exceptions? I fear you're repeating a huge mistake > going down that route... > > -- > Greg

Exception handling is one of those subjects few understand and fewer can implement properly in modern code. Languages that don't support exceptions as part of function signatures lead to capturing generic Exception all throughout code. It is one of those features I wish .NET had. At the same time, with my limited experience with Java, it has been a massive annoyance. Perhaps it is something to provide, or just shut off via a command-line parameter. What indications have there been that this has been a flaw? I can see it alienating a large group of up-and-coming developers.

-- http://mail.python.org/mailman/listinfo/python-list
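The "capturing generic Exception all throughout code" problem mentioned above is the difference between the two handlers below; only the first lets unrelated bugs surface instead of being silently swallowed. A small Python illustration:

def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError:          # the expected failure: missing or unreadable file
        return ""

def read_config_badly(path):
    try:
        with open(path) as f:
            return f.read()
    except Exception:        # also hides typos, attribute errors, and other bugs
        return ""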
Re: Using the Python Interpreter as a Reference
On Nov 28, 5:24 pm, Steven D'Aprano wrote: > On Mon, 28 Nov 2011 12:32:59 -0700, Ian Kelly wrote: > > On Sun, Nov 27, 2011 at 4:55 PM, Steven D'Aprano > > wrote: > [...] > >>> Lambdas and functions are the same thing in my language, so no need > >>> for a special keyword. > > >> That does not follow. Lambdas and def functions are the same thing in > >> Python, but Python requires a special keyword. > > > I think the implication is that Unit has only one syntax for creating > > functions, which is lambda-style. In any case, why does Python require > > a special keyword? def is only used in a statement context, and lambda > > is only used in an expression context. Why not use the same keyword for > > both? > > Because the syntax is completely different. One is a statement, and > stands alone, the other is an expression. Even putting aside the fact > that lambda's body is an expression, and a def's body is a block, def > also requires a name. Using the same keyword for both would require > special case reasoning: sometimes def is followed by a name, sometimes > without a name. That's ugly. > > def name(args): block # okay > > funcs = [def args: expr, # okay so far > def name(args): expr, # not okay > def: expr, # okay > ] > > def: expr # also okay > > def: expr > expr # but not okay > > x = def x: expr # okay > x = def x(x): expr # not okay > > Using the same keyword for named and unnamed functions is, in my opinion, > one of those foolish consistencies we are warned about. When deciding on > syntax, the difference between anonymous and named functions are more > significant than their similarities.

A good example I have run into is recursion. When a local function calls itself, the name of the function may not be part of scope (non-local). Languages that support tail-end recursion optimization can't optimize such calls. In order to support this, a function in Unit will have access to its own name and type. In other words, special scoping rules are in place in Unit to allow treating a function as an expression.

> > > I think the answer is historical: def came first, and when > > anonymous functions were added it didn't make sense to use the keyword > > "def" for them, because "def" implies a name being defined. > > That reasoning still applies even if they were added simultaneously. > > Lambda is pretty old: it certainly exists in Python 1.5 and almost > certainly in 1.4. While it doesn't exist as a keyword in Python 0.9.1, > there is a module called "lambda" with a function "lambda" that uses more > or less the same syntax. Instead of lambda x: x+1, you would instead > write lambda("x", "x+1"). So the idea of including anonymous functions > was around in Python circles before the syntax was locked in.

I find that interesting. I also find it interesting that the common functional methods (all, any, map, filter) are basically built into the Python core language. That is unusual for most imperative programming languages early on.

-- http://mail.python.org/mailman/listinfo/python-list
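The scoping point about a function needing access to its own name shows up in Python as well: a lambda can recurse only because the name it was bound to is looked up again at call time, which is exactly the late binding being discussed. A small sketch of how that can go wrong:

fact = lambda n: 1 if n < 2 else n * fact(n - 1)   # recursion works via late binding of the name
print(fact(5))   # 120

g = fact
fact = None
# print(g(5))    # would now fail: the body still looks up the rebound name 'fact'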
Re: Using the Python Interpreter as a Reference
On Nov 28, 8:49 pm, Chris Angelico wrote: > On Tue, Nov 29, 2011 at 11:54 AM, DevPlayer wrote: > > To me, I would think the interpreter finding the coder's intended > > indent wouldn't be that hard. And just make the need for consistant > > spaces or tabs irrevelent simply by reformatting the indent as > > expected. Pretty much all my text editors can. > > The trouble with having a language declaration that "a tab is > equivalent to X spaces" is that there's no consensus as to what X > should be. Historically X has always been 8, and quite a few programs > still assume this. I personally like 4. Some keep things narrow with > 2. You can even go 1 - a strict substitution of \t with \x20. Once you > declare it in your language, you immediately break everyone who uses > anything different. > > ChrisA

Yeah. We must remember the Unix users, especially those who don't know how to hack Vim or bash. I've decided not to require a specific number of spaces. I am still teetering on whether to allow tabs.

-- http://mail.python.org/mailman/listinfo/python-list
Re: Using the Python Interpreter as a Reference
On Nov 28, 5:57 pm, Steven D'Aprano wrote: > On Mon, 28 Nov 2011 13:29:06 -0800, Travis Parks wrote: > > Exception handling is one of those subjects few understand and fewer can > > implement properly in modern code. Languages that don't support > > exceptions as part of their signature lead to capturing generic > > Exception all throughout code. It is one of those features I wish .NET > > had. At the same time, with my limited experience with Java, it has been > > a massive annoyance. Perhaps something to provide or just shut off via a > > command line parameter. What indications have there been that this has > > been a flaw? I can see it alienating a large group of up- and-coming > > developers. > > http://www.ibm.com/developerworks/java/library/j-jtp05254/index.html > > Note also that Bruce Eckel repeats a rumour that checked exceptions were > *literally* an experiment snuck into the Java language while James > Gosling was away on holiday. > > http://www.mindview.net/Etc/Discussions/UnCheckedExceptionComments > > Even if that is not true, checked exceptions are a feature that *in > practice* seems to lead to poor exception handling and cruft needed only > to satisfy the compiler: > > http://www.alittlemadness.com/2008/03/12/checked-exceptions-failed-ex... > > and other annoyances. It's main appeal, it seems to me, is to give a > false sense of security to Java developers who fail to realise that under > certain circumstances Java will raise certain checked exceptions *even if > they are not declared*. E.g. null pointer exceptions. > > See also: > > http://java.dzone.com/articles/checked-exceptions-i-love-you > > and note especially the comment from the coder who says that he simply > declares his functions to throw Exception (the most generic checked > exception), thus defeating the whole point of checked exceptions. > > -- > Steven

I think all of the examples you listed referred specifically to most programmers finding ways around the annoyance. I have heard about throwing generic Exception or inheriting all custom exception types from RuntimeException. I did this quite often myself. In general, unchecked exceptions shouldn't be caught. They occur because of bad code and insufficient testing. Checked exceptions occur because of downed databases, missing files, network problems - things that may become available later without code changes.

One day, I went through about 10,000 lines of code and moved argument checking code outside of try blocks because I realized I was handling some of them by accident. Here is the program: if me == idiot: exit(). People don't think about this, but the exceptions thrown by a module are part of that module's interface. Being able to make sure that only what you expect can come out is important.

Unlike Java, Unit requires you to opt in to using throws clauses. If you don't list one, one is generated for you automatically. The benefit: you can see what a function throws and protect yourself without all the babysitting.

A lack of exception handling is a big problem in .NET. I have had libraries from big names including Novell and Oracle throw NullReferenceExceptions because _they_ didn't know what would happen in cases where a DLL is missing or a dependency isn't installed. They could have done better testing, but if the biggest names in development can't manage to figure it out, I say leave it up to the compiler. Returning nulls or special values in cases of failure takes us back to the days of C and Fortran.

-- http://mail.python.org/mailman/listinfo/python-list
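The "argument checking outside of try blocks" change described above looks roughly like this in Python: validate first, so the handler only sees the failures it was written for. The fetch_user call and its connection object are hypothetical, used only to stand in for real data access:

def load_user(user_id, connection):
    # Programming errors surface immediately instead of being caught below.
    if user_id is None:
        raise ValueError("user_id must not be None")

    try:
        return connection.fetch_user(user_id)   # hypothetical data-access call
    except ConnectionError:                     # the failure we actually expect and can recover from
        return None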