propagating distutils user options between commands
When building a C extension, Distutils standard command 'install' calls the 'build' command before performing the installation (see Lib/distutils/command/install.py and build.py). Reusing the build command is the correct way to ensure the installation payload is ready, but the two commands support very different sets of user options and it is impossible to combine them: install validates the options at the beginning, rejecting user options meant for build as "unrecognized". This becomes a problem with necessary build options like --compiler; there is an easy workaround for my specific case (running setup.py twice: "build --compiler=..." then "install --skip-build"), but --skip-build is an ad hoc option, and other combinations of commands and options can run into the same problem. There are systematic solutions, like letting every option propagate to subcommands without checking, on the assumption that unrecognized options do no harm, or labeling options by command (e.g. setup.py install --build:compiler=foo --install_scripts:force) to let every command validate only its own options and, as a byproduct, limit options to specific commands. What are the reasons for the current strict policy in Distutils? Can it be changed? Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
Re: Favorite non-python language trick?
Joseph Garvin wrote: > I'm curious -- what is everyone's favorite trick from a non-python > language? And -- why isn't it in Python? Duff's device is a classic masterpiece of lateral thinking. It is not possible in Python for many fundamental reasons; we are not at risk. Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
Re: Which kid's beginners programming - Python or Forth?
Ivan Van Laningham wrote: [...] > > Seriously, PostScript is a lot more fun to learn than Forth, and more > directly useful. Since the rewards are so immediate, a kid's attention > could be gained and kept pretty easily. PostScript is easy, but I'm afraid some technical details could get in the way of enjoyable exploration, e.g. font types or scaling. PostScript is also a single-purpose language: it can print static graphics and with a slightly more complex setup it can display static graphics on the screen, period. No interactivity, no files, no network, no general computation or data structures. > But I'd still recommend Python as a first programming language. Keep to > the standard stuff--ignore list comprehensions and so on--until he or > she has the basic control flow down pat. Python is general purpose; it can do graphics with a path/stroke model like PostScript's and a whole world of other things. There are many complex features in Python that shouldn't be introduced before the need arises. List comprehensions, however, *are* the basic control flow; loops are much more verbose and they should be used only when necessary. Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
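For illustration, a minimal sketch of the verbosity difference (squares of the even numbers below 20):

# With an explicit loop: bookkeeping around one line of logic.
squares = []
for n in range(20):
    if n % 2 == 0:
        squares.append(n * n)

# With a list comprehension: the same result as a single readable expression.
squares = [n * n for n in range(20) if n % 2 == 0]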
Re: Text Summarization
Jim Jones wrote: > Is there a Python library that would allow me to take a paragraph of text, > and generate a one or two sentence summary of that paragraph? There is an OTS (Open Text Summarizer) wrapper. -- http://mail.python.org/mailman/listinfo/python-list
Re: How to get the "longest possible" match with Python's RE module?
kondal wrote: > This is the way the regexp works python doesn't has anything to do with > it. It starts parsing the data with the pattern given. It returns the > matched string acording the pattern and doesn't go back to find the > other combinations. I've recently had the same problem in Java, using automatically generated regular expressions to find the longest match; I failed on cases like matching the whole of "Abcdefg", but also the whole of "AbCdefg" or "ABcdefg", with ([A-Z][a-z])?([A-Z][A-Za-z]{1,10})? . No systematic way to deal with these corner cases was available, and unsystematic ways (with greedy and reluctant quantifiers) were too complex. I ended up eliminating regular expressions completely and building a dynamic programming parser that returns the set of all match lengths; it wasn't hard and it should be even easier in Python. Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
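A minimal sketch of that dynamic programming idea in Python (an illustration, not the Java code mentioned above): every pattern element maps a start index to the set of positions where it can stop, the combinators merge those sets, and the longest match is simply the maximum of the final set.

import string

def lit_class(chars):
    # Match exactly one character belonging to 'chars'.
    def match(text, i):
        return {i + 1} if i < len(text) and text[i] in chars else set()
    return match

def seq(*parts):
    # Match the parts one after the other, threading every possible end position.
    def match(text, i):
        ends = {i}
        for part in parts:
            ends = {e2 for e1 in ends for e2 in part(text, e1)}
        return ends
    return match

def opt(part):
    # Optional element: the start position remains a valid end position.
    def match(text, i):
        return {i} | part(text, i)
    return match

def repeat(part, lo, hi):
    # Between lo and hi repetitions of part.
    def match(text, i):
        ends, current = set(), {i}
        if lo == 0:
            ends.add(i)
        for n in range(1, hi + 1):
            current = {e2 for e1 in current for e2 in part(text, e1)}
            if n >= lo:
                ends |= current
            if not current:
                break
        return ends
    return match

upper = lit_class(string.ascii_uppercase)
lower = lit_class(string.ascii_lowercase)
letter = lit_class(string.ascii_letters)

# Rough equivalent of ([A-Z][a-z])?([A-Z][A-Za-z]{1,10})?
pattern = seq(opt(seq(upper, lower)), opt(seq(upper, repeat(letter, 1, 10))))

for word in ("Abcdefg", "AbCdefg", "ABcdefg"):
    print(word, max(pattern(word, 0)))   # longest match starting at index 0: 7 in all three cases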
Re: How to get the "longest possible" match with Python's RE module?
Licheng Fang wrote: > Another question: my task is to find in a given string the substrings > that satisfies a particular pattern. That's why the first tool that > came to my mind is regular expression. Parsers, however, only give a > yes/no answer to a given string. To find all substrings with a > particular pattern I may have to try every substring, which may be an > impossible task. You can collect all successful parser results beginning from each index in the string; this gives you all matches with that first index. You could also extend general bottom-up context-free parsing algorithms like Earley's or Tomita's to return multiple results; for reasonable languages most locations can be excluded for most rules early on, with great performance improvements over re-running the parser at every position. Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
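A small sketch of the first suggestion with the standard re module (the pattern and text are only examples): a compiled pattern's match() method accepts a starting position, so every index can be tried as a candidate first index and the successful results collected.

import re

pattern = re.compile(r'[A-Z][a-z]+')
text = 'FooBar and Baz'

matches = []
for i in range(len(text)):
    m = pattern.match(text, i)    # anchored at index i, unlike search()
    if m:
        matches.append((i, m.group()))

print(matches)    # [(0, 'Foo'), (3, 'Bar'), (11, 'Baz')]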
Re: How to get the "longest possible" match with Python's RE module?
Frederic Rentsch wrote: >If you need regexes, why not just reverse-sort your expressions? This > seems a lot easier and faster than writing another regex compiler. > Reverse-sorting places the longer ones ahead of the shorter ones. Unfortunately, not all regular expressions have a fixed match length. Which of, for example, /(abc)?def/ and /(def)?ghi/ yields the longer match depends on the input. Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
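A quick demonstration (the input strings are made up for the purpose):

import re

patterns = [re.compile(r'(abc)?def'), re.compile(r'(def)?ghi')]

for text in ('abcdefghi', 'defghijkl'):
    for p in patterns:
        m = p.match(text)
        print(text, p.pattern, len(m.group()) if m else 0)
# abcdefghi: (abc)?def matches 6 characters, (def)?ghi matches nothing
# defghijkl: (abc)?def matches 3 characters, (def)?ghi matches 6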
Re: Multiple instances of a python program
Most of the Python interpreter is a shared library, at least on Windows. The only duplication is in loaded Python code, which includes only your bot and the libraries it uses. If you have memory problems, try to do without some libraries or to edit unused parts out of them. Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
Re: PEP 3131: Supporting Non-ASCII Identifiers
On May 13, 5:44 pm, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote: > In summary, this PEP proposes to allow non-ASCII letters as > identifiers in Python. If the PEP is accepted, the following > identifiers would also become valid as class, function, or > variable names: Löffelstiel, changé, ошибка, or 売り場 > (hoping that the latter one means "counter"). I am strongly against this PEP. The serious problems and huge costs already explained by others are not balanced by the possibility of using non-butchered identifiers in non-ASCII alphabets, especially considering that one can write any language, in its full Unicode glory, in the strings and comments of suitably encoded source files. The diatribe about cross-language understanding of Python code is IMHO off topic; if one doesn't care about international readers, using annoying alphabets for identifiers has only a marginal impact. It's the same situation as IRIs (a bad idea) with HTML text (happily Unicode). > - should non-ASCII identifiers be supported? why? No, they are useless. > - would you use them if it was possible to do so? in what cases? No, never. Being Italian, I'm sometimes tempted to use accented vowels in my code, but I restrain myself because of the possibility of annoying foreign readers and the difficulty of convincing every text editor I use to preserve them. > Python code is written by many people in the world who are not familiar > with the English language, or even well-acquainted with the Latin > writing system. Such developers often desire to define classes and > functions with names in their native languages, rather than having to > come up with an (often incorrect) English translation of the concept > they want to name. The described set of users includes linguistically intolerant people who don't accept the use of suitable languages instead of their own, and of compromised but readable spelling instead of the one they prefer. Most "people in the world who are not familiar with the English language" are much more mature than that, even when they don't write for international readers. > The syntax of identifiers in Python will be based on the Unicode > standard annex UAX-31 [1]_, with elaboration and changes as defined > below. Not providing an explicit listing of allowed characters is inexcusable sloppiness. The XML standard is an example of how listings of large parts of the Unicode character set can be provided clearly, exactly and (almost) concisely. > ``ID_Start`` is defined as all characters having one of the general > categories uppercase letters (Lu), lowercase letters (Ll), titlecase > letters (Lt), modifier letters (Lm), other letters (Lo), letter numbers > (Nl), plus the underscore (XXX what are "stability extensions" listed in > UAX 31). > > ``ID_Continue`` is defined as all characters in ``ID_Start``, plus > nonspacing marks (Mn), spacing combining marks (Mc), decimal number > (Nd), and connector punctuations (Pc). Am I the first to notice how unsuitable these characters are? Many of these would be utterly invisible ("variation selectors" are Mn) or displayed out of sequence (overlays are Mn), or normalized away (combining accents are Mn) or absurdly strange and ambiguous (roman numerals are Nl, for instance). Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
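Anyone can check the categories mentioned above with the unicodedata module; a few sample characters (picked only as an illustration):

import unicodedata

samples = [
    ('\u0301', 'combining acute accent: normalized away when composed'),
    ('\ufe00', 'variation selector-1: invisible'),
    ('\u2160', 'Roman numeral one: easily confused with the letter I'),
]
for ch, note in samples:
    print('U+%04X %s  %s' % (ord(ch), unicodedata.category(ch), note))
# The first two are Mn and the third is Nl, so all three would be
# acceptable identifier characters under the proposed rules.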
Re: PEP 3131: Supporting Non-ASCII Identifiers
Martin v. Lowis wrote: > Lorenzo Gatti wrote: >> Not providing an explicit listing of allowed characters is inexcusable >> sloppiness. > That is a deliberate part of the specification. It is intentional that > it does *not* specify a precise list, but instead defers that list > to the version of the Unicode standard used (in the unicodedata > module). Ok, maybe you considered listing characters but you earnestly decided to follow an authority; but this reliance on the Unicode standard is not a merit: it defers to an external entity (UAX 31 and the Unicode database) a foundation of Python syntax. The obvious purpose of Unicode Annex 31 is defining a framework for parsing the identifiers of arbitrary programming languages; it is only, in its own words, "specifications for recommended defaults for the use of Unicode in the definitions of identifiers and in pattern-based syntax". It suggests an orderly way to add tens of thousands of exotic characters to programming language grammars, but it doesn't prove it would be wise to do so. You seem to like Unicode Annex 31, but keep in mind that:
- it has very limited resources (only the Unicode standard, i.e. lists and properties of characters, and not sensible programming language design, software design, etc.)
- it is culturally biased in favour of supporting as much of the Unicode character set as possible, disregarding the practical consequences and assuming without discussion that programming language designers want to do so
- it is also culturally biased towards the typical Unicode patterns of providing well explained general algorithms, ensuring forward compatibility, and relying on existing Unicode standards (in this case, character types) rather than introducing new data (but the character list of Table 3 is unavoidable); the net result is caring even less for actual usage.
>> The XML standard is an example of how listings of large parts of the >> Unicode character set can be provided clearly, exactly and (almost) >> concisely. > And, indeed, this is now recognized as one of the bigger mistakes > of the XML recommendation: they provide an explicit list, and fail > to consider characters that are unassigned. In XML 1.1, they try > to address this issue, by now allowing unassigned characters in > XML names even though it's not certain yet what those characters > mean (until they are assigned). XML 1.1 is, for practical purposes, not used except by mistake. I challenge you to show me XML languages or documents of some importance that need XML 1.1 because they use non-ASCII names. XML 1.1 is supported by many tools and standards because of buzzword compliance, enthusiastic obedience to the W3C and low cost of implementation, but this doesn't mean that its features are an improvement over XML 1.0. >>> ``ID_Continue`` is defined as all characters in ``ID_Start``, plus >>> nonspacing marks (Mn), spacing combining marks (Mc), decimal number >>> (Nd), and connector punctuations (Pc). >> >> Am I the first to notice how unsuitable these characters are? > Probably. Nobody in the Unicode consortium noticed, but what > do they know about suitability of Unicode characters... Don't be silly. These characters are suitable for writing text, not for use in identifiers; the fact that UAX 31 allows them merely proves how disconnected from actual programming language needs that document is.
In typical word processing, what characters are used is the editor's problem and the only thing that matters is the correctness of the printed result; program code is much more demanding, as it needs to do more (exact comparisons, easy reading...) with less (straightforward keyboard inputs and monospaced fonts instead of complex input systems and WYSIWYG graphical text). The only way to work with program text successfully is limiting its complexity. Hard to input characters, hard to see characters, ambiguities and uncertainty in the sequence of characters, sets of hard to distinguish glyphs and similar problems are unacceptable. It seems I'm not the first to notice a lot of Unicode characters that are unsuitable for identifiers. Appendix I of the XML 1.1 standard recommends to avoid variation selectors, interlinear annotations (I missed them...), various decomposable characters, and "names which are nonsensical, unpronounceable, hard to read, or easily confusable with other names". The whole appendix I is a clear admission of self-defeat, probably the result of committee compromises. Do you think you could do better? Regards, Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
Re: Eclipse/PyDev question
On Aug 13, 11:48 am, king kikapu <[EMAIL PROTECTED]> wrote: > Hi, > > i am using Eclipse (Platform Runtime binary) with PyDev and i was > wondering if someone can help me with this: > > 1. I set breakpoints to a .py file and i have told Eclipse to open the > Debug perspective when it sees that some .py file(s) of my project > indeed contains breakpoints. So, i press F9, Eclipse starts, Debug > perspective opens and i can use the debugger just fine. But when the > app terminates, how can i tell Eclipse to switch automatically to the > PyDev perspective and not remain in the Debug one ? You don't; Eclipse keeps the same perspective because, for all it knows, you might want to debug some more, and it correctly avoids deciding what is good for you. Switching to the debug perspective when you issue a debug command is an exception to the normal switching of perspectives with the respective big buttons and the menu. If you wish to switch perspective to edit code before debugging again, putting editors and appropriate accessory views in the debug perspective might be good enough. > 2. Let's say we have a project that consists of some .py files. I want > to press F9 when the editor displays anyone of these files but make > Eclipse to run the whole project (that has another .py as "default") > and not the script that i am currently working on, is that possible ?? Executing the current file is a bad habit; Eclipse remembers a list of execution/debug configurations that can be selected from a dropdown list in the toolbar and edited with a dialog box; after you set up entry points for a project you can use and edit them as needed. I'm using Eclipse for Java and my entry points include remote debugging of a GUI application, about 6 JUnit tests, about 3 command line tools with many complex parameter sets each, and some Ant builds; it would take about one hour of trial and error to reconstruct the command lines, classpaths and JVM options. I only run the current file as a draft for an edited configuration. Regards, Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
Re: sorteddict [was a PEP proposal, but isn't anymore!]
I don't see a focused discussion of computational complexity of a sorted dict; its API cannot be simpler than sorting a dictionary and it has issues and complications that have already been discussed without completely satisfactory solutions, so the only possible reason to adopt a sorted dict is that some important use case for mapping types becomes significantly cheaper. With n entries, the sizes of a non-sorted hashtable, of a hashtable plus lists or sets of keys, and of reasonable sorted dict implementations with trees are all O(n). No substantial space advantage can be obtained by sorting dictionaries. Iterating through all n entries of a mapping, once, in sorted order, is O(n) time and O(1) space with an unsorted hash table, a hash table with a sorted list of the keys and all types of tree that I know of. If there is a performance gain, it must come from amortizing insertions, deletions and index-building. (The other operation, value updates for an existing key, doesn't matter: updates cause no structural changes and they must not invalidate any iterator.) Let's consider a very simple use case: n insertions followed by x iterations through all entries and n*y lookups by key.

Cost for a hashtable and an ad hoc sorted list of the keys, fundamentally equivalent to sorting a Python dict:
- O(n) for insertions
- O(n log n) for indexing
- O(nx) for iterations
- O(ny) for lookups

Cost for a tree:
- O(n log n) for insertions
- no indexing
- O(nx) for iterations
- O(ny log n) for lookups

The hashtable comes out ahead because of cheaper lookups, for any x and y; note that without lookups there is no reason to use a mapping instead of a list of (key,value) tuples. With an equal number k of insertions and deletions between the iterations, the hashtable must be reindexed x times:
- O(n) for insertions
- O(kx) for updates and deletions
- O(nx log n) for indexing and reindexing
- O(nx) for iterations
- O(ny) for lookups

The tree might be cheaper:
- O(n log n) for insertions
- O(kx log n) for updates and deletions
- no indexing and reindexing
- O(nx) for iterations
- O(ny log n) for lookups

For a fixed small k, or with k proportional to n, reindexing the hashtable and lookups in the tree are equally mediocre. Maybe we could make k changes in the middle of each iteration. For a naively reindexed hashtable:
- O(n) for insertions
- O(kx) for updates and deletions
- O(knx log n) for indexing and reindexing
- O(nx) for iterations
- O(ny) for lookups

For a tree, the costs remain as above: the new factor of n for the hashtable is fatal. Clever updates of the existing index or use of a heap would lower the cost, but they would need to be encapsulated as a sorteddict implementation. Is this a practical use case? When are sequential visits of all elements in order frequently suspended to make insertions and deletions, with a need for efficient lookup by key?
- Priority queues; but given the peculiar pattern of insertions and deletions there are more suitable specialized data structures.
- A* and similar best-first algorithms.

It's a small but important niche; maybe it isn't important enough for the standard library. Other absent datatypes like heaps, an immutable mapping type similar to frozenset and tuple, or disjoint sets would be more fundamental and general, and a mapping that remembers the order of insertion regardless of keys would be equally useful. In the Java collections framework all these kinds of mapping and others coexist peacefully, but Python doesn't have the same kitchen-sink approach to basic libraries.
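For reference, the "hashtable plus an ad hoc sorted list of the keys" alternative in the analysis above is just a plain dict whose keys are sorted whenever an index is needed; a tiny sketch:

data = {}
data['pear'], data['apple'], data['fig'] = 3, 1, 2   # n insertions, O(1) each

sorted_keys = sorted(data)        # indexing: one O(n log n) sort

for key in sorted_keys:           # one iteration in key order, O(n)
    print(key, data[key])
value = data['fig']               # lookups stay O(1) hash lookups

data['kiwi'] = 4                  # after a batch of insertions/deletions...
sorted_keys = sorted(data)        # ...the index must be rebuilt (reindexing)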
Regarding the API, a sorted dict should not expose random access by an entry's position in the sequence: it is a gratuitous difficulty for the implementor and, more importantly, a perversion of the mapping data type. For that purpose there are lists and tuples, or explicit indices like those of the Boost multi-index containers (http://www.boost.org/libs/multi_index). The only difference from dict should be the constraint that items(), keys(), values(), iteritems(), iterkeys(), itervalues() return entries sorted by key. Regards, Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
Re: pytz has so many timezones!
On Oct 8, 10:40 am, "Diez B. Roggisch" <[EMAIL PROTECTED]> wrote: > Sanjay wrote: > > Hi All, > > > I am using pytz.common_timezones to populate the timezone combo box of > > some user registration form. But as it has so many timezones (around > > 400), it is a bit confusing to the users. Is there a smaller and more > > practical set? If not, some suggestions on how to handle the > > registration form effectively would help me a lot. > > I'm not a timezone-guru - but I _think_ if there are 400 timezones defined, > it should list them, shouldn't it? What if you lived in the one that's not > part of the "practical subset"? I think the problem is displaying a dropdown list of timezones. People have little idea of what their timezones are called; it would be better to list countries, which are many but nicely alphabetized, and map each country to its only timezone, or, for the few large countries with more than one timezone, offer a choice later. Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
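pytz already ships the data needed for the country-first approach; a rough sketch (assuming a reasonably recent pytz, which exposes country_names and country_timezones):

import pytz

# (country name, list of timezones) pairs, alphabetized by country name.
choices = sorted(
    (name, pytz.country_timezones[code])
    for code, name in pytz.country_names.items()
    if code in pytz.country_timezones
)

for name, zones in choices[:10]:
    if len(zones) == 1:
        print(name, '->', zones[0])              # no further question needed
    else:
        print(name, '-> choose among', zones)    # only a handful of countries need this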
Re: Do I need Python to run Blender correctly?
On Jan 25, 9:25 am, "AKA gray asphalt" <[EMAIL PROTECTED]> wrote: > I downloaded Blender but there was no link for python. Am I on the right > track? Don't worry, Blender includes its own bundled Python interpreter, which is usually one version behind; just leave it alone. Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
Re: Overloading the tilde operator?
On Feb 8, 7:02 am, Dave Benjamin <[EMAIL PROTECTED]> wrote: > Neil Cerutti wrote: > > There's been only one (or two?) languages in history that > > attempted to provide programmers with the ability to implement > > new infix operators, including defining precedence level and > > associativity (I can't think of the name right now). > > You're probably thinking of SML or Haskell. OCaml also allows you to > define new infix operators, but the associativities are fixed (and > determined by what punctuation you use). Also some flavours of Prolog, as described in the classic book by Clocksin & Mellish. Regarding the OP, I hope his need for an infix tilde operator is overestimated; there are plenty of infix operators that can be abused, and at least one of them should be unused and available for redefinition. Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
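As a toy sketch of redefining an existing infix operator for your own objects (nothing to do with tilde itself, which stays unary in Python):

class Path(object):
    # Abuse the division operator to join path components.
    def __init__(self, value):
        self.value = value
    def __div__(self, other):            # Python 2 name of the / operator
        return Path(self.value + '/' + str(other))
    __truediv__ = __div__                # Python 3 name of the same operator
    def __repr__(self):
        return 'Path(%r)' % self.value

print(Path('/usr') / 'local' / 'bin')    # Path('/usr/local/bin')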
Re: parse HTML by class rather than tag
On Feb 23, 8:54 am, [EMAIL PROTECTED] wrote: > Hello, > > i'm would be interested in parsing a HTML files by its corresponding > opening and closing tags but by taking into account the class > attributes and its values, [...] > so i wondering if i should go with regular expression, but i do not > think so as i must jumpt after inner closing div, or with a simple > parser, i've searched and > foundhttp://www.diveintopython.org/html_processing/basehtmlprocessor.html > but i would like the parser not to change anything at all (no > lowercase). Horribly brittle idea. Use a robust HTML parser (e.g. http://www.crummy.com/software/BeautifulSoup/) to build a document tree, then visit it top down and look at the value of the 'class' attributes. Regards, Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
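With a current BeautifulSoup (bs4) the class-based search is a one-liner; a hedged sketch (the markup and class names are invented for the example, and older BeautifulSoup versions spell the call findAll(attrs={'class': ...}) instead):

from bs4 import BeautifulSoup

html = '''
<div class="entry"><span class="title">First</span><div class="body">text...</div></div>
<div class="entry"><span class="title">Second</span><div class="body">more text...</div></div>
'''

soup = BeautifulSoup(html, 'html.parser')
for entry in soup.find_all('div', class_='entry'):   # select by class value, not by tag nesting
    print(entry.find(class_='title').get_text())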
Re: Are the critiques in "All the things I hate about Python" valid?
On Saturday, February 17, 2018 at 12:28:29 PM UTC+1, Ben Bacarisse wrote: > Marko Rauhamaa writes: > > > Many people think static typing is key to high quality. I tend to think > > the reverse is true: the boilerplate of static typing hampers > > expressivity so much that, on the net, quality suffers. > > I don't find that with Haskell. It's statically typed but the types are > almost always inferred. If you see an explicit type, it's usually > because the author thinks it helps explain something. > > (I don't want to start a Haskell/Python thread -- the only point is that > static typing does not inevitably imply lots of 'boilerplate'.) > > -- > Ben. There are two sides to not declaring types: having readers spend a fraction of a second to figure out what types are being used and having tools apply type inference for useful purposes. Python is bad at type inference (but only because deliberate loopholes like eval() are preserved) but good at making programmers trust code, while Haskell is bad at encouraging straightforward and understandable types but good at extracting maximum value from type inference. -- https://mail.python.org/mailman/listinfo/python-list
Re: Which one is the best XML-parser?
On Thursday, June 23, 2016 at 11:03:18 PM UTC+2, David Shi wrote: > Which one is the best XML-parser? > Can any one tell me? > Regards. > David Lxml offers lxml.etree.iterparse (http://lxml.de/tutorial.html#event-driven-parsing), an important combination of the memory savings of incremental parsing and the convenience of visiting a DOM tree without dealing with irrelevant details. An iterable incrementally produces DOM element objects, which can be deleted after processing them and before proceeding to parse the rest of the document. This technique allows easy processing of huge documents containing many medium-size units of work whose DOM trees fit into memory easily. -- https://mail.python.org/mailman/listinfo/python-list
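A minimal sketch of that pattern (the file name, tag name and processing function are placeholders):

from lxml import etree

for event, element in etree.iterparse('huge.xml', events=('end',), tag='record'):
    process(element)                      # 'process' stands for your own handling code
    element.clear()                       # drop the subtree that was just handled
    while element.getprevious() is not None:
        del element.getparent()[0]        # also release already-processed siblings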
Re: Getting back into PyQt and not loving it.
PyGTK is obsolete and stopped at Python 2.7, while PyGObject for Windows is several versions behind (currently 3.18 vs 3.21) and it doesn't support Python 3.5. Game over for GTK+. -- https://mail.python.org/mailman/listinfo/python-list
Re: Type hinting of Python is just a toy ?
On Friday, January 4, 2019 at 9:05:11 AM UTC+1, iam...@icloud.com wrote: > I read that pep 484 type hinting of python has no effect of performance, then > what’s the purpose of it? Just a toy ? Having no effect on performance is a good thing; Python is already slowish, and additional runtime type checking would be a problem. The purpose of type hinting is helping tools, for example ones that look for type errors in source code (e.g. a function parameter is supposed to be a string, but an integer is being passed). > > Python is an old programming language, but not better than other programming > languages, then what are you all dong for so many times ? Being nice in general, and not too aggressive with trolls in particular, is also a good thing. > > Pep484 is too complex. Typle should not a seperate type, in fact it should be > just a class. Like this in other programming language > Python: Tuple(id: int, name: string, age: int) > Other: class someClass { > public int id; > public string name; > public int age; > } But tuple (not Tuple) is already a class. Are you missing the difference between declaring a type and invoking a constructor? Try to work out complete examples. > Design of OOP of python is too bad, so it treat Tuple as a seperate type. If you mean that defining classes could be replaced by uniformly using tuples, it is not the case because classes can have a lot of significant behaviour, including encapsulation. If you mean that the specific tuple class shouldn't exist and all classes should be in some way like tuple, it is not the case because many classes have to behave differently and, on top of that, tuple has special syntax support. It's about as special as the dict class and the list class, and clearly different. > Why looks different than others? afraid of cannot been watched by others? Like most programming languages, Python was deliberately designed to be different from existing programming languages in order to make an experiment (which could be summarized as interpreted, with a lot of convenient syntax in order to be brief and readable, strictly object oriented, strongly but dynamically typed) and to gain adoption (by offering an advantage to users who wouldn't bother trying a language that is only marginally different from existing ones). By all means, use other programming languages if you think they are better, but don't expect Python to change in radical ways. -- https://mail.python.org/mailman/listinfo/python-list
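A small, hypothetical example of both points: the annotations below change nothing at run time, but an external checker such as mypy uses them to flag the bad call, and typing.NamedTuple gives exactly the named-and-typed tuple the question asks for.

from typing import NamedTuple

class Person(NamedTuple):    # a tuple subclass with named, typed fields
    id: int
    name: str
    age: int

def greet(who: Person) -> str:
    return 'Hello, ' + who.name

greet(Person(1, 'Ada', 36))  # fine
greet(42)                    # crashes at run time; a type checker reports it before running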
Re: New user's initial thoughts / criticisms of Python
Regarding the "select" statement, I think the most "Pythonic" approach is using dictionaries rather than nested ifs. Supposing we want to decode abbreviated day names ("mon") to full names ("Monday"):

class GoodLuckFixingItException(Exception): pass   # defined so the example runs as-is

day_abbr='mon'
day_names_mapping={
    'mon':'Monday',
    'tue':'Tuesday',
    'wed':'Wednesday',
    'thu':'Thursday',
    'fri':'Friday',
    'sat':'Saturday',
    'sun':'Sunday'
}
try:
    full_day_name=day_names_mapping[day_abbr.casefold()]
except KeyError:
    raise GoodLuckFixingItException('We don\'t have "'+day_abbr+'" in our week')

This style is more compact (usually one line per case) and more meaningful (generic processing driven by separate data) than a pile of if statements, and more flexible:

full_day_names=('Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sunday')
day_names={x.casefold()[0:3] : x for x in full_day_names}

A dict can also contain tuples, lists, and nested dicts, consolidating multiple switches over the same keys and organizing nested switches and other more complex control structures. -- https://mail.python.org/mailman/listinfo/python-list
Re: anomaly
On Monday, May 11, 2015 at 2:58:09 AM UTC+2, zipher wrote: > I guess everyone expects this behavior since Python implemented this idea of > "everything is an object", but I think this branch of OOP (on the branch of > the Tree of Programming Languages) has to be chopped off. The idea of > everything is an object is backwards (unless your in a LISP machine). Like I > say, it's trying to be too pure and not practical. Expressing these sorts of emphatic, insulting and superficial opinions, to the people who would be most irritated by them (the Python mailing list) and without the slightest interest in contrary viewpoints and constructive discussion, is a very unpleasant form of trolling. If you don't like Python, you are welcome to prefer other programming languages. If you want to use Python with C-like primitive types, you can use arrays. Both choices are perfectly good, and routinely made without bothering other people with inane conversations. Lorenzo Gatti -- https://mail.python.org/mailman/listinfo/python-list
Re: Beginner's assignment question
On Mar 1, 3:39 pm, Schizoid Man <[EMAIL PROTECTED]> wrote: > As in variable assignment, not homework assignment! :) > > I understand the first line but not the second of the following code: > > a, b = 0, 1 > a, b = b, a + b > > In the first line a is assigned 0 and b is assigned 1 simultaneously. > > However what is the sequence of operation in the second statement? I;m > confused due to the inter-dependence of the variables. The expressions on the right of the assignment operator are evaluated before any new values are assigned to the destinations on the left side of the assignment operator. So, substituting the old values of a and b, the second assignment means a, b = 1, 0 + 1. Simplifying the Python Reference Manual ("6.3 Assignment Statements") a little:

assignment_stmt ::= target_list "="+ expression_list

An assignment statement evaluates the expression list (remember that this can be a single expression or a comma-separated list, the latter yielding a tuple) and assigns the single resulting object to each of the target lists, from left to right. [...] WARNING: Although the definition of assignment implies that overlaps between the left-hand side and the right-hand side are `safe' (for example "a, b = b, a" swaps two variables), overlaps within the collection of assigned-to variables are not safe! For instance, the following program prints "[0, 2]":

x = [0, 1]
i = 0
i, x[i] = 1, 2
print x

Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
Re: type-checking support in Python?
On 7 Ott, 08:36, Lawrence D'Oliveiro <[EMAIL PROTECTED] central.gen.new_zealand> wrote: > In message <[EMAIL PROTECTED]>, Gabriel > > Genellina wrote: > > As an example, in the oil industry here in my country there is a mix of > > measurement units in common usage. Depth is measured in meters, but pump > > stroke in inches; loads in lbs but pressures in kg/cm². > > Isn't the right way to handle that to attach dimensions to each number? Can you afford to avoid floats and ints? Attaching suffixes is the best one can do with the builtin types. In C++ one can check dimensions at compile time (http://www.boost.org/doc/libs/1_36_0/doc/html/boost_units.html) with a modest increase of cumbersomeness, but Python would need very heavyweight classes containing a value and its dimension and a replacement of all needed functions and operations. Regards, Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
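A deliberately minimal sketch of such a heavyweight class (an illustration only, nowhere near a real units library): every value carries a dictionary of unit exponents, and every operation pays a run-time cost to combine or compare them.

class Quantity(object):
    def __init__(self, value, dim):
        self.value = value
        self.dim = dim                          # e.g. {'m': 1, 's': -2}

    def _combine(self, other, sign):
        dim = dict(self.dim)
        for unit, power in other.dim.items():
            dim[unit] = dim.get(unit, 0) + sign * power
        return dict((u, p) for u, p in dim.items() if p)

    def __mul__(self, other):
        return Quantity(self.value * other.value, self._combine(other, +1))

    def __truediv__(self, other):
        return Quantity(self.value / other.value, self._combine(other, -1))

    def __add__(self, other):
        if self.dim != other.dim:
            raise ValueError('incompatible dimensions: %r vs %r' % (self.dim, other.dim))
        return Quantity(self.value + other.value, dict(self.dim))

    def __repr__(self):
        return 'Quantity(%r, %r)' % (self.value, self.dim)

depth = Quantity(1500.0, {'m': 1})
stroke = Quantity(0.3, {'m': 1})
print(depth + stroke)                            # fine
print(depth + Quantity(9.8, {'m': 1, 's': -2}))  # ValueError, but only at run time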
Re: Thoughts on language-level configuration support?
On 31 Mar, 09:19, jfager wrote: > On Mar 31, 2:54 am, David Stanek wrote: > > > On Mon, Mar 30, 2009 at 9:40 AM, jfager wrote: > > >http://jasonfager.com/?p=440. > > > > The basic idea is that a language could offer syntactic support for > > > declaring configurable points in the program. The language system > > > would then offer an api to allow the end user to discover a programs > > > configuration service, as well as a general api for providing > > > configuration values. A configuration "service"? An "end user" that bothers to discover it? API for "providing" configuration "values"? This suggestion, and the companion blog post, seem very distant from the real world for a number of reasons. 1) Users want to supply applications with the least amount of useful configuration information as rarely and easily as possible, not to use advanced tools to satisfy an application's crudely expressed configuration demands. Reducing inconvenience for the user entails sophisticated and mostly ad hoc techniques: deciding without asking (e.g. autoconf looking into C compiler headers and trying shell commands or countless applications with "user profiles" querying the OS for the current user's home directory), asking when the software is installed (e.g. what 8 bit character encoding should be used in a new database), designing sensible and safe defaults. 2) Practical complex configuration files (or their equivalent in a DB, a LDAP directory, etc.) are more important and more permanent than the applications that use them; their syntax and semantics should be defined by external specifications (such as manuals and examples), not in the code of a particular implementation. User documentation is necessary, and having a configuration mechanism that isn't subject to accidents when the application is modified is equally important. 3) Configuration consisting of values associated with individual variables is an unusually simple case. The normal case is translating between nontrivial sequential, hierarchical or reticular data structures in the configuration input and quite different ones in the implementation. 4) Your actual use case seems to be providing a lot of tests with a replacement for the "real" configuration of the actual application. Branding variables as "configuration" all over the program isn't an useful way to help the tests and the actual application build the same data structures in different ways. > > What value does this have over simply having a configuration file. > > "Simply having a configuration file" - okay. What format? What if > the end user wants to keep their configuration info in LDAP? Wait a minute. Reading the "configuration" from a live LDAP directory is a major feature, with involved application specific aspects (e.g. error handling) and a solid justification in the application's requirements (e.g. ensuring up to date authentication and authorization data), not an interchangeable configuration provider and certainly not something that the user can replace. Deciding where the configuration comes from is an integral part of the design, not something that can or should be left to the user: there can be value in defining common object models for various sources of configuration data and rules to combine them, like e.g. in the Spring framework for Java, but it's only a starting point for the actual design of the application's configuration. > > In your load testing application you could have easily checked for the > > settings in a config object. > > Not really easily, no. 
It would have been repeated boilerplate across > many different test cases (actually, that's what we started with and > refactored away), instead of a simple declaration that delegated the > checking to the test runner. A test runner has no business configuring tests beyond calling generic setup and teardown methods; tests can be designed smartly and factored properly to take care of their own configuration without repeating "boilerplate". > > I think that the discover-ability of > > configuration can be handled with example configs and documentation. > > Who's keeping that up to date? Who's making sure it stays in sync > with the code? Why even bother, if you could get it automatically > from the code? It's the code that must remain in sync with the documentation, the tests, and the actual usage of the application. For example, when did you last see incompatible changes in Apache's httpd.conf? You seem to think code is central and actual use and design is a second class citizen. You say in your blog post: "Users shouldn’t have to pore through the code to find all the little bits they can tweak". They shouldn't because a well designed application has adequate documentation of what should be configured in the form of manuals, wizards, etc. and they shouldn't because they don't want to tweak little bits, not even if they have to. Regards, Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
Re: 2d graphics - what module to use?
On 25 Lug, 08:13, Pierre Dagenais <[EMAIL PROTECTED]> wrote: > What is the easiest way to draw to a window? I'd like to draw something > like sine waves from a mathematical equation. > Newbie to python. What you are really asking for is what GUI library you should use; every one allows you to draw freely. What do you need to do besides drawing sine waves? You should look at your full range of options; http://wiki.python.org/moin/GuiProgramming is a good starting point. The "easiest" way to draw might be with those toolkits that offer primarily a canvas to draw on rather than composable widgets. For example, Pyglet (http://pyglet.org/) offers OpenGL contexts with sensible defaults and unobtrusive automation:

from pyglet import *
from pyglet.gl import *
import math

win = window.Window(width=700, height=700, caption="sine wave demo", resizable=True)
frequency,phase,amplitude=0.1,0.0,0.9

@win.event
def on_draw():
    half_height=win.height*0.5
    glClear(GL_COLOR_BUFFER_BIT)
    glColor3f(0.9, 1.0, 0.8)
    glBegin(GL_LINE_STRIP)
    for x in xrange(0,win.width):
        y=half_height*(1.0+amplitude*math.sin(x*frequency+phase))
        glVertex2f(x,y)
    glEnd()

app.run()

Regards, Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
Re: str(bytes) in Python 3.0
On Apr 12, 5:51 pm, Kay Schluehr <[EMAIL PROTECTED]> wrote: > On 12 Apr., 16:29, Carl Banks <[EMAIL PROTECTED]> wrote: > > > > And making an utf-8 encoding default is not possible without writing a > > > new function? > > > I believe the Zen in effect here is, "In the face of ambiguity, refuse > > the temptation to guess." How do you know if the bytes are utf-8 > > encoded? > > How many "encodings" would you define for a Rectangle constructor? > > Making things infinitely configurable is very nice and shows that the > programmer has worked hard. Sometimes however it suffices to provide a > mandatory default and some supplementary conversion methods. This > still won't exhaust all possible cases but provides a reasonable > coverage. There is no sensible default because many incompatible encodings are in common use; programmers need to take responsibility for tracking or guessing string encodings according to their needs, in ways that depend on application architecture, characteristics of users and data, and various risk and quality trade-offs. In languages that, like Java, have a default encoding for convenience, documents are routinely mangled by sloppy programmers who think that they live in an ASCII or UTF-8 fairy land and that they don't need tight control of the encoding of all text that enters and leaves the system. Ceasing to support this obsolete attitude with lenient APIs is the only way forward; being forced to learn that encodings are important is better than, say, discovering unrecoverable data corruption in a working system. Regards, Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
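A two-line illustration of why there is no safe guess: the same bytes decode to different, equally "valid" text under two common encodings.

data = b'\xc3\xa8'                  # the UTF-8 encoding of 'è'
print(data.decode('utf-8'))         # è
print(data.decode('latin-1'))       # Ã¨  - no error, silently wrong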
Re: XML-schema 'best practice' question
On 18 Set, 08:28, Frank Millman <[EMAIL PROTECTED]> wrote: > I am thinking of adding a check to see if a document has changed since > it was last validated, and if not, skip the validation step. However, > I then do not get the default values filled in. > > I can think of two possible solutions. I just wondered if this is a > common design issue when it comes to xml and schemas, and if there is > a 'best practice' to handle it. > > 1. Don't use default values - create the document with all values > filled in. > > 2. Use python to check for missing values and fill in the defaults > when processing the document. > > Or maybe the best practice is to *always* validate a document before > processing it. The stated problem rings a lot of premature optimization bells; performing the validation and default-filling step every time, unconditionally, is certainly the least crooked approach. In case you really want to avoid unnecessary schema processing, if you are willing to use persistent data to check for changes (for example, by comparing a hash or the full text of the current document with the one from the last time you performed validation) you can also store the filled-in document that you computed, either as XML or as serialized Python data structures. Regards, Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
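If you do go down that road, the change check itself is only a few lines with hashlib and pickle (the file names and the validate_and_fill_defaults function are placeholders for your own code):

import hashlib
import os
import pickle

def digest_of(path):
    with open(path, 'rb') as f:
        return hashlib.sha1(f.read()).hexdigest()

CACHE = 'validated.cache'                  # stores (digest, filled-in document)
current = digest_of('document.xml')
filled_in = None

if os.path.exists(CACHE):
    with open(CACHE, 'rb') as f:
        saved_digest, filled_in = pickle.load(f)
    if saved_digest != current:
        filled_in = None                   # the document changed, the cache is stale

if filled_in is None:
    filled_in = validate_and_fill_defaults('document.xml')   # the expensive step
    with open(CACHE, 'wb') as f:
        pickle.dump((current, filled_in), f)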
Re: XML-schema 'best practice' question
On 20 Set, 07:59, Frank Millman <[EMAIL PROTECTED]> wrote: > I want to introduce an element of workflow management (aka Business > Process Management) into the business/accounting system I am > developing. I used google to try to find out what the current state of > the art is. After several months of very confusing research, this is > the present situation, as best as I can figure it out. What is the state of the art of existing, working software? Can you leverage it instead of starting from scratch? For example, the existing functionality of your accounting software can be reorganized as a suite of components, web services etc. that can be embedded in workflow definitions, and/or executing a workflow engine can become a command in your application. > There is an OMG spec called BPMN, for Business Process Modeling > Notation. It provides a graphical notation [snip] > there is no standard way > of exchanging a diagram between different vendors, or of using it as > input to a workflow engine. So BPMN is mere theory. This "spec" might be a reference for evaluating actual systems, but not a standard itself. > There is an OASIS spec called WS-BPEL, for Web Services Business > Process Execution Language. It defines a language for specifying > business process behavior based on Web Services. This does have a > formal xml-based specification. However, it only covers processes > invoked via web services - it does not cover workflow-type processes > within an organisation. To try to fill this gap, a few vendors got > together and submitted a draft specification called BPEL4People. This > proposes a series of extensions to the WS-BPEL spec. It is still at > the evaluation stage. Some customers pay good money for buzzword compliance, but are you sure you want to be so bleeding edge that you care not only for WS- something specifications, but for "evaluation stage" ones? There is no need to wait for BPEL4People before designing workflow systems with human editing, approval, etc. Try looking into case studies of how BPEL is actually used in practice. > The BPMN spec includes a section which attempts to provide a mapping > between BPMN and BPEL, but the authors state that there are areas of > incompatibility, so it is not a perfect mapping. Don't worry, BPMN does not exist: there is no incompatibility. On the other hand, comparing and understanding BPMN and BPEL might reveal different purposes and weaknesses between the two systems and help you distinguish what you need, what would be cool and what is only a bad idea or a speculation. > Eventually I would like to make sense of all this, but for now I want > to focus on BPMN, and ignore BPEL. I can use wxPython to design a BPMN > diagram, but I have to invent my own method of serialising it so that > I can use it to drive the business process. For good or ill, I decided > to use xml, as it seems to offer the best chance of keeping up with > the various specifications as they evolve. If you mean to use workflow architectures to add value to your business and accounting software, your priority should be executing workflows, not editing workflow diagrams (which are a useful but unnecessary user interface layer over the actual workflow engine); making your diagrams and definitions compliant with volatile and unproven specifications should come a distant last. > I don't know if this is of any interest to anyone, but it was > therapeutic for me to try to organise my thoughts and get them down on > paper. 
I am not expecting any comments, but if anyone has any thoughts > to toss in, I will read them with interest. 1) There are a number of open-source or affordable workflow engines, mostly BPEL-compliant and written in Java; they should be more useful than reinventing the wheel. 2) With a good XML editor you can produce the workflow definitions, BPEL or otherwise, that your workflow engine needs, and leave the interactive diagram editor for a phase 2 that might not necessarily come; text editing might be convenient enough for your users, and for graphical output something simpler than an editor (e.g a Graphviz exporter) might be enough. 3) Maybe workflow processing can grow inside your existing accounting application without the sort of "big bang" redesign you seem to be planning; chances are that the needed objects are already in place and you only need to make workflow more explicit and add appropriate new features. Regards, Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
Re: Pyfora, a place for python
On Nov 1, 8:06 am, Saketh wrote: > Hi everyone, > > I am proud to announce the release of Pyfora (http://pyfora.org), an > online community of Python enthusiasts to supplement comp.lang.python > and #python. While the site is small right now, please feel free to > register and post any questions or tips you may have. I'll feel free to not even bookmark it. I'm sorry, but it is just a bad idea. Your forum cannot (and should not) compete either with Python's official newsgroup, IRC channel and mailing list or with popular, well-made and well-frequented general programming sites like stackoverflow.com. It would be the Internet equivalent of looking for a poker tournament in a desert valley instead of driving half an hour less and going to Las Vegas: there are no incentives to choose your forum, except perhaps for isolationists who value being a big fish in a small pond over being part of a community. If you want to claim a small Python-related corner of the web, you should write a blog: if it is any good, and probably even if it isn't, it would be linked and read by someone and it would add to collective knowledge instead of fragmenting it. Regards, Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
Re: Pyfora, a place for python
On Nov 3, 11:37 am, Steven D'Aprano wrote: > On Tue, 03 Nov 2009 02:11:59 -0800, Lorenzo Gatti wrote: [...] > Are you saying that now that comp.lang.python and stackoverflow exists, > there no more room in the world for any more Python forums? > > I think that's terrible. Although there is a high barrier to entry for general Python forums, it is not a problem because the door is always open for specialized forums that become the natural "home" of some group or thought leader or of some special interest, for example the forum of a new software product or of the fans of an important blog. Unfortunately, pyfora.org has neither a distinct crowd behind it nor an unique topic, and thus no niche to fill; it can only contribute fragmentation, which is unfortunate because Saketh seems enthusiastic. What in some fields (e.g. warez forums or art boards) would be healthy redundancy and competition between sites and forums becomes pure fragmentation if the only effect of multiple forums is to separate the same questions and opinions that would be posted elsewhere from potential readers and answerers. Reasonable people know this and post their requests for help and discussions either in the same appropriate places as everyone else or in random places they know and like; one needs serious personal issues to abandon popular forums for obscure ones. > Saketh, would you care to give a brief explanation for sets your forum > apart from the existing Python forums, and why people should choose to > spend time there instead of (or as well as) the existing forums? What > advantages does it have? That's the point, I couldn't put it better. > > It would be the Internet equivalent of looking for a poker tournament in > > a desert valley instead of driving half an hour less and going to Las > > Vegas: > > [...] > How about avoiding the noise and obtrusive advertising and bright lights > of Las Vegas, the fakery, the "showmanship", > [...] > if you're interested in poker without all the mayonnaise, maybe > that poker tournament away from the tourists is exactly what you need. I didn't explain my similitude clearly: I was comparing the fitness for purpose of going to Las Vegas with a plan to gamble with the absurdity of stopping, say, at an isolated gas station in the hope of finding a poker tournament there. If you are hinting that popular newsgroups and forums might be so full of fakery, showmanship, mayonnaise, etc. to deserve secession, it's another topic. Regards, Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
Re: Choosing GUI Module for Python
On Nov 9, 9:01 pm, Simon Hibbs wrote: > The main objection to using PyQT untill now was that for commercial > development you needed to buy a license (it was free for GPL > projects). That's rapidly becoming a non-issue as the core QT > framework is now LGPL and Nokia have a project underway to produce > PyQT compatible LGPL python bindings under the PySide project. I also would like to use PySide, but unlike PyQt and Qt itself it doesn't seem likely to support Windows in the foreseeable future. A pity, to put it mildly. Regards, Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
Re: Choosing GUI Module for Python
On Nov 10, 11:08 pm, Simon Hibbs wrote: > Since QT runs on Windows, > porting to the Windows version of QT shouldn't be hard. The PySide developers, who are better judges of their own project than you and me, consider a Windows port so hard (and time consuming) that they didn't even try; a second iteration of the already working binding generator has a higher priority than supporting a large portion of the potential user base with a Windows port, so don't hold your breath. On a more constructive note, I started to follow the instructions at http://www.pyside.org/docs/pyside/howto-build/index.html (which are vague and terse enough to be cross-platform) with Microsoft VC9 Express. Hurdle 0: recompile Qt because the provided DLLs have hardcoded wrong paths that confuse CMake. How should Qt be configured? My first compilation attempt had to be aborted (and couldn't be resumed) after about 2 hours: trial and error at 1-2 builds per day could take weeks. Regards, Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
Re: Choosing GUI Module for Python
On Nov 11, 9:48 am, Lorenzo Gatti wrote: > On a more constructive note, I started to follow the instructions > at http://www.pyside.org/docs/pyside/howto-build/index.html (which are > vague and terse enough to be cross-platform) with Microsoft VC9 > Express. > Hurdle 0: recompile Qt because the provided DLLs have hardcoded wrong > paths that confuse CMake. > How should Qt be configured? My first compilation attempt had to be > aborted (and couldn't be resumed) after about 2 hours: trial and error > at 1-2 builds per day could take weeks. Update: I successfully compiled Qt (with WebKit disabled since it gives link errors), as far as I can tell, and I'm now facing apiextractor. Hurdle 1a: convince CMake that I actually have Boost headers and compiled libraries. The Boost directory structure is confusing (compiled libraries in two places), and CMake's script (FindBoost.cmake) is inconsistent (should I set BOOST_INCLUDEDIR or BOOST_INCLUDE_DIR?), obsolete (last known version is 1.38 rather than the requisite 1.40) and rather fishy (e.g. hardcoded "c:\boost" paths). Would the CMake-based branch of Boost work better? Any trick or recipe to try? Hurdle 1b: the instructions don't mention a dependency on libxml2. Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
Re: random number including 1 - i.e. [0,1]
On 10 Giu, 06:23, Esmail wrote: > Here is part of the specification of an algorithm I'm implementing that > shows the reason for my original query: > > vid = w * vid + c1 * rand() * (pid - xid) + c2 * Rand() * (pgd - xid)   (1a) > > xid = xid + vid   (1b) > > where c1 and c2 are two positive constants, > rand() and Rand() are two random functions in the range [0,1], > and w is the inertia weight. 1) I second John Yeung's suggestion: use random integers between 0 and N-1 or N inclusive and divide by N to obtain a maximum value of (N-1)/N or 1 as you prefer. Note that N doesn't need to be very large. 2) I'm not sure a pseudo-closed range is different from a pseudo-open one. You are perturbing vid and xid by random amounts, scaled by arbitrary coefficients c1 and c2: if you multiply or divide these coefficients by (N-1)/N the minimum and maximum results for the two choices can be made identical up to floating point mangling. Regards, Lorenzo Gatti -- http://mail.python.org/mailman/listinfo/python-list
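A concrete version of suggestion 1 (a sketch; N only needs to be large enough for the resolution the algorithm cares about):

import random

N = 10 ** 6

def rand_closed():
    # Uniform on [0, 1]: both endpoints can be returned.
    return random.randint(0, N) / float(N)

def rand_half_open():
    # Uniform on [0, 1): 1.0 is never returned, like random.random().
    return random.randint(0, N - 1) / float(N)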