ANN: Version 0.5.1 of the Python config module has been released.
What Does It Do? The CFG configuration format is a text format for configuration files which is similar to, and a superset of, the JSON format. It has the following aims:

* Allow a hierarchical configuration scheme with support for key-value mappings and lists.
* Support cross-references between one part of the configuration and another.
* Provide a string interpolation facility to easily build up configuration values from other configuration values.
* Provide the ability to compose configurations (using include and merge facilities).
* Provide the ability to access real application objects safely.
* Be completely declarative.

It overcomes a number of drawbacks of JSON when used as a configuration format:

* JSON is more verbose than necessary.
* JSON doesn't allow comments.
* JSON doesn't provide first-class support for dates and multi-line strings.
* JSON doesn't allow trailing commas in lists and mappings.
* JSON doesn't provide easy cross-referencing, interpolation, or composition.

The Python config module provides an interface to work with configuration files written in the CFG format. Comprehensive documentation is available at https://docs.red-dove.com/cfg/index.html and you can report issues / enhancement requests at https://github.com/vsajip/py-cfg-lib/issues

As always, your feedback is most welcome (especially bug reports, patches and suggestions for improvement). Enjoy!

Cheers, Vinay Sajip
--
https://mail.python.org/mailman/listinfo/python-list
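[A small illustrative sketch of what a CFG file might look like, based on the features listed in the announcement — hierarchy, comments, cross-references and interpolation. The exact syntax shown here is an assumption; see the linked documentation for the authoritative grammar.]

```text
# comments are allowed, unlike JSON
server:
{
  host: 'example.com',
  port: 8080,        # trailing commas are fine
}
# a cross-reference plus interpolation, building one value from others
# (assumed ${...} reference syntax - consult the docs)
url: 'http://${server.host}:${server.port}/'
```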
Re: on writing a while loop for rolling two dice
On 11/09/2021 18.03, Chris Angelico wrote:
> On Sat, Sep 11, 2021 at 3:26 PM dn via Python-list wrote:
>>
>> On 31/08/2021 01.50, Chris Angelico wrote:
>>> On Mon, Aug 30, 2021 at 11:13 PM David Raymond wrote:
>>>> def how_many_times():
>>>>     x, y = 0, 1
>>>>     c = 0
>>>>     while x != y:
>>>>         c = c + 1
>>>>         x, y = roll()
>>>>     return c, (x, y)
>>>>
>>>> Since I haven't seen it used in answers yet, here's another option
>>>> using our new walrus operator:
>>>>
>>>> def how_many_times():
>>>>     roll_count = 1
>>>>     while (rolls := roll())[0] != rolls[1]:
>>>>         roll_count += 1
>>>>     return (roll_count, rolls)
>>>
>>> Since we're creating solutions that use features in completely
>>> unnecessary ways, here's a version that uses collections.Counter:
>>>
>>> def how_many_times():
>>>     return next((count, rolls) for count, rolls in
>>>                 enumerate(iter(roll, None)) if len(Counter(rolls)) == 1)
>>>
>>> Do I get bonus points for it being a one-liner that doesn't fit in
>>> eighty characters?
>>
>> Herewith my claim to one-liner fame (assuming such leads in any way to
>> virtue or fame).
>>
>> It retains @Peter's preference for a more re-usable roll_die() which
>> returns a single event, cf the OP's roll() which returns two results.
>>
>> import itertools, random
>>
>> def roll_die():
>>     while True:
>>         yield random.randrange(1, 7)
>>
>> def how_many_times():
>>     return list(itertools.takewhile(lambda r: r[0] != r[1],
>>                                     zip(roll_die(), roll_die())))
>>
>> Also, a claim for 'bonus points' because the one-liner will fit within
>> 80 characters - if only I didn't have that pernicious and vile habit of
>> coding a more readable layout.
>>
>> It doesn't use a two-arg iter, but still rates because it does use a
>> relatively-obscure member of the itertools library...
>
> Nice, but that's only going to give you the ones that don't match. You
> can then count those, and that's a start, but how do you capture the
> matching rolls?
>
> I smell another opportunity for gratuitous use of a language feature:
> nonlocal.
In a lambda function. Which may require shenanigans of epic proportions.

The stated requirement is: "I'd like to get the number of times I tried". Given such: why bother with returning any of the pairs of values?

Further, if you look at the OP's original solution, it only publishes the last pair, i.e. the match, without mention of the list of non-matches. Was it perhaps only a means of testing the solution?

Regret that I'll settle for (or continue to seek) 'fame'. I don't play guitar, so have no use for epic.
--
Regards,
=dn
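[Chris's objection — takewhile only yields the non-matching rolls — can be met by capturing each roll through a closure side effect. A hedged sketch; `roll()` here is a stand-in for the OP's two-dice function, which is not shown in the thread:]

```python
import itertools
import random

def roll():
    # Stand-in for the OP's roll(): one throw of two six-sided dice.
    return random.randrange(1, 7), random.randrange(1, 7)

def how_many_times():
    """Count throws until the dice match, and also return the matching pair."""
    last = {}

    def observe(pair):
        last['pair'] = pair          # side effect: remember every roll seen
        return pair[0] != pair[1]    # takewhile predicate: keep going on mismatch

    # takewhile stops at (and swallows) the first matching roll,
    # but the closure has already recorded it in last['pair'].
    misses = sum(1 for _ in itertools.takewhile(observe, iter(roll, None)))
    return misses + 1, last['pair']

random.seed(1)
count, pair = how_many_times()
print(count, pair)
```

The side-effecting predicate is exactly the kind of shenanigan the thread is joking about; a plain while loop is clearer in real code.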
Re: on writing a while loop for rolling two dice
On 2021-09-08 13:07:47 +1200, Greg Ewing wrote:
> On 8/09/21 2:53 am, Grant Edwards wrote:
> > #define IF if (
> > #define THEN ) {
> > #define ELSE } else {
> > #define ENDIF }
>
> I gather that early versions of some of the Unix utilities were
> written by someone who liked using macros to make C resemble Algol.

Steve Bourne, the author of the eponymous shell.

hp
--
Peter J. Holzer | h...@hjp.at | http://www.hjp.at/
"Story must make more sense than reality." -- Charles Stross, "Creative writing challenge!"
Re: Friday Finking: Contorted loops
On 2021-09-10 12:26:24 +0100, Alan Gauld via Python-list wrote:
> On 10/09/2021 00:47, Terry Reedy wrote:
> > even one loop is guaranteed.) "do-while" or "repeat-until" is even rarer
> > since fractional-loop includes this as a special case.
>
> Is there any empirical evidence to support this?
> Or is it just a case of using the tools that are available?
> In my experience of using Pascal (and much later with Delphi)
> I used repeat loops at least as often as while loops,
> possibly more.
>
> But using Python and to a lesser extent C (which has a
> rather horrible do/while construct) ...

How is C's do/while loop more horrible than Pascal's repeat/until? They seem almost exactly the same to me (the differences I see are the inverted condition (debatable which is better) and the added block delimiters (which I actually like)).

> So is it the case that the "need" for repeat loops is
> rare, simply a result of there being no native repeat
> loop available?

A tiny non-representative data point: In an old collection of small C programs of mine I find:

35 regular for loops
28 while loops
2 infinite for loops
1 "infinite" for loop (i.e. it exits somewhere in the middle)
0 do/while loops

So even though do/while loops are available in C (and I don't find them horrible), I apparently found very little use for them. (I'm sure if I look through more of my C programs I'll find a few examples, but this small sample shows they are rare.)

hp
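[For comparison with the C/Pascal discussion: Python has no do/while at all, and the usual emulation of a repeat/until loop — body guaranteed to run at least once — is a sketch like this (the die-rolling body is just an arbitrary example):]

```python
import random

def repeat_until_six():
    """Roll one die until a 6 appears; the body always executes at least once."""
    throws = 0
    while True:          # Python's idiom for: do { ... } while (value != 6);
        throws += 1
        value = random.randrange(1, 7)
        if value == 6:   # the "until" condition, tested at the bottom
            break
    return throws

random.seed(0)
print(repeat_until_six())
```

The `while True: ... if cond: break` shape puts the test at the bottom, exactly where repeat/until puts it, which may partly explain why the lack of a native construct is rarely felt.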
Re: on floating-point numbers
On 2021-09-05 22:32:51 -, Grant Edwards wrote:
> On 2021-09-05, Peter J. Holzer wrote:
[on the representability of fractional numbers as floating point numbers]
> And once you understand that, ignore it and write code under the
> assumption that nothing can be exactly represented in floating
> point. In almost all cases even the input values aren't exact.
>
> If you like, you can assume that 0 can be exactly represented without
> getting into too much trouble as long as it's a literal constant value
> and not the result of any run-time FP operations.
>
> If you want to live dangerously, you can assume that integers with
> magnitude less than a million can be exactly represented. That
> assumption is true for all the FP representations I've ever used,

If you know nothing about the FP representation you use, you could do that (however, there is half-precision (16-bit) floating point, which has an even shorter mantissa). But if you are that conservative, you should be equally conservative with your integers, which probably means you can't depend on more than 16 bits (±32767).

However, we are using Python here, which means we have at least 9 decimal digits of usable mantissa (https://docs.python.org/3/library/stdtypes.html#numeric-types-int-float-complex somewhat unhelpfully states that "[f]loating point numbers are usually implemented using double in C", but refers to https://docs.python.org/3/library/sys.html#sys.float_info which in turn refers directly to the DBL_* constants from C99; so DBL_EPSILON is at most 1E-9, in practice almost certainly less than 1E-15).

> but once you start depending on it, you're one stumble from the edge
> of the cliff.

I think this attitude will prevent you from using floating point numbers when you could, reinventing the wheel, probably badly.

hp
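[Grant's "integers below a million" rule is very conservative for the IEEE doubles Python actually uses: every integer of magnitude up to 2**53 is exact. A quick check:]

```python
import sys

# CPython floats are C doubles: a 53-bit significand.
assert sys.float_info.mant_dig == 53

limit = 2 ** 53
# Every integer up to the limit round-trips exactly through float.
assert float(limit) == limit
assert float(limit - 1) == limit - 1

# One past the limit, exactness ends: 2**53 + 1 rounds back to 2**53.
assert float(limit + 1) == float(limit)
print(limit)  # 9007199254740992
```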
Re: on floating-point numbers
On 2021-09-05 23:21:14 -0400, Richard Damon wrote:
> > On Sep 5, 2021, at 6:22 PM, Peter J. Holzer wrote:
> > On 2021-09-04 10:01:23 -0400, Richard Damon wrote:
> > > > On 9/4/21 9:40 AM, Hope Rouselle wrote:
> > > > Hm, I think I see what you're saying. You're saying multiplication and
> > > > division in IEEE 754 is perfectly safe --- so long as the numbers you
> > > > start with are accurately representable in IEEE 754 and assuming no
> > > > overflow or underflow would occur. (Addition and subtraction are not
> > > > safe.)
> > >
> > > Addition and subtraction are just as safe, as long as you stay within
> > > the precision limits.
> >
> > That depends a lot on what you call "safe":
> >
> > a * b / a will always be very close to b (unless there's an over- or
> > underflow), but a + b - a can be quite different from b.
> >
> > In general, when analyzing a numerical algorithm you have to pay a lot
> > more attention to addition and subtraction than to multiplication and
> > division.
>
> Yes, it depends on your definition of safe. If 'close' is good enough
> then multiplication is probably safer, as the problems are in more
> extreme cases. If EXACT is the question, addition tends to be better.
> To have any chance, the numbers need to be somewhat low 'precision',
> which means the need to avoid arbitrary decimals.

If you have any "decimals" (i.e. decimal digits to the right of your decimal point) then the input values won't be exactly representable, and the nearest representation will use all available bits, thus losing some precision with most additions.

> Once past that, as long as the numbers are of roughly the same
> magnitude, and are the sort of numbers you are apt to just write, you
> can tend to add a lot of them before you get enough bits to accumulate
> to have a problem.

But they won't be exact. You may not care about rounding errors in the tenth digit after the point, but you are only close, not exact.
So if you are fine with a tiny rounding error here, why are you upset about equally tiny rounding errors on multiplication?

> With multiplication, every multiply roughly adds the number of bits of
> precision, so you quickly run out, and one divide will have a chance
> to just end the process.

Nope. The relative error stays the same, unlike for addition where it can get very large very quickly.

hp
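[Peter's `a * b / a` versus `a + b - a` distinction is easy to demonstrate. A sketch with deliberately mismatched magnitudes — 1e16 sits just beyond 2**53, so the double's spacing there is 2 and the small addend cannot survive:]

```python
a = 1e16   # exactly representable, but neighbouring doubles are 2 apart here
b = 3.0

# Multiplication/division: the relative error stays tiny (here it is exact,
# since 3e16 is still an exactly representable integer).
print(a * b / a)    # 3.0

# Addition/subtraction: a + b must round to a multiple of 2 near 1e16,
# so b is damaged on the way in; the subtraction then exposes the loss
# (catastrophic cancellation).
print(a + b - a)    # 2.0 or 4.0, but not 3.0
```

This is why numerical analysis worries far more about sums of mixed magnitudes than about products.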
Re: on floating-point numbers
On Sun, Sep 12, 2021 at 1:07 AM Peter J. Holzer wrote:
> If you have any "decimals" (i.e. decimal digits to the right of your
> decimal point) then the input values won't be exactly representable and
> the nearest representation will use all available bits, thus losing some
> precision with most additions.

That's an oversimplification, though - numbers like 12345.03125 can be perfectly accurately represented, since the fractional part is a (negative) power of two.

The perceived inaccuracy of floating point numbers comes from an assumption that a string of decimal digits is exact, and the computer's representation of it is not. If I put this in my code:

ONE_THIRD = 0.3

then you know full well that it's not accurate, and that's nothing to do with IEEE floating-point! The confusion comes from the fact that one fifth (0.2) can be represented precisely in decimal, and not in binary.

Once you accept that "perfectly representable numbers" aren't necessarily the ones you expect them to be, 64-bit floats become adequate for a huge number of tasks. Even 32-bit floats are pretty reliable for most tasks, although I suspect that there's little reason to use them now - would be curious to see if there's any performance benefit from restricting to the smaller format, given that most FPUs probably have 80-bit or wider internal registers.

ChrisA
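[Both of Chris's examples can be verified with the fractions module, which converts a float to the exact rational value it stores:]

```python
from fractions import Fraction

# 12345.03125 = 12345 + 1/32, and 1/32 = 2**-5, so the float is exact:
assert Fraction(12345.03125) == Fraction(395041, 32)

# 0.2 is exact in decimal but not in binary: the stored value is the
# nearest double to 1/5, not 1/5 itself.
assert Fraction(0.2) != Fraction(1, 5)
print(Fraction(0.2))  # the exact binary value actually stored
```

`Fraction(some_float)` is a handy way to see precisely which "perfectly representable number" a literal became.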
Re: Friday Finking: Contorted loops
On 10/09/2021 19:49, Stefan Ram wrote:
> Alan Gauld writes:
> > OK, that's a useful perspective that is at least consistent.
> > Unfortunately it's not how beginners perceive it
> ...
>
> Beginners perceive it the way it is explained to them by
> their teacher.

I'm not sure that's true. Most beginners, in my experience, learn the syntax from their teachers and then go off and play. What they observe happening is what sticks. And Python loop 'else' constructs appear inconsistent to them.

As teachers we like to think we are passing on our wisdom to our students, but in reality everyone learns from their own experience. The teacher's advice is just the starting point. Hopefully, that starting point sends them in the right direction, but that's the best we can hope for.

--
Alan G
Author of the Learn to Program web site
http://www.alan-g.me.uk/
http://www.amazon.com/author/alan_gauld
Follow my photo-blog on Flickr at:
http://www.flickr.com/photos/alangauldphotos
Re: on writing a while loop for rolling two dice
On 11/09/2021 10:09, dn via Python-list wrote:
> The stated requirement is: "I'd like to get the number of times I
> tried". Given such: why bother with returning any of the pairs of
> values?

Indeed, if that's the requirement, then you can do even better, noting that the probability of getting a matched pair is 1/6 (6 matches out of 6*6 possibilities). So the answer to the problem is exactly the same as rolling a single die until you get any particular number (e.g., 1). This is somewhat easier to simulate than the two-dice problem (and the number of throws until a match also follows a known, analytic distribution that you could sample from, but this is probably easier).
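[The claimed equivalence is easy to check empirically. A sketch comparing the two simulations — both throw counts are geometric with p = 1/6, so both sample means should approach 6:]

```python
import random

def throws_until_pair():
    """Roll two dice until they match; return the number of throws."""
    count = 1
    while random.randrange(1, 7) != random.randrange(1, 7):
        count += 1
    return count

def throws_until_one():
    """Roll a single die until it shows 1; return the number of throws."""
    count = 1
    while random.randrange(1, 7) != 1:
        count += 1
    return count

random.seed(0)
trials = 100_000
mean_pair = sum(throws_until_pair() for _ in range(trials)) / trials
mean_one = sum(throws_until_one() for _ in range(trials)) / trials
print(mean_pair, mean_one)  # both close to 6
```

With 100,000 trials the two means agree to within a few hundredths, matching the analytic expectation 1/p = 6.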
Re: on floating-point numbers
On 2021-09-12 01:40:12 +1000, Chris Angelico wrote:
> On Sun, Sep 12, 2021 at 1:07 AM Peter J. Holzer wrote:
> > If you have any "decimals" (i.e. decimal digits to the right of your
> > decimal point) then the input values won't be exactly representable and
> > the nearest representation will use all available bits, thus losing some
> > precision with most additions.
>
> That's an oversimplification, though - numbers like 12345.03125 can be
> perfectly accurately represented, since the fractional part is a
> (negative) power of two.

Yes. I had explained that earlier in this thread.

> The perceived inaccuracy of floating point numbers comes from an
> assumption that a string of decimal digits is exact, and the
> computer's representation of it is not. If I put this in my code:
>
> ONE_THIRD = 0.3
>
> then you know full well that it's not accurate, and that's nothing to
> do with IEEE floating-point! The confusion comes from the fact that
> one fifth (0.2) can be represented precisely in decimal, and not in
> binary.

Exactly.

> Once you accept that "perfectly representable numbers" aren't
> necessarily the ones you expect them to be, 64-bit floats become
> adequate for a huge number of tasks.

Yep. That's what I was trying to convey.

> Even 32-bit floats are pretty reliable for most tasks, although I
> suspect that there's little reason to use them now - would be curious
> to see if there's any performance benefit from restricting to the
> smaller format, given that most FPUs probably have 80-bit or wider
> internal registers.

AFAIK C compilers on the 64-bit AMD/Intel architecture don't use the x87 ABI any more; they use the various vector extensions (SSE, etc.) instead. Those have hardware support for 64- and 32-bit FP values, so 32 bit is probably faster, if only because you can cram more of them into a register.
Modern GPUs now have 16-bit FP numbers - those are perfectly adequate for neural networks and also some graphics tasks, and you can transfer twice as many per memory cycle ...

hp
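[The half-precision format Peter mentions can be inspected from pure Python: the struct module's 'e' format code packs IEEE 754 binary16. A sketch of how short its 11-bit significand really is:]

```python
import struct

def to_half_and_back(x):
    """Round-trip a float through IEEE 754 half precision (16-bit)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# binary16 has an 11-bit significand, so integers are exact only up to 2**11:
assert to_half_and_back(2048.0) == 2048.0
assert to_half_and_back(2049.0) != 2049.0   # rounds to a neighbouring value

# And 0.1 is noticeably further off than in a 64-bit double:
print(to_half_and_back(0.1))
```

Adequate for network weights where a few parts in ten thousand of error don't matter; hopeless for Grant's "integers below a million" rule.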
RE: Friday Finking: Contorted loops
Alan and others,

I think human languages used to make computer languages will often cause confusion. Some languages have an IF .. ELSE construct but also an EITHER ... OR and a NEITHER ... NOR, and other twists and turns, like words that sometimes come apart so that you end up having to dangle a part that was at the front of a word until later in the sentence, and so on.

But I suspect many languages do NOT naturally have a construct like WHILE ... ELSE. They might have a sentence like "While it is sunny you should use sunscreen but when it rains use an umbrella." It probably is even a tad deceptive to use WHILE in one part and not in the other. Perfectly valid sentences are "When going outside, if it is sunny use sunscreen but if it is rainy use an umbrella", or skip the "while" and use a more standard if/else. The word "while" just does not feel like a partner for "else".

So say you want to have a loop starting with WHILE followed by a single ELSE clause. Arguably you could make WHILE as a construct return a status of sorts: true if it runs at all, or perhaps if it exits after at least one iteration because the condition evaluates to FALSE; false if you exit with a BREAK or by an error; and perhaps no return at all if you do a return from within. So if you made up a syntax like:

IF (WHILE condition {...}) ELSE {...}

then what would that mean? Again, this is a make-believe construct. In the above, if WHILE returned a true of some sort, the ELSE is skipped. Otherwise, no matter what has been done within the while loop, it is done.

But as noted, we have odd choices here potentially. Could we differentiate between a BREAK statement within, and something like a BREAK OK variant that means the while is to be treated as having succeeded, so please do not do the trailing ELSE? I can see many possible ways to design things and cannot expect humans to automatically assume the specific nomenclature will be meaningful to them.
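[For reference, Python's real while/else already behaves much like the imagined IF (WHILE ...) ELSE: the else suite runs only when the loop ends because the condition became false, and is skipped on break. A minimal sketch:]

```python
def find_first_even(numbers):
    """Return the first even number, or None if none is found."""
    i = 0
    while i < len(numbers):
        if numbers[i] % 2 == 0:
            result = numbers[i]
            break          # leaving via break SKIPS the else clause
        i += 1
    else:
        # Reached only when the condition (i < len(numbers)) became false,
        # i.e. the loop ran to completion without a break.
        result = None
    return result

print(find_first_even([1, 3, 4, 5]))  # 4
print(find_first_even([1, 3, 5]))     # None
```

Reading `else` as "nobreak" (as several tutorials suggest) resolves most of the confusion the thread describes.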
There is an alternative for people who are not damn sure what the meaning is. Create a variable set to False or True before the WHILE is entered, to represent something. Then make sure your code flips that value in case you want to ensure a trailing statement is run. Then, following the while, you place an IF statement that tests that variable and does (or doesn't do) what the ELSE clause would have done.

Looking at other constructs, look at this code with a try:

i = 0
while i < 5:
    try:
        assert i != 3   # raises an AssertionError if i == 3
        print("i={0}".format(i))
    except AssertionError:
        continue
    finally:
        i += 1          # increment i

Now attach an ELSE clause to the WHILE, LOL! At some point, can some humans decide just not to write the code this way?

What about code that uses CONTINUE to the point where you enter the WHILE statement and get a secondary IF or something that keeps triggering a CONTINUE to start the next iteration? Arguably, this can effectively mean the WHILE loop did nothing. An example would be evaluating the contents of a data structure like a list, adding all numeric items together and ignoring any that are character strings. Given all characters, no summation is done. The first statement in the loop tests a list item and does a CONTINUE. But by the rules as I see them, the loop was entered. Yet a similar loop written where the WHILE condition simply tests whether ANY item is numeric might drop right through to an ELSE clause.

Bottom line: humans do not all think alike, and language constructs that are clear and logical to one may be confusing or mean the opposite to others. I can even imagine designing an interface like this:

WHILE (condition):
    ...
IF_NOT_RUN:
    ...
IF_EXITED_EARLY:
    ...
IF_ERROR_THROWN:
    ...
ON_PREMATURE_RETURN_DO_THIS:
    ...

I am not suggesting we need critters like that, simply that ELSE is a grab-bag case that can mean many things to many people. But if the specific meaning is clearly documented, use it.
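[The flag-based alternative described above, combined with the sum-the-numeric-items example, as a sketch (function and variable names are illustrative, not from the thread):]

```python
def sum_numeric(items):
    """Sum numeric items, skipping strings; flag whether any number was seen."""
    found_numeric = False      # the explicit flag, set before the loop
    total = 0
    i = 0
    while i < len(items):
        item = items[i]
        i += 1
        if not isinstance(item, (int, float)):
            continue           # skip strings and other non-numbers
        found_numeric = True   # flip the flag: the loop "did something"
        total += item
    # Stand-in for a trailing ELSE: test the flag explicitly.
    if not found_numeric:
        return None            # all characters, so no summation was done
    return total

print(sum_numeric([1, "a", 2.5]))  # 3.5
print(sum_numeric(["a", "b"]))     # None
```

Unlike Python's while/else, the flag distinguishes "loop entered but every iteration continued" from "loop did real work", which is exactly the ambiguity the paragraph above points out.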
Lots of people who program in languages like Python do not necessarily even speak much English and just memorize the keywords.

We can come up with ever more interesting or even bizarre constructs, like multiple WHILEs in a row with each one being called only if the previous one failed to process the data. An example might be if each tests the data type and refuses to work on it, so the next one in line is called. That could perhaps be done by having multiple ELSE statements, each with another WHILE. But is that an ideal way to do this, or would some variant of a switch statement, or a dictionary pointing to functions to invoke, be better?

Time to go do something else of even minor usefulness!

-----Original Message-----
From: Python-list On Behalf Of Alan Gauld via Python-list
Sent: Saturday, September 11, 2021 3:59 AM
To: python-list@python.org
Subject: Re: Friday Finking: Contorted loops

On 10/09/2021 19:49, Stefan Ram wrote:
> Alan Gauld writes:
> > OK, That's a useful perspective that i
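[The dictionary-of-functions alternative mentioned above, as a hedged sketch — the handlers and types are invented for illustration:]

```python
def handle_int(value):
    return value * 2

def handle_str(value):
    return value.upper()

def handle_float(value):
    return round(value)

# Dispatch on type instead of chaining while/else (or if/elif) constructs.
handlers = {int: handle_int, str: handle_str, float: handle_float}

def process(value):
    handler = handlers.get(type(value))
    if handler is None:
        raise TypeError(f"no handler for {type(value).__name__}")
    return handler(value)

print(process(21))      # 42
print(process("abc"))   # ABC
print(process(2.7))     # 3
```

Adding support for a new type means adding one dictionary entry, with no new branch in the control flow.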
RE: Friday Finking: Contorted loops
Peter, in your own personal finite sample, I am wondering what you might do TODAY if you looked at your loops again and considered redoing them, for an assortment of reasons ranging from using the code for teaching, to efficiency, to just fitting your mood better.

I have seen seasoned authors go back to their early work and groan. Some have even reissued earlier work with a partial rewrite, often with a long additional preface explaining why, and even mentioning what was changed and bemoaning how they thought differently back then.

My guess is that many of us (meaning myself included) often approach a problem and go with the first thing that comes to mind. If it fits well enough, we move on to the next thing we can do. If not, we may step back, evaluate multiple additional options and try another tack. I have seen lots of sort-of redundant code because someone did not plan ahead and realize something very similar might be needed later, and thus did not make a general function they could re-use. Occasionally they may later go back and re-do, but often not so much; they just keep copying lines and making minor modifications. Same general idea.

And perhaps worse, you may write a loop and later have to keep adding code to deal with new requirements and special cases, and rather than pause, analyze, and perhaps start again with a cleaner or more easily extendable solution, just keep grafting things on to make the darn current code work. Code that has many ways to exit a loop is often an example of this happening.

So if you looked at your own code now, in the context of the rest of your code, would you change things? In Python, I suspect I would seriously change an amazing number of things for older code, including code being ported. It supports quite a few programming constructs and styles, and has access to plenty of modules that mean you need not re-invent all the time. How many formal loops might you replace with a list comprehension or a generator, NOW?
How many problems you once solved by looping and searching for an element in a list might you now solve with a set or dictionary?

The reality is many people learn the basics of a language and write using fairly basic constructs, and only later master the more advanced topics. But their mature work may then heavily use those later, more effective methods. Functional programming often uses constructs where loops become invisible. Objects often hide loops in all kinds of methods. Sometimes recursion effectively does a loop. It is sometimes easy to write programs with no visible loops.

So when counting the various kinds, are you looking for direct methods only, or indirect ones too, like map/reduce or vectorized operations?

-----Original Message-----
From: Python-list On Behalf Of Peter J. Holzer
Sent: Saturday, September 11, 2021 10:42 AM
To: python-list@python.org
Subject: Re: Friday Finking: Contorted loops

On 2021-09-10 12:26:24 +0100, Alan Gauld via Python-list wrote:
> On 10/09/2021 00:47, Terry Reedy wrote:
> > even one loop is guaranteed.) "do-while" or "repeat-until" is even
> > rarer since fractional-loop includes this as a special case.
>
> Is there any empirical evidence to support this?
> Or is it just a case of using the tools that are available?
> In my experience of using Pascal (and much later with Delphi) I
> used repeat loops at least as often as while loops, possibly more.
>
> But using Python and to a lesser extent C (which has a rather horrible
> do/while construct)

How is C's do/while loop more horrible than Pascal's repeat/until? They seem almost exactly the same to me (the differences I see are the inverted condition (debatable which is better) and the added block delimiters (which I actually like)).

> So is it the case that the "need" for repeat loops is rare, simply a
> result of there being no native repeat loop available?
A tiny non-representative data point: In an old collection of small C programs of mine I find:

35 regular for loops
28 while loops
2 infinite for loops
1 "infinite" for loop (i.e. it exits somewhere in the middle)
0 do/while loops

So even though do/while loops are available in C (and I don't find them horrible), I apparently found very little use for them. (I'm sure if I look through more of my C programs I'll find a few examples, but this small sample shows they are rare.)

hp
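[Avi's question about replacing loops with comprehensions and set lookups can be made concrete. A sketch contrasting the loop-and-search style with the "invisible loop" style (function names are illustrative):]

```python
# Loop-and-search style: O(len(xs) * len(ys)) comparisons.
def has_common_element_loop(xs, ys):
    for x in xs:
        for y in ys:
            if x == y:
                return True
    return False

# Set-based style: roughly O(len(xs) + len(ys)), and no visible loop.
def has_common_element_set(xs, ys):
    return not set(xs).isdisjoint(ys)

# A comprehension replacing a formal accumulate-in-a-loop:
even_squares = [n * n for n in range(10) if n % 2 == 0]

print(has_common_element_loop([1, 2, 3], [3, 4]))  # True
print(has_common_element_set([1, 2], [3, 4]))      # False
print(even_squares)  # [0, 4, 16, 36, 64]
```

A loop census of mature Python code would miss all three "loops" in the second half of this sketch, which is Avi's point about direct versus indirect counting.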