Re: PyMyth: Global variables are evil... WRONG!

2013-11-15 Thread Steven D'Aprano
On Thu, 14 Nov 2013 09:26:18 -0800, Rick Johnson wrote:

> On Wednesday, November 13, 2013 11:50:40 PM UTC-6, Steven D'Aprano
> wrote:
[...]
>> of course, but that in general *it's too damn hard* for human
>> programmers to write good, reliable, maintainable, correct (i.e.
>> bug-free) code using process-wide global variables.
> 
> Complete FUD. Maybe for you. Not for me.

I wasn't talking about genius programmers like you, Rick, that would be 
silly. I'm talking about mere mortals like the rest of us.


>> Global variables are the spaghetti code of namespacing -- everything is
>> mixed up together in one big tangled mess.
> 
> It's a tangled mess if you design it to be a tangled mess.

Nobody sets out to *design* a tangled mess. What normally happens is that 
a tangled mess is the result of *lack of design*.


>> The more global variables you have, the worse the tangle.
> 
> Complete illogic.
> 
> What if all the globals are only accessed and never mutated? 

Then they aren't global VARIABLES. You'll note that I was very careful to 
refer to "variables".

Read-only global constants don't increase coupling to anywhere near the 
same degree as writable global variables. As such, they're far less 
harmful.

Of course, there is still some degree of coupling -- suppose one chunk of 
code wants a global constant X=23 and another chunk of code wants a 
global constant X=42? But such issues are generally easy to spot and easy 
to fix.
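
For instance, a minimal sketch of the usual fix, with namespaces standing 
in for separate modules (the names here are made up for illustration):

from types import SimpleNamespace

# Each chunk of code keeps its constant in its own namespace (in real
# code, its own module), so the two X's never collide.
chunk_a = SimpleNamespace(X=23)
chunk_b = SimpleNamespace(X=42)

print(chunk_a.X, chunk_b.X)   # 23 42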


>> One or two is not too bad. With good conventions for encapsulation to
>> limit the amount of tangled, coupled code (e.g. naming conventions, or
>> limiting globals to a single module at a time by default) the amount of
>> harm can be reduced to manageable levels.
> 
> So now you're agreeing that globals are not evil again.

In this thread, I have never called global variables "evil". I have 
called them *harmful*, and tried to make it clear that harm is not a 
dichotomy "zero harm" versus "infinite harm", but a matter of degree. I 
stand by that.


>> Global variables increase coupling between distant parts of the code.
>> I remember a nasty little bug in Windows where removing IE stopped
>> copy-and-paste from working everywhere. That's a sign of excess
>> coupling between code -- there's no logical reason why removing a web
>> browser should cause copying text in Notepad to fail.
> 
> Do you have a link describing this bug? I am not aware of such a bug, but
> uh, i would not at all be surprised that windows could break from
> removing that gawd awful IE.
> 
> Come to think of it, i'll bet it's not even a bug at all, but a feature
> to prevent "wise users" from removing IE, thereby maintaining IE's
> footprint in the wild.

Heh. 

Sorry, I can't find the link. It was well over five years ago, probably 
more like ten. But whether deliberate or accidental, that's the sort of 
thing I mean when I talk about excessive coupling. Note that coupling in 
and of itself is not harmful -- for example, you want the brake pedal of 
your car to be coupled to the brakes. Excess and inappropriate coupling 
is harmful: pressing the brake pedal shouldn't turn off the headlights, 
nor should a blown brake light stop the brakes from working. Hence we try 
to minimize coupling to only those areas that actually need it.

With physical devices, that's often -- not always -- trivial. The 
constraints of physical matter makes it natural to keep things loosely 
coupled. When you're building a car, the hard part is getting the 
coupling that you actually do want, not avoiding coupling you don't. 
Physical devices are, as a general rule, inherently and naturally 
encapsulated: the brake pedal is physically uncoupled from the brakes 
unless you literally connect them with steel cables and wires. Your 
fridge isn't connected to anything except the power supply, so it 
physically can't flush the toilet. Since the toilet and fridge are made 
in different factories and installed by different people, there's no 
motivation to couple them. Even if some bright spark decided that since 
opening the fridge door turns on the fridge light, and pressing the 
toilet button opens the cistern valve, the two operations are related and 
therefore "Don't Repeat Yourself" applies and there should be a single 
mechanism to do both, it is impractical to build such a system. (And 
thank goodness. But just wait until we have internet-enabled fridges and 
toilets...)

But with software, coupling is *easy*. By default, code in a single 
process is completely coupled. Think of a chunk of machine code running 
in a single piece of memory. We have to build in our own conventions for 
decoupling code: subroutines, local variables, objects, modular code, and 
so forth. Physical objects are inherently decoupled. Code is inherently 
coupled, and we need conventions to decouple it. One of those conventions 
is to prefer local variables to global variables, and another is to limit 
the scope of global variables to per module rather than process-wide.
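
To illustrate the difference those conventions make, a minimal sketch:

# Any code in the process can change `total`, so every caller of
# add_coupled() is potentially coupled to every other one.
total = 0

def add_coupled(x):
    global total
    total += x
    return total

# Here the state flows through the interface instead, so the function is
# coupled only to its own arguments and return value.
def add_decoupled(running_total, x):
    return running_total + x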

Re: Getting globals of the caller, not the defining module

2013-11-15 Thread Steven D'Aprano
On Thu, 14 Nov 2013 20:56:34 +, Rotwang wrote:

[...]
>> How about this?
>>
>> # module A.py
>> import inspect
>> def spam():
>>  return inspect.stack()[1][0].f_globals
> 
> Bump. Did this do what you wanted, or not?


Sort of. If anything, it convinced me that I don't, in fact, want what I 
thought I wanted.

I'm still playing around with the code, but it's looking likely that auto-
detecting the caller's globals is not really what I want.
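
(For anyone following along, a single-file sketch of what that helper hands 
back; the MARKER name is purely for the demo.)

import inspect

def spam():
    # stack()[1] is the caller's frame record; [0] is its frame object
    return inspect.stack()[1][0].f_globals

MARKER = "hello from the calling module"

def caller():
    return spam()["MARKER"]   # spam() sees the *caller's* globals

print(caller())   # hello from the calling module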


-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Django Weekend Cardiff

2013-11-15 Thread D.M. Procida
(With apologies if you have already seen this on another email list or
newsgroup.)

The UK's first-ever Django conference will take place on the 7th-9th
February 2014 in Cardiff, Wales.



The programme for the event:

Friday: tutorials and demonstrations (also open to the public)
Saturday:   talks
Sunday: code sprints and clinics

The conference is Django-focused, but all aspects of Python fall
within its remit - particularly in the tutorials and workshops.

A venue has been booked at Cardiff University.

Registration and ticket sales will open soon, as well as a call for
papers.

To be a success, the conference needs the support of:

*   people in Wales, the UK and beyond who will participate as
attendees or volunteers
*   speakers who'd like to give talks or conduct tutorials
*   organisations locally and internationally willing to provide
sponsorship or other support

If you can offer support, please get in touch.

One of the aims of the conference is to establish it as an annual event
that will raise the profile in Wales of open-source software in general
and Python in particular, and also bolster the local open-source
software community here. 

Above all, however, the intention is to establish the Django Weekend in
Cardiff as a meaningful and enjoyable date in the Django/Python
calendar.

We'll publish updates on our website, our Twitter account and elsewhere
as appropriate.

Daniele
-- 
https://mail.python.org/mailman/listinfo/python-list


understanding someone else's program

2013-11-15 Thread C. Ng
Hi all,

Please suggest how I can understand someone else's program where
- documentation is sparse
- in function A, there will be calls to functions B, C, D, and in those 
functions will be calls to functions R, S, T, and so on and so forth... making it 
difficult to trace what happens to a certain variable

I am using the ERIC4 IDE.

Thanks.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: understanding someone else's program

2013-11-15 Thread Ben Finney
"C. Ng"  writes:

> Please suggest how I can understand someone else's program

Welcome to this forum!

I sympathise with this query. Much of the craft of programming is in
understanding the code written by other programmers, and learning from
that experience how to improve the understandability of the code one
writes.

In general, the answer to your question is: Read a lot of other people's
code, preferably by the side of the programmer who wrote it. Experiment
with a lot of code written by others, and test one's understanding by
improving it and confirming it still works :-)

> where
> - documentation is sparse

Sadly the case for the majority of software any of us will be involved
with maintaining.

> - in function A, there will be calls to function B, C, D and in
> those functions will be calls to functions R,S,T and so on so
> forth... making it difficult to trace what happens to a certain
> variable

This is normal modular programming. Ideally, those functions should each
be doing one conceptually simple task, with a narrowly-defined
interface, and implementing its job by putting together other parts at a
lower level.

Is there something particular about these functions that makes them more
difficult to follow than good code?

-- 
 \  “Generally speaking, the errors in religion are dangerous; |
  `\those in philosophy only ridiculous.” —David Hume, _A Treatise |
_o__)   of Human Nature_, 1739 |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


python 3.3 repr

2013-11-15 Thread Robin Becker

I'm trying to understand what's going on with this simple program

if __name__=='__main__':
print("repr=%s" % repr(u'\xc1'))
print("%%r=%r" % u'\xc1')

On my windows XP box this fails miserably if run directly at a terminal

C:\tmp> \Python33\python.exe bang.py
Traceback (most recent call last):
  File "bang.py", line 2, in 
print("repr=%s" % repr(u'\xc1'))
  File "C:\Python33\lib\encodings\cp437.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_map)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\xc1' in position 6: 
character maps to <undefined>


If I run the program redirected into a file then no error occurs and the 
result looks like this


C:\tmp>cat fff
repr='┴'
%r='┴'

and if I run it into a pipe it works as though into a file.

It seems that repr thinks it can render u'\xc1' directly, which is a problem 
since print then seems to want to convert that to cp437 if directed to a terminal.


I find the idea that print knows what it's printing to a bit dangerous, but it's 
the repr behaviour that strikes me as bad.


What is responsible for defining the repr function's 'printable' so that repr 
would give me, say, an ASCII rendering?

-confused-ly yrs-
Robin Becker

--
https://mail.python.org/mailman/listinfo/python-list


Re: Program Translation - Nov. 14, 2013

2013-11-15 Thread Clive Page

On 14/11/2013 17:36, Gordon Sande wrote:


Indeed! Under NAGWare Fortran it runs to completion with C=all but pulls an
undefined reference when C=undefined is added.

Lots of obsolete features and other warnings but no compiler error
messages.

The obvious lessons are that 1. Fortran has very good historical continuity
and 2. the good debugging Fortran compilers do a good job.




I would also check it out with FTNCHEK as well - it usually finds lots 
of potential or actual problems with code of this vintage.



--
Clive Page
--
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Ned Batchelder
On Friday, November 15, 2013 6:28:15 AM UTC-5, Robin Becker wrote:
> I'm trying to understand what's going on with this simple program
> 
> if __name__=='__main__':
>   print("repr=%s" % repr(u'\xc1'))
>   print("%%r=%r" % u'\xc1')
> 
> On my windows XP box this fails miserably if run directly at a terminal
> 
> C:\tmp> \Python33\python.exe bang.py
> Traceback (most recent call last):
>    File "bang.py", line 2, in <module>
>  print("repr=%s" % repr(u'\xc1'))
>File "C:\Python33\lib\encodings\cp437.py", line 19, in encode
>  return codecs.charmap_encode(input,self.errors,encoding_map)[0]
> UnicodeEncodeError: 'charmap' codec can't encode character '\xc1' in position 
> 6: 
> character maps to <undefined>
> 
> If I run the program redirected into a file then no error occurs and the 
> result looks like this
> 
> C:\tmp>cat fff
> repr='┴'
> %r='┴'
> 
> and if I run it into a pipe it works as though into a file.
> 
> It seems that repr thinks it can render u'\xc1' directly which is a problem 
> since print then seems to want to convert that to cp437 if directed into a 
> terminal.
> 
> I find the idea that print knows what it's printing to a bit dangerous, but 
> it's 
> the repr behaviour that strikes me as bad.
> 
> What is responsible for defining the repr function's 'printable' so that repr 
> would give me say an Ascii rendering?
> -confused-ly yrs-
> Robin Becker

In Python3, repr() will return a Unicode string, and will preserve existing 
Unicode characters in its arguments.  This has been controversial.  To get the 
Python 2 behavior of a pure-ascii representation, there is the new builtin 
ascii(), and a corresponding %a format string.
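
For concreteness, a quick sketch of the difference (the comments show what 3.3 
produces; whether the repr() form then survives printing still depends on the 
console encoding, as above):

s = u'\xc1'
print(ascii(s))     # '\xc1' -- escaped, like Python 2's repr()
print("%a" % s)     # '\xc1' -- the format-string spelling of ascii()
# repr(s) and "%r" % s both give 'Á', keeping the character unescaped.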

--Ned.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Robin Becker

On 15/11/2013 11:38, Ned Batchelder wrote:
..


In Python3, repr() will return a Unicode string, and will preserve existing 
Unicode characters in its arguments.  This has been controversial.  To get the 
Python 2 behavior of a pure-ascii representation, there is the new builtin 
ascii(), and a corresponding %a format string.

--Ned.



thanks for this; it doesn't make the split across python 2 - 3 any easier.
--
Robin Becker
--
https://mail.python.org/mailman/listinfo/python-list


Best approach to edit linux configuration file

2013-11-15 Thread Himanshu Garg
I have to set up the DNS server.  For this I have to edit the configuration 
files.

For this I have to search whether the lines (a block of text) already exist in the file 
and, if not, add them to the file.

So, I want to know what is the best way to accomplish this.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Best approach to edit linux configuration file

2013-11-15 Thread Chris Angelico
On Sat, Nov 16, 2013 at 12:05 AM, Himanshu Garg  wrote:
> I have to setup the DNS server.  For this I have to edit the configuration 
> files.
>
> For this I have to search if the lines(block of text) already exist in the 
> file and if not, I have to add them to the file.
>
> So, I want to know what is the best way to accomplish this.

Is your script allowed to take complete control of the file, or do you
have to cope with human edits?

If you CAN take control, things are easy. Just keep track of your own
content and match exact lines; as long as you always make consistent
output, you can look for those lines precisely.

But if, as I suspect from your (scanty) description, you can't, then
you'll need to figure out how to identify whether the lines exist or
not. That means text parsing rules. Python can definitely do this;
it's simply a matter of figuring out what you're looking for, what
you're adding, etc.

Configuring DNS is pretty easy for a script to do. I've done it
several times (though only once in Python - other languages other
times).
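
Just to sketch the first case (the file path, zone name and block contents 
below are invented, not anything your setup needs):

MANAGED_BLOCK = """\
; BEGIN block managed by setup script
zone "example.com" IN {
    type master;
    file "/etc/bind/db.example.com";
};
; END block managed by setup script
"""

def ensure_block(path="/etc/bind/named.conf.local", block=MANAGED_BLOCK):
    with open(path, "r+") as f:
        text = f.read()
        if block not in text:              # exact match on our own output
            if text and not text.endswith("\n"):
                f.write("\n")
            f.write(block)                 # append the missing block

if __name__ == "__main__":
    ensure_block()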

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: understanding someone else's program

2013-11-15 Thread Joel Goldstick
On Fri, Nov 15, 2013 at 6:19 AM, Ben Finney  wrote:
> "C. Ng"  writes:
>
>> Please suggest how I can understand someone else's program
>
> Welcome to this forum!
>
> I sympathise with this query. Much of the craft of programming is in
> understanding the code written by other programmers, and learning from
> that experience how to improve the understandability of the code one
> writes.
>
> In general, the answer to your question is: Read a lot of other people's
> code, preferably by the side of the programmer who wrote it. Experiment
> with a lot of code written by others, and test one's understanding by
> improving it and confirming it still works :-)
>
>> where
>> - documentation is sparse
>
> Sadly the case for the majority of software any of us will be involved
> with maintaining.
>
>> - in function A, there will be calls to function B, C, D and in
>> those functions will be calls to functions R,S,T and so on so
>> forth... making it difficult to trace what happens to a certain
>> variable
>
> This is normal modular programming. Ideally, those functions should each
> be doing one conceptually simple task, with a narrowly-defined
> interface, and implementing its job by putting together other parts at a
> lower level.
>
> Is there something particular about these functions that make them more
> difficult than good code?
>
> --
>  \  “Generally speaking, the errors in religion are dangerous; |
>   `\those in philosophy only ridiculous.” —David Hume, _A Treatise |
> _o__)   of Human Nature_, 1739 |
> Ben Finney
>
> --
> https://mail.python.org/mailman/listinfo/python-list

Much more time is spent figuring out old code than writing new code!
Python docstrings help a little.  Do you know about a utility called
pydoc?  If you don't, read about it.  Using pydoc you can produce
documentation for all the modules you need to understand.  It will
pull out the docstrings at the top of the module, and for each method
and function.  Normally, that level of documentation won't be good
enough to satisfy the needs of a new reader, so go through each
function and understand them one at a time.  Add to the docstrings.
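
For example (the module and function names here are made up; running
python -m pydoc mymodule will then render whatever docstrings you've added):

"""mymodule: one-line summary of what the module is for."""

def frobnicate(widget, times=1):
    """Return widget repeated times times.

    Recording parameters and return values as you work them out is a
    cheap way to capture what you've learned about the code.
    """
    return widget * times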


-- 
Joel Goldstick
http://joelgoldstick.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: understanding someone else's program

2013-11-15 Thread Jean-Michel Pichavant
- Original Message -
> Hi all,
> 
> Please suggest how I can understand someone else's program where
> - documentation is sparse
> - in function A, there will be calls to function B, C, D and in
> those functions will be calls to functions R,S,T and so on so
> forth... making it difficult to trace what happens to a certain
> variable
> 
> Am using ERIC4 IDE.
> 
> Thanks.

If the documentation is sparse, writing the doc yourself is one way to dive 
into someone else's code. To begin with, you can stick to the function purpose, 
and for the WTF functions try to document the parameters and return values as 
well.

It may take a lot of time depending on how good the current code is.

JM


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: understanding someone else's program

2013-11-15 Thread William Ray Wing
On Nov 15, 2013, at 6:05 AM, C. Ng  wrote:

> Hi all,
> 
> Please suggest how I can understand someone else's program where
> - documentation is sparse
> - in function A, there will be calls to function B, C, D and in those 
> functions will be calls to functions R,S,T and so on so forth... making 
> it difficult to trace what happens to a certain variable
> 
> Am using ERIC4 IDE.
> 
> Thanks.
> -- 
> https://mail.python.org/mailman/listinfo/python-list

The other suggestions you have received are good places to start.  I'd add only 
one other - that you consider running the code inside an IDE and 
single-stepping through as you watch what happens to the variables.  As you get 
a better and better feel for what the code is doing, you can move up from 
single stepping to setting break points before and after places you are still 
scratching your head over.
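
If you don't want to live in the IDE, the standard library's pdb gives you 
the same single-stepping; a tiny self-contained sketch (the functions below 
just stand in for the real A and B):

import pdb

def function_b(x):
    return x * 2

def function_a():
    value = 21
    return function_b(value)   # step into here with 's'

pdb.set_trace()   # pause just before the interesting call
print(function_a())
# At the (Pdb) prompt: 's' steps into function_a and then function_b,
# 'p value' prints a variable, 'c' continues to the end.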

-Bill
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Program Translation - Nov. 14, 2013

2013-11-15 Thread E.D.G.
"E.D.G."  wrote in message 
news:ro-dnch2dptbrhnpnz2dnuvz_rsdn...@earthlink.com...


  The responses regarding that Etgtab program were encouraging.  I was 
not sure if anyone would even recognize the code as the program was written 
quite a while ago.


  The main reason for wanting to translate it into modern language code 
is so that it can be easily modified and also merged with another computer 
program.  The main language it would probably be translated into is True 
BASIC.  This is because the person doing the work is a retired professional 
computer programmer who does work like that as a hobby.  But he will only 
work with True BASIC.  In fact he already translated most of the Etgtab 
program.  The effort got stopped when he could not understand some of the 
FORTRAN code.  Unlike working personnel, retired people can start and stop 
efforts like that as they please.


  From discussions with people in several Newsgroups the conclusions I 
arrived at in the past few weeks are the following:


  Perl would not work because it does calculations too slowly. 
Standard Python would also not work for the same reason.  However, there are 
Python routines available that would make it possible to accelerate the 
calculations.


  FORTRAN, True BASIC, XBasic, and another language called Julia likely 
do calculations fast enough.  Julia looks like it is specifically designed 
for that type of work.


http://julialang.org/

  I am checking with that programmer to see if he wants to continue 
with the effort.


  The program itself has some importance for earthquake related 
research.  A number of years ago I checked with the U.S. Government's "Ask A 
Geologist" staff to see if they knew about any freeware programs that 
researchers could use to generate those types of data.  And I was told that 
they did not know of any.  Apparently they did not even know that Etgtab 
exists.  I had to do some Internet searches to find it.


  The Solid Earth Tide data it generates are probably fairly good.  The 
plan is to check its ocean tide data against data from the following Web 
site to see how well they match.


http://tbone.biol.sc.edu/tide/

  We could not find any good freeware programs for generating the types 
of sun and moon location data needed for this research and so we wrote one 
ourselves.  It has been available for a number of years as a freeware 
program written in True BASIC.


--
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Ned Batchelder
On Friday, November 15, 2013 7:16:52 AM UTC-5, Robin Becker wrote:
> On 15/11/2013 11:38, Ned Batchelder wrote:
> ..
> >
> > In Python3, repr() will return a Unicode string, and will preserve existing 
> > Unicode characters in its arguments.  This has been controversial.  To get 
> > the Python 2 behavior of a pure-ascii representation, there is the new 
> > builtin ascii(), and a corresponding %a format string.
> >
> > --Ned.
> >
> 
> thanks for this; it doesn't make the split across python 2 - 3 any easier.
> -- 
> Robin Becker

No, but I've found that significant programs that run on both 2 and 3 need to 
have some shims to make the code work anyway.  You could do this:

try:
repr = ascii
except NameError:
pass

and then use repr throughout.

--Ned.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: understanding someone else's program

2013-11-15 Thread Chris Angelico
On Sat, Nov 16, 2013 at 12:49 AM, Jean-Michel Pichavant
 wrote:
> If the documentation is sparse, writing the doc yourself is one way to dive 
> into someone else's code. To begin with, you can stick to the function 
> purpose, and for the WTF functions try to document the parameters and return 
> values as well.

Agreed. I just had someone do that with my code - it was sparsely
commented, and he went through adding docs based on what he thought
functions did (based on their names and a cursory look at their bodies
- return values, particularly, were often documented by description,
which wasn't particularly useful with certain callbacks). Seeing where
he'd misdescribed something was a great way for me to figure out which
functions were poorly named, or at least begging for better comments.

If you have the luxury of working with the original programmer, that
would be something I'd strongly recommend. Even if you can't, try to
set some comments down; but be aware that false comments are worse
than none at all, so do note which comments are yours and which bits
you're particularly unsure of.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Automation

2013-11-15 Thread Mark Lawrence

On 15/11/2013 06:44, Steven D'Aprano wrote:

On Thu, 14 Nov 2013 17:10:02 +, Mark Lawrence wrote:


On 14/11/2013 03:56, renato.barbosa.pim.pere...@gmail.com wrote:

I apologize again for my bad english and any inconvenience that I have
generated.



I do wish that people would stop apologising for poor English, it's an
extremely difficult language.  IIRC there are eight different ways of
pronouncing the vowel combination au.  Whatever happened to "There
should be one-- and preferably only one --obvious way to do it."? :)


Words like "sorry", "pardon me", etc. are the social grease to smooth out
interactions between people. Instead, I read such apologies as a flag
that we ought to make allowances for any grammatical or spelling errors
they may make, rather than to interpret them as signs of laziness or
stupidity.

I'm inclined to forgive nearly any language error from somebody who is
trying their best to communicate, while people who merely cannot be
bothered to use language which is at least an approximation to
grammatically correct, syntactically valid, correctly-spelled sentences
inspire similar apathy in me. If they can't be bothered to write as well
as they are capable of, I can't be bothered to answer their questions.

A few minor errors is one thing, but when you see people whose posts are
full of error after error and an apparent inability to get English syntax
right, you have to wonder how on earth they expect to be a programmer?
Compilers are even less forgiving of errors than is my wife, and she once
kicked a man to death for using a colon where a semi-colon was required.
(Only joking. He didn't actually die.)


Semi-colons, or more accurately the lack of them, used to be the bane of 
my life.  Good old CORAL 66 had its BEGIN, END and COMMENT (maybe in 
single quotes?), but there was no ENDCOMMENT, no guesses how it was 
spelt.  Could have retired years ago...




This doesn't apply to people who gave some sort of sign that they're
doing the best that they can, whether it is due to inexperience,
dyslexia, being Foreign *wink*, or even broken keyboard. ("Nw kyboard is
on ordr, pls xcus my lack of lttr aftr D and b4 F.")


I had another wonderful day yesterday hacking foreigners to bits and 
burning them, great fun.  Is the last part above in parentheses meant to 
be related to a broken keyboard or is it simply modern textspeak?




But it does amuse me when non-native English speakers apologise, then
write a post which is better written, more clear, and far more articulate
than the native English speakers :-)



I wish you'd written "clearer" rather than "more clear", this would have 
shown that your English is good like what mine is.


--
Python is the second best programming language in the world.
But the best has yet to be invented.  Christian Tismer

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Roy Smith
In article ,
Ned Batchelder  wrote:

> In Python3, repr() will return a Unicode string, and will preserve existing 
> Unicode characters in its arguments.  This has been controversial.  To get 
> the Python 2 behavior of a pure-ascii representation, there is the new 
> builtin ascii(), and a corresponding %a format string.

I'm still stuck on Python 2, and while I can understand the controversy ("It 
breaks my Python 2 code!"), this seems like the right thing to have done.  In 
Python 2, unicode is an add-on.  One of the big design drivers in Python 3 was 
to make unicode the standard.

The idea behind repr() is to provide a "just plain text" representation of an 
object.  In P2, "just plain text" means ascii, so escaping non-ascii characters 
makes sense.  In P3, "just plain text" means unicode, so escaping non-ascii 
characters no longer makes sense.

Some of us have been doing this long enough to remember when "just plain text" 
meant only a single case of the alphabet (and a subset of ascii punctuation).  
On an ASR-33, your C program would print like:

MAIN() \(
PRINTF("HELLO, ASCII WORLD");
\)

because ASR-33's didn't have curly braces (or lower case).

Having P3's repr() escape non-ascii characters today makes about as much sense 
as expecting P2's repr() to escape curly braces (and vertical bars, and a few 
others) because not every terminal can print those.

--
Roy Smith
r...@panix.com

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Robin Becker

On 15/11/2013 13:54, Ned Batchelder wrote:
.


No, but I've found that significant programs that run on both 2 and 3 need to 
have some shims to make the code work anyway.  You could do this:

 try:
 repr = ascii
 except NameError:
 pass


yes I tried that, but it doesn't affect %r, which is inlined in unicodeobject.c. 
For me it seems easier to fix Windows to use something like a standard encoding 
of utf8, i.e. cp65001, but that's quite hard to do globally. It seems sitecustomize 
is too late to set os.environ['PYTHONIOENCODING']; perhaps I can stuff that into 
one of the global environment vars and have it work for all python invocations.
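
Another workaround that I think should work on 3.3 (untested, so only a 
sketch) is to rewrap stdout so unencodable characters degrade to escapes 
instead of raising:

import io
import sys

enc = sys.stdout.encoding          # keep the console's own encoding
sys.stdout = io.TextIOWrapper(sys.stdout.detach(), encoding=enc,
                              errors="backslashreplace")

print(repr(u'\xc1'))   # prints '\xc1' on a cp437 console instead of dying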

--
Robin Becker

--
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Serhiy Storchaka

15.11.13 15:54, Ned Batchelder wrote:

No, but I've found that significant programs that run on both 2 and 3 need to 
have some shims to make the code work anyway.  You could do this:

 try:
 repr = ascii
 except NameError:
 pass

and then use repr throughout.


Or rather

try:
ascii
except NameError:
ascii = repr

and then use ascii throughout.


--
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Robin Becker

..

I'm still stuck on Python 2, and while I can understand the controversy ("It breaks 
my Python 2 code!"), this seems like the right thing to have done.  In Python 2, 
unicode is an add-on.  One of the big design drivers in Python 3 was to make unicode the 
standard.

The idea behind repr() is to provide a "just plain text" representation of an object.  In P2, 
"just plain text" means ascii, so escaping non-ascii characters makes sense.  In P3, "just 
plain text" means unicode, so escaping non-ascii characters no longer makes sense.



unfortunately the word 'printable' got into the definition of repr; it's clear 
that printability is not the same as unicode, at least as far as the print 
function is concerned. In my opinion it would have been better to keep the old 
behaviour, as that would have eased compatibility.


The python gods don't count that sort of thing as important enough, so we get the 
mess that is the python2/3 split. ReportLab has to do both, so it's a real issue; 
in addition, swapping the str/unicode pair to bytes/str doesn't help one's 
mental models either :(


Things went wrong when utf8 was not adopted as the standard encoding, thus 
requiring two string types; it would have been easier to have a len function to 
count bytes as before and a glyphlen to count glyphs. Now, as I understand it, we 
have a complicated mess under the hood for unicode objects, so they have a 
variable representation to approximate an 8 bit representation when suitable etc 
etc etc.



Some of us have been doing this long enough to remember when "just plain text" 
meant only a single case of the alphabet (and a subset of ascii punctuation).  On an 
ASR-33, your C program would print like:

MAIN() \(
PRINTF("HELLO, ASCII WORLD");
\)

because ASR-33's didn't have curly braces (or lower case).

Having P3's repr() escape non-ascii characters today makes about as much sense 
as expecting P2's repr() to escape curly braces (and vertical bars, and a few 
others) because not every terminal can print those.


.
I can certainly remember those days, how we cried and laughed when 8 bits became 
popular.

--
Robin Becker

--
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Joel Goldstick
>> Some of us have been doing this long enough to remember when "just plain
>> text" meant only a single case of the alphabet (and a subset of ascii
>> punctuation).  On an ASR-33, your C program would print like:
>>
>> MAIN() \(
>> PRINTF("HELLO, ASCII WORLD");
>> \)
>>
>> because ASR-33's didn't have curly braces (or lower case).
>>
>> Having P3's repr() escape non-ascii characters today makes about as much
>> sense as expecting P2's repr() to escape curly braces (and vertical bars,
>> and a few others) because not every terminal can print those.
>>
> .
> I can certainly remember those days, how we cried and laughed when 8 bits
> became popular.
>
Really? You cried and laughed over 7 vs. 8 bits?  That's lovely (?).
;).  That eighth bit sure was less confusing than codepoint
translations.


> --
> Robin Becker
> --
> https://mail.python.org/mailman/listinfo/python-list



-- 
Joel Goldstick
http://joelgoldstick.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Robin Becker

On 15/11/2013 14:40, Serhiy Storchaka wrote:
..



and then use repr throughout.


Or rather

 try:
 ascii
 except NameError:
 ascii = repr

and then use ascii throughout.




apparently you can import ascii from future_builtins and the print() function is 
available as


from __future__ import print_function

nothing fixes all those %r formats to be %a though :(
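
For code that has to run on both, something like the following seems to be 
the least-bad combination (caveat: Python 2's ascii() output keeps the u'' 
prefix):

from __future__ import print_function

try:
    from future_builtins import ascii   # Python 2.6/2.7
except ImportError:
    pass                                # Python 3: ascii() is already built in

value = u'\xc1'
print("ascii=%s" % ascii(value))        # escaped on both 2 and 3
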
--
Robin Becker

--
https://mail.python.org/mailman/listinfo/python-list


Re: Automation

2013-11-15 Thread Grant Edwards
On 2013-11-14, Mark Lawrence  wrote:
> On 14/11/2013 03:56, renato.barbosa.pim.pere...@gmail.com wrote:
>> I apologize again for my bad english and any inconvenience that I have 
>> generated.
>
> I do wish that people would stop apologising for poor English, it's an 
> extremely difficult language.

It's certainly not necessary from anybody for whom English is not a
first language -- and that's usually pretty easy to guess based on
domains and personal names.

There are people (not many in this group) who grew up speaking English
and really ought to apologize for their writing -- but they never do.

So a good rule of thumb is:

If you think maybe you need to apologize for your English, you don't

If it never occurred to you that you need to apologize, you might. 

;)

-- 
Grant Edwards   grant.b.edwardsYow! Let's all show human
  at   CONCERN for REVERAND MOON's
  gmail.comlegal difficulties!!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Robin Becker

...

became popular.


Really? you cried and laughed over 7 vs. 8 bits?  That's lovely (?).
;).  That eighth bit sure was less confusing than codepoint
translations



no we had 6 bits in 60 bit words as I recall; extracting the nth character 
involved division by 6; smart people did tricks with inverted multiplications 
etc etc  :(

--
Robin Becker
--
https://mail.python.org/mailman/listinfo/python-list


Re: Automation

2013-11-15 Thread Grant Edwards
On 2013-11-15, Paul Rudin  wrote:
> Steven D'Aprano  writes:
>
>> A few minor errors is one thing, but when you see people whose posts are 
>> full of error after error and an apparent inability to get English syntax 
>> right, you have to wonder how on earth they expect to be a programmer? 
>
> The irritating thing is apparent lack of care. A post is written once
> and will be seen (perhaps not read) by many people. People post with the
> intention of others reading their words. If they can't be bothered to
> take a little care in writing, why should we spend time reading?

Just because English is your second language it doesn't mean you don't
need to pay attention to what keys you're hitting and proof-read a
posting before hitting "send".

And yes, people can _easily_ tell the difference between errors caused
by being lazy/sloppy and errors caused by writing in a second
language.

-- 
Grant Edwards   grant.b.edwardsYow! Let me do my TRIBUTE
  at   to FISHNET STOCKINGS ...
  gmail.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Joel Goldstick
On Fri, Nov 15, 2013 at 10:03 AM, Robin Becker  wrote:
> ...
>
>>> became popular.
>>>
>> Really? you cried and laughed over 7 vs. 8 bits?  That's lovely (?).
>> ;).  That eighth bit sure was less confusing than codepoint
>> translations
>
>
>
> no we had 6 bits in 60 bit words as I recall; extracting the nth character
> involved division by 6; smart people did tricks with inverted
> multiplications etc etc  :(
> --

Cool, someone here is older than me!  I came in with the 8080, and I
remember split octal, but sixes are something I missed out on.
> Robin Becker



-- 
Joel Goldstick
http://joelgoldstick.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Ned Batchelder
On Friday, November 15, 2013 9:43:17 AM UTC-5, Robin Becker wrote:
> Things went wrong when utf8 was not adopted as the standard encoding thus 
> requiring two string types, it would have been easier to have a len function 
> to 
> count bytes as before and a glyphlen to count glyphs. Now as I understand it 
> we 
> have a complicated mess under the hood for unicode objects so they have a 
> variable representation to approximate an 8 bit representation when suitable 
> etc 
> etc etc.
> 

Dealing with bytes and Unicode is complicated, and the 2->3 transition is not 
easy, but let's please not spread the misunderstanding that somehow the 
Flexible String Representation is at fault.  However you store Unicode code 
points, they are different than bytes, and it is complex having to deal with 
both.  You can't somehow make the dichotomy go away, you can only choose where 
you want to think about it.

--Ned.

> -- 
> Robin Becker

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Chris Angelico
On Sat, Nov 16, 2013 at 1:43 AM, Robin Becker  wrote:
> ..
>
>> I'm still stuck on Python 2, and while I can understand the controversy
>> ("It breaks my Python 2 code!"), this seems like the right thing to have
>> done.  In Python 2, unicode is an add-on.  One of the big design drivers in
>> Python 3 was to make unicode the standard.
>>
>> The idea behind repr() is to provide a "just plain text" representation of
>> an object.  In P2, "just plain text" means ascii, so escaping non-ascii
>> characters makes sense.  In P3, "just plain text" means unicode, so escaping
>> non-ascii characters no longer makes sense.
>>
>
> unfortunately the word 'printable' got into the definition of repr; it's
> clear that printability is not the same as unicode at least as far as the
> print function is concerned. In my opinion it would have been better to
> leave the old behaviour as that would have eased the compatibility.

"Printable" means many different things in different contexts. In some
contexts, the sequence \x66\x75\x63\x6b is considered unprintable, yet
each of those characters is perfectly displayable in its natural form.
Under IDLE, non-BMP characters can't be displayed (or at least, that's
how it has been; I haven't checked current status on that one). On
Windows, the console runs in codepage 437 by default (again, I may be
wrong here), so anything not representable in that has to be escaped.
My Linux box has its console set to full Unicode, everything working
perfectly, so any non-control character can be printed. As far as
Python's concerned, all of that is outside - something is "printable"
if it's printable within Unicode, and the other hassles are matters of
encoding. (Except the first one. I don't think there's an encoding
"g-rated".)

> The python gods don't count that sort of thing as important enough so we get
> the mess that is the python2/3 split. ReportLab has to do both so it's a
> real issue; in addition swapping the str - unicode pair to bytes str doesn't
> help one's mental models either :(

That's fixing, in effect, a long-standing bug - of a sort. The name
"str" needs to be applied to the most normal string type. As of Python
3, that's a Unicode string, which is as it should be. In Python 2, it
was the ASCII/bytes string, which still fit the description of "most
normal string type", but that means that Python 2 programs are
Unicode-unaware by default, which is a flaw. Hence the Py3 fix.

> Things went wrong when utf8 was not adopted as the standard encoding thus
> requiring two string types, it would have been easier to have a len function
> to count bytes as before and a glyphlen to count glyphs. Now as I understand
> it we have a complicated mess under the hood for unicode objects so they
> have a variable representation to approximate an 8 bit representation when
> suitable etc etc etc.

http://unspecified.wordpress.com/2012/04/19/the-importance-of-language-level-abstract-unicode-strings/

There are languages that do what you describe. It's very VERY easy to
break stuff. What happens when you slice a string?

>>> foo = "asdf"
>>> foo[:2],foo[2:]
('as', 'df')

>>> foo = "q\u1234zy"
>>> foo[:2],foo[2:]
('qሴ', 'zy')

Looks good to me. I split a four-character string, I get two
one-character strings. If that had been done in UTF-8, either I would
need to know "don't split at that boundary, that's between bytes in a
character", or else the indexing and slicing would have to be done by
counting characters from the beginning of the string - an O(n)
operation, rather than an O(1) pointer arithmetic, not to mention that
it'll blow your CPU cache (touching every part of a potentially-long
string) just to find the position.

The only reliable way to manage things is to work with true Unicode.
You can completely ignore the internal CPython representation; what
matters is that Python (any implementation, as long as it conforms
with version 3.3 or later) lets you index Unicode codepoints out of a
Unicode string, without differentiating between those that happen to
be ASCII, those that fit in a single byte, those that fit in two
bytes, and those that are flagged RTL, because none of those
considerations makes any difference to you.
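
A concrete taste of that guarantee on 3.3 (ascii() used below so the output 
stays terminal-safe):

s = "a\U00012345z"
print(len(s))                  # 3 -- one element per code point
print(ascii(s[1]))             # '\U00012345' -- one character, even off the BMP
print(s[:2] == "a\U00012345")  # True -- slicing can never split a character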

It takes some getting your head around, but it's worth it - same as
using git instead of a Windows shared drive. (I'm still trying to push
my family to think git.)

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Automation

2013-11-15 Thread Chris Angelico
On Sat, Nov 16, 2013 at 2:02 AM, Grant Edwards  wrote:
> And yes, people can _easily_ tell the difference between errors caused
> by being lazy/sloppy and errors caused by writing in a second
> language.

Yes, and even among people for whom English is the first language,
idioms can cause offense. On another list I'm on (Savoynet), one
person got somewhat offended at someone apparently calling him
completely ignorant, when actually no such slight was intended.
Welcome to English, where we all use the same words (mostly) but you
really need to be careful talking about knocking someone up...

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Robin Becker

On 15/11/2013 15:07, Joel Goldstick wrote:






Cool, someone here is older than me!  I came in with the 8080, and I
remember split octal, but sixes are something I missed out on.


The pdp 10/15 had 18 bit words and could be organized as 3*6 or 2*9; pdp 8s had 
12 bits I think; then came the IBM 7094 which had 36 bits, and finally the 
CDC6000 & 7600 machines with 60 bits. Someone must have liked 6's

-mumbling-ly yrs-
Robin Becker
--
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Roy Smith
On Nov 15, 2013, at 10:18 AM, Robin Becker wrote:

> The pdp 10/15 had 18 bit words and could be organized as 3*6 or 2*9

I don't know about the 15, but the 10 had 36 bit words (18-bit halfwords).  One 
common character packing was 5 7-bit characters per 36 bit word (with the sign 
bit left over).

Anybody remember RAD-50?  It let you represent a 6-character filename (plus a 
3-character extension) in three 16-bit words, packing three characters per word.  
RT-11 used it, not sure if it showed up anywhere else.
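
From memory (so the exact character set below may be slightly off), the 
packing went along these lines:

# 40-character set: space, A-Z, '$', '.', '%' (often unused), 0-9.
RAD50 = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"

def rad50_pack(triplet):
    """Pack up to three characters into one 16-bit word."""
    word = 0
    for ch in triplet.upper().ljust(3):
        word = word * 40 + RAD50.index(ch)
    return word           # always < 40**3 == 64000, so it fits in 16 bits

# A 6.3 RT-11 filename such as SWAP.SYS therefore took three words:
print([rad50_pack(part) for part in ("SWA", "P", "SYS")])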

---
Roy Smith
r...@panix.com

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Robin Becker

.


Dealing with bytes and Unicode is complicated, and the 2->3 transition is not 
easy, but let's please not spread the misunderstanding that somehow the Flexible 
String Representation is at fault.  However you store Unicode code points, they 
are different than bytes, and it is complex having to deal with both.  You can't 
somehow make the dichotomy go away, you can only choose where you want to think 
about it.

--Ned.

...
I don't think that's what I said; the flexible representation is just an added 
complexity that has come about because of the wish to store strings in a compact 
way. The requirement for such complexity is the unicode type itself (especially 
the storage requirements) which necessitated some remedial action.


There's no point in fighting the change to using unicode. The type wasn't 
required for any technical reason as other languages didn't go this route and 
are reasonably ok, but there's no doubt the change made things more difficult.

--
Robin Becker
--
https://mail.python.org/mailman/listinfo/python-list


Re: PyMyth: Global variables are evil... WRONG!

2013-11-15 Thread Chris Angelico
On Sat, Nov 16, 2013 at 2:26 AM, Tim Daneliuk  wrote:
> On 11/15/2013 02:19 AM, Steven D'Aprano wrote:
>> Nobody sets out to*design*  a tangled mess. What normally happens is that
>> a tangled mess is the result of*lack of design*.
>
> This has been an interesting thread - to me anyway - but this bit
> above caught my eye.  People write programs for lots of reasons -
> personal, academic, scientific, and commercial - but I actually
> don't thing the resultant messes are caused by a "lack of
> design" most of the time.  In my experience they're caused by only two
> things:
>
> 2) An evolving set of requirements.

This can be an explanation for a lack of design, but it's no less a
lack. Sometimes, something just grows organically... from a nucleus of
good design, but undesigned growth. Maybe it's time it got redesigned;
or maybe redesigning would take too much effort and it's just not
worth spending that time on something that's going to be phased out by
the next shiny thing in a couple of years anyway. Doesn't change the
fact that the current state is not the result of design, but of
disorganized feature creep. That's not necessarily a terrible thing,
but Steven's point still stands: such lack of design often results in
a tangled mess, and a tangled mess can often be blamed on lack of
design.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: PyMyth: Global variables are evil... WRONG!

2013-11-15 Thread Tim Daneliuk

On 11/15/2013 02:19 AM, Steven D'Aprano wrote:

Nobody sets out to*design*  a tangled mess. What normally happens is that
a tangled mess is the result of*lack of design*.


This has been an interesting thread - to me anyway - but this bit
above caught my eye.  People write programs for lots of reasons -
personal, academic, scientific, and commercial - but I actually
don't think the resultant messes are caused by a "lack of
design" most of the time.  In my experience they're caused by only two things:

1) A lack of skill by inexperienced programmers who've been
   given more to do than they're yet ready to do and whose
   senior colleagues are not mentoring them (or such mentoring
   is being rejected because of ego and/or politics).

2) An evolving set of requirements.

#2 is particularly prevalent in commercial environments.  Modern
business is forced to respond to changing commercial conditions
in nearly realtime these days.   The pace of required innovation is so
fast that - all too often - no one actually knows what the "requirements"
are during the design phase.  Requirements get *discovered* during the
coding phase.  This is not a moral failing or lack of discipline, it's
the simple reality that what you thought you needed to deliver changed
in the intervening 6 months of coding because the business changed.

 


Tim Daneliuk tun...@tundraware.com
PGP Key: http://www.tundraware.com/PGP/

--
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Antoon Pardon
Op 15-11-13 16:39, Robin Becker schreef:
> .
>>
>> Dealing with bytes and Unicode is complicated, and the 2->3 transition
>> is not easy, but let's please not spread the misunderstanding that
>> somehow the Flexible String Representation is at fault.  However you
>> store Unicode code points, they are different than bytes, and it is
>> complex having to deal with both.  You can't somehow make the
>> dichotomy go away, you can only choose where you want to think about it.
>>
>> --Ned.
> ...
> I don't think that's what I said; the flexible representation is just an
> added complexity ...

No it is not, at least not for python programmers. (It of course is for
the python implementors.) The python programmer doesn't have to care
about the flexible representation, just as the python programmer doesn't
have to care about the internal representation of (long) integers. It
is an implementation detail that is mostly ignorable.
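
For example (the exact sizes vary by build; the point is only that the 
footprint differs while behaviour doesn't):

import sys

a = "x" * 10               # stored one byte per character internally
b = "\u1234" + "x" * 9     # forces a wider internal layout

print(len(a), len(b))                        # 10 10 -- identical at the Python level
print(sys.getsizeof(a) < sys.getsizeof(b))   # True -- only the footprint differs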

-- 
Antoon Pardon

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Automation

2013-11-15 Thread Alister
On Sat, 16 Nov 2013 02:12:16 +1100, Chris Angelico wrote:

> On Sat, Nov 16, 2013 at 2:02 AM, Grant Edwards 
> wrote:
>> And yes, people can _easily_ tell the difference between errors caused
>> by being lazy/sloppy and errors caused by writing in a second language.
> 
> Yes, and even among people for whom English is the first language,
> idioms can cause offense. On another list I'm on (Savoynet), one person
> got somewhat offended at someone apparently calling him completely
> ignorant, when actually no such slight was intended.
> Welcome to English, where we all use the same words (mostly) but you
> really need to be careful talking about knocking someone up...
> 
> ChrisA

And "Bumming a fag" can be taken completely the wrong way.



-- 
Some programming languages manage to absorb change, but withstand 
progress.
-- Epigrams in Programming, ACM SIGPLAN Sept. 1982
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python Front-end to GCC

2013-11-15 Thread sharath . cs . smp
On Sunday, 20 October 2013 10:56:46 UTC-7, Philip Herron  wrote:
> Hey,
> 
> I've been working on GCCPY since roughly november 2009 at least in its
> concept. It was announced as a Gsoc 2010 project and also a Gsoc 2011
> project. I was mentored by Ian Taylor who has been an extremely big
> influence on my software development carrer.
> 
> Gccpy is an Ahead of time implementation of Python ontop of GCC. So it
> works as you would expect with a traditional compiler such as GCC to
> compile C code. Or G++ to compile C++ etc.
> 
> Whats interesting and deserves a significant mention is my work is
> heavily inspired by Paul Biggar's phd thesis on optimizing dynamic
> languages and his work on PHC a ahead of time php compiler. I've had
> so many ups and down in this project and i need to thank Andi Hellmund
> for his contributions to the project.
> http://paulbiggar.com/research/#phd-dissertation
> 
> The project has taken so many years as an in my spare time project to
> get to this point. I for example its taken me so long simply to
> understand a stabilise the core fundamentals for the compiler and how
> it could all work.
> 
> The release can be found here. I will probably rename the tag to the
> milestone (lucy) later on.
> https://github.com/redbrain/gccpy/releases/tag/v0.1-24
> (Lucy is our dog btw, German Shepard (6 years young) loves to lick
> your face off :) )
> 
> Documentation can be found http://gcc.gnu.org/wiki/PythonFrontEnd.
> (Although this is sparse partialy on purpose since i do not wan't
> people thinking this is by any means ready to compile real python
> applications)
> 
> I've found some good success with this project in compiling python
> though its largely unknown to the world simply because i am nervous of
> the compiler and more specifically the python compiler world.
> 
> But at least to me there is at least to me an un-answered question in
> current compiler implementations.  AOT vs Jit.
> 
> Is a jit implementation of a language (not just python) better than
> traditional ahead of time compilation.
> 
> What i can say is ahead of time at least strips out the crap needed
> for the users code to be run. As in people are forgetting the basics
> of how a computer works in my opinion when it comes to making code run
> faster. Simply need to reduce the number of instructions that need to
> be executed in order to preform what needs to be done. Its not about
> Jit and bla bla keyword llvm keyword instruction scheduling keyword
> bla.
> 
> I could go into the arguments but i feel i should let the project
> speak for itself its very immature so you really cant compare it to
> anything like it but it does compile little bits and bobs fairly well
> but there is much more work needed.
> 
> There is nothing at steak, its simply an idea provoked from a great
> phd thesis and i want to see how it would work out. I don't get funded
> of paid. I love working on compilers and languages but i don't have a
> day job doing it so its my little pet to open source i believe its at
> least worth some research.
> 
> I would really like to hear the feedback good and bad. I can't
> describe how much work i've put into this and how much persistence
> I've had to have in light of recent reddit threads talking about my
> project.
> 
> I have so many people to thank to get to this point! Namely Ian
> Taylor, Paul Biggar, Andi Hellmund, Cyril Roelandt  Robert Bradshaw,
> PyBelfast, and the Linux Outlaws community. I really couldn't have got
> to this point in my life without the help of these people!
> 
> Thanks!
> 
> --Phil

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Chris Angelico
On Sat, Nov 16, 2013 at 2:39 AM, Robin Becker  wrote:
>> Dealing with bytes and Unicode is complicated, and the 2->3 transition is
>> not easy, but let's please not spread the misunderstanding that somehow the
>> Flexible String Representation is at fault.  However you store Unicode code
>> points, they are different than bytes, and it is complex having to deal with
>> both.  You can't somehow make the dichotomy go away, you can only choose
>> where you want to think about it.
>>
>> --Ned.
>
> ...
> I don't think that's what I said; the flexible representation is just an
> added complexity that has come about because of the wish to store strings in
> a compact way. The requirement for such complexity is the unicode type
> itself (especially the storage requirements) which necessitated some
> remedial action.
>
> There's no point in fighting the change to using unicode. The type wasn't
> required for any technical reason as other languages didn't go this route
> and are reasonably ok, but there's no doubt the change made things more
> difficult.

There's no perceptible difference between a 3.2 wide build and the 3.3
flexible representation. (Differences with narrow builds are bugs, and
have now been fixed.) As far as your script's concerned, Python 3.3
always stores strings in UTF-32, four bytes per character. It just
happens to be way more efficient on memory, most of the time.

Other languages _have_ gone for at least some sort of Unicode support.
Unfortunately quite a few have done a half-way job and use UTF-16 as
their internal representation. That means there's no difference
between U+0012, U+0123, and U+1234, but U+12345 suddenly gets handled
differently. ECMAScript actually specifies the perverse behaviour of
treating codepoints >U+FFFF as two elements in a string, because it's
just too costly to change.

There are a small number of languages that guarantee correct Unicode
handling. I believe bash scripts get this right (though I haven't
tested; string manipulation in bash isn't nearly as rich as a proper
text parsing language, so I don't dig into it much); Pike is a very
Python-like language, and PEP 393 made Python even more Pike-like,
because Pike's string has been variable width for as long as I've
known it. A handful of other languages also guarantee UTF-32
semantics. All of them are really easy to work with; instead of
writing your code and then going "Oh, I wonder what'll happen if I
give this thing weird characters?", you just write your code, safe in
the knowledge that there is no such thing as a "weird character"
(except for a few in the ASCII set... you may find that code breaks if
given a newline in the middle of something, or maybe the slash
confuses you).

Definitely don't fight the change to Unicode, because it's not a
change at all... it's just fixing what was buggy. You already had a
difference between bytes and characters, you just thought you could
ignore it.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread William Ray Wing
On Nov 15, 2013, at 10:18 AM, Robin Becker  wrote:

> On 15/11/2013 15:07, Joel Goldstick wrote:
> 
> 
> 
> 
>> 
>> Cool, someone here is older than me!  I came in with the 8080, and I
>> remember split octal, but sixes are something I missed out on.
> 
> The pdp 10/15 had 18 bit words and could be organized as 3*6 or 2*9, pdp 8s 
> had 12 bits I think, then came the IBM 7094 which had 36 bits and finally the 
> CDC6000 & 7600 machines with 60 bits, someone must have liked 6's
> -mumbling-ly yrs-
> Robin Becker
> -- 
> https://mail.python.org/mailman/listinfo/python-list

Yes, the PDP-8s, LINC-8s, and PDP-12s were all 12-bit computers.  However the 
LINC-8 operated with word-pairs (instruction in one location followed by 
address to be operated on in the next) so it was effectively a 24-bit computer 
and the PDP-12 was able to execute BOTH PDP-8 and LINC-8 instructions (it added 
one extra instruction to each set that flipped the mode).

First assembly language program I ever wrote was on a PDP-12.  (If there is an 
emoticon for a face with a gray beard, I don't know it.)

-Bill
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Gene Heskett
On Friday 15 November 2013 11:28:19 Joel Goldstick did opine:

> On Fri, Nov 15, 2013 at 10:03 AM, Robin Becker  
wrote:
> > ...
> > 
> >>> became popular.
> >> 
> >> Really? you cried and laughed over 7 vs. 8 bits?  That's lovely (?).
> >> ;).  That eighth bit sure was less confusing than codepoint
> >> translations
> > 
> > no we had 6 bits in 60 bit words as I recall; extracting the nth
> > character involved division by 6; smart people did tricks with
> > inverted multiplications etc etc  :(
> > --
> 
> Cool, someone here is older than me!  I came in with the 8080, and I
> remember split octal, but sixes are something I missed out on.

Ok, if you are feeling old & decrepit, hows this for a birthday: 10/04/34, 
I came into micro computers about RCA 1802 time.  Wrote a program for the 
1802 without an assembler, for tape editing in '78 at KRCR-TV in Redding 
CA, that was still in use in '94, but never really wrote assembly code 
until the 6809 was out in the Radio Shack Color Computers.  os9 on the 
coco's was the best teacher about the unix way of doing things there ever 
was.  So I tell folks these days that I am 39, with 40 years experience at 
being 39. ;-)

> > Robin Becker


Cheers, Gene
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)

Counting in binary is just like counting in decimal -- if you are all 
thumbs.
-- Glaser and Way
A pen in the hand of this president is far more
dangerous than 200 million guns in the hands of
 law-abiding citizens.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Automation

2013-11-15 Thread Neil Cerutti
On 2013-11-15, Steven D'Aprano
 wrote:
> On Thu, 14 Nov 2013 20:03:44 +, Alister wrote:
>> As a native of England I have to agree it is far to arrogant
>> to expect everyone else to be able to speak good English when
>> I can barley order a beer in any other language. (even or
>> especially in the USA)
>
> Apparently you can "barley" write UK English either :-)
>
> No offence intended, I just thought that was an amusing error
> to make. The word you're after is "barely", barley is a grain
> similar to wheat or oats. Also "far too arrogant".

I just learned about this kind of error yesterday while browsing
the programming reddit!

http://en.wikipedia.org/wiki/Muphry's_law

-- 
Neil Cerutti
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Zero Piraeus
:

On Fri, Nov 15, 2013 at 10:32:54AM -0500, Roy Smith wrote:
> Anybody remember RAD-50?  It let you represent a 6-character filename
> (plus a 3-character extension) in a 16 bit word.  RT-11 used it, not
> sure if it showed up anywhere else.

Presumably 16 is a typo, but I just had a moderate amount of fun
envisaging how that might work: if the characters were restricted to
vowels, then 5**6 < 2**14, giving a couple of bits left over for a
choice of four preset "three-character" extensions.
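Checking myself in the interpreter, just to be sure of the arithmetic:

>>> 5**6, 2**14
(15625, 16384)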

I can't say that AEIOUA.EX1 looks particularly appealing, though ...

 -[]z.

-- 
Zero Piraeus: pollice verso
http://etiol.net/pubkey.asc
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Chris Angelico
On Sat, Nov 16, 2013 at 4:06 AM, Zero Piraeus  wrote:
> :
>
> On Fri, Nov 15, 2013 at 10:32:54AM -0500, Roy Smith wrote:
>> Anybody remember RAD-50?  It let you represent a 6-character filename
>> (plus a 3-character extension) in a 16 bit word.  RT-11 used it, not
>> sure if it showed up anywhere else.
>
> Presumably 16 is a typo, but I just had a moderate amount of fun
> envisaging how that might work: if the characters were restricted to
> vowels, then 5**6 < 2**14, giving a couple of bits left over for a
> choice of four preset "three-character" extensions.
>
> I can't say that AEIOUA.EX1 looks particularly appealing, though ...

Looks like it might be this scheme:

https://en.wikipedia.org/wiki/DEC_Radix-50

36-bit word for a 6-char filename, but there was also a 16-bit
variant. I do like that filename scheme you describe, though it would
tend to produce names that would suit virulent diseases.
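For anyone curious about the arithmetic, here's a rough sketch of the
16-bit packing (my character table is from memory and may not match
DEC's ordering exactly): forty symbols means three characters fit in one
word, since 40**3 == 64000 < 65536, so a 6.3 filename needs three words.

# rough Radix-50 packing sketch; table order is approximate
RAD50 = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"

def pack3(chars):
    """Pack exactly three Radix-50 characters into one 16-bit integer."""
    value = 0
    for c in chars:
        value = value * 40 + RAD50.index(c)
    return value

print(pack3("ABC"))   # ((1*40) + 2)*40 + 3 == 1683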

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Steven D'Aprano
On Fri, 15 Nov 2013 14:43:17 +, Robin Becker wrote:

> Things went wrong when utf8 was not adopted as the standard encoding
> thus requiring two string types, it would have been easier to have a len
> function to count bytes as before and a glyphlen to count glyphs. Now as
> I understand it we have a complicated mess under the hood for unicode
> objects so they have a variable representation to approximate an 8 bit
> representation when suitable etc etc etc.

No no no! Glyphs are *pictures*, you know the little blocks of pixels 
that you see on your monitor or printed on a page. Before you can count 
glyphs in a string, you need to know which typeface ("font") is being 
used, since fonts generally lack glyphs for some code points.

[Aside: there's another complication. Some fonts define alternate glyphs 
for the same code point, so that the design of (say) the letter "a" may 
vary within the one string according to whatever typographical rules the 
font supports and the application calls. So the question is, when you 
"count glyphs", should you count "a" and "alternate a" as a single glyph 
or two?]

You don't actually mean count glyphs, you mean counting code points 
(think characters, only with some complications that aren't important for 
the purposes of this discussion).

UTF-8 is utterly unsuited for in-memory storage of text strings, I don't 
care how many languages (Go, Haskell?) make that mistake. When you're 
dealing with text strings, the fundamental unit is the character, not the 
byte. Why do you care how many bytes a text string has? If you really 
need to know how much memory an object is using, that's where you use 
sys.getsizeof(), not len().

We don't say len({42: None}) to discover that the dict requires 136 
bytes, why would you use len("heåvy") to learn that it uses 23 bytes?

UTF-8 is variable width encoding, which means it's *rubbish* for the in-
memory representation of strings. Counting characters is slow. Slicing is 
slow. If you have mutable strings, deleting or inserting characters is 
slow. Every operation has to effectively start at the beginning of the 
string and count forward, lest it split bytes in the middle of a UTF 
unit. Or worse, the language doesn't give you any protection from this at 
all, so rather than slow string routines you have unsafe string routines, 
and it's your responsibility to detect UTF boundaries yourself. 

In case you aren't familiar with what I'm talking about, here's an 
example using Python 3.2, starting with a Unicode string and treating it 
as UTF-8 bytes:

py> u = "heåvy"
py> s = u.encode('utf-8')
py> for c in s:
... print(chr(c))
...
h
e
Ã
¥
v
y


"Ã¥"? It didn't take long to get moji-bake in our output, and all I did 
was print the (byte) string one "character" at a time. It gets worse: we 
can easily end up with invalid UTF-8:

py> a, b = s[:len(s)//2], s[len(s)//2:]  # split the string in half
py> a.decode('utf-8')
Traceback (most recent call last):
  File "", line 1, in 
UnicodeDecodeError: 'utf8' codec can't decode byte 0xc3 in position 2: 
unexpected end of data
py> b.decode('utf-8')
Traceback (most recent call last):
  File "", line 1, in 
UnicodeDecodeError: 'utf8' codec can't decode byte 0xa5 in position 0: 
invalid start byte


No, UTF-8 is okay for writing to files, but it's not suitable for text 
strings. The in-memory representation of text strings should be constant 
width, based on characters not bytes, and should prevent the caller from 
accidentally ending up with moji-bake or invalid strings.


-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


testing - do not reply

2013-11-15 Thread Pedro

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Chris Angelico
On Sat, Nov 16, 2013 at 4:10 AM, Steven D'Aprano
 wrote:
> No, UTF-8 is okay for writing to files, but it's not suitable for text
> strings.

Correction: It's _great_ for writing to files (and other fundamentally
byte-oriented streams, like network connections). Does a superb job as
the default encoding for all sorts of situations. But, as you say, it
sucks if you want to find the Nth character.
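A tiny demonstration of that last point (plain Python 3, nothing fancy):

>>> s = "heåvy"
>>> b = s.encode('utf-8')
>>> s[2]                   # character-based indexing does what you mean
'å'
>>> b[2:3]                 # byte-based indexing lands mid-character
b'\xc3'
>>> b.decode('utf-8')[2]   # with raw UTF-8 you must decode (i.e. scan) first
'å'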

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Serhiy Storchaka

On 15.11.13 17:32, Roy Smith wrote:

Anybody remember RAD-50?  It let you represent a 6-character filename
(plus a 3-character extension) in a 16 bit word.  RT-11 used it, not
sure if it showed up anywhere else.


In three 16-bit words.


--
https://mail.python.org/mailman/listinfo/python-list


Re: understanding someone else's program

2013-11-15 Thread Denis McMahon
On Fri, 15 Nov 2013 03:05:04 -0800, C. Ng wrote:

> Hi all,
> 
> Please suggest how I can understand someone else's program where -
> documentation is sparse - in function A, there will be calls to function
> B, C, D and in those functions will be calls to functions R,S,T
> and so on so forth... making it difficult to trace what happens to a
> certain variable
> 
> Am using ERIC4 IDE.

You just have to work through it working out what each line does.

Start with the inputs and give them sensible names, after all you 
presumably know where they come from and what they represent. Then you 
can see what operations are being performed on the input data, and 
presumably if you have enough knowledge in any relevant fields, may be 
able to determine that, for example, when the input 'x' is actually a 
temp in fahrenheit, then the math operation 'x=(x-32)*5/9' is really 
"convert temp from fahrenheit to centigrade".

As you do this add relevant comments to the code. Eventually you'll have 
code with sensible variable names and comments that hopefully describe 
what it does.
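To make that concrete, here's a tiny before-and-after sketch using the
temperature example (the names are invented, of course):

# before: the sort of thing you inherit
def f(x):
    return (x - 32) * 5 / 9

# after: renamed and commented once you've worked out what it does
def fahrenheit_to_celsius(temp_f):
    """Convert a temperature from Fahrenheit to Celsius."""
    return (temp_f - 32) * 5 / 9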

-- 
Denis McMahon, denismfmcma...@gmail.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Cousin Stanley

> 
> We don't say len({42: None}) to discover 
> that the dict requires 136 bytes, 
> why would you use len("heåvy") 
> to learn that it uses 23 bytes ?
> 

#!/usr/bin/env python
# -*- coding: utf-8 -*-

"""
illustrate the difference in length of python objects
and the size of their system storage
"""

import sys

s = "heåvy"

d = { 42 :  None }

print
print '   s :  %s' % s
print 'len( s ) :  %d' % len( s )
print '  sys.getsizeof( s ) :  %s ' % sys.getsizeof( s )
print
print
print '   d : ' , d
print 'len( d ) :  %d' % len( d )
print '  sys.getsizeof( d ) :  %d ' % sys.getsizeof( d )


-- 
Stanley C. Kitching
Human Being
Phoenix, Arizona
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Neil Cerutti
On 2013-11-15, Chris Angelico  wrote:
> Other languages _have_ gone for at least some sort of Unicode
> support. Unfortunately quite a few have done a half-way job and
> use UTF-16 as their internal representation. That means there's
> no difference between U+0012, U+0123, and U+1234, but U+12345
> suddenly gets handled differently. ECMAScript actually
> specifies the perverse behaviour of treating codepoints >U+FFFF
> as two elements in a string, because it's just too costly to
> change.

The unicode support I'm learning in Go is, "Everything is utf-8,
right? RIGHT?!?" It also has the interesting behavior that
indexing strings retrieves bytes, while iterating over them
results in a sequence of runes.

It comes with support for no encodings save utf-8 (natively) and
utf-16 (if you work at it). Is that really enough?

-- 
Neil Cerutti
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Mark Lawrence

On 15/11/2013 16:36, Gene Heskett wrote:

On Friday 15 November 2013 11:28:19 Joel Goldstick did opine:


On Fri, Nov 15, 2013 at 10:03 AM, Robin Becker 

wrote:

...


became popular.


Really? you cried and laughed over 7 vs. 8 bits?  That's lovely (?).
;).  That eighth bit sure was less confusing than codepoint
translations


no we had 6 bits in 60 bit words as I recall; extracting the nth
character involved division by 6; smart people did tricks with
inverted multiplications etc etc  :(
--


Cool, someone here is older than me!  I came in with the 8080, and I
remember split octal, but sixes are something I missed out on.


Ok, if you are feeling old & decrepit, hows this for a birthday: 10/04/34,
I came into micro computers about RCA 1802 time.  Wrote a program for the
1802 without an assembler, for tape editing in '78 at KRCR-TV in Redding
CA, that was still in use in '94, but never really wrote assembly code
until the 6809 was out in the Radio Shack Color Computers.  os9 on the
coco's was the best teacher about the unix way of doing things there ever
was.  So I tell folks these days that I am 39, with 40 years experience at
being 39. ;-)


Robin Becker



Cheers, Gene



I also used the RCA 1802, but did you use the Ferranti F100L?  Rationale 
for the use of both, mid/late 70s they were the only processors of their 
respective type with military approvals.


Can't remember how we coded on the F100L, but the 1802 work was done on 
the Texas Instruments Silent 700, copying from one cassette tape to 
another.  Set the controls wrong when copying and whoops, you've just 
overwritten the work you've just done.  We could have had a decent 
development environment but it was on a UK MOD cost plus project, so the 
more inefficiently you worked, the more profit your employer made.


--
Python is the second best programming language in the world.
But the best has yet to be invented.  Christian Tismer

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


Unicode stdin/stdout (was: Re: python 3.3 repr)

2013-11-15 Thread random832
Of course, the real solution to this issue is to replace sys.stdout on
windows with an object that can handle Unicode directly with the
WriteConsoleW function - the problem there is that it will break code
that expects to be able to use sys.stdout.buffer for binary I/O. I also
wasn't able to get the analogous stdin replacement class to work with
input() in my attempts.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Gene Heskett
On Friday 15 November 2013 13:52:40 Mark Lawrence did opine:

> On 15/11/2013 16:36, Gene Heskett wrote:
> > On Friday 15 November 2013 11:28:19 Joel Goldstick did opine:
> >> On Fri, Nov 15, 2013 at 10:03 AM, Robin Becker 
> > 
> > wrote:
> >>> ...
> >>> 
> > became popular.
>  
>  Really? you cried and laughed over 7 vs. 8 bits?  That's lovely
>  (?). ;).  That eighth bit sure was less confusing than codepoint
>  translations
> >>> 
> >>> no we had 6 bits in 60 bit words as I recall; extracting the nth
> >>> character involved division by 6; smart people did tricks with
> >>> inverted multiplications etc etc  :(
> >>> --
> >> 
> >> Cool, someone here is older than me!  I came in with the 8080, and I
> >> remember split octal, but sixes are something I missed out on.
> > 
> > Ok, if you are feeling old & decrepit, hows this for a birthday:
> > 10/04/34, I came into micro computers about RCA 1802 time.  Wrote a
> > program for the 1802 without an assembler, for tape editing in '78 at
> > KRCR-TV in Redding CA, that was still in use in '94, but never really
> > wrote assembly code until the 6809 was out in the Radio Shack Color
> > Computers.  os9 on the coco's was the best teacher about the unix way
> > of doing things there ever was.  So I tell folks these days that I am
> > 39, with 40 years experience at being 39. ;-)
> > 
> >>> Robin Becker
> > 
> > Cheers, Gene
> 
> I also used the RCA 1802, but did you use the Ferranti F100L?  Rationale
> for the use of both, mid/late 70s they were the only processors of their
> respective type with military approvals.
> 
> Can't remember how we coded on the F100L, but the 1802 work was done on
> the Texas Instruments Silent 700, copying from one cassette tape to
> another.  Set the controls wrong when copying and whoops, you've just
> overwritten the work you've just done.  We could have had a decent
> development environment but it was on a UK MOD cost plus project, so the
> more inefficiently you worked, the more profit your employer made.

BTDT but in 1959-60 era.  Testing the ullage pressure regulators for the 
early birds, including some that gave John Glenn his first ride or 2.  I 
don't recall the brand of paper tape recorders, but they used 12at7's & 
12au7's by the grocery sack full.  One or more got noisy & me being the 
budding C.E.T. that I now am, of course ran down the bad ones and requested 
new ones.  But you had to turn in the old ones, which Stellardyne Labs 
simply recycled back to you the next time you needed a few.  Hopeless 
management IMO, but thats cost plus for you.

At 10k$ a truckload for helium back then, each test lost about $3k worth of 
helium because the recycle catcher tank was so thin walled.  And the 6 
stage cardox re-compressor was so leaky, occasionally blowing up a pipe out 
of the last stage that put about 7800 lbs back in the monel tanks.

I considered that a huge waste compared to the cost of a 12au7, then about 
$1.35, and raised hell, so I got fired.  They simply did not care that a 
perfectly good regulator was being abused to death when it took 10 or more 
test runs to get one good recording for the certification. At those 
operating pressures, the valve faces erode just like the seats in your 
shower faucets do in 20 years.  Ten such runs and you may as well bin it, 
but they didn't.

I am amazed that as many of those birds worked as did.  Of course if it 
wasn't manned, they didn't talk about the roman candles on the launch pads. 
I heard one story that they had to regrade one pads real estate at 
Vandenburg & start all over, seems some ID10T had left the cable to the 
explosive bolts hanging on the cable tower.  Ooops, and theres no off 
switch in many of those once the umbilical has been dropped.

Cheers, Gene
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)

Tehee quod she, and clapte the wyndow to.
-- Geoffrey Chaucer
A pen in the hand of this president is far more
dangerous than 200 million guns in the hands of
 law-abiding citizens.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Odd msg received from list

2013-11-15 Thread Chris “Kwpolska” Warrick
On Fri, Nov 15, 2013 at 12:30 AM, Gregory Ewing
 wrote:
> Verde Denim wrote:
>>
>> The message also listed my
>> account password, which I found odd.
>
>
> You mean the message contained your actual password,
> in plain text? That's not just odd, it's rather worrying
> for at least two reasons. First, what business does a
> message like that have carrying a password, and second,
> it means the server must be keeping passwords in a
> readable form somewhere, which is a really bad idea.

From the info page at https://mail.python.org/mailman/listinfo/python-list:

> You may enter a privacy password below. This provides only mild
> security, but should prevent others from messing with your
> subscription. **Do not use a valuable password** as it will
> occasionally be emailed back to you in cleartext.

> If you choose not to enter a password, one will be automatically
> generated for you, and it will be sent to you once you've confirmed
> your subscription.  You can always request a mail-back of your
> password when you edit your personal options. Once a month, your
> password will be emailed to you as a reminder.

-- 
Chris “Kwpolska” Warrick 
PGP: 5EAAEA16
stop html mail | always bottom-post | only UTF-8 makes sense
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Automation

2013-11-15 Thread Alister
On Fri, 15 Nov 2013 16:53:58 +, Neil Cerutti wrote:

> On 2013-11-15, Steven D'Aprano 
> wrote:
>> On Thu, 14 Nov 2013 20:03:44 +, Alister wrote:
>>> As a native of England I have to agree it is far to arrogant to expect
>>> everyone else to be able to speak good English when I can barley order
>>> a beer in any other language. (even or especially in the USA)
>>
>> Apparently you can "barley" write UK English either :-)
>>
>> No offence intended, I just thought that was an amusing error to make.
>> The word you're after is "barely", barley is a grain similar to wheat
>> or oats. Also "far too arrogant".

Damn Spell checker, at least it chose a good pun I could almost get away 
with claiming it was deliberate ;-)

But it also proves the point that if an Englishman can make simple mistakes 
after nearly half a century of usage, then non-native speakers should 
be admired for doing as well as they do.
> 
> I just learned about this kind of error yesterday while browsing the
> programming reddit!
> 
> http://en.wikipedia.org/wiki/Muphry's_law

except I was not correcting/criticising a grammatical error but defending 
those that make them.





-- 
Lawrence Radiation Laboratory keeps all its data in an old gray trunk.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Automation

2013-11-15 Thread Alister
On Fri, 15 Nov 2013 20:12:27 +, Alister wrote:

> On Fri, 15 Nov 2013 16:53:58 +, Neil Cerutti wrote:
> 
>> On 2013-11-15, Steven D'Aprano 
>> wrote:
>>> On Thu, 14 Nov 2013 20:03:44 +, Alister wrote:
 As a native of England I have to agree it is far to arrogant to
 expect everyone else to be able to speak good English when I can
 barley order a beer in any other language. (even or especially in the
 USA)
>>>
>>> Apparently you can "barley" write UK English either :-)
>>>
>>> No offence intended, I just thought that was an amusing error to make.
>>> The word you're after is "barely", barley is a grain similar to wheat
>>> or oats. Also "far too arrogant".
> 
> Damn Spell checker, at least it chose a good pun I could almost get away
> with claiming it was deliberate ;-)
> 
> But it also proves the point that if an Englishman can make simple mistakes
> after nearly half a century of usage, then non-native speakers should
> be admired for doing as well as they do.
>> 
>> I just learned about this kind of error yesterday while browsing the
>> programming reddit!
>> 
>> http://en.wikipedia.org/wiki/Muphry's_law
> 
> except I was not correcting/criticising a grammatical error but
> defending those that make them.

and if you haven't seen it before :-

Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn't mttaer in 
waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht 
the frist and lsat ltteer be at the rghit pclae. The rset can be a toatl 
mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae the huamn 
mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.





-- 
Liar:
one who tells an unpleasant truth.
-- Oliver Herford
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Automation

2013-11-15 Thread xDog Walker
On Friday 2013 November 15 06:58, Grant Edwards wrote:
> There are people (not many in this group) who grew up speaking English
> and really ought to apologize for their writing -- but they never do.

Can you supply an example of the form such an apology might take?

-- 
Yonder nor sorghum stenches shut ladle gulls stopper torque wet 
strainers.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 2.7.x on MacOSX: failed dlopen() on .so's

2013-11-15 Thread Paul Smith
On Thu, 2013-11-14 at 10:36 -0800, Ned Deily wrote:
> In article <1384442536.3496.532.camel@pdsdesk>,
>  Paul Smith  wrote:
> [...]
> > By relocatable I mean "runnable from any location"; i.e., not fixed.  I
> > have a wrapper around the Python executable that can compute the correct
> > root directory and set any environment variables or add flags or
> > whatever might be needed.
> 
> In that case, the python.org installer may not be a good choice.  You should 
> be able to accomplish what you want by building your own Python.  You'll probably 
> find you were getting tripped up by unnecessarily setting environment 
> variables.  Good luck!

Thanks Ned.  I got sidetracked for a while but I got back to this now,
and I found my problem.

The makefile I was using to control the build on Linux was stripping the
python executable to make it smaller.

However, stripping the python executable on MacOSX breaks it completely
so it can't load its shared libraries and I get errors as in my original
message.  If I remove the "strip" operation, then everything starts to
work as expected.


-- 
https://mail.python.org/mailman/listinfo/python-list


Sharing Python installation between architectures

2013-11-15 Thread Paul Smith
One thing I always liked about Perl was the way you can create a single
installation directory which can be shared between architectures.  Say
what you will about the language: the Porters have an enormous amount of
experience and expertise producing portable and flexible interpreter
installations.

By this I mean, basically, multiple architectures (Linux, Solaris,
MacOSX, even Windows) sharing the same $prefix/lib/python2.7 directory.
The large majority of the contents there are completely portable across
architectures (aren't they?) so why should I have to duplicate many
megabytes worth of files?

The only parts of the install which are not shareable (as far as I can
tell) are the .so dynamic objects (and the python executable itself
obviously).

If the default sys.path included platform-specific directories as well
as the generic lib-dynload, it would be possible.


I do see that there are "plat-*" directories available in the default
path.  Is it possible to make use of these (say, by renaming each
architecture's lib-dynload to the appropriate plat-* name)?
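For what it's worth, this is the sort of startup tweak I'm imagining,
e.g. dropped into a sitecustomize.py; the plat-<system>-<machine>
directory name below is just a hypothetical convention of my own, not
something Python defines:

import os
import platform
import sys

# add a per-architecture directory holding the .so files, so the rest of
# $prefix/lib/python2.7 can be shared between machines
_platdir = os.path.join(sys.prefix, 'lib', 'python2.7',
                        'plat-%s-%s' % (platform.system().lower(),
                                        platform.machine()))
if os.path.isdir(_platdir) and _platdir not in sys.path:
    sys.path.append(_platdir)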


If that works, the remaining issue is the site-packages directory.
There is no ability (that I can see) to separate out the shareable vs.
non-sharable aspects of the add-on site-packages.


Any comments or suggestions?  Am I overestimating the amount of sharing
that's possible?  Thanks!

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Running python's own unit tests?

2013-11-15 Thread Russell E. Owen
In article <5285223d.50...@timgolden.me.uk>,
 Tim Golden  wrote:

> http://docs.python.org/devguide/

Thank you and the other responders. I was expecting to find the 
information here  under 
Building Python. The developer's guide is a nice resource.

-- Russell

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Implementing #define macros similar to C on python

2013-11-15 Thread Irmen de Jong
On 15-11-2013 3:29, JL wrote:
> One of my favorite tools in C/C++ language is the preprocessor macros.
> 
> One example is switching certain print messages for debugging use only
> 
> #ifdef DEBUG_ENABLE
> DEBUG_PRINT   print
> #else
> DEBUG_PRINT
> 
> Is it possible to implement something similar in python? Thank you.
> 

You could just run cpp (or gcc -E) on your python-with-macros-file to generate 
the final
.py file. But: yuck, eww, gross.

Irmen


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: PyMyth: Global variables are evil... WRONG!

2013-11-15 Thread Tim Daneliuk

On 11/15/2013 09:42 AM, Chris Angelico wrote:

On Sat, Nov 16, 2013 at 2:26 AM, Tim Daneliuk  wrote:

On 11/15/2013 02:19 AM, Steven D'Aprano wrote:

Nobody sets out to *design* a tangled mess. What normally happens is that
a tangled mess is the result of *lack of design*.


This has been an interesting thread - to me anyway - but this bit
above caught my eye.  People write programs for lots of reasons -
personal, academic, scientific, and commercial - but I actually
don't think the resultant messes are caused by a "lack of
design" most of the time.  In my experience they're caused by only two
things:

2) An evolving set of requirements.


This can be an explanation for a lack of design, but it's no less a
lack. Sometimes, something just grows organically... from a nucleus of
good design, but undesigned growth. Maybe it's time it got redesigned;
or maybe redesigning would take too much effort and it's just not
worth spending that time on something that's going to be phased out by
the next shiny thing in a couple of years anyway. Doesn't change the
fact that the current state is not the result of design, but of
disorganized feature creep. That's not necessarily a terrible thing,
but Steven's point still stands: such lack of design often results in
a tangled mess, and a tangled mess can often be blamed on lack of
design.

ChrisA



A fair point.  Perhaps a better way to say this would be "Code that
is a tangled mess is often so because good design was not possible
during its creation."

The problem, of course, is that in almost all circumstances there is usually
not a lot of economic benefit to redesign and restructure the code
once you *do* know the requirements.  Other projects compete for attention
and fixing old, ugly stuff rarely gets much attention.  This is particularly
true insofar as most organizations do a lousy job of tracking what it
is really costing them to operate that kind of code.  If they did, cleaning
things up would become a much bigger priority.

Oh, and inevitably, the person that wrote the code without stable requirements
and without being given time to go back, refactor, cleanup, and restructure the
code ... gets blamed by the people that have to run and maintain it.



Years ago I worked for a company that did embedded banking software that
ran on high speed check readers.  It was an "application" that had been
undergoing constant feature creep and change during about an 18 month period
because the woman in marketing running the program kept getting
Bright New Ideas (tm) to peddle.

The programmer - frustrated by this - began adding increasingly direct,
personal, and biological comments about said marketing person in the
comments of his assembler code.  Anyway, the new feature requests finally
stopped, and she came in one day to briskly inform us that the code
had been sold to one of our customers and she'd need a tape by end of
week.  The guy who'd been writing all this turned sheet white and scrambled
over the next few days to expunge all his nasty comments from thousands
of lines of assembler.  It was most entertaining to watch ...

--

Tim Daneliuk tun...@tundraware.com
PGP Key: http://www.tundraware.com/PGP/

--
https://mail.python.org/mailman/listinfo/python-list


Re: Implementing #define macros similar to C on python

2013-11-15 Thread JL
Thanks! This is the answer which I am seeking. However, I am not able to get 
the following line to work. I am using python 2.7.5

debug_print = print

Can we assign a function into a variable in this manner?

On Friday, November 15, 2013 11:49:52 AM UTC+8, Chris Angelico wrote:
> On Fri, Nov 15, 2013 at 1:29 PM, JL  wrote:
> 
> > One of my favorite tools in C/C++ language is the preprocessor macros.
> 
> >
> 
> > One example is switching certain print messages for debugging use only
> 
> >
> 
> > #ifdef DEBUG_ENABLE
> 
> > DEBUG_PRINT   print
> 
> > #else
> 
> > DEBUG_PRINT
> 
> >
> 
> > Is it possible to implement something similar in python? Thank you.
> 
> 
> 
> There are usually other ways to do things. For instance, you can
> 
> define a function to either do something or do nothing:
> 
> 
> 
> if debug_mode:
> 
> debug_print = print
> 
> else:
> 
> debug_print = lambda *args: None
> 
> 
> 
> debug_print("This won't be shown unless we're in debug mode!")
> 
> 
> 
> But as Dave says, you could write a preprocessor if you need one.
> 
> 
> 
> ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Terry Reedy

On 11/15/2013 6:28 AM, Robin Becker wrote:

> I'm trying to understand what's going on with this simple program
>
> if __name__=='__main__':
>     print("repr=%s" % repr(u'\xc1'))
>     print("%%r=%r" % u'\xc1')
>
> On my windows XP box this fails miserably if run directly at a terminal
>
> C:\tmp> \Python33\python.exe bang.py
> Traceback (most recent call last):
>   File "bang.py", line 2, in <module>
>     print("repr=%s" % repr(u'\xc1'))
>   File "C:\Python33\lib\encodings\cp437.py", line 19, in encode
>     return codecs.charmap_encode(input,self.errors,encoding_map)[0]
> UnicodeEncodeError: 'charmap' codec can't encode character '\xc1' in
> position 6: character maps to <undefined>
>
> If I run the program redirected into a file then no error occurs and
> the result looks like this
>
> C:\tmp>cat fff
> repr='┴'
> %r='┴'
>
> and if I run it into a pipe it works as though into a file.
>
> It seems that repr thinks it can render u'\xc1' directly which is a
> problem since print then seems to want to convert that to cp437 if
> directed into a terminal.
>
> I find the idea that print knows what it's printing to a bit dangerous,


print() just calls file.write(s), where file defaults to sys.stdout, for 
each string fragment it creates. write(s) *has* to encode s to bytes 
according to some encoding, and it uses the encoding associated with the 
file when it was opened.
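You can reproduce the failure without print() at all (assuming the
console encoding really is cp437, as your traceback shows):

>>> import sys
>>> sys.stdout.encoding
'cp437'
>>> u'\xc1'.encode(sys.stdout.encoding)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'charmap' codec can't encode character '\xc1' in
position 0: character maps to <undefined>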



> but it's the repr behaviour that strikes me as bad.
>
> What is responsible for defining the repr function's 'printable'
> so that repr would give me say an Ascii rendering?

That is not repr's job. Perhaps you are looking for
>>> repr(u'\xc1')
"'Á'"
>>> ascii(u'\xc1')
"'\\xc1'"
The above is with Idle on Win7. It is *much* better than the 
intentionally crippled console for working with the BMP subset of unicode.


--
Terry Jan Reedy


--
https://mail.python.org/mailman/listinfo/python-list


Re: Implementing #define macros similar to C on python

2013-11-15 Thread Terry Reedy

On 11/15/2013 6:36 PM, JL wrote:

Thanks! This is the answer which I am seeking. However, I am not able to get 
the following line to work. I am using python 2.7.5

debug_print = print


Start your file with
from __future__ import print_function
and the above should work.

Oh, and please snip stuff not relevant to your post.

--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Implementing #define macros similar to C on python

2013-11-15 Thread Mark Lawrence

On 15/11/2013 23:36, JL wrote:

Thanks! This is the answer which I am seeking. However, I am not able to get 
the following line to work. I am using python 2.7.5

debug_print = print

Can we assign a function into a variable in this manner?

On Friday, November 15, 2013 11:49:52 AM UTC+8, Chris Angelico wrote:

On Fri, Nov 15, 2013 at 1:29 PM, JL  wrote:


One of my favorite tools in C/C++ language is the preprocessor macros.







One example is switching certain print messages for debugging use only







#ifdef DEBUG_ENABLE



DEBUG_PRINT   print



#else



DEBUG_PRINT







Is it possible to implement something similar in python? Thank you.




There are usually other ways to do things. For instance, you can

define a function to either do something or do nothing:



if debug_mode:

 debug_print = print

else:

 debug_print = lambda *args: None



debug_print("This won't be shown unless we're in debug mode!")



But as Dave says, you could write a preprocessor if you need one.



ChrisA


Yes but please don't top post.  Actually print is a statement in Python 
2 so your code should work if you use


from __future__ import print_function

at the top of your code.

Would you also be kind enough to read and action this 
https://wiki.python.org/moin/GoogleGroupsPython to prevent the double 
line spacing shown above, thanks.


--
Python is the second best programming language in the world.
But the best has yet to be invented.  Christian Tismer

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


Re: Implementing #define macros similar to C on python

2013-11-15 Thread Irmen de Jong
On 16-11-2013 0:36, JL wrote:
> Thanks! This is the answer which I am seeking. However, I am not able to get 
> the following line to work. I am using python 2.7.5
> 
> debug_print = print
> 
> Can we assign a function into a variable in this manner?

Yes, functions are just another object. But 'print' is only a function as of 
Python 3.
For your version, try adding this as the first line:
from __future__ import print_function
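Putting it together, a minimal Python 2.7 sketch of the whole pattern
(DEBUG here is just a stand-in for however you decide to toggle it):

from __future__ import print_function

DEBUG = True   # flip to False to silence debug output

if DEBUG:
    debug_print = print
else:
    def debug_print(*args, **kwargs):
        pass   # swallow everything, so call sites don't have to change

debug_print("only shown when DEBUG is True")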

Irmen

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python 3.3 repr

2013-11-15 Thread Steven D'Aprano
On Fri, 15 Nov 2013 17:47:01 +, Neil Cerutti wrote:

> The unicode support I'm learning in Go is, "Everything is utf-8, right?
> RIGHT?!?" It also has the interesting behavior that indexing strings
> retrieves bytes, while iterating over them results in a sequence of
> runes.
> 
> It comes with support for no encodings save utf-8 (natively) and utf-16
> (if you work at it). Is that really enough?

Only if you never need to handle data created by other applications.



-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Automation

2013-11-15 Thread Tim Chase
On 2013-11-15 13:43, xDog Walker wrote:
> On Friday 2013 November 15 06:58, Grant Edwards wrote:
> > There are people (not many in this group) who grew up speaking
> > English and really ought to apologize for their writing -- but
> > they never do.  
> 
> Can you supply an example of the form such an apology might take?

"I'm sorry that, despite growing up steeped in the language, I can't
manage to put together two coherent thoughts or practically apply any
of the spelling/grammar/punctuation/capitalization lessons provided
at no cost to me throughout 12+ years of academic instruction."

Harumph.  Non-native speakers get my extensive compassion--English
really is a nutso language, and any attempt to use it for
communicating should be lauded in the face of that challenge.
However, native speakers have a higher bar, IMHO.

-tkc


-- 
https://mail.python.org/mailman/listinfo/python-list


Bug asking for input number

2013-11-15 Thread Arturo B
Hi! I hope you can help me.

I'm writing a simple piece of code.
I need to keep asking for a number until it has all this specifications:

- It is a number
- Its length is 3
- The hundreds digit differs from the ones digit by at least two

My problem is that I enter a valid number like: 123, 321, 159, 346... and it 
keeps asking for a valid number.

Here's my code:

res = input('Give me a number --> ')
hundreds = int(res[0])
ones = int(res[2])

# checks if the user enters a valid number
while not res.isdigit() or not len(res) == 3 or abs(hundreds - ones) <= 2:
res = input('Enter a valid number --> ')

Thanks for help!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Bug asking for input number

2013-11-15 Thread MRAB

On 16/11/2013 02:15, Arturo B wrote:

Hi! I hope you can help me.

I'm writting a simple piece of code.
I need to keep asking for a number until it has all this specifications:

- It is a number
- It's lenght is 3
- The hundred's digit differs from the one's digit by at least two

My problem is that I enter a valid number like: 123, 321, 159, 346... and it 
keeps asking for a valid number.

Here's mi code:

res = input('Give me a number --> ')
hundreds = int(res[0])
ones = int(res[2])

# checks if the user enters a valid number
while not res.isdigit() or not len(res) == 3 or abs(hundreds - ones) <= 2:
 res = input('Enter a valid number --> ')

Thanks for help!


In the loop you're asking for the number but not doing the:

hundreds = int(res[0])
ones = int(res[2])

bit for it.

Also, after the number is entered for the first time, you're not
first checking its length or that it's a number.

It's probably easier just to use a break in a loop:

while True:
    res = input('Give me a number --> ')
    if len(res) == 3 and res.isdigit() and abs(int(res[0]) - int(res[2])) >= 2:
        break

--
https://mail.python.org/mailman/listinfo/python-list


Re: Bug asking for input number

2013-11-15 Thread Terry Reedy

On 11/15/2013 9:15 PM, Arturo B wrote:

Hi! I hope you can help me.

I'm writting a simple piece of code.
I need to keep asking for a number until it has all this specifications:

- It is a number
- It's lenght is 3
- The hundred's digit differs from the one's digit by at least two

My problem is that I enter a valid number like: 123, 321, 159, 346... and it 
keeps asking for a valid number.


If you enter a 'valid' number at first try, it works fine.


Here's mi code:

res = input('Give me a number --> ')
hundreds = int(res[0])
ones = int(res[2])

# checks if the user enters a valid number
while not res.isdigit() or not len(res) == 3 or abs(hundreds - ones) <= 2:


Look at that last condition *carefully*


 res = input('Enter a valid number --> ')



--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Bug asking for input number

2013-11-15 Thread Arturo B
MRAB, your solution is good, thank you, I will use it.

Terry Reedy, I saw my mistake (for example, 2 <= 2); I think it's easier to 
use break in this case, thank you!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Suggest an open-source issue tracker, with github integration and kanban boards?

2013-11-15 Thread Jason Friedman
> Can you recommend an open source project (or two) written in Python;
> which covers multi project + sub project issue tracking linked across
> github repositories?
>

Why does it need to be written in Python?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Bug asking for input number

2013-11-15 Thread Christopher Welborn

On 11/15/2013 08:15 PM, Arturo B wrote:
> Hi! I hope you can help me.
>
> I'm writting a simple piece of code.
> I need to keep asking for a number until it has all this specifications:
>
> - It is a number
> - It's lenght is 3
> - The hundred's digit differs from the one's digit by at least two
>
> My problem is that I enter a valid number like: 123, 321, 159, 346...
> and it keeps asking for a valid number.

>
> Here's mi code:
>
> res = input('Give me a number --> ')
> hundreds = int(res[0])
> ones = int(res[2])
>
> # checks if the user enters a valid number
> while not res.isdigit() or not len(res) == 3 or abs(hundreds - ones) <= 2:
>     res = input('Enter a valid number --> ')
>
> Thanks for help!
>


You only set 'hundreds' and 'ones' the first time; when the loop goes 
around, those values never change. Also, I don't see any .isdigit() 
before you call int(), which may make it error (maybe you just didn't 
post the full code?). Also, I think your <= is flipped the wrong way.

The difference should be greater than or equal to 2 right?
Try something like this:

def is_valid_input(s):
""" Returns True if a number is a digit,
is 3 digits long,
and hundreds - ones is >= 2
"""
if not (s.isdigit() and (len(s) == 3)):
return False
hundreds = int(s[0])
ones = int(s[2])
return abs(hundreds - ones) >= 2

prompt = 'Give me a number --> '

res = input(prompt)
while not is_valid_input(res):
print('\nInvalid number!: {}\n'.format(res))
res = input(prompt)

...Of course you don't have to make it a function, I just did that 
because it was going to be used more than once. If you need to actually 
work with 'hundreds' and 'ones', you can rewrite it to suit your needs.


--

- Christopher Welborn 
  http://welbornprod.com

--
https://mail.python.org/mailman/listinfo/python-list


Re: Bug asking for input number

2013-11-15 Thread Christopher Welborn

Sorry about my previous post, gmane is being really slow. :(

I wouldn't have posted if I knew the question was already answered.


--

- Christopher Welborn 
  http://welbornprod.com

--
https://mail.python.org/mailman/listinfo/python-list


Re: PyMyth: Global variables are evil... WRONG!

2013-11-15 Thread Rick Johnson
On Friday, November 15, 2013 2:19:01 AM UTC-6, Steven D'Aprano wrote:

> But with software, coupling is *easy*. By default, code in
> a single process is completely coupled. Think of a chunk
> of machine code running in a single piece of memory. We
> have to build in our own conventions for decoupling code:
> subroutines, local variables, objects, modular code, and
> so forth. Physical objects are inherently decoupled. Code
> is inherently coupled, and we need conventions to decouple
> it. One of those conventions is to prefer local variables
> to global variables, and another is to limit the scope of
> global variables to per module rather than process-wide.

Your thoughts on "coupling" and "decoupling"
of design architecture are correct, but you only argue for
your side :-). Allow me now to argue for my side.

And i want to leave the "safe" world of general analogies
and enter the dark esoteric word of flawed software design.

And since people only want to give me credit when i talk
about Tkinter, well then, what better example of bad design
is there than Tkinter? Hmm, well there's IDLE but that will
have to wait for another thread.

Let's see... Tkinter's design today is a single module
containing a staggering:

155,626 chars

3,733 lines

30 classes 

16 functions

4 "puesdo-constants" (Python does not support true
constants!)

10 "module level" variables (3 of which are mutated from
nested scopes within the module itself)

Unwise use of a global import for the types module, even
though only a few names are used -- AND there are
better ways to test type nowadays!

Unwisely auto-imports 82 Tkinter constants.

Only OpenGL is more promiscuous than Tkinter!  

But let's stay on subject shall we!


  The Road To Recovery:


The very first thing a wise programmer would do is create a
package called "tkinter". Then, he would export all class
source code to individual sub-modules -- each module being
the class name in lowercase.

AND VOILA! 

Even after only a simple half hour of restructuring, the
code is starting to become maintainable -- IMAGINE THAT!


BUT DON'T GET YOUR ASTROGLIDE OUT YET FELLA! 

WE'VE GOT MORE WORK TO DO!

Just as the programmer thought "all was well" in "toon
town", he quickly realizes that since Python has no
intelligent global variable access, and his sub-modules need
to share data with the main tkinter module (and vice versa),
he will be forced to write:

from tkinter import var1, var2, ..., varN

IN EVERY DAMN SUBMODULE that needs to access or
mutate one of the shared variables or shared
functions.

Can anyone tell me why sharing globals between sub-packages
is really so bad that we have to import things over and
over?

And if so, would you like to offer a cleaner solution for
the problem? 

And don't give me the messy import thing, because that's 
not elegant!

WHY IS IT NOT ELEGANT RICK?

Because when i see code that accesses a variable like this:

var = value

I have no way of knowing whether the mutation is happening
to a local variable, a module level variable, or even a true
global level variable (one which extends beyond the
containing module).

Sure, i could search the file looking for imports or
global declarations, but why not use "self documenting
paths" to global variables?

The best solution is to create a global namespace. You could
name it "G". So when i see a mutation like this:

    G.var = value

I will know that the mutation is happening to a REAL global
variable. But, even that information is lacking. I need
more... What i really want to see is this:

G.tkinter.var = value
  
Boom baby! Everything i need to know is contained within
that single line without "import everywhere".

   I am accessing a global variable
   I am accessing a global variable for the tkinter package
   The variable's name is "var"
  
It's explicit, but it's not SO explicit that it becomes
excessive, no. I would much rather type just a FEW more
characters than scour source code looking for obscure clues
like global declarations, imports, or whatever foolish
design you can pull out of your arse!
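And for the record, here's how little code the "G" idea takes -- a bare-
bones sketch, with module and attribute names that are made up, obviously:

# g.py -- the one and only home for shared state
class _Namespace(object):
    """A dumb attribute bag, one per package."""

tkinter = _Namespace()

# any other module:
import g as G

G.tkinter.var = "value"    # obviously a global write, and it says whose
print(G.tkinter.var)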

And what about the mysterious "run-time injected
global", how the heck are you planning to handle
that one with imports?

I just want to access globals in a logical and consistent
manner via a clean interface which will alleviate all the
backtracking and detective work that causes us to lose focus
on the main architecture of our software.

Because,

EXPLICIT IS BETTER THAN IMPLICIT.
 
And, 

FOCUS IS BETTER THAN FRUSTRATION!

Is that really too much to ask? 

Must i create a hack (C.py and G.py) for every missing or
broken feature in this damn language?
-- 
https://mail.python.org/mailman/listinfo/python-list

Re: PyMyth: Global variables are evil... WRONG!

2013-11-15 Thread Chris Angelico
On Sat, Nov 16, 2013 at 3:01 PM, Rick Johnson
 wrote:
> Let's see... Tkinter's design today is a single module
> containing a staggering:
>
> 155,626 chars
>
> 3,733 lines

Also: I see nothing wrong with a single module having 3-4K lines in
it. Hilfe, the Pike REPL/interactive interpreter, is about that long
and it's not a problem to maintain. The Python decimal module (as
opposed to CDecimal) is twice that, in the installation I have here to
check. My primary C++ module from work was about 5K lines, I think -
of that order, at least.

Python modules don't need to be split up into tiny fragments.
Flat is better than nested.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: PyMyth: Global variables are evil... WRONG!

2013-11-15 Thread Chris Angelico
On Sat, Nov 16, 2013 at 3:01 PM, Rick Johnson
 wrote:
> Because when i see code that accesses a variable like this:
>
> var = value
>
> I have no way of knowing whether the mutation is happening
> to a local variable, a module level variable, or even a true
> global level variable (one which extends beyond the
> containing module).

If it's in a function, and there's no global/nonlocal declaration,
it's local. Otherwise, it's module level. It can't be process-level in
Python, so you don't need to worry about that.
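A compact illustration of the rule:

x = 1              # module level

def f():
    x = 2          # no declaration, so assignment creates a local

def g():
    global x
    x = 3          # explicitly rebinds the module-level name

f(); g()
print(x)           # 3 -- only g() touched the module-level x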

It doesn't get much simpler than that without variable declarations
(in which case it's "scan surrounding scopes till you find a
declaration, that's it" - and it's arguable whether that's simpler or
not). Really Rick, you're clutching at straws here.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Implementing #define macros similar to C on python

2013-11-15 Thread JL
On Saturday, November 16, 2013 8:22:25 AM UTC+8, Mark Lawrence wrote:

> Yes but please don't top post.  Actually print is a statement in Python 
> 2 so your code should work if you use
> from __future__ import print_function
> at the top of your code.
> Would you also be kind enough to read and action this 
> https://wiki.python.org/moin/GoogleGroupsPython to prevent the double 
> line spacing shown above, thanks.

Thank you for the tip. Will try that out. Hope I get the posting etiquette 
right this time.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Automation

2013-11-15 Thread Larry Hudson

On 11/15/2013 07:02 AM, Grant Edwards wrote:

On 2013-11-15, Paul Rudin  wrote:

Steven D'Aprano  writes:


A few minor errors is one thing, but when you see people whose posts are
full of error after error and an apparent inability to get English syntax
right, you have to wonder how on earth they expect to be a programmer?


The irritating thing is apparent lack of care. A post is written once
and will be seen (perhaps not read) by many people. People post with the
intention of others reading their words. If they can't be bothered to
take a little care in writing, why should we spend time reading?


Just because English is your second language it doesn't mean you don't
need to pay attention to what keys you're hitting and proof-read a
posting before hitting "send".

And yes, people can _easily_ tell the difference between errors caused
by being lazy/sloppy and errors caused by writing in a second
language.

Not to start another flame-war (I hope), but our Greek friend is a good example of that.  It's 
not surprising he has so much trouble with his code.


However, that's just a side comment.  I wanted to mention my personal peeve...

I notice it's surprisingly common for people who are native English-speakers to use 'to' in 
place of 'too' (to little, to late.), "your" in place of "you're" (Your an idiot!) and 'there' 
in place of 'their' (a foot in there mouth.)  There are similar mis-usages, of course, but those 
three seem to be the most common.


Now, I'm a 76-year-old curmudgeon and maybe overly sensitive, but I felt a need 
to vent a bit.

 -=- Larry -=-

--
https://mail.python.org/mailman/listinfo/python-list