How does CO_FUTURE_DIVISION compiler flag get propagated?

2011-07-02 Thread Terry
I've built a Python app for the iPhone, http://www.sabonrai.com/PythonMath/.

Like embedding Python in another app, it uses PyRun_SimpleString() to
execute commands entered by the user. For evaluating expressions, it
uses PyEval_EvalCode() with the dictionary from the __main__ module.

Future division ("from __future__ import division") works within
scripts executed by import or execfile(). However, it does not work
when entered interactively in the interpreter like this:

>>> from __future__ import division
>>> a=2/3

You get classic (integer) division, but if you enter it as follows,
you get future (float) division.

>>> from __future__ import division;a=2/3

It appears that the CO_FUTURE_DIVISION compiler flag is not being
retained in the interpreter so that later commands get compiled
without that flag.

I found a hint in
http://groups.google.com/group/comp.lang.python/browse_thread/thread/13a90a9f6eb96c73/960e47f572a59711?lnk=gst&q=co_future_division#960e47f572a59711,
but I don't see that PyRun_SimpleStringFlags returns the flags it
uses. I guess I could watch for the user to enter the import command
but that's really brittle.
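
For reference, the same bookkeeping can be seen from pure Python with
compile(); a minimal sketch of what the embedding code has to do is
below (the C-level analogue, if I read the docs right, is to keep one
PyCompilerFlags struct alive and pass it to every
PyRun_SimpleStringFlags() call, since that call updates cf_flags when
a future statement is executed):

import __future__

flags = 0
code = compile("from __future__ import division", "<input>", "exec", flags)
# compiling the future statement sets the flag on the code object...
flags |= code.co_flags & __future__.division.compiler_flag
# ...and the accumulated flags must be passed to every later compile
print(eval(compile("2/3", "<input>", "eval", flags)))  # true division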

Thanks.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How does CO_FUTURE_DIVISION compiler flag get propagated?

2011-07-02 Thread Terry
On Jul 2, 3:55 pm, Hrvoje Niksic  wrote:
> Terry  writes:
> > Future division ("from __future__ import division") works within
> > scripts executed by import or execfile(). However, it does not work
> > when entered interactively in the interpreter like this:
>
> >>>> from __future__ import division
> >>>> a=2/3
>
> Are you referring to the interactive interpreter normally invoked by
> just running "python"?  That seems to work for me:
>
> Python 2.7.1+ (r271:86832, Apr 11 2011, 18:13:53)
> [GCC 4.5.2] on linux2
> Type "help", "copyright", "credits" or "license" for more information.>>> 2/3
> 0
> >>> from __future__ import division
> >>> 2/3
>
> 0.6666666666666666

Yes, that works for me on my Mac. The problem I'm having is in a
Python interpreter that I built for the iPhone. It uses
PyRun_SimpleString() to execute user entered commands. After you
import future division, it does not seem to remember it on subsequent
commands.
-- 
http://mail.python.org/mailman/listinfo/python-list


.py and .pyc files in read-only directory

2011-10-14 Thread Terry
I'm having a problem with my iPhone/iPad app, Python Math, a Python
2.7 interpreter. All the Python modules are delivered in what Apple
calls the app bundle. They are in a read-only directory. This means
that Python cannot write .pyc files to that directory. (I get a deny
write error when doing this.) I tried using compileall to precompile
all the modules, but now I get an unlink error because Python
apparently wants to rebuild the .pyc files.

I've considered two solutions:
1) Delete all the .py files, delivering only the .pyc, or
2) Redirecting the .pyc files into a separate, writable directory.

Will solution 1) work? I don't know how to do 2), and the only
reference I can find to it is a withdrawn PEP, 304.
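
One thing I can do in the meantime, if I read the docs right (Python
2.6+), is stop the interpreter from trying to write .pyc files at all;
a minimal sketch:

import sys
sys.dont_write_bytecode = True   # suppress .pyc writes for later imports

# or, before the interpreter starts, set the environment variable:
#   PYTHONDONTWRITEBYTECODE=1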

Suggestions?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: .py and .pyc files in read-only directory

2011-10-14 Thread Terry
Thanks, that's very useful. And it explains why Python Math wants to rewrite 
the .pyc files: imp.get_magic() returns (null) whereas on my Mac where I 
compiled them, get_magic() returns '\x03\xf3\r\n'.

Now I just have to figure out why I'm getting nothing useful from get_magic().

I assume this would have to be fixed to try solution 1), i.e., leaving out the 
.py files and delivering only the .pyc.
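
For reference, a quick way to check whether a given .pyc matches the
running interpreter is to compare its first four bytes with
imp.get_magic(); the path below is just an example:

import imp

def pyc_matches_interpreter(pyc_path):
    # a .pyc starts with the 4-byte magic of the interpreter that wrote it
    with open(pyc_path, 'rb') as f:
        return f.read(4) == imp.get_magic()

print(pyc_matches_interpreter('/path/to/module.pyc'))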

Terry
-- 
http://mail.python.org/mailman/listinfo/python-list


Python3.0 has more duplication in source code than Python2.5

2009-02-06 Thread Terry
I used a CPD (copy/paste detector) in PMD to analyze the code
duplication in Python source code. I found that Python3.0 contains
more duplicated code than the previous versions. The CPD tool is far
from perfect, but I still feel the analysis makes some sense.

| Source Code      |  NLOC | Dup60 | Dup30 | Rate60 | Rate30 |
| Python1.5 (Core) | 19418 |  1072 |  3023 |     6% |    16% |
| Python2.5 (Core) | 35797 |  1656 |  6441 |     5% |    18% |
| Python3.0 (Core) | 40737 |  3460 |  9076 |     8% |    22% |
| Apache (server)  | 18693 |  1114 |  2553 |     6% |    14% |

NLOC: the net lines of code
Dup60: lines of code that have a run of at least 60 consecutive tokens
duplicated elsewhere (counted two or more times)
Dup30: the same, with a 30-token threshold
Rate60: Dup60/NLOC
Rate30: Dup30/NLOC

We can see that the duplication rate tends to be stable across
projects, but Python3.0's is somewhat higher. Considering the modest
increase in NLOC, the duplication rate of Python3.0 might be too high.

Does that say something about the code quality of Python3.0?
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python3.0 has more duplication in source code than Python2.5

2009-02-07 Thread Terry
On Feb 7, 3:36 pm, "Martin v. Löwis"  wrote:
> > Does that say something about the code quality of Python3.0?
>
> Not necessarily. IIUC, copying a single file with 2000 lines
> completely could already account for that increase.
>
> It would be interesting to see what specific files have gained
> large numbers of additional files, compared to 2.5.
>
> Regards,
> Martin

But the duplications are mostly not very big, ranging from about 100
lines (rare) down to fewer than 5 lines. As you can see, Rate30 is much
bigger than Rate60, which means there are a lot of small duplications.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python3.0 has more duplication in source code than Python2.5

2009-02-07 Thread Terry
On Feb 7, 7:10 pm, "Diez B. Roggisch"  wrote:
> Terry schrieb:
>
> > On 2月7日, 下午3时36分, "Martin v. Löwis"  wrote:
> >>> Does that say something about the code quality of Python3.0?
> >> Not necessarily. IIUC, copying a single file with 2000 lines
> >> completely could already account for that increase.
>
> >> It would be interesting to see what specific files have gained
> >> large numbers of additional files, compared to 2.5.
>
> >> Regards,
> >> Martin
>
> > But the duplication are always not very big, from about 100 lines
> > (rare) to less the 5 lines. As you can see the Rate30 is much bigger
> > than Rate60, that means there are a lot of small duplications.
>
> Do you by any chance have a few examples of these? There is a lot of
> idiomatic code in python to e.g. acquire and release the GIL or doing
> refcount-stuff. If that happens to be done with rather generic names as
> arguments, I can well imagine that as being the cause.
>
> Diez

Example 1:
Found a 64 line (153 tokens) duplication in the following files:
Starting at line 73 of D:\DOWNLOADS\Python-3.0\Python\thread_pth.h
Starting at line 222 of D:\DOWNLOADS\Python-3.0\Python
\thread_pthread.h

    return (long) threadid;
#else
    return (long) *(long *) &threadid;
#endif
}

static void
do_PyThread_exit_thread(int no_cleanup)
{
    dprintf(("PyThread_exit_thread called\n"));
    if (!initialized) {
        if (no_cleanup)
            _exit(0);
        else
            exit(0);
    }
}

void
PyThread_exit_thread(void)
{
    do_PyThread_exit_thread(0);
}

void
PyThread__exit_thread(void)
{
    do_PyThread_exit_thread(1);
}

#ifndef NO_EXIT_PROG
static void
do_PyThread_exit_prog(int status, int no_cleanup)
{
    dprintf(("PyThread_exit_prog(%d) called\n", status));
    if (!initialized)
        if (no_cleanup)
            _exit(status);
        else
            exit(status);
}

void
PyThread_exit_prog(int status)
{
    do_PyThread_exit_prog(status, 0);
}

void
PyThread__exit_prog(int status)
{
    do_PyThread_exit_prog(status, 1);
}
#endif /* NO_EXIT_PROG */

#ifdef USE_SEMAPHORES

/*
 * Lock support.
 */

PyThread_type_lock
PyThread_allocate_lock(void)
{

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python3.0 has more duplication in source code than Python2.5

2009-02-07 Thread Terry
On Feb 7, 7:10 pm, "Diez B. Roggisch"  wrote:
> Terry schrieb:
>
> > On 2月7日, 下午3时36分, "Martin v. Löwis"  wrote:
> >>> Does that say something about the code quality of Python3.0?
> >> Not necessarily. IIUC, copying a single file with 2000 lines
> >> completely could already account for that increase.
>
> >> It would be interesting to see what specific files have gained
> >> large numbers of additional files, compared to 2.5.
>
> >> Regards,
> >> Martin
>
> > But the duplication are always not very big, from about 100 lines
> > (rare) to less the 5 lines. As you can see the Rate30 is much bigger
> > than Rate60, that means there are a lot of small duplications.
>
> Do you by any chance have a few examples of these? There is a lot of
> idiomatic code in python to e.g. acquire and release the GIL or doing
> refcount-stuff. If that happens to be done with rather generic names as
> arguments, I can well imagine that as being the cause.
>
> Diez

Example 2:
Found a 16 line (106 tokens) duplication in the following files:
Starting at line 4970 of D:\DOWNLOADS\Python-3.0\Python\Python-ast.c
Starting at line 5015 of D:\DOWNLOADS\Python-3.0\Python\Python-ast.c
Starting at line 5073 of D:\DOWNLOADS\Python-3.0\Python\Python-ast.c
Starting at line 5119 of D:\DOWNLOADS\Python-3.0\Python\Python-ast.c

        PyErr_Format(PyExc_TypeError,
            "GeneratorExp field \"generators\" must be a list, not a %.200s",
            tmp->ob_type->tp_name);
        goto failed;
    }
    len = PyList_GET_SIZE(tmp);
    generators = asdl_seq_new(len, arena);
    if (generators == NULL) goto failed;
    for (i = 0; i < len; i++) {
        comprehension_ty value;
        res = obj2ast_comprehension(PyList_GET_ITEM(tmp, i), &value, arena);
        if (res != 0) goto failed;
        asdl_seq_SET(generators, i, value);
    }
    Py_XDECREF(tmp);
    tmp = NULL;
} else {
    PyErr_SetString(PyExc_TypeError,
        "required field \"generators\" missing from GeneratorExp");

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python3.0 has more duplication in source code than Python2.5

2009-02-07 Thread Terry
On Feb 7, 7:10 pm, "Diez B. Roggisch"  wrote:
> Terry schrieb:
>
> > On 2月7日, 下午3时36分, "Martin v. Löwis"  wrote:
> >>> Does that say something about the code quality of Python3.0?
> >> Not necessarily. IIUC, copying a single file with 2000 lines
> >> completely could already account for that increase.
>
> >> It would be interesting to see what specific files have gained
> >> large numbers of additional files, compared to 2.5.
>
> >> Regards,
> >> Martin
>
> > But the duplication are always not very big, from about 100 lines
> > (rare) to less the 5 lines. As you can see the Rate30 is much bigger
> > than Rate60, that means there are a lot of small duplications.
>
> Do you by any chance have a few examples of these? There is a lot of
> idiomatic code in python to e.g. acquire and release the GIL or doing
> refcount-stuff. If that happens to be done with rather generic names as
> arguments, I can well imagine that as being the cause.
>
> Diez

Example of a smaller one (61 tokens duplicated):
Found a 19 line (61 tokens) duplication in the following files:
Starting at line 132 of D:\DOWNLOADS\Python-3.0\Python\modsupport.c
Starting at line 179 of D:\DOWNLOADS\Python-3.0\Python\modsupport.c

        PyTuple_SET_ITEM(v, i, w);
    }
    if (itemfailed) {
        /* do_mkvalue() should have already set an error */
        Py_DECREF(v);
        return NULL;
    }
    if (**p_format != endchar) {
        Py_DECREF(v);
        PyErr_SetString(PyExc_SystemError,
                        "Unmatched paren in format");
        return NULL;
    }
    if (endchar)
        ++*p_format;
    return v;
}

static PyObject *

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python3.0 has more duplication in source code than Python2.5

2009-02-07 Thread Terry
On Feb 7, 7:10 pm, "Diez B. Roggisch"  wrote:
> Terry schrieb:
>
> > On 2月7日, 下午3时36分, "Martin v. Löwis"  wrote:
> >>> Does that say something about the code quality of Python3.0?
> >> Not necessarily. IIUC, copying a single file with 2000 lines
> >> completely could already account for that increase.
>
> >> It would be interesting to see what specific files have gained
> >> large numbers of additional files, compared to 2.5.
>
> >> Regards,
> >> Martin
>
> > But the duplication are always not very big, from about 100 lines
> > (rare) to less the 5 lines. As you can see the Rate30 is much bigger
> > than Rate60, that means there are a lot of small duplications.
>
> Do you by any chance have a few examples of these? There is a lot of
> idiomatic code in python to e.g. acquire and release the GIL or doing
> refcount-stuff. If that happens to be done with rather generic names as
> arguments, I can well imagine that as being the cause.
>
> Diez

Example of an even smaller one (30 tokens duplicated):
Found a 11 line (30 tokens) duplication in the following files:
Starting at line 2551 of D:\DOWNLOADS\Python-3.0\Python\Python-ast.c
Starting at line 3173 of D:\DOWNLOADS\Python-3.0\Python\Python-ast.c

    if (PyObject_SetAttrString(result, "ifs", value) == -1)
        goto failed;
    Py_DECREF(value);
    return result;
failed:
    Py_XDECREF(value);
    Py_XDECREF(result);
    return NULL;
}

PyObject*

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python3.0 has more duplication in source code than Python2.5

2009-02-07 Thread Terry
On Feb 7, 7:10 pm, "Diez B. Roggisch"  wrote:
> Terry schrieb:
>
> > On 2月7日, 下午3时36分, "Martin v. Löwis"  wrote:
> >>> Does that say something about the code quality of Python3.0?
> >> Not necessarily. IIUC, copying a single file with 2000 lines
> >> completely could already account for that increase.
>
> >> It would be interesting to see what specific files have gained
> >> large numbers of additional files, compared to 2.5.
>
> >> Regards,
> >> Martin
>
> > But the duplication are always not very big, from about 100 lines
> > (rare) to less the 5 lines. As you can see the Rate30 is much bigger
> > than Rate60, that means there are a lot of small duplications.
>
> Do you by any chance have a few examples of these? There is a lot of
> idiomatic code in python to e.g. acquire and release the GIL or doing
> refcount-stuff. If that happens to be done with rather generic names as
> arguments, I can well imagine that as being the cause.
>
> Diez

And I'm not saying that you cannot have duplication in code. But it
seems that stable & successful software releases tend to have a
relatively stable duplication rate.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python3.0 has more duplication in source code than Python2.5

2009-02-07 Thread Terry
On Feb 8, 12:20 am, Benjamin Peterson  wrote:
> Terry  gmail.com> writes:
>
> > On 2月7日, 下午7时10分, "Diez B. Roggisch"  wrote:
> > > Do you by any chance have a few examples of these? There is a lot of
> > > idiomatic code in python to e.g. acquire and release the GIL or doing
> > > refcount-stuff. If that happens to be done with rather generic names as
> > > arguments, I can well imagine that as being the cause.
> > Starting at line 5119 of D:\DOWNLOADS\Python-3.0\Python\Python-ast.c
>
> This isn't really fair because Python-ast.c is auto generated. ;)

Oops! I didn't know that! Then the analysis is not valid, since too
many of the duplications come from there.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python3.0 has more duplication in source code than Python2.5

2009-02-07 Thread Terry
On Feb 8, 8:51 am, Terry  wrote:
> On 2月8日, 上午12时20分, Benjamin Peterson  wrote:
>
> > Terry  gmail.com> writes:
>
> > > On 2月7日, 下午7时10分, "Diez B. Roggisch"  wrote:
> > > > Do you by any chance have a few examples of these? There is a lot of
> > > > idiomatic code in python to e.g. acquire and release the GIL or doing
> > > > refcount-stuff. If that happens to be done with rather generic names as
> > > > arguments, I can well imagine that as being the cause.
> > > Starting at line 5119 of D:\DOWNLOADS\Python-3.0\Python\Python-ast.c
>
> > This isn't really fair because Python-ast.c is auto generated. ;)
>
> Oops! I don't know that! Then the analysis will not be valid, since
> too many duplications are from there.

Hey!

I have to say sorry, because I found I made a mistake. Since Python-
ast.c is auto-generated and shouldn't be counted here, the real
duplication rate of Python3.0 is quite small (5%).
And I found the duplications are quite trivial; I wouldn't say that
all of them are acceptable, but they are certainly not strong enough
evidence about code quality.

I have run the same analysis on some commercial source code; the
Dup60 rate is quite often significantly larger than 15%.
--
http://mail.python.org/mailman/listinfo/python-list


Get all the instances of one class

2008-05-16 Thread Terry
Hi,

Is there a simple way to get all the instances of one class? I mean
without any additional change to the class.

br, Terry
--
http://mail.python.org/mailman/listinfo/python-list


Re: Get all the instances of one class

2008-05-18 Thread Terry
On May 17, 8:04 am, "Gabriel Genellina" <[EMAIL PROTECTED]>
wrote:
> En Fri, 16 May 2008 20:44:00 -0300, Terry <[EMAIL PROTECTED]>
> escribió:
>
> > Is there a simple way to get all the instances of one class? I mean
> > without any additional change to the class.
>
> Try with gc.get_referrers()
>
> py> import gc
> py> class A(object): pass
> ...
> py> a,b,c = A(),A(),A()
> py> A
> 
> py> for item in gc.get_referrers(A): print type(item)
> ...
> 
> 
> 
> 
> 
> 
> 
> 
>
> We need to filter that list, keeping only A's instances:
>
> py> [item for item in gc.get_referrers(A) if isinstance(item,A)]
> [<__main__.A object at 0x00A40DC8>, <__main__.A object at 0x00A40DF0>,
> <__main__.A object at 0x00A40E18>]
>
> --
> Gabriel Genellina

Thanks! This is what I'm looking for.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Get all the instances of one class

2008-05-18 Thread Terry
On May 17, 8:04 am, "Gabriel Genellina" <[EMAIL PROTECTED]>
wrote:
> En Fri, 16 May 2008 20:44:00 -0300, Terry <[EMAIL PROTECTED]>
> escribió:
>
> > Is there a simple way to get all the instances of one class? I mean
> > without any additional change to the class.
>
> Try with gc.get_referrers()
>
> py> import gc
> py> class A(object): pass
> ...
> py> a,b,c = A(),A(),A()
> py> A
> 
> py> for item in gc.get_referrers(A): print type(item)
> ...
> 
> 
> 
> 
> 
> 
> 
> 
>
> We need to filter that list, keeping only A's instances:
>
> py> [item for item in gc.get_referrers(A) if isinstance(item,A)]
> [<__main__.A object at 0x00A40DC8>, <__main__.A object at 0x00A40DF0>,
> <__main__.A object at 0x00A40E18>]
>
> --
> Gabriel Genellina

But I saw in the help that we should "Avoid using get_referrers() for
any purpose other than debugging. "
--
http://mail.python.org/mailman/listinfo/python-list


Re: Get all the instances of one class

2008-05-18 Thread Terry
On May 18, 11:35 pm, "Diez B. Roggisch" <[EMAIL PROTECTED]> wrote:
> Terry schrieb:
>
>
>
> > On May 17, 8:04 am, "Gabriel Genellina" <[EMAIL PROTECTED]>
> > wrote:
> >> En Fri, 16 May 2008 20:44:00 -0300, Terry <[EMAIL PROTECTED]>
> >> escribió:
>
> >>> Is there a simple way to get all the instances of one class? I mean
> >>> without any additional change to the class.
> >> Try with gc.get_referrers()
>
> >> py> import gc
> >> py> class A(object): pass
> >> ...
> >> py> a,b,c = A(),A(),A()
> >> py> A
> >> 
> >> py> for item in gc.get_referrers(A): print type(item)
> >> ...
> >> 
> >> 
> >> 
> >> 
> >> 
> >> 
> >> 
> >> 
>
> >> We need to filter that list, keeping only A's instances:
>
> >> py> [item for item in gc.get_referrers(A) if isinstance(item,A)]
> >> [<__main__.A object at 0x00A40DC8>, <__main__.A object at 0x00A40DF0>,
> >> <__main__.A object at 0x00A40E18>]
>
> >> --
> >> Gabriel Genellina
>
> > But I saw in the help that we should "Avoid using get_referrers() for
> > any purpose other than debugging. "
>
> Yes, because using it do is very resource-consuming and shouldn't be
> done. Why don't you tell us what you are after here & then we might come
> up with a better solution?
>
> Diez

I'm developing a message/state-machine based Python program (I'm not
using Stackless yet; I plan to move to it later).
I want to collect all the state machines (threads) and send them a
'tick' or 'quit' message.

Now I'm using a static member to collect the machines, just wondering
whether Python already provides something like this.
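
For the record, a minimal sketch of that kind of class-level registry
(illustrative names, not my actual code; weakref.WeakSet needs Python
2.7+, a WeakValueDictionary works similarly on older versions):

import weakref

class StateMachine(object):
    _instances = weakref.WeakSet()   # class-level registry of live machines

    def __init__(self):
        StateMachine._instances.add(self)

    @classmethod
    def broadcast(cls, message):
        for machine in list(cls._instances):
            machine.handle(message)  # e.g. 'tick' or 'quit'

    def handle(self, message):
        print('%r got %r' % (self, message))

a, b = StateMachine(), StateMachine()
StateMachine.broadcast('tick')
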
--
http://mail.python.org/mailman/listinfo/python-list


Pyserial - send and receive characters through linux serial port

2008-04-25 Thread terry
Hi,

I am trying to send a character to '/dev/ttyS0' and expect the same
character back; upon receipt I want to send another character. I tried
with Pyserial but in vain.

Test Set up:

1. Send '%' to serial port and make sure it reached the serial port.
2. Once confirmed, send another character.

I tried with write and read methods in Pyserial but no luck.

Can you help?
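
For reference, this is roughly the exchange I'm after, assuming the
device echoes what it receives ('/dev/ttyS0' and the characters are
just my test values):

import serial

port = serial.Serial('/dev/ttyS0', timeout=5)
port.write('%')              # step 1: send '%'
echoed = port.read(1)        # wait up to 5 s for it to come back
if echoed == '%':
    port.write('X')          # step 2: send the next character
else:
    print('no echo received: %r' % (echoed,))
port.close()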

Thanking you all.
T
--
http://mail.python.org/mailman/listinfo/python-list


Question regarding Queue object

2008-04-27 Thread Terry
Hello!

I'm trying to implement a message queue among threads using Queue. The
message queue has two operations:
PutMsg(id, msg) #  this is simple, just combine the id and msg as one
and put it into the Queue.
WaitMsg(ids, msg) # this is the hard part

WaitMsg should get only messages with certain ids, but this is not
possible with a Queue object, because Queue provides no method to peek
into the message queue and fetch only matching items.

Now I'm using an ugly solution: fetch all the messages and put the
unused ones back into the queue. But I want better performance. Is
there any alternative out there?

This is my current solution:

def _get_with_ids(self, wait, timeout, ids):
    to = timeout
    msg = None
    saved = []
    while True:
        start = time.clock()
        msg = self.q.get(wait, to)
        if msg and msg['id'] in ids:
            break
        # not the expected message, save it
        saved.append(msg)
        to = to - (time.clock() - start)
        if to <= 0:
            break
    # put the saved messages back into the queue
    for m in saved:
        self.q.put(m, True)
    return msg

br, Terry
--
http://mail.python.org/mailman/listinfo/python-list


Re: Question regarding Queue object

2008-04-27 Thread Terry
On Apr 27, 6:27 pm, Terry <[EMAIL PROTECTED]> wrote:
> Hello!
>
> I'm trying to implement a message queue among threads using Queue. The
> message queue has two operations:
> PutMsg(id, msg) #  this is simple, just combine the id and msg as one
> and put it into the Queue.
> WaitMsg(ids, msg) # this is the hard part
>
> WaitMsg will get only msg with certain ids, but this is not possible
> in Queue object, because Queue provides no method to peek into the
> message queue and fetch only matched item.
>
> Now I'm using an ugly solution, fetch all the messages and put the not
> used ones back to the queue. But I want a better performance. Is there
> any alternative out there?
>
> This is my current solution:
>
> def _get_with_ids(self,wait, timeout, ids):
> to = timeout
> msg = None
> saved = []
> while True:
> start = time.clock()
> msg =self.q.get(wait, to)
> if msg and msg['id'] in ids:
> break;
> # not the expecting message, save it.
> saved.append(msg)
> to = to - (time.clock()-start)
> if to <= 0:
> break
> # put the saved messages back to the queue
> for m in saved:
> self.q.put(m, True)
> return msg
>
> br, Terry

I just found that Queue is written in Python, maybe I can override it.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Question regarding Queue object

2008-04-28 Thread Terry
On Apr 28, 5:30 pm, Nick Craig-Wood <[EMAIL PROTECTED]> wrote:
> David <[EMAIL PROTECTED]> wrote:
> >  Another idea would be to have multiple queues, one per thread or per
> >  message type "group". The producer thread pushes into the appropriate
> >  queues (through an intelligent PutMsg function), and the consumer
> >  threads pull from the queues they're interested in and ignore the
> >  others.
>
> Unfortunately a thread can only wait on one Queue at once (without
> polling).  So really the only efficient solution is one Queue per
> thread.
>
> Make an intelligent PutMsg function which knows which Queue (or
> Queues) each message needs to be put in and all the threads will have
> to do is Queue.get() and be sure they've got a message they can deal
> with.
>
> --
> Nick Craig-Wood <[EMAIL PROTECTED]> --http://www.craig-wood.com/nick


I do have one Queue per thread. The problem is the thread cannot peek
into the Queue and select messages with a certain ID first.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Question regarding Queue object

2008-04-28 Thread Terry
On Apr 28, 10:48 pm, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
> I've never used it myself but you may find candygram 
> interesting;http://candygram.sourceforge.net, which AFAIK implements 
> Erlang-style
> message queues in Python.

Thank you. I will look at candygram and stackless. I believe my
solution lies in either of them.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Question regarding Queue object

2008-04-29 Thread Terry
On Apr 29, 3:01 pm, Dennis Lee Bieber <[EMAIL PROTECTED]> wrote:
> On Sun, 27 Apr 2008 03:27:59 -0700 (PDT), Terry <[EMAIL PROTECTED]>
> declaimed the following in comp.lang.python:
>
> > I'm trying to implement a message queue among threads using Queue. The
> > message queue has two operations:
> > PutMsg(id, msg) #  this is simple, just combine the id and msg as one
> > and put it into the Queue.
> > WaitMsg(ids, msg) # this is the hard part
>
> > WaitMsg will get only msg with certain ids, but this is not possible
> > in Queue object, because Queue provides no method to peek into the
> > message queue and fetch only matched item.
>
> > Now I'm using an ugly solution, fetch all the messages and put the not
> > used ones back to the queue. But I want a better performance. Is there
> > any alternative out there?
>
> Create your own queue class -- including locking objects.
>
> Implement the queue itself (I've not looked at how Queue.Queue is
> really done) as a priority queue (that is, a simple list ordered by your
> ID -- new items are inserted after all existing items with the same or
> lower ID number).
>
> Surround list manipulations with a lock based on a Condition.
>
> Now, the trick -- the .get(ID) sequence being something like (this
> is pseudo-code):
>
> while True:
> self.condition.acquire()
> scan self.qlist for first entry with ID
> if found:
> remove entry from self.qlist
> self.condition.release()
> return entry
> self.condition.wait()
>
> -=-=-=-=-   the .put(ID, data) looks like
>
> self.condition.acquire()
> scan self.qlist for position to insert (ID, data)
> self.condition.notifyAll()
> self.condition.release()
>
> -=-=-=-=-
>
> Essentially, if the first pass over the list does not find an entry
> to return, it waits for a notify to occur... and notification will only
> occur when some other thread puts new data into the list.
> --
> WulfraedDennis Lee Bieber   KD6MOG
> [EMAIL PROTECTED]  [EMAIL PROTECTED]
> HTTP://wlfraed.home.netcom.com/
> (Bestiaria Support Staff:   [EMAIL PROTECTED])
> HTTP://www.bestiaria.com/

Yes, I now have a similar solution in my code. But after reading about
Stackless Python, I'm wondering whether I can move to it, which might
improve the performance of my threads. Because I'm trying to simulate
some behavior of the real world (trading), I believe there will be a
lot of threads in my program in the future.
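
Roughly, the shape of it is the sketch below (illustrative names, not
my actual code):

import threading

class IdQueue(object):
    """Queue that lets a consumer wait for a message with one of the
    given ids."""

    def __init__(self):
        self.items = []                  # list of (id, data) pairs
        self.cond = threading.Condition()

    def put(self, msg_id, data):
        with self.cond:
            self.items.append((msg_id, data))
            self.cond.notify_all()       # wake any waiting consumers

    def get(self, ids):
        with self.cond:
            while True:
                for i, (msg_id, data) in enumerate(self.items):
                    if msg_id in ids:
                        del self.items[i]
                        return msg_id, data
                self.cond.wait()         # sleep until something new arrives
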
--
http://mail.python.org/mailman/listinfo/python-list


Re: Question regarding Queue object

2008-04-29 Thread Terry
On Apr 29, 4:32 pm, [EMAIL PROTECTED] wrote:
> On 27 Apr, 12:27, Terry <[EMAIL PROTECTED]> wrote:
>
>
>
> > Hello!
>
> > I'm trying to implement a message queue among threads using Queue. The
> > message queue has two operations:
> > PutMsg(id, msg) #  this is simple, just combine the id and msg as one
> > and put it into the Queue.
> > WaitMsg(ids, msg) # this is the hard part
>
> > WaitMsg will get only msg with certain ids, but this is not possible
> > in Queue object, because Queue provides no method to peek into the
> > message queue and fetch only matched item.
>
> > Now I'm using an ugly solution, fetch all the messages and put the not
> > used ones back to the queue. But I want a better performance. Is there
> > any alternative out there?
>
> > This is my current solution:
>
> > def _get_with_ids(self,wait, timeout, ids):
> > to = timeout
> > msg = None
> > saved = []
> > while True:
> > start = time.clock()
> > msg =self.q.get(wait, to)
> > if msg and msg['id'] in ids:
> > break;
> > # not the expecting message, save it.
> > saved.append(msg)
> > to = to - (time.clock()-start)
> > if to <= 0:
> > break
> > # put the saved messages back to the queue
> > for m in saved:
> > self.q.put(m, True)
> > return msg
>
> > br, Terry
>
> Wy put them back in the queue?
> You could have a defaultdict with the id as key and a list of
> unprocessed messages with that id as items.
> Your _get_by_ids function could first look into the unprocessed
> messages for items with that ids and then
> look into the queue, putting any unprocessed item in the dictionary,
> for later processing.
> This should improve the performances, with a little complication of
> the method code (but way simpler
> that implementing your own priority-based queue).
>
> Ciao
> -
> FB

Yes, this will improve the performance. And I can see there's a
problem in my current implementation: the order of the messages might
be changed if I put the saved messages back at the end of the queue.
This may cause some confusion later, though I don't want to depend too
much on message order.

And you reminded me of one thing -- I need to implement 'priority' for
messages, so that the message with the highest priority tends to be
fetched first. OMG, this is going to be much more complicated than I
had expected.

Thanks for your suggestion. And I hope this will also work when I move
to Stackless.
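
Something along the lines of this sketch of your defaultdict idea, I
think (names are illustrative, and priorities aren't handled yet):

import time
import Queue   # 'queue' on Python 3
from collections import defaultdict

class MsgQueue(object):
    def __init__(self, q):
        self.q = q                        # the underlying Queue.Queue
        self.pending = defaultdict(list)  # id -> messages parked for later

    def get_with_ids(self, timeout, ids):
        # serve a previously parked message first
        for i in ids:
            if self.pending[i]:
                return self.pending[i].pop(0)
        deadline = time.time() + timeout
        while True:
            remaining = deadline - time.time()
            if remaining <= 0:
                return None
            try:
                msg = self.q.get(True, remaining)
            except Queue.Empty:
                return None
            if msg['id'] in ids:
                return msg
            self.pending[msg['id']].append(msg)  # park it instead of re-queueing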
--
http://mail.python.org/mailman/listinfo/python-list


Re: Question regarding Queue object

2008-04-29 Thread Terry
On Apr 29, 5:30 pm, Nick Craig-Wood <[EMAIL PROTECTED]> wrote:
> Terry <[EMAIL PROTECTED]> wrote:
> >  On Apr 28, 5:30 pm, Nick Craig-Wood <[EMAIL PROTECTED]> wrote:
> > > David <[EMAIL PROTECTED]> wrote:
> > > >  Another idea would be to have multiple queues, one per thread or per
> > > >  message type "group". The producer thread pushes into the appropriate
> > > >  queues (through an intelligent PutMsg function), and the consumer
> > > >  threads pull from the queues they're interested in and ignore the
> > > >  others.
>
> > > Unfortunately a thread can only wait on one Queue at once (without
> > > polling).  So really the only efficient solution is one Queue per
> > > thread.
>
> > > Make an intelligent PutMsg function which knows which Queue (or
> > > Queues) each message needs to be put in and all the threads will have
> > > to do is Queue.get() and be sure they've got a message they can deal
> > > with.
>
> >  I do have one Queue per thread. The problem is the thread can not peek
> >  into the Queue and select msg with certain ID first.
>
> My point is don't put messages that the thread doesn't need in the
> queue in the first place.  Ie move that logic into PutMsg.
>
> --
> Nick Craig-Wood <[EMAIL PROTECTED]> --http://www.craig-wood.com/nick

Well, I'm simulating the real world. It's like this: you wouldn't drop
or process a task when you have already started your lunch; you just
save it and process it later when you finish your lunch.
Of course the task sender can send the task again and again if it gets
no ack from you. But that's just one possible situation in the real
world, and not an efficient one.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Pyserial - send and receive characters through linux serial port

2008-05-02 Thread terry
On Apr 26, 8:21 am, Grant Edwards <[EMAIL PROTECTED]> wrote:
> On 2008-04-25, terry <[EMAIL PROTECTED]> wrote:
>
> > I am trying to send a character to '/dev/ttyS0' and expect the
> > same character and upon receipt I want to send another
> > character. I tired withPyserialbut in vain.
>
> Pyserialworks.  I've been using it almost daily for many
> years.  Either your program is broken, your serial port is
> broken, or the device connected to the serial port is broken.
>
> > Test Set up:
>
> > 1. Send '%' to serial port and make sure it reached the serial port.
> > 2. Once confirmed, send another character.
>
> > I tried with write and read methods inPyserialbut no luck.
>
> > Can you help?
>
> Ah yes, the problem is in line 89 of your program.
>
> We've no way to help if you don't provide details. If you
> really want help, write as small a program as possible that
> exhibits the problem.  I'd like to emphasize _small_. The
> larger the program the less likely people are to look at it,
> and the less likely they are to find the problem if they do
> look at it.
>
> Much of the time the exercise of writing a small demo program
> will lead you to the answer.  If not, then post it, along with
> the output from the program that shows the problem.
>
> Then we can tell you what you did wrong.
>
> --
> Grant Edwards                   grante             Yow! I'm also against
>                                   at               BODY-SURFING!!
>                                visi.com            

Here is the code.

"""Open serial connection"""
def openSerialConnection(self,serpt):
try:
s1 = serial.Serial(serpt,timeout=10)

except:
self.print_u("Failed to open serial port %s. " %serpt)

def enterThroughSerialPort(self,serpt):
s1 = serial.Serial(serpt,timeout=10)
 self.print_u('Sending ..')
 while True:
s1.write('*')
   c = s1.read(1)
   if c:
  self.print_u('Found "*" ')
break
print c
 s1.write('enter\r')
 s1.read('login')

if __name__ == '__main__':
serpt = '/dev/ttyS0'
x.openSerialConnection(serpt)
# funtion to reboot the device goes here ---#
x.enterThroughSerialPort(serpt)

After opening the serial connection, the device is rebooted, followed
by sending '*' to the serial port and reading back the same. I seem to
have a problem while trying to read '*' back from the serial port.
First of all, I am not sure whether the serial port received the '*'.

Thanks!
--
http://mail.python.org/mailman/listinfo/python-list


Re: Pyserial - send and receive characters through linux serial port

2008-05-02 Thread terry
On May 2, 10:26 am, terry <[EMAIL PROTECTED]> wrote:
> On Apr 26, 8:21 am, Grant Edwards <[EMAIL PROTECTED]> wrote:
>
>
>
>
>
> > On 2008-04-25, terry <[EMAIL PROTECTED]> wrote:
>
> > > I am trying to send a character to '/dev/ttyS0' and expect the
> > > same character and upon receipt I want to send another
> > > character. I tired withPyserialbut in vain.
>
> > Pyserialworks.  I've been using it almost daily for many
> > years.  Either your program is broken, your serial port is
> > broken, or the device connected to the serial port is broken.
>
> > > Test Set up:
>
> > > 1. Send '%' to serial port and make sure it reached the serial port.
> > > 2. Once confirmed, send another character.
>
> > > I tried with write and read methods inPyserialbut no luck.
>
> > > Can you help?
>
> > Ah yes, the problem is in line 89 of your program.
>
> > We've no way to help if you don't provide details. If you
> > really want help, write as small a program as possible that
> > exhibits the problem.  I'd like to emphasize _small_. The
> > larger the program the less likely people are to look at it,
> > and the less likely they are to find the problem if they do
> > look at it.
>
> > Much of the time the exercise of writing a small demo program
> > will lead you to the answer.  If not, then post it, along with
> > the output from the program that shows the problem.
>
> > Then we can tell you what you did wrong.
>
> > --
> > Grant Edwards                   grante             Yow! I'm also against
> >                                   at               BODY-SURFING!!
> >                                visi.com            
>
> Here is the code.
>
> """Open serial connection"""
>         def openSerialConnection(self,serpt):
>             try:
>                 s1 = serial.Serial(serpt,timeout=10)
>
>             except:
>                 self.print_u("Failed to open serial port %s. " %serpt)
>
>         def enterThroughSerialPort(self,serpt):
>             s1 = serial.Serial(serpt,timeout=10)
>              self.print_u('Sending ..')
>              while True:
>                 s1.write('*')
>                c = s1.read(1)
>                if c:
>                   self.print_u('Found "*" ')
>                     break
>             print c
>              s1.write('enter\r')
>              s1.read('login')
>
> if __name__ == '__main__':
>     serpt = '/dev/ttyS0'
>     x.openSerialConnection(serpt)
>     # funtion to reboot the device goes here ---#
>     x.enterThroughSerialPort(serpt)
>
> After opening the serial connection, the device is rebooted followed
> by sending '*' to serial port and reading back the same. I seem to
> have problem while trying to read '*' back from serial port. First of
> all I am not sure if serial port received the '*'.
>
> Thanks!

This is the error message I received:

c = s1.read(1)
File "/usr/local/lib/python2.5/site-packages/serial/serialposix.py",
line 275, in read
ready,_,_ = select.select([self.fd],[],[], self._timeout)
--
http://mail.python.org/mailman/listinfo/python-list


Re: Initializing a subclass with a super object?

2008-05-10 Thread Terry
On May 11, 7:22 am, [EMAIL PROTECTED] wrote:
> Class A inherits from class B.  Can anyone point me in the direction
> of documentation saying how to initialize an object of A, a1, with an
> object of B, b1?

This is called a 'factory' in design patterns. Search for 'python
factory'; you will get a lot of examples.
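
A minimal illustration of the idea (the attribute names here are
hypothetical):

class B(object):
    def __init__(self, x):
        self.x = x

class A(B):
    @classmethod
    def from_b(cls, b):
        # factory: build an A from the state of an existing B instance
        return cls(b.x)

b1 = B(42)
a1 = A.from_b(b1)
print(a1.x)   # 42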

br, Terry
--
http://mail.python.org/mailman/listinfo/python-list


How to pickle a lambda function?

2009-08-10 Thread Terry
Hi,

I'm trying to implement something like:

remote_map(fun, list)

to execute the function on a remote machine. But the problem is that I
cannot pickle a lambda function and send it to the remote machine.

Is there any possible way to pickle (or otherwise serialize) any
function, including a lambda?

br, Terry
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to pickle a lambda function?

2009-08-11 Thread Terry
On Aug 11, 3:42 pm, Duncan Booth  wrote:
> Terry  wrote:
> > I'm trying to implement something like:
>
> > remote_map(fun, list)
>
> > to execute the function on a remove machine. But the problem is I
> > cannot pickle a lambda function and send it to the remote machine.
>
> > Is there any possible way to pickle (or other method) any functions
> > including lambda?
>
> You can pickle any named functions that are declared at module scope.
>
> You cannot pickle anonymous functions, methods, or functions declared
> nested inside other functions. The function must be present in the same
> module when you unpickle it, and if the definition has changed between
> pickling and unpickling the new definition will be used (just as other
> instances will use the current class definition not the one they were
> pickled with).
>
> You probably could pickle some of the components needed to create your
> lambda and construct a new function from it when unpickling: try the code
> object, the name of the module to be used for the globals, and default
> arguments. I don't think you can pickle the closure so better make sure
> your lambda doesn't need one, and be very careful to ensure that you
> restore the pickle in the same version of Python otherwise the code object
> might break. Best just avoid this and use named functions for anything that
> needs pickling.
>
> --
> Duncan Boothhttp://kupuguy.blogspot.com

Yes, I'm thinking of pickling (actually marshalling) the code object.
Otherwise I have to use a string and eval :-(

The reason I need to be able to pickle any function is that I want my
remote machine to know nothing about the function before receiving it,
so I don't need to update the source code on the remote machine very
often.
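
A rough sketch of the marshal route, assuming the function has no
closure or default arguments and both sides run the same Python
version:

import marshal
import types

def dump_function(func):
    # serialize only the code object; globals are re-bound on the other side
    return marshal.dumps(func.func_code)   # func.__code__ on Python 3

def load_function(data, env=None):
    code = marshal.loads(data)
    return types.FunctionType(code, env if env is not None else globals())

payload = dump_function(lambda x: x * 2)
f = load_function(payload)
print(f(21))   # 42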

br, terry
-- 
http://mail.python.org/mailman/listinfo/python-list


ignored test cases in unittest

2009-08-15 Thread Terry
Hi,

I have some hundreds of unittest cases in my Python program. And
sometimes I do quick-and-dirty work by ignoring some test cases,
adding an 'x' (or something else) to the beginning of the case name.
As time passes, it gets very hard for me to find which test cases are
ignored.

It seems to me that the Python unittest module does not support
counting ignored test cases directly. Is there any ready-made solution
for this?

br, Terry
-- 
http://mail.python.org/mailman/listinfo/python-list


flatten a list of list

2009-08-16 Thread Terry
Hi,

Is there a simple way (the pythonic way) to flatten a list of lists,
rather than my current solution:

new_list = []
for l in list_of_list:
    new_list.extend(l)

or,

new_list=reduce(lambda x,y:x.extend(y), list_of_list)
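
For comparison, a few other common idioms (note that the reduce
variant above doesn't really work as written, since list.extend()
returns None):

import itertools

list_of_list = [[1, 2], [3], [4, 5, 6]]

flat = list(itertools.chain.from_iterable(list_of_list))  # lazy, memory-friendly
flat = [item for sub in list_of_list for item in sub]      # nested comprehension
flat = sum(list_of_list, [])                               # short, but O(n**2)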

br, Terry
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ignored test cases in unittest

2009-08-16 Thread Terry
On Aug 16, 5:25 pm, Duncan Booth  wrote:
> Ben Finney  wrote:
> > Terry  writes:
>
> >> It seemed the to me that python unittest module does not support the
> >> counting of ignored test cases directly. Is there any ready solution
> >> for this?
>
> > One solution I've seen involves:
>
> > * a custom exception class, ‘TestSkipped’
>
> > * raising that exception at the top of test cases you want to
> >   temporarily skip
>
> > * a custom ‘TestResult’ class that knows about a “skipped” result
>
> > * a custom reporter class that knows how you want to report that result
>
> I'd add to that a decorator so you can quickly mark a test case as ignored
> without editing the test itself. Also you could include a reason why it is
> ignored:
>
>  @ignore("This test takes too long to run")
>  def test_foo(self):
>     ...
>
> That also means you can redefine the decorator easily if you want to try
> running all the ignored tests.
>
> Another decorator useful here is one that asserts that the test will fail.
> If the test passes then maybe someone fixed whatever was making it fail and
> if so you want to consider re-enabling it.
>
>  @fails("Needs the frobnozz module to be updated")
>  def test_whotsit(self):
>     ...

Thanks for the solutions. I think the decorator idea is what I'm
looking for :-)
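
A rough sketch of such a decorator, assuming pre-2.7 unittest (Python
2.7 and 3.1 have unittest.skip() built in):

import functools
import unittest

def ignore(reason):
    """Replace the decorated test with a stub that just reports the skip."""
    def decorator(test):
        @functools.wraps(test)
        def skipped(self):
            print("SKIPPED %s: %s" % (test.__name__, reason))
        return skipped
    return decorator

class MyTests(unittest.TestCase):
    @ignore("This test takes too long to run")
    def test_foo(self):
        self.fail("should never run")

if __name__ == '__main__':
    unittest.main()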



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: flatten a list of list

2009-08-16 Thread Terry
On Aug 16, 6:59 pm, Chris Rebert  wrote:
> On Sun, Aug 16, 2009 at 6:49 AM, Steven
>
>
>
>
>
> D'Aprano wrote:
> > On Sun, 16 Aug 2009 05:55:48 -0400, Chris Rebert wrote:
> >> On Sun, Aug 16, 2009 at 5:47 AM, Terry wrote:
> >>> Hi,
>
> >>> Is there a simple way (the pythonic way) to flatten a list of list?
> >>> rather than my current solution:
>
> >>> new_list=[]
> >>> for l in list_of_list:
> >>>    new_list.extend(l)
>
> >>> or,
>
> >>> new_list=reduce(lambda x,y:x.extend(y), list_of_list)
>
> >> #only marginally better:
> >> from operator import add
> >> new_list = reduce(add, list_of_list)
>
> > Surely that's going to be O(N**2)?
>
> The OP asked for "simple", not "best", "most proper", or "fastest". My
> comment was intended to mean that the code was marginally *simpler*,
> not faster.
>
> Cheers,
> Chris
> --http://blog.rebertia.com

Well, if possible, I'd like not only to know a simple solution, but
also the 'best', the 'most proper' and the 'fastest':-)

If they are not the same.

br, Terry
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing managers and socket connection.

2009-08-25 Thread Terry
On Aug 25, 10:14 pm, Chris  wrote:
> I've been using multiprocessing managers and I really like the
> functionality.
>
> I have a question about reconnecting to a manager. I have a situation
> where I start on one machine (A) a manager that is listening and then
> on another machine (B) connects to that manager and uses its proxy
> object to call functions on the manager's computer; this all works as
> expected. But, if the manager from A shuts down, B's application won't
> notice because in the MP code it ignores socket error
> errno.ECONNREFUSED. If A becomes available again or is restarted, B
> doesn't automatically reconnect either and continue its operation.
> It's function is basically stopped.
>
> Here is the code from connection.py:
> while 1:
>         try:
>             s.connect(address)
>         except socket.error, e:
>             if e.args[0] != errno.ECONNREFUSED: # connection refused
>                 debug('failed to connect to address %s', address)
>                 raise
>             time.sleep(0.01)
>         else:
>             break
>
> How can I have B automatically reconnect to A and continue its work
> once A is available again?

I think you need to retry repeatedly until successfully connected.

br, Terry
-- 
http://mail.python.org/mailman/listinfo/python-list


Return value of multiprocessing manager registerred function

2009-08-25 Thread Terry
Hi,

I'm using the multiprocessing manager to run procedures remotely. It
all works fine, except that I'd like a different return value type.

The remote function calls always return a proxy; when I need the
value, it has to connect to the manager again to fetch it. But I just
need the value, not the proxy.

Can I just return the value instead of a proxy from a manager?
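
One workaround I can think of, assuming it's acceptable to go through
a single registered "service" object: calling a registered typeid on
the manager returns a proxy, but method calls *on* that proxy return
pickled copies of their results rather than nested proxies (unless
method_to_typeid says otherwise). A minimal sketch with illustrative
names:

from multiprocessing.managers import BaseManager

class Service(object):
    def compute(self, x):
        return x * x              # plain picklable value

class MyManager(BaseManager):
    pass

MyManager.register('service', callable=Service)

if __name__ == '__main__':
    mgr = MyManager(address=('', 50000), authkey=b'secret')
    mgr.start()
    proxy = mgr.service()         # this is a proxy...
    print(proxy.compute(7))       # ...but this prints a plain 49
    mgr.shutdown()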

br, Terry
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Return value of multiprocessing manager registerred function

2009-08-31 Thread Terry
On Aug 31, 5:58 pm, jacopo  wrote:
> Hi Terry,
> I have just started working on similar things and I am strugling to
> find examples or documentations. So far I have found only the official
> documentation of the multiprocessing package. Would you be able to
> recommend me some good reference or a book. I dont want to overwhelm
> this newsgroup with questions... yet :)
> Regards, Jacopo
>
> On Aug 26, 4:22 am, Terry  wrote:
>
>
>
> > Hi,
>
> > I'm using the multiprocessing.manager to run proceduresremotely. It
> > all worked fine except I hope to have a different return value type.
>
> > The remote function calls always return a proxy, which when I need to
> > get the value it need to connect to the manager again to fetch it. But
> > I just need the value, not the proxy.
>
> > Can I just return the value instead of a proxy from a manager?
>
> > br, Terry
Hi Jacopo,

Well, I also had a hard time finding any examples or documentation
except the official documentation. I had to read the multiprocessing
source code (it is all in Python except the network connection parts),
and I found I needed to hack it a little to get it to work.
You can share your questions; maybe they're common to most of us.

br, Terry
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing managers and socket connection.

2009-08-31 Thread Terry
On Aug 26, 7:25 pm, Chris  wrote:
> On Aug 25, 9:11 pm, Terry  wrote:
>
>
>
>
>
> > On Aug 25, 10:14 pm, Chris  wrote:
>
> > > I've been using multiprocessing managers and I really like the
> > > functionality.
>
> > > I have a question about reconnecting to a manager. I have a situation
> > > where I start on one machine (A) a manager that is listening and then
> > > on another machine (B) connects to that manager and uses its proxy
> > > object to call functions on the manager's computer; this all works as
> > > expected. But, if the manager from A shuts down, B's application won't
> > > notice because in the MP code it ignores socket error
> > > errno.ECONNREFUSED. If A becomes available again or is restarted, B
> > > doesn't automatically reconnect either and continue its operation.
> > > It's function is basically stopped.
>
> > > Here is the code from connection.py:
> > > while 1:
> > >         try:
> > >             s.connect(address)
> > >         except socket.error, e:
> > >             if e.args[0] != errno.ECONNREFUSED: # connection refused
> > >                 debug('failed to connect to address %s', address)
> > >                 raise
> > >             time.sleep(0.01)
> > >         else:
> > >             break
>
> > > How can I have B automatically reconnect to A and continue its work
> > > once A is available again?
>
> > I think you need to retry repeatedly until successfully connected.
>
> > br, Terry
>
> I'm having issue after once connected. If the server goes down during
> a long-running connection. I would want to be notified so I could try
> to reconnect. I'm doing more experimenting now and will try to post an
> example.

Hi Chris,

Are you sure that the proxy object keeps a permanent connection to the
server?

br, Terry
-- 
http://mail.python.org/mailman/listinfo/python-list


$$Nike shoes wholesale\retail

2010-10-16 Thread Terry
$$Nike shoes wholesale\retail

Our company mainly deal with the import and export of the brand sports
shoes, clothes, bags , glasses, etc . Products such as Nike Jordan
sell well in America , Canada , as well as Europe and other countries.
Our objective is to supply products of first-class quality and
advanced technology. Customers satisfaction is our greatest pursuit.
We thank you for your attention and wish having a long time business
relationship with all buyers from all over the world.

we take PAYPAL as the method of payment!
please kindly visite our website: http://www.8000stars.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Split a list into two parts based on a filter?

2013-06-12 Thread Terry Reedy

On 6/12/2013 7:39 AM, Roy Smith wrote:


starts.  But, somewhat more seriously, I wonder what, exactly, it is
that freaks people out about:


[(new_songs if s.is_new() else old_songs).append(s) for s in songs]


Clearly, it's not the fact that it build and immediately discards a
list, because that concern is addressed with the generator hack, and I
think everybody (myself included) agrees that's just horrible.


It is an example of comprehension abuse. Comprehensions express and 
condense a stylized pattern of creating collections from another 
collection or collections, possibly filtered. They were not meant to 
replace for statements and turn Python into an fp language. Indeed, 
they do replace and expand upon the fp map function. Python for loops 
are not evil.



Or, is it the use of the conditional to create the target for append()?
Would people be as horrified if I wrote:

for s in songs:
 (new_songs if s.is_new() else old_songs).append(s)



No. That succinctly expresses and implements the idea 'append each song 
to one of two lists'.



or even:

for s in songs:
 the_right_list = new_songs if s.is_new() else old_songs
 the_right_list.append(s)




--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Split a list into two parts based on a filter?

2013-06-12 Thread Terry Reedy

On 6/12/2013 12:57 PM, Fábio Santos wrote:

Why is there no builtin to consume a generator? I find that odd.


There are several builtins that consume generators -- and do something 
useful with the yielded objects. What you mean is "Why is there no 
builtin to uselessly consume a generator?" The question almost answers 
itself. A generator generates objects to be used.



On Wed, Jun 12, 2013 at 5:28 PM, Serhiy Storchaka  wrote:



12.06.13 09:32, Phil Connell wrote:

You could equivalently pass the generator to deque() with maxlen=0 -
this consumes the iterator with constant memory usage.



any((new_songs if s.is_new() else old_songs).append(s) for s in songs)


The problem here is that the generator generates and yields an unwanted 
sequence of None (references) from the append operations. The proper 
loop statement


for s in songs:
(new_songs if s.is_new() else old_songs).append(s)

simply ignores the None return of the appends. Since it does not yield 
None over and over, no extra code is needed to ignore what should be 
ignored in the first place.


--
Terry Jan Reedy


--
http://mail.python.org/mailman/listinfo/python-list


Re: .mat files processing in Python

2013-06-12 Thread Terry Reedy

On 5/27/2013 4:43 PM, Romila Anamaria wrote:


I am beginner in Python programming and I want to make an application


Please post plain text only, not html (with a font size too small to 
read ;-). Don't send attachments, especially not 2 MB files.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Pywart: The problem with "Rick Johnson"

2013-06-12 Thread Terry Reedy

On 6/4/2013 11:45 PM, Mike Hansen wrote:


Is "Rick Johnson" the alter ego of Xah Lee, or is he the result of a
cross breeding experiement with a troll by Saruman at Isengard?


He is a Python programmer competent enough with tkinter to have given 
useful answers to me and others. He occasionally enjoys verbal jousting, 
as in a bar, but pollutes his rants with ad hominem slurs.


In other words, your subject line was funny, as a spot on parody of his 
subject lines. Your content was, to me, not funny, and missed the mark.



--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Version Control Software

2013-06-13 Thread Terry Reedy

On 6/13/2013 6:20 PM, Zero Piraeus wrote:

:

On 13 June 2013 17:53, Grant Edwards  wrote:


Unfortunately, something that requires typing commands would not fly.


I haven't used it (very rarely use GUI dev tools), but Tortoise Hg
<http://tortoisehg.bitbucket.org/> seems to have a decent reputation
for Mercurial (and is at least somewhat cross-platform).


I use the tortoisehg context menus and HgWorkbench (gui access) and am 
mostly happy with it.



--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Pattern Search Regular Expression

2013-06-15 Thread Terry Reedy

On 6/15/2013 12:28 PM, subhabangal...@gmail.com wrote:


Suppose I want a regular expression that matches both "Sent from my iPhone" and 
"Sent from my iPod". How do I write such an expression--is the problem,
"Sent from my iPod"
"Sent from my iPhone"

which can be written as,
re.compile("Sent from my (iPhone|iPod)")

now if I want to slightly to extend it as,

"Taken from my iPod"
"Taken from my iPhone"

I am looking how can I use or in the beginning pattern?

and the third phase if the intermediate phrase,

"from my" if also differs or changes.

In a nutshell I want to extract a particular group of phrases,
where, the beginning and end pattern may alter like,

(i) either from beginning Pattern B1 to end Pattern E1,
(ii) or from beginning Pattern B1 to end Pattern E2,
(iii) or from beginning Pattern B2 to end Pattern E2,


The only hints I will add to those given are that you need a) a pattern 
for a word, and b) a way to 'anchor' the pattern to the beginning and 
end of the string so it will only match the first and last words.


This is a pretty good re practice problem, so go and practice and 
experiment.  Expect to fail 20 times and you should beat your 
expectation ;-). The interactive interpreter, or Idle with its F5 Run 
editor window, makes experimenting easy and (for me) fun.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Timsort in Cpython

2013-06-15 Thread Terry Reedy

On 6/15/2013 4:21 PM, alphons...@gmail.com wrote:


Well. I'm going to have a ton of fun trying to make sense of this.


http://hg.python.org/cpython/file/default/Objects/listsort.txt
is pretty clear (to me) for most of the basics.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Fatal Python error: Py_Initialize: can't initialize sys standard streams

2013-06-15 Thread Terry Reedy

On 6/15/2013 8:03 PM, MRAB wrote:

On 15/06/2013 23:10, alex23 wrote:

\__init__.py", line 123

raise CodecRegistryError,\
^
SyntaxError: invalid syntax



To me that traceback looks like it's Python 3 trying to run code written
for Python 2.


If that is the case, the ^ should be under the ',' (and perhaps it once 
was ;-). If really at the beginning of the line, then the error must be 
on the previous line. Even then, the examples I have tried point to the 
'e' or 'raise'. Take a look in the file.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Version Control Software

2013-06-16 Thread Terry Reedy

On 6/16/2013 1:29 AM, Chris Angelico wrote:

On Sun, Jun 16, 2013 at 3:20 PM, Steven D'Aprano



If you're bringing in the *entire* CPython code base, as shown here:

http://hg.python.org/


This is the python.org collection of repositories, not just cpython.


keep in mind that it includes the equivalent of four independent
implementations:

- CPython 2.x
- CPython 3.x



- Stackless
- Jython


Hrm. Why are there other Pythons in the cpython repository?


There are not. The cpython repository
http://hg.python.org/cpython/
only contains cpython. As I write, the last revision is 84110. Windows 
says that my cpython clone has about 1400 folders, 15000 files, and 500 
million bytes.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Version Control Software

2013-06-16 Thread Terry Reedy

On 6/16/2013 11:48 AM, Lele Gaifax wrote:

Roy Smith  writes:


In article ,
  Chris Kwpolska Warrick  wrote:


(I'm using wc -c to count the bytes in all files there are.  du is
inaccurate with files smaller than 4096 bytes.)


It's not that du is not accurate, it's that it's measuring something
different.  It's measuring how much disk space the file is using.  For
most files, that's the number of characters in the file rounded up to a
full block.


I think “du -c” emits a number very close to “wc -c”.


In Windows Explorer, the Properties box displays both the Size and 'Size 
on disk', in both (KB or MB) and bytes. The block size for the disk I am 
looking at is 4KB, so the Size on disk in KB is a multiple of that.
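
A quick way to see the same distinction from Python (a sketch; the file
name and the 4096-byte block size are assumptions):

import os

path = "example.txt"                   # hypothetical file
size = os.path.getsize(path)           # logical size, what wc -c reports
block = 4096                           # assumed filesystem block size
on_disk = -(-size // block) * block    # rounded up to whole blocks, like du
print(size, on_disk)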


--
Terry Jan Reedy


--
http://mail.python.org/mailman/listinfo/python-list


Re: Variables versus name bindings [Re: A certainl part of an if() structure never gets executed.]

2013-06-17 Thread Terry Reedy

On 6/17/2013 7:34 AM, Simpleton wrote:

On 17/6/2013 9:51 πμ, Steven D'Aprano wrote:

Now, in languages like Python, Ruby, Java, and many others, there is no
table of memory addresses. Instead, there is a namespace, which is an
association between some name and some value:

global namespace:
 x --> 23
 y --> "hello world"


First of all thanks for the excellent and detailed explanation Steven.

As for namespace:

a = 5

1. a is associated to some memory location
2. the latter holds value 5


This is backwards. If the interpreter puts 5 in a *permanent* 'memory 
location' (which is not required by the language!), then it can 
associate 'a' with 5 by associating it with the memory location. CPython 
does this, but some other computer implementations do not.



So 'a' is a reference to that memory location, so it's more like a name
to that memory location, yes? Instead of accessing a memory address with
a use of an integer like "14858485995" we use 'a' instead.

So is it safe to say that in Python a == &a ? (& stands for memory address)

is the above correct?


When you interpret Python code, do you put data in locations with 
integer addresses?
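
A concrete way to poke at this in CPython, where id() happens to return an
address (an implementation detail, not part of the language):

a = 5
b = a
print(id(a) == id(b))   # True: two names, one object
print(a is b)           # True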


--
Terry Jan Reedy


--
http://mail.python.org/mailman/listinfo/python-list


Re: Variables versus name bindings [Re: A certainl part of an if() structure never gets executed.]

2013-06-17 Thread Terry Reedy

On 6/17/2013 1:17 PM, Νίκος wrote:

On Mon, Jun 17, 2013 at 8:55 AM, Simpleton  wrote:

On 17/6/2013 5:22 μμ, Terry Reedy wrote:



When you interpret Python code, do you put data in locations with
integer addresses?



I lost you here.


Memory in biological brains is not a linear series of bits, or 
characters. How we associate things is still mostly a puzzle.


Read about holographic memory.


The way some info(i.e. a Unicode string) is saved into the hdd , is the
same way its being saved into the memory too? Same way exactly?


No. A unicode string is a sequence of abstract characters or codepoints. 
They must be encoded somehow to map them to linear byte memory. There 
are still (too) many encodings in use. Most cannot encode *all* unicode 
characters.


CPython is unusual in using one of three different encodings for 
internal unicode strings.
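
A rough way to see the three internal widths (CPython 3.3+ only; the exact
numbers include object overhead, so treat them as illustrative):

import sys

print(sys.getsizeof('a' * 100))           # 1 byte per char (latin-1 range)
print(sys.getsizeof('\u1234' * 100))      # 2 bytes per char (BMP)
print(sys.getsizeof('\U00012345' * 100))  # 4 bytes per char (non-BMP)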



While you said to me to forget about memory locations,


This is excellent advice. One of the *features* of Python is that one 
*can* forget about addresses. One of the *problems* of C is that many 
people *do* forget about memory locations, while virus writers study 
them carefully.


--
Terry Jan Reedy


--
http://mail.python.org/mailman/listinfo/python-list


Re: decorator to fetch arguments from global objects

2013-06-18 Thread Terry Reedy

On 6/18/2013 5:47 AM, andrea crotti wrote:

Using a CouchDB server we have a different database object potentially
for every request.

We already set that db in the request object to make it easy to pass it
around form our django app, however it would be nice if I could set it
once in the API and automatically fetch it from there.

Basically I have something like

class Entity:
  def save_doc(db)


If save_doc does not use an instance of Entity (self) or Entity itself 
(cls), it need not be put in the class.



 ...

I would like basically to decorate this function in such a way that:
- if I pass a db object use it
- if I don't pass it in try to fetch it from a global object
- if both don't exist raise an exception


Decorators are only worthwhile if used repeatedly. What you specified 
can easily be written, for instance, as


def save_doc(db=None):
    if db is None:
        db = fetch_from_global()
    if isinstance(db, dbclass):
        save_it()
    else:
        raise ValueError('need dbobject')
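
If the same boilerplate really does appear in many functions, a decorator
becomes worthwhile; a minimal sketch using the same placeholder names
(fetch_from_global, dbclass, save_it) as above:

from functools import wraps

def with_db(func):
    @wraps(func)
    def wrapper(db=None, *args, **kwargs):
        if db is None:
            db = fetch_from_global()
        if not isinstance(db, dbclass):
            raise ValueError('need dbobject')
        return func(db, *args, **kwargs)
    return wrapper

@with_db
def save_doc(db):
    save_it()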


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: os.putenv() has no effect

2013-06-18 Thread Terry Reedy

On 6/18/2013 12:49 PM, Johannes Bauer wrote:

Hi group,

I've tracked down a bug in my application to a rather strange
phaenomenon: os.putenv() doesn't seem to have any effect on my platform
(x86-64 Gentoo Linux, Python 3.2.3):


os.getenv("PATH")

'/usr/joebin:/usr/local/bin:/usr/bin:/bin:/usr/games/bin:/usr/sbin:/sbin:~/bin'

os.putenv("PATH", "/")
os.getenv("PATH")

'/usr/joebin:/usr/local/bin:/usr/bin:/bin:/usr/games/bin:/usr/sbin:/sbin:~/bin'



os.getenv("FOO")
os.putenv("FOO", "BAR")
os.getenv("FOO")



Does anybody know why this would happen


From the doc: "When putenv() is supported, assignments to items in 
os.environ are automatically translated into corresponding calls to 
putenv(); however, calls to putenv() don’t update os.environ, so it is 
actually preferable to assign to items of os.environ."


Also " Such changes to the environment affect subprocesses started with 
os.system(), popen() or fork() and execv()"


Not obvious fact: getenv gets values from the os.environ copy of the 
environment, which is not affected by putenv. See

http://bugs.python.org/issue1159

> or what I could be doing wrong?

Using putenv(key, value) instead of os.environ[key] = value, which 
suggests that you did not read the full doc entry, which says to use the 
latter ;-).
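
A minimal sketch of the difference (the variable name is arbitrary):

import os

os.putenv("FOO", "BAR")    # C-level call only; os.environ is not updated
print(os.getenv("FOO"))    # None -- getenv reads os.environ, not the C environment

os.environ["FOO"] = "BAR"  # updates os.environ *and* calls putenv for children
print(os.getenv("FOO"))    # BAR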


--
Terry Jan Reedy


--
http://mail.python.org/mailman/listinfo/python-list


Re: Why is regex so slow?

2013-06-18 Thread Terry Reedy

On 6/18/2013 4:30 PM, Grant Edwards wrote:

On 2013-06-18, Antoine Pitrou  wrote:

Roy Smith  panix.com> writes:

You should read again on the O(...) notation. It's an asymptotic complexity,
it tells you nothing about the exact function values at different data points.
So you can have two O(n) routines, one of which always twice faster than the
other.


Or one that is a million times as fast.


And you can have two O(n) routines, one of which is twice as fast for
one value of n and the other is twice as fast for a different value of
n (and that's true for any value of 'twice': 2X 10X 100X).

All the O() tells you is the general shape of the line.  It doesn't
tell you where the line is or how steep the slope is (except in the
case of O(1), where you do know the slope is 0).  It's perfectly
feasible that for the range of values of n that you care about in a
particular application, there's an O(n^2) algorithm that's way faster
than another O(log(n)) algorithm.


In fact, Tim Peters put together two facts to create the current list.sort.
1. O(n*n) binary insert sort is faster than O(n*logn) merge sort, with 
both competently coded in C, for n up to about 64. Part of the reason is 
that binary insert sort is actually O(n*logn) (n binary searches) + 
O(n*n) (n insertions with a shift averaging n/2 items). The multiplier 
for the O(n*n) part is much smaller because on modern CPUs, the shift 
needed for the insertion is a single machine instruction.
2. O(n*logn) sorts have a lower asymptotic complexity because they 
divide the sequence roughly in half about logn times. In other words, 
they are 'fast' because they split a list into lots of little pieces. So 
Tim's aha moment was to think "Let's stop splitting when pieces are less 
than or equal to 64, rather than splitting all the way down to 1 or 2".
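
A sketch of the binary-insert idea in pure Python (the real thing is C code
inside CPython's list object implementation; this only shows the shape of
the algorithm):

from bisect import insort

def binary_insertion_sort(seq):
    result = []
    for item in seq:
        insort(result, item)   # binary search for the slot, then shift and insert
    return result

print(binary_insertion_sort([3, 1, 4, 1, 5, 9, 2, 6]))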


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: collecting variable assignments through settrace

2013-06-18 Thread Terry Reedy

On 6/18/2013 2:38 PM, skunkwerk wrote:

Hi, I'm writing a custom profiler that uses sys.settrace.  I was
wondering if there was any way of tracing the assignments of
variables inside a function as its executed, without looking at
locals() at every single line and comparing them to see if anything
has changed.


The stdlib has an obscure module bdb (basic debugger) that is used in 
both pdb (python debugger) and idlelib.Debugger. The latter can display 
global and local variable names. I do not know if it does anything other 
than rereading globals() and locals(). It only works with a file loaded 
in the editor, so it potentially could read source lines to look for 
name binding statements (=, def, class, import) and determine the names 
just bound. On the other hand, re-reading is probably fast enough for 
human interaction.


My impression from another issue is that tracing causes the locals dict 
to be updated with each line, so you do not actually have to 
call locals() with each line. However, that would mean you have to make 
copies to compare.
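
A rough sketch of the copy-and-compare approach with sys.settrace (it
ignores return events and other corner cases, so it is illustrative only):

import sys

def make_tracer():
    previous = {}
    def tracer(frame, event, arg):
        if event == 'call':
            previous[frame] = dict(frame.f_locals)
            return tracer                  # request line events for this frame
        if event == 'line':
            old = previous.get(frame, {})
            current = dict(frame.f_locals)
            for name, value in current.items():
                if name not in old or old[name] is not value:
                    print('line %d: %s = %r' % (frame.f_lineno, name, value))
            previous[frame] = current
        return tracer
    return tracer

def demo():
    a = 1
    b = a + 2
    a = b * 2

sys.settrace(make_tracer())
demo()
sys.settrace(None)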


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing Extensions for Python 3 in C

2013-06-19 Thread Terry Reedy

On 6/18/2013 6:24 AM, Aditya Avinash wrote:

Hi. This is the last place where I want to ask a question. I have
searched for lots of tutorials and documentation on the web but, didn't
find a decent one to develop extensions for Python 3 using a custom
compiler (mingw32, nvcc). Please help me.


I would call those 'alternate compilers' ;-). On Windows, you must 
either use MSVC (best, the same version as used for Python) or restrict 
what you do to avoid runtime incompatibilities.



PS: Don't point me to Python Documentation. It is not good for
beginners. It doesn't elaborate about calls and implementation.


Let Cython take care of the 'calls and implementation' while you write 
in extended Python.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: decorator to fetch arguments from global objects

2013-06-19 Thread Terry Reedy

On 6/19/2013 4:03 AM, Wolfgang Maier wrote:

Wolfgang Maier  biologie.uni-freiburg.de> writes:



andrea crotti  gmail.com> writes:


2013/6/18 Terry Reedy  udel.edu>

Decorators are only worthwhile if used repeatedly. What you specified can

easily be written, for instance, as

def save_doc(db=None):
   if db is None:
 db = fetch_from_global()
   if isinstance(db, dbclass):
 save_it()
   else:
 raise ValueError('need dbobject')


Another suggestion, without knowing too much about your code's architecture:
why not *initialize* your Entity instance with a db_out attribute, so you do
Terry's db checking only in one central place - Entity's __init__ method?


Your pair of posts pretty much say what I was trying to get at. If 
Entity does not represent anything, it should not exist, and the 
'methods' should be functions with boilerplate factored out. If Entity 
does represent something, it should be the parameter common to all 
methods: the db. I had not really cognized before that an advantage of 
defining a class is to set up and validate the central object just once.


It is still not clear to me why db should ever be bound to None. That 
must have something to do with the undisclosed context.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: A Beginner's Doubt

2013-06-19 Thread Terry Reedy

On 6/19/2013 9:58 AM, augusto...@gmail.com wrote:

Hello!
This is my first post in this group and the reason why I came across here is 
that, despite my complete lack of knowledge in the programming area, I received 
an order from my teacher to develop a visually interactive program, until 20th 
July, so we can participate in a kind of contest.

My goal is to learn and program it by myself, as good as the time allows me. 
That said, what I seek here is advice from people who definitively have more 
experience than me on topics like: is it possible to develop this kind of 
program in such a short amount of time? What kinds of aspects of Python should 
I focus on learning? What tutorials and websites are out there that can help 
me? What kind of already done packages are out there that I can freely use, so 
I do not need to create all the aspects of the program from scratch?

It would be wise to give an abstract of the program. I made an information flux 
kind of graphic, but I do not know how to post it in here, so I'll use only 
words:

Full screen window


Do you literally mean a full screen *window*, like a browser maximized, 
with frame and title bar with Minimize, Restore/Maximize, and Close 
buttons? or a full-screen app without the frame, like full-screen games?


Tkinter, Wx, etc, are meant for the former, Pygame, etc, for the latter.

 -> Title and brief introductory text -> 3 Buttons (Credits) 
(Instructions) and (Start)


(Credits) -> Just plain text and a return button
(Instructions) -> Just plain text and a return button
(Start) -> Changes the screen so it displays a side-menu and a Canvas.


If you open Idle and click Help / About IDLE, you will see a dialog box 
with title, text, and two groups of 3 buttons that open plain text, 
including Credits, in a separate window with a close (return) button. If 
you decide to use tkinter, this would give you a start. The code is in 
Lib/idlelib/aboutDialog.py. I do not know how to make the 'dialog' be a 
main window instead, nor how to replace a main window with a new set of 
widgets (as opposed to opening a new window), but I presume it's 
possible. If so, I am sure Rick could tell us how.
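
One common way to 'replace' the main window's contents is to keep a single
Tk root and swap Frames in and out; a bare-bones sketch (Python 3 names
assumed):

import tkinter as tk

root = tk.Tk()
root.title("Demo")

menu_frame = tk.Frame(root)
canvas_frame = tk.Frame(root)

def show(frame):
    for f in (menu_frame, canvas_frame):
        f.pack_forget()
    frame.pack(fill=tk.BOTH, expand=True)

tk.Button(menu_frame, text="Start",
          command=lambda: show(canvas_frame)).pack()
tk.Canvas(canvas_frame, bg="white").pack(fill=tk.BOTH, expand=True)
tk.Button(canvas_frame, text="Back",
          command=lambda: show(menu_frame)).pack()

show(menu_frame)
root.mainloop()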



Side menu -> X number of buttons (maybe 4 or 5)


Is this really required, as opposed to a normal top menu?


Buttons -> Clicked -> Submenu opens -> List of images
 -> Return button -> Back to side menu

Image in List of images -> When clicked AND hold mouse button -> Make copy


I am not sure what you mean by 'copy'. Make an internal image object 
from the disk file?

 -> if: dragged to canvas -> paste the copy in place
 -> if: dragged anywhere else -> delete copy and 
nothing happens


It sounds like the intention is to have multiple images on the canvas at 
once.



On canvas:
Image -> On click and drag can be moved


This could be a problem if images overlap.


   -> Double click -> Opens menu -> Resize, Deform, Rotate, Color, 
Brightness, Contrast, Color Curve, Saturation


Image operations like these are usually placed on a side menu or floating 
menu box.


Neil mentioned PIL (the Python Imaging Library) because Tk's image support is 
anemic, and does not have any built-in transformations. Pillow, at

https://pypi.python.org/pypi/Pillow/2.0.0
is a friendly fork that includes patches to run on Python 3.3, which I 
would otherwise recommend that you use.




Then, somewhere in cavas:


This should be a button on the side menu.


Save option -> Prompt for file and user's name
 -> Prompt if users want printed copy or not -> Print
 -> After saved, display random slideshow in other monitor, device 
or screen with the users' creations.




--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Does upgrade from 2.7.3 to 2.7.5 require uninstall?

2013-06-20 Thread Terry Reedy

On 6/20/2013 2:44 PM, Alister wrote:

On Thu, 20 Jun 2013 11:35:49 -0700, Wanderer wrote:


Do I need to uninstall Python 2.7.3 before installing Python 2.7.5?

Thanks


that will depend on your operating system and possibly the variety of
python


"Python 2.7.3' and 'Python 2.7.5' are by trademark PSF CPython 
implementations of the Python 2.7 language.


The Windows installers *replace* previous micro releases because the new 
bugfix release is presumed to be better than any before. When that turns 
out to be sufficiently false, there is a new bugfix release as soon as 
possible to remove the regressions. Hence 2.7.4 was quickly replaced by 
2.7.5 (and same for recent 3.2 and 3.3 releases). (Such regressions, as 
with any bug, expose deficiencies in the test suite, which also get 
corrected.)


I presume that *nix package managers also replace, but have not used them.

--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: n00b question on spacing

2013-06-21 Thread Terry Reedy

On 6/21/2013 5:17 PM, Yves S. Garret wrote:

Hi, I have a question about breaking up really long lines of code in Python.

I have the following line of code:
log.msg("Item wrote to MongoDB database %s/%s" %(settings['MONGODB_DB'],
settings['MONGODB_COLLECTION']), level=log.DEBUG, spider=spider)

Given the fact that it goes off very far to the right on my screen is
not terribly
pleasing to my eyes (and can be rude for other developers).

I was thinking of splitting it up like so:
log.msg("Item wrote to MongoDB database %s/%s"
   %(settings['MONGODB_DB'], settings['MONGODB_COLLECTION']),
   level=log.DEBUG, spider=spider)

Is this ok?  Are there any rules in Python when it comes to breaking up
long lines of code?


For function calls, PEP8 suggests either an extra indent or lining up with 
the opening delimiter, as follows.

log.msg("Item wrote to MongoDB database %s/%s"
        % (settings['MONGODB_DB'], settings['MONGODB_COLLECTION']),
        level=log.DEBUG, spider=spider)

The point is to not look like a normal indent, to cue the reader that these 
are continuation lines and not new statements.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: n00b question on spacing

2013-06-23 Thread Terry Reedy

On 6/22/2013 9:20 PM, MRAB wrote:


[snip]
One vs not-one isn't good enough. Some languages use the singular with
any numbers ending in '1'. Some languages have singular, dual, and
plural. Etc. It's surprising how inventive people can be! :-)


In the Idle output window for file grepping, I just changed the summary 
line 'Found %d hit%s' to 'Hits found: %d' to avoid the pluralization 
problem (even though the language is just English for now).


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Is this PEP-able? fwhile

2013-06-25 Thread Terry Reedy

On 6/25/2013 7:17 AM, jim...@aol.com wrote:


for i in range(n) while safe(i): ..


Combined for-while and for-if statements have been proposed before and 
rejected. We cannot continuously add simple compositions to the language.



I disagree. The problem IMO is that python 'for's are a different kind
of 'for' in that they have no explicit indexing and no explicit range
test; just a list which has elements drawn from it.  This is amazingly
powerful and concise.  Unfortunately, the "breaks are just gotos"
community often ruins this conciseness by going to 'while' or itertools
(or worse) to avoid adding a break to a 'for' which needs to be
terminated early.


'Break' and 'continue' were intended to be used ;-).


I think suggestions like yours and Fabio's are good ones.  If 'for' has
an 'else', why not a 'while'?


While-else and for-else follow from if-else. Which is to say, the else 
corresponds to the buried if that is part of while and for. The else 
part triggers when the if part is false. The difference from if-else is 
that the if part is tested multiple times.


while condition():
  block()
else:
  finish()

is equivalent to

while True:
  if condition():
    block()
    continue
  else:
    finish()
    break
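
For what it is worth, the proposed 'for ... while' can be spelled today
either with break or with itertools.takewhile; a small sketch (safe() is a
stand-in predicate):

from itertools import takewhile

def safe(i):
    return i < 5

for i in takewhile(safe, range(10)):   # stops at the first unsafe i
    print(i)

for i in range(10):                    # the same with an explicit break
    if not safe(i):
        break
    print(i)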

--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: re.finditer() skips unicode into selection

2013-06-26 Thread Terry Reedy

On 6/26/2013 3:18 PM, akshay.k...@gmail.com wrote:

I am using the following Highlighter class for Spell Checking to work on my 
QTextEdit.

class Highlighter(QSyntaxHighlighter):
    pattern = ur'\w+'

    def __init__(self, *args):
        QSyntaxHighlighter.__init__(self, *args)
        self.dict = None

    def setDict(self, dict):
        self.dict = dict

    def highlightBlock(self, text):
        if not self.dict:
            return
        text = unicode(text)
        format = QTextCharFormat()
        format.setUnderlineColor(Qt.red)
        format.setUnderlineStyle(QTextCharFormat.SpellCheckUnderline)
        unicode_pattern = re.compile(self.pattern, re.UNICODE | re.LOCALE)

        for word_object in unicode_pattern.finditer(text):
            if not self.dict.spell(word_object.group()):
                print word_object.group()
                self.setFormat(word_object.start(),
                               word_object.end() - word_object.start(),
                               format)

But whenever I pass unicode values into my QTextEdit the re.finditer() does not 
seem to collect it.

When I pass "I am a नेपाली" into the QTextEdit. The output is like this:

 I I I a I am I am I am a I am a I am a I am a I am a I am a I am a I am a

It is completely ignoring the unicode.


The whole text is unicode. It is ignoring the non-ascii, as you asked it 
to with re.LOCALE.


With 3.3.2:
import re

pattern = re.compile(r'\w+', re.LOCALE)
text = "I am a नेपाली"

for word in pattern.finditer(text):
    print(word.group())
>>>
I
am
a

Delete ', re.LOCALE' and the following are also printed:
न
प
ल

There is an issue on the tracker about the vowel marks in नेपाली being 
mis-seen as word separators, but that is another issue.


Lesson: when you do not understand output, simplify code to see what 
changes. Separating re issues from framework issues is a big step in 
that direction.


What might be the issue? I am new to PyQt and regex. I'm using Python 
2.7 and PyQt4.


--
Terry Jan Reedy


--
http://mail.python.org/mailman/listinfo/python-list


Re: Why is the argparse module so inflexible?

2013-06-27 Thread Terry Reedy

On 6/27/2013 8:54 AM, Andrew Berg wrote:

I've begun writing a program with an interactive prompt, and it needs
to parse input from the user.  I thought the argparse module would be
great for this,


It is outside argparse's intended domain of application -- parsing 
command line arguments. The grammar for a valid string of command line 
arguments is quite restricted.


Argparse is not intended for interactive processing of a domain-specific 
language (DSL). There are other parsers for that. But if the grammar for 
your DSL is restricted to what argparse can handle, using it is an 
interesting idea. But you need non-default usage for the non-default 
context.


> but unfortunately it insists on calling sys.exit() at

any sign of trouble instead of letting its ArgumentError exception
propagate so that I can handle it.


When one tells argparse that something is *required*, that means "I do 
not want to see the user's input unless it passes this condition." After 
seeing an error message, the user can edit the command line and re-enter.


If you do not mean 'required' in the sense above, do not say so.
Catching SystemExit is another way to say 'I did not really mean 
required, in the usual meaning of that term.'
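
If someone really wants argparse at an interactive prompt, one workaround
(a sketch, not an endorsement) is to override ArgumentParser.error, which
is what normally prints the message and exits:

import argparse

class PromptParser(argparse.ArgumentParser):
    def error(self, message):
        # default implementation prints usage and calls sys.exit(2)
        raise ValueError(message)

parser = PromptParser(prog='cmd')
parser.add_argument('name')
try:
    parser.parse_args([])       # missing required argument
except ValueError as e:
    print('bad input:', e)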


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Why is the argparse module so inflexible?

2013-06-27 Thread Terry Reedy

On 6/27/2013 2:18 PM, Dave Angel wrote:

On 06/27/2013 02:05 PM, Terry Reedy wrote:

On 6/27/2013 8:54 AM, Andrew Berg wrote:

I've begun writing a program with an interactive prompt, and it needs
to parse input from the user.  I thought the argparse module would be
great for this,


It is outside argparse's intended domain of application -- parsing
command line arguments. The grammar for a valid string of command line
arguments is quite restricted.

Argparse is not intended for interactive processing of a domain-specific
language (DSL). There are other parsers for that. But if the grammar for
your DSL is restricted to what argparse can handle, using it is an
interesting idea. But you need non-default usage for the non-default
context.

 > but unfortunately it insists on calling sys.exit() at

any sign of trouble instead of letting its ArgumentError exception
propagate so that I can handle it.


When one tell argparse that something is *required*, that means "I do
not want to see the user's input unless it passes this condition." After
seeing an error message, the user can edit the command line and re-enter.

If you do not mean 'required' in the sense above, do not say so.
Catching SystemExit is another way to say 'I did not really mean
required, in the usual mean of that term.'.



That last sentence is nonsense.


Not if you understand what I said.

> If one is parsing the line the user  enters via raw_input(),

input(), in 3.x

> catching SystemExit so the program doesn't abort

is perfectly reasonable.  The user should be returned to his prompt,
which in this case is probably another loop through raw_input().


Right, because 'required' means something a little different in the 
interactive context.


I don't know if all the information in the original ArgumentError 
exception is transferred to the SystemExit exception. I expect not, and 
if that is the case, and if multiple people are using argparse this way, 
it would be reasonable to request on the tracker that its current 
sys.exit behavior become default but optional in 3.4+. There might even 
be an issue already if one searched.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: ? get negative from prod(x) when x is positive integers

2013-06-28 Thread Terry Reedy

On 6/28/2013 10:38 AM, Vincent Davis wrote:

I have a list of a list of integers. The lists are long so i cant really
show an actual example of on of the lists, but I know that they contain
only the integers 1,2,3,4. so for example.
s2 = [[1,2,2,3,2,1,4,4],[2,4,3,2,3,1]]

I am calculating the product, sum, max, min of each list in s2 but I
get negative or 0 for the product for a lot of the lists. (I am doing
this in ipython)


This looks to be Python 2, based on the print output. Look at the 
underlying version to make sure it is 2.7. Using Python 3 or something 
based on it is better unless you *really* have to use Python 2.



for x in s2:
 print('len = ', len(x), 'sum = ', sum(x), 'prod = ', prod(x), 'max
= ', max(x), 'min = ', min(x))


prod is not a Python builtin. I am guessing ipython adds it as a C-coded 
builtin because a Python-coded function* would not have the overflow bug 
exhibited below. See for instance

https://en.wikipedia.org/wiki/Integer_overflow

Not having to worry about this, because Python comes with 
multi-precision integers, is a great thing about using Python rather 
than almost any other language.



* I do not remember if this was always true for old enough Pythons.


('len = ', 100, 'sum = ', 247, 'prod = ', 0, 'max = ', 4, 'min = ', 1)
('len = ', 100, 'sum = ', 230, 'prod = ', -4611686018427387904, 'max = ', 4, 
'min = ', 1)


def prod(seq):
    res = 1
    for i in seq:
        res *= i
    return res

should work for you.

--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Why is the argparse module so inflexible?

2013-06-28 Thread Terry Reedy

On 6/29/2013 12:12 AM, rusi wrote:

On Saturday, June 29, 2013 7:06:37 AM UTC+5:30, Ethan Furman wrote:

On 06/27/2013 03:49 PM, Steven D'Aprano wrote:

[rant]
I think it is lousy design for a framework like argparse to raise a
custom ArgumentError in one part of the code, only to catch it elsewhere
and call sys.exit. At the very least, that ought to be a config option,
and off by default.

Libraries should not call sys.exit, or raise SystemExit. Whether to quit
or not is not the library's decision to make, that decision belongs to
the application layer. Yes, the application could always catch
SystemExit, but it shouldn't have to.



So a library that is explicitly designed to make command-line scripts easier
and friendlier should quit with a traceback?

Really?


So a library that behaves like an app is OK?


No, Steven is right as a general rule (do not raise SystemExit), but 
argparse was considered an exception because its purpose is to turn a 
module into an app. With the responses I have seen here, I agree that 
this inflexible behavior is a bit short-sighted. The tracker issue 
could use more review and comment.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: MeCab UTF-8 Decoding Problem

2013-06-29 Thread Terry Reedy

On 6/29/2013 10:02 AM, Dave Angel wrote:

On 06/29/2013 07:29 AM, fob...@gmail.com wrote:

Hi,


Using Python 2.7 on Linux, presumably?  It'd be better to be explicit.



I am trying to use a program called MeCab, which does syntax analysis
on Japanese text.


It is generally nice to give a link when asking about 3rd party 
software.  https://code.google.com/p/mecab/

In this case, nearly all the non-boilerplate text is Japanese ;-(.

>> The problem I am having is that it returns a byte string

and the problem with bytes is that they can have any encoding.
In Python 2 (indicated by your print *statements*), a byte string is 
just a string.



and if I try to print it, it prints question marks for almost
all characters. However, if I try to use .decode, it throws an error.
Here is my code:


What do the MeCab docs say the tagger.parse byte string represents?
Maybe it's not text at all.  But surely it's not utf-8.


https://mecab.googlecode.com/svn/trunk/mecab/doc/index.html
MeCab: Yet Another Part-of-Speech and Morphological Analyzer
followed by Japanese.


#!/usr/bin/python
# -*- coding:utf-8 -*-

import MeCab
tagger = MeCab.Tagger("-Owakati")
text = 'MeCabで遊んでみよう!'


Parts of this appear in the output, as indicated by spaces.
'MeCabで遊 んで みよ う!'


result = tagger.parse(text)
print result

result = result.decode('utf-8')
print result

And here is the output:

MeCab �� �� ��んで�� �� ��う!


Python normally prints bytes with ascii chars representing either 
themselves or other values with hex escapes. This looks more like 
unicode sent to a terminal with a limited character set. I would add


print type(result)

to be sure.


Traceback (most recent call last):
   File "test.py", line 11, in 
 result = result.decode('utf-8')
   File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode
 return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode bytes in position 6-7:
invalid continuation byte


--
(program exited with code: 1)
Press return to continue

Also my terminal is able to display Japanese characters properly. For
example print '日本語' works perfectly fine.



--
Terry Jan Reedy


--
http://mail.python.org/mailman/listinfo/python-list


Re: MeCab UTF-8 Decoding Problem

2013-06-29 Thread Terry Reedy

On 6/29/2013 11:32 AM, Terry Reedy wrote:


I am trying to use a program called MeCab, which does syntax analysis
on Japanese text.


It is generally nice to give a link when asking about 3rd party
software.  https://code.google.com/p/mecab/
In this case, nearly all the non-boilerplate text is Japanese ;-(.


My daughter translated the summary paragraph for me.

MeCab is an open source morphological analysis engine 
developed through a collaborative unit project between Kyoto 
University's Informatics Research Department and Nippon Telegraph and 
Telephone Corporation Communication Science Laboratories. Its 
fundamental premise is a design which is general-purpose and not reliant 
on a language, dictionary, or corpus. It uses Conditional Random Fields 
(CRF) for the estimation of the parameters, and has improved performance 
over ChaSen, which uses a hidden Markov model. In addition, on average 
it is faster than ChaSen, Juman, and KAKASI. Incidentally, the creator's 
favorite food is mekabu (thick leaves of wakame, a kind of edible 
seaweed, from near the root of the stalk).



--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Closures in leu of pointers?

2013-06-29 Thread Terry Reedy

On 6/29/2013 3:47 PM, Ian Kelly wrote:

On Sat, Jun 29, 2013 at 1:33 PM, Steven D'Aprano
 wrote:

On Sat, 29 Jun 2013 12:20:45 -0700, cts.private.yahoo wrote:



Huh? What language are you programming in? Python doesn't have implied
scoping in non-intuitive ways.


def f(x):
 def g(y):
 print(x)
 x = y

Within g, the variable x is implicitly local,


The assignment looks pretty explicit to me ;-|. But to each his own on 
'plicitness.



which is non-intuitive
since without the assignment it would not be.


I think the problem people have is this. Python is *not* an interpreted 
language in the traditional sense: read a line, interpret and execute 
it. It is compiled, and the compilation of functions is a two pass 
affair. The first pass classifies all names. The second pass generates 
code based on the classification of the first pass. This is not part of 
the grammar, but implied (required) by the semantic description.


If, in the general case, the compiler requires two passes to understand 
a function body, then *so do people*#. This requirement is what trips up 
people who are either not used to the idea of two-pass compilation or do 
not cognize that the language requires it.


# The alternative for either program or people is a 1-pass + 
backtracking process where all understandings are kept provisional until 
the end of the body and revised as required. 2 passes are simpler.


That said, writing deceptive but functioning code is usually bad style. 
Writing code that looks functional but is buggy is worse.
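
A sketch of both sides: the implicit-local surprise, and the explicit
declaration (nonlocal) that removes it:

def broken(x):
    def g(y):
        print(x)       # UnboundLocalError: the assignment below makes x local to g
        x = y
    g(1)

def fixed(x):
    def g(y):
        nonlocal x     # explicit: x is the enclosing function's variable
        print(x)
        x = y
    g(1)
    return x

print(fixed(0))        # prints 0 inside g, then 1
# broken(0) would raise UnboundLocalError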


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Closures in leu of pointers?

2013-06-29 Thread Terry Reedy

On 6/29/2013 5:21 PM, Ian Kelly wrote:

On Sat, Jun 29, 2013 at 2:53 PM, Terry Reedy  wrote:

# The alternative for either program or people is a 1-pass + backtracking
process where all understandings are kept provisional until the end of the
body and revised as required. 2 passes are simpler.


Or simply an explicit declaration of scope at the beginning of the
function definition.


One of the reasons I switched to Python was to not have to do that, or 
hardly ever. For valid code, a new declaration is hardly needed. 
Parameters are locals. If the first use of another name binds it (and 
that includes import, class, and def), it is local. If the first use 
does not bind it, it had better not be local (because if it is, there 
will be an exception). If there are branches, each should be consistent 
with the others. One should only need two readings to understand and fix 
unbound local errors.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Closures in leu of pointers?

2013-06-29 Thread Terry Reedy

On 6/30/2013 1:46 AM, Ian Kelly wrote:


On a related note, I think that generator functions should in some way
be explicitly marked as such in the declaration, rather than needing
to scan the entire function body for a yield statement to determine
whether it's a generator or not.


I agree that one should not have to scan. The doc string, which should 
be present, should start 'Return a generator that yields ...' or even 
'Generate ...'. Of course, non-generator functions should then not start 
the same way. The first option should be unambiguous.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python list code of conduct

2013-07-02 Thread Terry Reedy

On 7/2/2013 7:46 PM, Roy Smith wrote:

In article ,
  Ned Deily  wrote:


If you find a bug in Python, don't send it to comp.lang.python; file
a bug report in the issue tracker.


I would revise this to "If you have really found a bug in Python..."
How does a newbie know?


I'm not sure I agree with that one, at least not fully.  It's certainly
true that you shouldn't expect anybody to do anything about a bug unless
you open an issue.

On the other hand, I often find it useful to discuss things that I
believe are bugs on c.l.p first.  Sometimes people will explain to me
that I'm just doing it wrong.  Sometimes the discussion will end up
with, "Yeah, that's a bug".


usually followed by "File a tracker issue" or "I opened a tracker issue 
for this." (I have done that several times, though I sometimes prefer that 
a person learn how to do it themselves.)


>   In either case, it serves as a good initial

filter for whether I should file a bug or not, and the discussion is
often educational.


Ask here first.
With a subject line that says 'I think ...' or 'Problem with ...'.
Advantages:

1. At least half the bugs newbies report are not. The tracker does not 
need the extra clutter.
2. Filing a tracker issue sometimes creates a 'mental investment' in the 
mis-perception, which leads to resentment upon explanation.

3. There are lots of people here ready to help and answer questions.
Any sensible question usually gets multiple responses, usually within a 
few hours or a day. (Invalid tracker reports may sit for days and get 
one short response.)

4. Explanations posted here benefit lots of people, rather than just 1.
5. A question posted here may elicit essential information, like which 
systems or which versions have the problem.
6. If you make an informed post to the tracker, backed up by opinion here, 
at least one tracker responder will be in a better mood when responding.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Important features for editors

2013-07-05 Thread Terry Reedy

On 7/4/2013 2:52 PM, Ferrous Cranus wrote:


Like you never downloaded serials/keygens/patch/cracks for warez and
torrents websites.


Morality aside, why would I? Today I bought 8 games on GOG.com for about 
$22 - DRM and virus free, and easy to download and install. If I get 10 
hours of fun from 2 of them, I'll be happy. This is not to mention free 
Python and LibreOffice as my primary work programs - supported by hg, 
TortoiseHg, 7zip, and others.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: analyzing time

2013-07-05 Thread Terry Reedy

On 7/5/2013 3:18 PM, noydb wrote:


I have a table with a column of type date, with dates and time


This is a datetime in Python parlance.


combined (like '1/6/2013 3:52:69PM'), that spans many months.  How
would I pull out records that are the first and last entries per
day?


Sort on that column. Look at pairs of rows. If the days differ, you have 
the last of the first and the first of the second. One way:


it = iter(rows)  # an iterator over the rows, sorted on the datetime column
dt1 = next(it)
d1 = date(dt1)  # whatever that looks like

for row in it:
    dt2 = row
    d2 = date(dt2)
    if d1 != d2:
        do_whatever(dt1, dt2)
    dt1, d1 = dt2, d2


Also, if I wanted to find time clusters per day (or per week) -- like
if an entry is made every day around 11am -- is there a way to get at
that temporal statistical cluster?


Make a histogram of time, ignoring date.
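
A minimal sketch of the histogram idea (the sample datetimes are made up;
in practice they come from the date column):

from collections import Counter
from datetime import datetime

datetimes = [datetime(2013, 1, 6, 11, 2), datetime(2013, 1, 7, 11, 40),
             datetime(2013, 1, 8, 15, 52)]

by_hour = Counter(dt.hour for dt in datetimes)
for hour in sorted(by_hour):
    print('%02d:00  %s' % (hour, '*' * by_hour[hour]))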


Python 2.7, Windows 7.

Any guidance would be greatly appreciated!  Time seems tricky...


Yes

--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Simple recursive sum function | what's the cause of the weird behaviour?

2013-07-06 Thread Terry Reedy

On 7/6/2013 8:37 AM, Russel Walker wrote:

I know this is simple but I've been staring at it for half an hour and trying 
all sorts of things in the interpreter but I just can't see where it's wrong.

def supersum(sequence, start=0):
    result = start
    for item in sequence:
        try:
            result += supersum(item, start)
        except:


Bare except statements cover up too many sins. I and others *strongly* 
recommend that you only catch what you *know* you actually want to (see 
below).



            result += item
    return result


I recommend that you start with at least one test case, and with an edge 
case at that. If you cannot bring yourself to do it before writing a 
draft of the function code, do it immediately after and run. If you do 
not want to use a framework, use assert.


assert supersum([]) == 0
assert supersum([], []) == []

Do the asserts match your intention? The tests amount to a specification 
by example. Any 'kind' of input that is not tested is not guaranteed to 
work.


Back to the except clause: only add try..except xxx when needed to pass 
a test.
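
Putting the pieces together, a sketch with a narrow except and the asserts
as a tiny test suite (it still recurses into any iterable, strings
included, so it is not a finished function):

def supersum(sequence, start=0):
    result = start
    for item in sequence:
        try:
            result += supersum(item, start)
        except TypeError:    # item is not iterable (or += is unsupported)
            result += item
    return result

assert supersum([]) == 0
assert supersum([], []) == []
assert supersum([[1, 2], [3]]) == 6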


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Explain your acronyms (RSI?)

2013-07-06 Thread Terry Reedy

"rms has crippling RSI" (anonymous, as quoted by Skip).

I suspect that 'rms' = Richard M Stallman (but why lower case? to insult 
him?). I 'know' that RSI = Roberts Space Industries, a game company 
whose Kickstarter project I supported. Whoops, wrong context. How about 
'Richard Stallman Insanity' (his personal form of megalomania)? That 
makes the phrase a claim I have read others making.


Let's continue and see if that interpretation works. "should indicate 
that emacs' ergonomics is not right". Aha! Anonymous believes that using 
his own invention, emacs, is what drove Richard crazy. He would not be 
the first self-invention victim.


But Skip mentions 'worse for wrists'. So RSI must be a physical rather 
than mental condition. Does 'I' instead stand for Inoperability?, 
Instability?, or what?


Let us try Google. Type in RSI and it offers 'RSI medications' as a 
choice. Sounds good, as it will eliminate all the companies with those 
initials. The two standard medical meanings of RSI seem to be Rapid 
Sequence Intubation and Rapid Sequence Induction. But those are 
procedures, not chronic syndromes. So I still do not know what the 
original poster, as quoted by Skip, meant.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Editor Ergonomics [was: Important features for editors]

2013-07-09 Thread Terry Reedy

On 7/9/2013 8:12 AM, Neil Cerutti wrote:

On 2013-07-09, Jason Friedman  wrote:

I am right-handed and use a lefty-mouse about 50% of the time.
It was difficult at first, now I'm almost as fast lefty as
righty. As has been stated by others, changing the muscles
being used reduces the impact on any one of them.


That's the system I've adopted. I use the mouse lefty all day
when working and righty all night when playing.


Me too, more or less.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Stack Overflow bans Mats Peterson (was Re: ....)

2013-07-10 Thread Terry Reedy

On 7/10/2013 3:55 AM, Mats Peterson wrote:

A moderator who calls himself “animuson” on Stack Overflow doesn’t
want to face the truth. He has deleted all my postings regarding Python
regular expression matching being extremely slow compared to Perl.
Additionally my account has been suspended for 7 days. Such a dickwad.


Your opinion of "animuson" is off-topic for this list.

StackOverflow is explicitly for technical questions with technical 
answers. I believe opinions, language comparisons, and flamebait in 
general are explicitly banned and subject to removal and possible 
banning. So are subterfuges like phrasing banned topics as questions.


Given your behavior here, I am 90+% sure animuson's action was appropriate.

--
Terry Jan Reedy


--
http://mail.python.org/mailman/listinfo/python-list


Re: Recursive class | can you modify self directly?

2013-07-10 Thread Terry Reedy

On 7/10/2013 4:58 AM, Russel Walker wrote:


There is the name x and the class instance (the object) which exists
somewhere in memory that x points to. self is just another name that
points to the same object (not self in general but the argument
passed to the self parameter when a method is called). However if the
code inside the method reassigns self to some other object, it
doesn't change the fact that x still refers to the original object.
So self is just a local variable (an argument).


Yes, parameter *names* are the initial local names of the function. 
Calling the first parameter of instance methods 'self' is a convention, 
not a requirement.


> The name self has no

relevance to the name x other than the fact that they point to the
same object. So reassigning self has no effect on x. But modifying
the object that self points to, does affect x because it points to
the same object. Is this correct?


Yes. Multiple names for one object are 'aliases'. Being able to modify 
an object with multiple names in different namespaces is both a boon and 
bug bait.



So when you call x.somemethod() it's not passing x as the self


Python does not pass 'x': it is 'call-by-object', not 'call-by-name'.


argument, it's actually passing the object that x points to as the
self argument. And that object has no knowledge of the fact that x
points to it, or does it?


Some objects (modules, classes, functions) have definition names 
(.__name__ attributes) that are used in their representations (as in 
tracebacks). But they have no knowledge of their namespace names.
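
A tiny sketch of the difference between rebinding the name self and
mutating the object it names:

class C:
    def rebind(self):
        self = C()       # rebinds the local name only; the caller's object is untouched
    def mutate(self):
        self.data = 42   # mutates the shared object

x = C()
y = x                    # alias: two names, one object
x.rebind()
x.mutate()
print(y.data)            # 42 -- visible through the alias
print(x is y)            # True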




--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: GeoIP2 for retrieving city and region ?

2013-07-12 Thread Terry Reedy

On 7/12/2013 1:19 PM, Ian Kelly wrote:


Try this:

1) Go to http://incloak.com (or any other free web proxy site).
2) Paste in the URL http://www.geoiptool.com and press Enter
3) See where it thinks you are now.

When I tried it, it placed me on the wrong side of the Atlantic Ocean.


Me too. Thanks for the link.

--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Understanding other people's code

2013-07-12 Thread Terry Reedy

On 7/12/2013 10:22 AM, L O'Shea wrote:

Hi all, I've been asked to take over a project from someone else and
to extend the functionality of this. The project is written in Python
which I haven't had any real experience with (although I do really
like it) so I've spent the last week or two settling in, trying to
get my head around Python and the way in which this code works.


If the functions are not documented in prose, is there a test suite that 
you can dive into?



--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Beazley 4E P.E.R, Page29: Unicode

2013-07-14 Thread Terry Reedy

On 7/13/2013 11:09 PM, vek.m1...@gmail.com wrote:

http://stackoverflow.com/questions/17632246/beazley-4e-p-e-r-page29-unicode


Is this David Beazley? (You referred to 'DB' later.)


 "directly writing a raw UTF-8 encoded string such as
'Jalape\xc3\xb1o' simply produces a nine-character string U+004A,
U+0061, U+006C, U+0061, U+0070, U+0065, U+00C3, U+00B1, U+006F, which
is probably not what you intended.This is because in UTF-8, the
multi- byte sequence \xc3\xb1 is supposed to represent the single
character U+00F1, not the two characters U+00C3 and U+00B1."

My original question was: Shouldn't this be 8 characters - not 9? He
says: \xc3\xb1 is supposed to represent the single character. However
after some interaction with fellow Pythonistas i'm even more
confused.

With reference to the above para: 1. What does he mean by "writing a
raw UTF-8 encoded string"??


As much respect as I have for DB, I think this is an impossible-to-parse, 
confused statement, fueled by the Python 2 confusion between characters 
and bytes. I suggest forgetting it and the discussion that followed. 
Bytes as bytes can carry any digital information, just as modulated sine 
waves can carry any analog information. In both cases, one can regard 
them as either purely what they are or as encoding information in some 
other form.
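
To see the 9-versus-8 point concretely in Python 3, where str literals are
unicode and bytes literals are bytes:

s = 'Jalape\xc3\xb1o'                    # a 9-character str, ending ... 'Ã', '±', 'o'
b = b'Jalape\xc3\xb1o'                   # 9 bytes of UTF-8
print(len(s), len(b.decode('utf-8')))    # 9 8
print(b.decode('utf-8'))                 # Jalapeño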


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Timing of string membership (was Re: hex dump w/ or w/out utf-8 chars)

2013-07-14 Thread Terry Reedy

On 7/14/2013 10:56 AM, Chris Angelico wrote:

On Sun, Jul 14, 2013 at 11:44 PM,   wrote:



timeit.repeat("a = 'hundred'; 'x' in a")

[0.11785943134991479, 0.09850454944486256, 0.09761604599423179]

timeit.repeat("a = 'hundreœ'; 'x' in a")

[0.23955250303158593, 0.2195812612416752, 0.22133896997401692]

sys.version

'3.3.2 (v3.3.2:d047928ae3f6, May 16 2013, 00:03:43) [MSC v.1600 32 bit (Intel)]'


An issue about finding strings in strings was opened last September and, 
as reported on this list, fixes were applied about last March. As I 
remember, some but not all of the optimizations were applied to 3.3. 
Perhaps some were applied too late for 3.3.1 (3.3.2 is 3.3.1 with some 
emergency patches to correct regressions).


Python 3.4.0a2:
>>> import timeit
>>> timeit.repeat("a = 'hundred'; 'x' in a")
[0.17396483610667152, 0.16277956641670813, 0.1627937074749941]
>>> timeit.repeat("a = 'hundreo'; 'x' in a")
[0.18441108179403187, 0.16277311071618783, 0.16270517215355085]

The difference is gone, again, as previously reported.


jmf has raised an interesting point. Some string membership operations
do seem oddly slow.


He raised it a year ago and action was taken.



# Get ourselves a longish ASCII string with no duplicates - escape
apostrophe and backslash for code later on

asciichars=''.join(chr(i) for i in 
range(32,128)).replace("\\",r"\\").replace("'",r"\'")
haystack=[

("ASCII",asciichars+"\u0001"),
("BMP",asciichars+"\u1234"),
("SMP",asciichars+"\U00012345"),
]

needle=[

("ASCII","\u0002"),
("BMP","\u1235"),
("SMP","\U00012346"),
]

useset=[

("",""),
(", as set","; a=set(a)"),
]

for time, desc in sorted(
        (min(timeit.repeat("'%s' in a" % n, ("a='%s'" % h) + s)),
         "%s in %s%s" % (nd, hd, sd))
        for nd, n in needle for hd, h in haystack for sd, s in useset):
    print("%.10f %s" % (time, desc))

0.1765129367 ASCII in ASCII, as set
0.1767096097 BMP in SMP, as set
0.1778647845 ASCII in BMP, as set
0.1785266004 BMP in BMP, as set
0.1789093307 SMP in SMP, as set
0.1790431465 SMP in BMP, as set
0.1796504863 BMP in ASCII, as set
0.1803854959 SMP in ASCII, as set
0.1810674262 ASCII in SMP, as set


Much of this time is overhead; 'pass' would not run too much faster.


0.1817367850 SMP in BMP
0.1884555160 SMP in ASCII
0.2132371572 BMP in ASCII


For these, 3.3 does no searching because it knows from the internal char 
kind that the answer is No without looking.



0.3137454621 ASCII in ASCII
0.4472624314 BMP in BMP
0.6672795006 SMP in SMP
0.7493052888 ASCII in BMP
0.9261783271 ASCII in SMP
0.9865787412 BMP in SMP


...


Set membership is faster than string membership, though marginally on
something this short. If the needle is wider than the haystack, it
obviously can't be present, so a false return comes back at the speed
of a set check.


Jim ignores these cases where 3.3+ uses the information about the max 
codepoint to do the operation much faster than in 3.2.



Otherwise, an actual search must be done. Searching
for characters in strings of the same width gets slower as the strings
get larger in memory (unsurprising). What I'm seeing of the top-end
results, though, is that the search for a narrower string in a wider
one is quite significantly slower.


50% longer is not bad, even


I don't know of an actual proven use-case for this, but it seems
likely to happen (eg you take user input and want to know if there are
any HTML-sensitive characters in it, so you check ('<' in string or
'&' in string), for instance).


In my editing of code, I nearly always search for words or long names.

 The question is, is it worth

constructing an "expanded string" at the haystack's width prior to
doing the search?


I would not make any assumptions about what Python does or does not do 
without checking the code. All I know is that Python uses a modified 
version of one of the pre-process and skip-forward algorithms 
(Boyer-Moore?, Knuth-Pratt?, I forget). These are designed to work 
efficiently with needles longer than 1 char, and indeed may work better 
with longer needles. Searching for an single char in n chars is O(n). 
Searching for a len m needle is potentially O(m*n) and the point of the 
fancy algorithms is make all searches as close to O(n) as possible.


--
Terry Jan Reedy


--
http://mail.python.org/mailman/listinfo/python-list


Re: List comp help

2013-07-14 Thread Terry Reedy

On 7/14/2013 1:10 PM, Joseph L. Casale wrote:

I have a dict of lists. I need to create a list of 2-tuples, where each tuple 
is a key from
the dict with one of the key's list items.

my_dict = {
 'key_a': ['val_a', 'val_b'],
 'key_b': ['val_c'],
 'key_c': []
}
[(k, x) for k, v in my_dict.items() for x in v]


The order of the tuples is not deterministic unless you sort, so if 
everything is hashable, a set may be better.



This works, but I need to test for an empty v like the last key, and create one 
tuple ('key_c', None).
Anyone know the trick to reorganize this to accept the test for an empty v and 
add the else?


When posting code, it is a good idea to includes the expected or desired 
answer in code as well as text.


pairs = {(k, x) for k, v in my_dict.items() for x in v or [None]}
assert pairs == {('key_a', 'val_a'), ('key_a', 'val_b'),
   ('key_b', 'val_c'), ('key_c', None)}


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: a little more explicative error message?

2013-07-16 Thread Terry Reedy

On 7/16/2013 1:44 AM, Vito De Tullio wrote:

Hi

I was writing a decorator and lost half an hour to a stupid bug in my code,
but honestly the error the python interpreter returned to me didn't
help...

$ python3
Python 3.3.0 (default, Feb 24 2013, 09:34:27)
[GCC 4.7.2] on linux
Type "help", "copyright", "credits" or "license" for more information.

from functools import wraps
def dec(fun):

...  @wraps
...  def ret(*args, **kwargs):
...   return fun(*args, **kwargs)


At this point, when dec(fun) is called, the interpreter *successfully* 
executes ret = wraps(ret), which works, but is wrong, because wraps 
should be called with the wrapped function fun as its argument, not the 
wrapper function ret. The interpreter should instead execute

ret = partial(update_wrapper, wrapped = fun, ...)(ret)
which will update ret to look like fun.


...  return ret
...

@dec

... def fun(): pass


At this point, the interpreter *successfully* executes fun = dec(fun), where 
dec(fun) causes ret = wraps(ret) before returning ret, so fun is bound to 
that partial object. Notice that when dec returns, the wraps code is over 
and done with. Instead the interpreter should execute

fun = partial(update_wrapper, wrapped=fun, ...)(ret)


...

fun()

Traceback (most recent call last):
   File "", line 1, in 
TypeError: update_wrapper() missing 1 required positional argument:
'wrapper'


Because fun has not been properly wrapped, since you left out a call.


Soo... at a first glance, no tricks... can you tell where is the error? :D


Not exactly, but it must have something to do with wraps, so the first 
thing I would do is to look at the wraps doc if I had not before. It 
only takes a couple of minutes.


>>> from functools import wraps
>>> help(wraps)
Help on function wraps in module functools:

wraps(wrapped, assigned=('__module__', '__name__', '__qualname__', 
'__doc__', '__annotations__'), updated=('__dict__',))

Decorator factory to apply update_wrapper() to a wrapper function

Returns a decorator that invokes update_wrapper() with the decorated
   function as the wrapper argument and the arguments to wraps() as
   the remaining arguments. ...

This is pretty clear that wraps is not a decorator but a function that 
returns a decorator, which means that it must be called with an argument 
in the '@' line.


To really understand what is intended to happen so I could write the 
commentary above, I looked at the code to see

def wraps(wrapped, ...):
    ''
    return partial(update_wrapper, wrapped=wrapped,
                   assigned=assigned, updated=updated)


As I said, the error is totally mine, I just forgot to pass the function as
parameter to wraps. But... what is "update_wrapper()"? and "wrapper"? There
is no useful traceback or something... just... this.

Ok, the documentation clearly says:

 This is a convenience function to simplify applying partial() to
 update_wrapper().

So, again, shame on me... I just read the doc carefully *after* 20 minutes
of trying everything else...  still... I think it would be useful if wraps()
intercepted this error, saying something more explicit about the missing fun
parameter...


How is a function that has already been called and has returned without 
apparent error supposed to catch a later error caused by its return not 
being correct, due to its input not being correct?


Your request is like asking a function f(x) to catch ZeroDivisionError 
if it returns a 0 and that 0 is later used as a divisor after f returns, 
as in 1/f(x).


When you call wraps with a callable, there is no way for it to know that 
the callable is intended to be the wrapper instead of the wrappee, 
unless it were clairvoyant ;-).
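
For reference, the corrected decorator, with wraps called on the wrapped
function:

from functools import wraps

def dec(fun):
    @wraps(fun)              # the call: wraps(fun) returns the real decorator
    def ret(*args, **kwargs):
        return fun(*args, **kwargs)
    return ret

@dec
def fun():
    pass

fun()                        # works; fun.__name__ is still 'fun'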


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Is this a bug?

2013-07-16 Thread Terry Reedy

On 7/16/2013 2:04 PM, Ian Kelly wrote:


The documentation appears to be wrong.  It says:

"""
If a name binding operation occurs anywhere within a code block, all
uses of the name within the block are treated as references to the
current block. This can lead to errors when a name is used within a
block before it is bound. This rule is subtle. Python lacks
declarations and allows name binding operations to occur anywhere
within a code block. The local variables of a code block can be
determined by scanning the entire text of the block for name binding
operations.
"""

I agree that there is a problem.
http://bugs.python.org/issue18478


But this only applies to function blocks, not the general case.  In
general, I believe it is more accurate to say that a variable is local
to the block if its name is found in the locals() dict.


That is not true for functions, where names are classified as local 
*before* being added to the locals dict. (Actually, names are usually 
not added to the locals dict until locals() is called to update it).


It would be better to say that names are local if found in the local 
namespace, and consider that names are added to a function local 
namespace (which is *not* the local() dict) when classified (before 
being bound), but otherwise only when bound.
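
A minimal illustration of the function-block rule:

x = 10

def f():
    print(x)   # UnboundLocalError: x is classified as local ...
    x = 20     # ... because of this binding later in the block

f()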



  That normally

won't be true until the variable has been bound.  Any references prior
to that will look for a global variable.


At module scope, globals() == locals(). But feel free to suggest a 
different fix for the issue than I did.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Help with pygame

2013-07-16 Thread Terry Reedy

On 7/16/2013 1:29 PM, Daniel Kersgaard wrote:

I'm having a little trouble, tried Googling it, but to no avail.

> Currently, I'm working on making a snake game, however
> I'm stuck on a simple border.

To give a variation of the other answers, it would be easier if you drew 
the four sides more symmetrically, in something like the following order:


top (including both top corners)
bottom (including both bottom corners)
left (omitting both left corners)
right (omitting both right corners)

Including the corners with the sides instead of the top and bottom would 
be okay. So would be including one (different) corner with each line. 
Just pick a scheme that does each one once. Using the above, if 0, 0 and 
X, Y are upper left and bottom right corners,

and we use inclusive ranges:

top: 0, 0 to X, 0  # includes corners
bot: 0, Y to X, Y  # includes corners
lef: 0, 1 to 0, Y-1   # excludes corners
rit: X, 1 to X, Y-1   # excludes corners
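
A sketch of collecting the border cells once each with those ranges
(plain Python; the grid-of-cells interpretation is an assumption, and
drawing each cell is left to the pygame code):

def border_cells(X, Y):
    cells = []
    cells += [(x, 0) for x in range(0, X + 1)]   # top, with both corners
    cells += [(x, Y) for x in range(0, X + 1)]   # bottom, with both corners
    cells += [(0, y) for y in range(1, Y)]       # left, no corners
    cells += [(X, y) for y in range(1, Y)]       # right, no corners
    return cells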

--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: What does it take to implement a chat system in Python (Not asking for code just advice before I start my little project)

2013-07-18 Thread Terry Reedy

On 7/18/2013 3:29 AM, Aseem Bansal wrote:


About reading comp.lang.python can you suggest how to read it and
reply?


To read this list as a newsgroup use news.gmane.org. The difference 
between the mailing list interface and newsgroup interface is that the 
latter automatically segregates messages by group and only downloads the 
messages you want to read. Gmane is also a better way to search the archive.



I have never read a newsgroup leave alone participated in one.
I am used to forums like stackoverflow. Any way to read it and reply
by one interface? If not, give any suggestion. I'll use that.


I use Thunderbird. There is almost no difference between replying to 
emails and replying to newsgroup posts.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Converting a list of lists to a single list

2013-07-23 Thread Terry Reedy

On 7/23/2013 5:52 PM, st...@divillo.com wrote:

I think that itertools may be able to do what I want but I have not
been able to figure out how.


A recursive generator suffices.


I want to convert an arbitrary number of lists with an arbitrary
number of elements in each list into a single list as follows.

Say I have three lists:

[[A0,A1,A2], [B0,B1,B2] [C0,C1,C2]]

I would like to convert those to a single list that looks like this:

[A0,B0,C0,C1,C2,B1,C0,C1,C2,B2,C0,C1,C2,
 A1,B0,C0,C1,C2,B1,C0,C1,C2,B2,C0,C1,C2,
 A2,B0,C0,C1,C2,B1,C0,C1,C2,B2,C0,C1,C2]

def crossflat(lofl):
    if lofl:
        first = lofl.pop(0)
        for o in first:
            yield o
            yield from crossflat(lofl.copy())

A0, A1, A2 = 100, 101, 102
B0, B1, B2 = 10, 11, 12
C0, C1, C2 = 0, 1, 2
LL = [[A0, A1, A2], [B0, B1, B2], [C0, C1, C2]]
cfLL = list(crossflat(LL))
print(cfLL)
assert cfLL == [
   A0, B0, C0, C1, C2, B1, C0, C1, C2, B2, C0, C1, C2,
   A1, B0, C0, C1, C2, B1, C0, C1, C2, B2, C0, C1, C2,
   A2, B0, C0, C1, C2, B1, C0, C1, C2, B2, C0, C1, C2]

passes

--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Converting a list of lists to a single list

2013-07-24 Thread Terry Reedy

On 7/23/2013 7:02 PM, Terry Reedy wrote:

On 7/23/2013 5:52 PM, st...@divillo.com wrote:

I think that itertools may be able to do what I want but I have not
been able to figure out how.


What you want is a flattened product with unchanged components of the 
successive products omitted in the flattening. The omission is the 
difficulty.



A recursive generator suffices.


But see below for how to use itertools.product.


I want to convert an arbitrary number of lists with an arbitrary
number of elements in each list into a single list as follows.


While others answered the Python2-oriented question ("How do I produce a 
list from a list of lists"), I answered the Python-3 oriented question 
of how to produce an iterator from an iterable of iterables. This scales 
better to an input with lots of long sequences. There is usually no need 
to manifest the output as a list, as the typical use of the list will be 
to iterate it.



def crossflat(lofl):
    if lofl:
        first = lofl.pop(0)
        for o in first:
            yield o
            yield from crossflat(lofl.copy())

A0, A1, A2 = 100, 101, 102
B0, B1, B2 = 10, 11, 12
C0, C1, C2 = 0, 1, 2
LL = [[A0, A1, A2], [B0, B1, B2], [C0, C1, C2]]
cfLL = list(crossflat(LL))
print(cfLL)
assert cfLL == [
A0, B0, C0, C1, C2, B1, C0, C1, C2, B2, C0, C1, C2,
A1, B0, C0, C1, C2, B1, C0, C1, C2, B2, C0, C1, C2,
A2, B0, C0, C1, C2, B1, C0, C1, C2, B2, C0, C1, C2]

passes


Here is filtered flattened product version. I think it clumsier than 
directly producing the items wanted, but it is good to know of this 
approach as a backup.


from itertools import product

def flatprod(iofi):  # iterable of iterables
    lofi = list(iofi)
    now = [object()] * len(lofi)
    for new in product(*lofi):
        i = 0
        while now[i] == new[i]:
            i += 1
        yield from new[i:]
        now = new

cfLL = list(flatprod(LL))

Same assert as before passes.

--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Strange behaviour with os.linesep

2013-07-24 Thread Terry Reedy

On 7/23/2013 7:41 PM, Dennis Lee Bieber wrote:

On 23 Jul 2013 15:25:12 GMT, Steven D'Aprano
 declaimed the following:


On Tue, 23 Jul 2013 13:42:13 +0200, Vincent Vande Vyvre wrote:


On Windows, with a script where the line endings are the system line
separator, the files open with doubled lines in Eric4, Notepad++ or Gedit,
but they display correctly in MS Bloc-Notes (Notepad).


I suspect the problem lies with Eric4, Notepad++ and Gedit. Do you
perhaps have to manually tell them that the file uses Windows line
separators?


I suspect the problem lies in the file written. Notepad++ works fine
with \r\n or \n on input and can produce either on output.
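
The usual cause (an assumption here, since the writing script is not
shown) is writing os.linesep to a file opened in text mode, where the
'\n' in it gets translated again, leaving '\r\r\n' on disk:

import os

with open('bad.txt', 'w') as f:       # text mode already translates '\n'
    f.write('line one' + os.linesep)  # becomes '\r\r\n' on Windows

with open('good.txt', 'w') as f:
    f.write('line one\n')             # let the text layer add the '\r'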



Don't know about those, but SciTE I know has both a menu option for
line ending (CR, LF, CRLF), and one for "convert line endings"




--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python 3: dict & dict.keys()

2013-07-24 Thread Terry Reedy

On 7/24/2013 12:34 PM, Chris Angelico wrote:


Side point: Why is iterating over a dict equivalent to .keys() rather
than .items()? It feels odd that, with both options viable, the
implicit version iterates over half the dict instead of all of it.
Obviously it can't be changed now, even if .items() were the better
choice, but I'm curious as to the reason for the decision.


Both were considered and I think there were and are two somewhat-linked
practical reasons. First, iterating over keys is more common than
iterating over items. The more common one should be the default.


Second, people ask much more often if 'key' is in dict than if 'key, 
value' is in dict. This is true as well for keyed reference books such 
as phone books, dictionaries, encyclopedias, and for the same reason. 
This is  coupled with the fact that the default meaning of 'item in 
collection' is that iterating over 'collection' eventually produces 
'item' or a value equal to 'item'.
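
For example:

d = {'a': 1, 'b': 2}
assert list(d) == list(d.keys())   # plain iteration gives keys
assert 'a' in d                    # membership also tests keys
assert ('a', 1) in d.items()       # item tests must be asked for explicitly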


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: RE Module Performance

2013-07-24 Thread Terry Reedy

On 7/24/2013 11:00 AM, Michael Torrie wrote:

On 07/24/2013 08:34 AM, Chris Angelico wrote:

Frankly, Python's strings are a *terrible* internal representation
for an editor widget - not because of PEP 393, but simply because
they are immutable, and every keypress would result in a rebuilding
of the string. On the flip side, I could quite plausibly imagine
using a list of strings;


I used exactly this, a list of strings, for a Python-coded text-only 
mock editor to replace the tk Text widget in idle tests. It works fine 
for the purpose. For small test texts, the inefficiency of immutable 
strings is not relevant.


Tk apparently uses a C-coded btree rather than a Python list. All
details are hidden, unless one finds and reads the source ;-), but
it uses C arrays rather than Python strings.



In this usage, the FSR is beneficial, as it's possible to have
different strings at different widths.


For my purpose, the mock Text works the same in 2.7 and 3.3+.


Maybe, but simply thinking logically, FSR and UCS-4 are equivalent in
pros and cons,


They both have the pro that indexing is direct *and correct*. The cons 
are different.



and the cons of using UCS-2 (the old narrow builds) are
well known.  UCS-2 simply cannot represent all of unicode correctly.


Python's narrow builds, at least for several releases, were in between
UCS-2 and UTF-16 in that they used surrogates to represent all unicodes
but did not correct indexing for the presence of astral chars. This is a
nuisance for those who do use astral chars, such as emotes and CJK name
chars, on an everyday basis.
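
To illustrate the difference (the narrow-build behavior is from memory
of 2.x/3.2 narrow builds):

s = '\U0001F600'   # one astral (non-BMP) character
# 3.3+ (and old wide builds): len(s) == 1 and s[0] == s
# narrow builds: len(s) == 2 and s[0] is a lone surrogate,
# so indexing and slicing near astral chars give wrong answers
print(len(s), s[0] == s)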


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: RE Module Performance

2013-07-24 Thread Terry Reedy

On 7/24/2013 2:15 PM, Chris Angelico wrote:

On Thu, Jul 25, 2013 at 3:52 AM, Terry Reedy  wrote:



For my purpose, the mock Text works the same in 2.7 and 3.3+.


Thanks for that report! And yes, it's going to behave exactly the same
way, because its underlying structure is an ordered list of ordered
lists of Unicode codepoints, ergo 3.3/PEP 393 is merely a question of
performance. But if you put your code onto a narrow build, you'll have
issues as seen below.


I carefully said 'For my purpose', which is to replace the tk Text 
widget. Up to 8.5, Tk's text is something like Python's narrow-build 
unicode.


If one put astral chars into the toy editor, then yes, it would not work
on narrow builds, but it would on 3.3+.


 ...

> If nobody had ever thought of doing a multi-format string

representation, I could well imagine the Python core devs debating
whether the cost of UTF-32 strings is worth the correctness and
consistency improvements... and most likely concluding that narrow
builds get abolished. And if any other language (eg ECMAScript)
decides to move from UTF-16 to UTF-32, I would wholeheartedly support
the move, even if it broke code to do so.


Making a UTF-16 implementation correct requires converting abstract 
'character' array indexes to concrete double byte array indexes. The 
simple O(n) method of scanning the string from the beginning for each 
index operation is too slow. When PEP393 was being discussed, I devised 
a much faster way to do the conversion.


The key idea is to add an auxiliary array of the abstract indexes of the 
astral chars in the abstract array. This is easily created when the 
string is created and can be done afterward with one linear scan (which 
is how I experimented with Python code). The length of that array is the 
number of surrogate pairs in the concrete 16-bit codepoint array. 
Subtracting that number from the length of the concrete array gives the 
length of the abstract array.


Given a target index of a character in the abstract array, use the
auxiliary array to determine k, the number of astral characters that
precede the target character. That can be done with either an O(k) linear
scan or an O(log k) binary search. Add k to the abstract index to get the
corresponding index in the concrete array, since each preceding astral
character occupies one extra 16-bit code unit. When slicing a string
with i0 and i1, slice the auxiliary array with k0 and k1, adjusting
the contained indexes downward to get the corresponding auxiliary array
for the slice.
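
A toy sketch of the index conversion (my own illustration, not the code
used in the experiments mentioned above; a list of characters stands in
for the abstract array):

from bisect import bisect_right

def build_aux(chars):
    "Abstract indexes of the astral (non-BMP) characters."
    return [i for i, c in enumerate(chars) if ord(c) > 0xFFFF]

def to_concrete(i, aux):
    "Convert abstract character index i to a UTF-16 code-unit index."
    k = bisect_right(aux, i - 1)   # astral chars strictly before position i
    return i + k                   # each one adds one extra code unit

chars = ['a', '\U0001F600', 'b', '\U0001F601', 'c']
aux = build_aux(chars)             # [1, 3]
assert to_concrete(0, aux) == 0    # 'a'
assert to_concrete(2, aux) == 3    # 'b' follows one surrogate pair
assert to_concrete(4, aux) == 6    # 'c' follows two surrogate pairs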



To my mind, exposing UTF-16 surrogates to the application is a bug

> to be fixed, not a feature to be maintained.

It is definitely not a feature, but a proper UTF-16 implementation would 
not expose them except to codecs, just as with the PEP 393 
implementation. (In both cases, I am excluding the sys size function as 
'exposing to the application'.)


> But since we can get the best of both worlds with only

a small amount of overhead, I really don't see why anyone should be
objecting.


I presume you are referring to the PEP 393 1-2-4 byte implementation.
Given how well it has been optimized, I think it was the right choice
for Python. But a language that now uses UCS-2 or defective UTF-16 on all
platforms might find the auxiliary array an easier fix.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python 3: dict & dict.keys()

2013-07-24 Thread Terry Reedy

On 7/24/2013 4:34 PM, Prasad, Ramit wrote:


I am still not clear on the advantage of views vs. iterators.


A1: Views are iterables that can be iterated more than once. Therefore,
they can be passed to a function that re-iterates its inputs, or to
multiple functions. They support 'x in view' as efficiently as possible.
Think about how you would write the non-view equivalent of '(0, None) in
somedict.items()'. When set-like, views support some set operations.
For .keys, which are always set-like, these operations are easy to
implement, as dicts are based on a hashed array of keys.


Q2: What is the advantage of views vs. lists?

A2: They do not take up space that is not needed. They can be converted 
to lists, to get all the features of lists, but not vice versa.
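
A short 3.x illustration of both answers:

d = {'a': 0, 'b': 1}
keys = d.keys()
assert ('a', 0) in d.items()       # efficient membership, no list built
assert keys & {'a', 'c'} == {'a'}  # set operations on the key view
d['c'] = 2
assert 'c' in keys                 # the view tracks later changes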



What makes d.viewkeys() better than d.iterkeys()? Why did they decide
not to rename d.iterkeys() to d.keys() and instead use d.viewkeys()?


This is historically the wrong way to phrase the question. The 2.7 
.viewxyz methods were *not* used to make the 3.x .xyz methods. It was 
the other way around. 3.0 came out with view methods replacing both list 
and iter methods just after 2.6, after a couple of years of design, and 
a year and a half before 2.7. The view methods were backported from 3.1 
to 2.7, with 'view' added to the name to avoid name conflicts, to make 
it easier to write code that would either run on both 2.7 and 3.x or be 
converted with 2to3.


A better question is: 'When 3.0 was designed, why were views invented
for the .xyz methods rather than just renaming the .iterxyz methods?'
The advantages given above are the answer. View methods replace both list
and iterator methods, are more flexible than either, and directly or
indirectly have all the advantages of both.


My question is why some people are fussing so much because Python 
developers gave them one thing that is better than either of the two 
things it replaces?


The mis-phrased question above illustrates why people new to Python 
should use the latest 3.x and ignore 2.x unless they must use 2.x 
libraries. 2.7 has all the old stuff, for back compatibility, and as 
much of the new stuff in 3.1 as seemed sensible, for forward 
compatibility. Thus it has lots of confusing duplication, and in this 
case, triplication.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Beginner. 2d rotation gives unexpected results.

2013-07-24 Thread Terry Reedy

On 7/24/2013 5:17 PM, Joshua Landau wrote:


import math as m


GAH!

Why on earth would you do such a thing?


for the same reason people do 'import tkinter as tk': to minimize typing 
and maximize clarity. In this case,

  from math import sin, cos, radians
also works well

--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python 3: dict & dict.keys()

2013-07-25 Thread Terry Reedy

On 7/25/2013 12:21 PM, Ethan Furman wrote:

On 07/25/2013 09:11 AM, Prasad, Ramit wrote:



Hmm, that is a change that makes some sense to me. Does the view
get updated when dictionary changes or is a new view needed? I
assume the latter.


Nope, the former.  That is a big advantage that the views have over
concrete lists: they show the /current/ state, and so are always
up-to-date.


I think 'view' is generally used in CS to mean a live view, as opposed 
to a snapshot. Memoryviews in 3.x are also live views. Dictionary views 
are read-only. I believe memoryviews can be read-write if allowed by the 
object being viewed.
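
For example:

ba = bytearray(b'abc')
mv = memoryview(ba)        # live, and writable because bytearray is mutable
mv[0] = ord('z')
assert ba == b'zbc'
mv2 = memoryview(b'abc')   # read-only, since bytes is immutable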


Python slices are snapshots. It has been proposed that they should be 
views to avoid copying memory, but that has been rejected since views 
necessarily keep the underlying object alive. Instead, applications can 
define the views they need. (They might, for instance, allow multiple 
slices in a view, as tk Text widgets do.)


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Creating a Simple User Interface for a Function

2013-07-25 Thread Terry Reedy

On 7/25/2013 4:58 PM, CTSB01 wrote:


1) I decided to use Python 2.7, and I will be sure to specify this in
all future threads.


Given that you are not using any libraries, let alone one that does not
run on Python 3, I strongly recommend using the latest version (3.3).


2) It is a list of positive integers.  In fact, it is always going to
be a list of positive increasing integers.


Your example below starts with 0, which is not positive.
Perhaps you mean that all integers after a single leading 0 have to be 
positive and increasing.


If you run digits together, then the max int is 9. Do you intend this?


4) Yes, sorry that's what I meant (if I understood correctly).  I was
told elsewhere that I might want to try using tkinter.


If users start the program at a command line, the core of an input 
function would be

  input = (raw)input('Enter digits: ')  # Include "raw" on 2.x
You would need a more elaborate prompt printed first, and input checking 
with the request repeated if the input does not pass the check.
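
A minimal sketch of such a check-and-repeat loop (the names and the
exact validation rule are mine):

def get_digits():
    while True:
        s = input('Enter digits, e.g. 01112345: ')   # raw_input on 2.x
        if s.isdigit():
            return [int(c) for c in s]
        print('Digits only, please try again.')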


It would be pretty simple to do the equivalent with a tkinter dialog box.


I'd like to be
able to send a .exe file that the user can just open up and use
with no further setup.


There are programs that will package your code with an interpreter. But 
do give people the option to get just the program without installing a 
duplicate interpreter.



So on top of the user interface, it looks like I would also need to
determine how to make Python change a string 01112345 into a list so
that it does that automatically when the user clicks 'run'.


>>> list('01112345')
['0', '1', '1', '1', '2', '3', '4', '5']
>>> '0,1,1,1,2,3,4,5'.split(',')
['0', '1', '1', '1', '2', '3', '4', '5']


Would a shebang still be the right way to go?


On Linux, definitely, whether you have the user enter input on the command
line or in response to a prompt. On Windows, it only helps with 3.3+.

--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Creating a Simple User Interface for a Function

2013-07-25 Thread Terry Reedy

Some additional comments.

On 7/25/2013 7:00 PM, Terry Reedy wrote:

On 7/25/2013 4:58 PM, CTSB01 wrote:


1) I decided to use Python 2.7, and I will be sure to specify this in
all future threads.


Given that you are not using any libraries, let alone one that does not
run on Python 3, I strongly recommend using the latest version (3.3).


It would be pretty easy to make your simple code run on both 3.x and 
2.6/7. Start your file (after any docstring or initial comment) with

from __future__ import division, print_function

Use "except XyxError as e:" instead of "except XyzError, e:".


If users start the program at a command line, the core of an input
function would be
   numbers = input('Enter digits: ')  # see below
You would need a more elaborate prompt printed first, and input checking
with the request repeated if the input does not pass the check.


# To run on both 2.x and 3.x, put this after the __future__ import:
try:
    input = raw_input
except NameError:
    pass


I'd like to be
able to send a .exe file that the user can just open up and use
with no further setup.


There are programs that will package your code with an interpreter.


A Python pre-built binary is overkill for such a small function. The 
reason for doing so, packaging all dependencies together, does not 
apply. Any binary is limited to what machines it will run on.



do give people the option to get just the program without installing a
duplicate interpreter.


A Python file, especially if designed to run on 2.6, will run on most 
any recent installation.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list

