Re: Why not allow empty code blocks?

2016-07-26 Thread Gregory Ewing

BartC wrote:
(Yes everyone uses T*a (pointer to T) instead of T(*a)[] (pointer to 
array of T), because, thanks to how C mixes up dereferencing and indexing, 
the former can be accessed as a[i] instead of (*a)[i].


But it's wrong, and leads to errors that the language can't detect. Such 
as when a points to a single element not a block:


This is an implementation issue, not a language issue.
A sufficiently pedantic implementation could and would
detect this kind of error at run time. Most implementations
of C are not that pedantic, but you can't blame the
language for that.

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Why not allow empty code blocks?

2016-07-26 Thread Gregory Ewing

BartC wrote:
But otherwise free-flowing English text is not a good comparison with a 
programming language syntax.


I think Python is more like poetry.

--
roses are red
violets are blue
is this the end
of the poem
no-one can tell
because there is no
end marker
thus spake the bdfl
--
https://mail.python.org/mailman/listinfo/python-list


Re: Why not allow empty code blocks?

2016-07-26 Thread Marko Rauhamaa
Gregory Ewing :

> BartC wrote:
>> (Yes everyone uses T*a (pointer to T) instead of T(*a)[] (pointer to
>> array of T), because, thanks to how C mixes up dereferencing and
>> indexing, the former can be accessed as a[i] instead of (*a)[i].
>>
>> But it's wrong, and leads to errors that the language can't detect.
>> Such as when a points to a single element not a block:
>
> This is an implementation issue, not a language issue. A sufficiently
> pedantic implementation could and would detect this kind of error at
> run time. Most implementations of C are not that pedantic, but you
> can't blame the language for that.

Well, one of the novelties in C was the intentional blurring of the
lines between arrays and sequences of elements in memory. The
notation:

   int a[3];

declares a as an array. However, the expression:

   a

does not produce an array; instead, it produces a pointer to the first
element of the array. Even:

   *&a

produces a pointer to the array's first element.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Why not allow empty code blocks?

2016-07-26 Thread Gregory Ewing

Marko Rauhamaa wrote:

   int a[3];

   a

produces a pointer to the array's first element.


Yes, and that makes using true pointer-to-array types
in C really awkward. You're fighting against the way
the language was designed to be used.

If you use a language in an un-idiomatic way, you can't
really complain when you have difficulties as a result.

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Caller's module name, function/method name and line number for output to logging

2016-07-26 Thread Malcolm Greene
Hi Peter,

> Fine! Then you can avoid the evil hack I came up with many moons ago:
> https://mail.python.org/pipermail/python-list/2010-March/570941.html

Evil? Damn evil! Love it!

Thank you,
Malcolm
-- 
https://mail.python.org/mailman/listinfo/python-list


Possible to capture cgitb style output in a try/except section?

2016-07-26 Thread Malcolm Greene
Is there a way to capture cgitb's extensive output in an except clause
so that cgitb's detailed traceback output can be logged *and* the except
section can handle the exception so the script can continue running?
 
My read of the cgitb documentation leads me to believe that the only way
I can get cgitb output is to let an exception propagate to the point of
terminating my script ... at which point cgitb grabs the exception and
does its magic.
 
Thank you,
Malcolm
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Why not allow empty code blocks?

2016-07-26 Thread BartC

On 26/07/2016 08:21, Gregory Ewing wrote:

BartC wrote:

(Yes everyone uses T*a (pointer to T) instead of T(*a)[] (pointer to
array of T), because, thanks to how C mixes up dereferencing and
indexing, the former can be accessed as a[i] instead of (*a)[i].

But it's wrong, and leads to errors that the language can't detect.
Such as when a points to a single element not a block:


This is an implementation issue, not a language issue.
A sufficiently pedantic implementation could and would
detect this kind of error at run time. Most implementations
of C are not that pedantic, but you can't blame the
language for that.


(No, it's a language issue. Yes you might be able to have a sufficiently 
complex (and slow) implementation that at best could detect errors at 
runtime, but that's not much help.


It boils down to this: if you have a pointer type T*, does it point to a 
standalone T object, or to an array or block of T objects?


The language allows a pointer P of type T* pointing to a single T object 
to be accessed as P[i]. Apparently this is not seen as a problem...


(More observations about C here:

https://github.com/bartg/langs/blob/master/C%20Problems.md))

--
Bartc

--
https://mail.python.org/mailman/listinfo/python-list


NumPy frombuffer giving nonsense values when reading C float array on Windows

2016-07-26 Thread urschrei
I'm using ctypes to interface with a binary which returns a void pointer 
(ctypes c_void_p) to a nested 64-bit float array:
[[1.0, 2.0], [3.0, 4.0], … ]
then return the pointer so it can be freed

I'm using the following code to de-reference it:

# a 10-element array
shape = (10, 2)
array_size = np.prod(shape)
mem_size = 8 * array_size
array_str = ctypes.string_at(ptr, mem_size)
# convert to NumPy array,and copy to a list
ls = np.frombuffer(array_str, dtype="float64", 
count=array_size).reshape(shape).tolist()
# return pointer so it can be freed
drop_array(ptr)
return ls

This works correctly and consistently on Linux and OSX using NumPy 1.11.0, but 
fails on Windows 32 bit and 64-bit about 50% of the time, returning nonsense 
values. Am I doing something wrong? Is there a better way to do this?

--
s

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Possible to capture cgitb style output in a try/except section?

2016-07-26 Thread Steven D'Aprano
On Tue, 26 Jul 2016 08:11 pm, Malcolm Greene wrote:

> Is there a way to capture cgitb's extensive output in an except clause
> so that cgitb's detailed traceback output can be logged *and* the except
> section can handle the exception so the script can continue running?

Anything that cgitb captures in a traceback can be captured at any time and
processed anyway you like. There's no real magic happening in cgitb, you
can read the source and program your own traceback handler that records any
details you like.
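
For instance, a bare-bones handler using the std lib's traceback module might
look like this (just a sketch -- it won't include the local-variable detail
that cgitb adds):

import logging
import traceback

def log_and_continue():
    # format the active exception the same way the interpreter would
    logging.error("Unhandled error:\n%s", traceback.format_exc())

try:
    1/0
except ZeroDivisionError:
    log_and_continue()  # the script keeps running after logging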

Or you can use the cgitb module as-is, and capture the output:


py> from StringIO import StringIO
py> s = StringIO()
py> import cgitb
py> handler = cgitb.Hook(file=s, format="text").handle
py> try:
... x = 1/0
... except Exception as e:
... handler()
...
py> text = s.getvalue()
py> print text

Python 2.7.2: /usr/local/bin/python2.7
Tue Jul 26 23:37:06 2016

A problem occurred in a Python script.  Here is the sequence of
function calls leading up to the error, in the order they occurred.

[...]

The above is a description of an error in a Python program.  Here is
the original traceback:

Traceback (most recent call last):
  File "", line 2, in 
ZeroDivisionError: division by zero




-- 
Steven
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.

-- 
https://mail.python.org/mailman/listinfo/python-list


Python environment on mac

2016-07-26 Thread Crane Ugly
Mac OS X comes with its own version of Python and a structure to support it.
So far it was good enough for me. Then I started to use modules that are
distributed through MacPorts, and this is where I get lost.
I do not quite understand how the Python environment is set, or how to set it
in a way that uses, say, the MacPorts distribution alone.
For example: standard location for pip utility is /usr/local/bin/pip. MacPorts 
structure has it too but as a link
lrwxr-xr-x 1 root admin 67 May 23 22:32 /opt/local/bin/pip-2.7 -> 
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/bin/pip
Which means that the standard utility will be used.
The thing is that depending on the way I run pip I get different results:
$ pip list|grep pep8
pep8 (1.7.0)
$ sudo pip list|grep pep8
$
pep8 was installed through macports.
In the second case pip is using a stripped environment and pointing to the
standard Mac OS Python repository.
But to install anything with pip I have to use sudo.
In my profile I have variable PYTHONPATH:
PYTHONPATH=/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages
It is pointing to the MacPorts structure. But when I use sudo (in the case of
using pip) it gets stripped.
How do I set up and maintain a Python environment in a trustworthy way, so that
it is clear where all installed modules are?

Leonid
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Possible to capture cgitb style output in a try/except section?

2016-07-26 Thread Peter Otten
Malcolm Greene wrote:

> Is there a way to capture cgitb's extensive output in an except clause
> so that cgitb's detailed traceback output can be logged *and* the except
> section can handle the exception so the script can continue running?
>  
> My read of the cgitb documentation leads me to believe that the only way
> I can get cgitb output is to let an exception propagate to the point of
> terminating my script ... at which point cgitb grabs the exception and
> does its magic.

I see Steven has already answered while I was composing an example script. 
Rather than throwing it away I'll give it below:

#!/usr/bin/env python3
import cgitb
cgitb.enable()

print("Content-type: text/html\r\n\r\n")
print("")
print("hello world")
for expr in [
"1 + 1",
"1 / 0", # handled in the except clause
"2 * 2",
"1 x 2", # handled by sys.excepthook set via cgitb.enable()
"3 * 3"  # not reached
]:
try:
print(expr, "=", eval(expr))
except ZeroDivisionError:
cgitb.Hook().handle()
print("")

print("")


-- 
https://mail.python.org/mailman/listinfo/python-list


reshape with xyz ordering

2016-07-26 Thread Heli
Hi, 

I sort a file with 4 columns (x,y,z, somevalue) and I sort it using 
numpy.lexsort.

ind=np.lexsort((val,z,y,x))

myval=val[ind]

myval is a 1d numpy array sorted by x,then y, then z and finally val.

how can I reshape correctly myval so that I get a 3d numpy array maintaining 
the xyz ordering of the data?


my val looks like the following:

x,y,z, val
0,0,0,val1
0,0,1,val2
0,0,2,val3
...

Thanks a lot for your help, 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: NumPy frombuffer giving nonsense values when reading C float array on Windows

2016-07-26 Thread Peter Otten
ursch...@gmail.com wrote:

> I'm using ctypes to interface with a binary which returns a void pointer
> (ctypes c_void_p) to a nested 64-bit float array:
> [[1.0, 2.0], [3.0, 4.0], … ]
> then return the pointer so it can be freed
> 
> I'm using the following code to de-reference it:
> 
> # a 10-element array
> shape = (10, 2)
> array_size = np.prod(shape)
> mem_size = 8 * array_size
> array_str = ctypes.string_at(ptr, mem_size)
> # convert to NumPy array,and copy to a list
> ls = np.frombuffer(array_str, dtype="float64",
> count=array_size).reshape(shape).tolist()
> # return pointer so it can be freed
> drop_array(ptr)
> return ls
> 
> This works correctly and consistently on Linux and OSX using NumPy 1.11.0,
> but fails on Windows 32 bit and 64-bit about 50% of the time, returning
> nonsense values. Am I doing something wrong? Is there a better way to do
> this?

I'd verify that the underlying memory has not been freed by the "binary" 
when you are doing the ctypes/numpy processing. You might get the correct 
values only when you are "lucky" and the memory has not yet been reused for 
something else, and you are "lucky" on Linux/OSX more often than on 
Windows...

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Why not allow empty code blocks?

2016-07-26 Thread Antoon Pardon
On 24-07-16 at 21:00, Chris Angelico wrote:
> A skilled craftsman in any field will choose to use quality tools.
> They save time, and time is money.

Sure, but sometimes there is a flaw somewhere, a flaw whose
consequences can be reduced by using an extra tool. If that
is the case, the real solution seems to be to get rid of the flaw
rather than to rely on the tool.

So if someone argues there is a flaw somewhere, pointing to
tools doesn't contradict that.

-- 
Antoon.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: reshape with xyz ordering

2016-07-26 Thread Peter Otten
Heli wrote:

> I sort a file with 4 columns (x,y,z, somevalue) and I sort it using
> numpy.lexsort.
> 
> ind=np.lexsort((val,z,y,x))
> 
> myval=val[ind]
> 
> myval is a 1d numpy array sorted by x,then y, then z and finally val.
> 
> how can I reshape correctly myval so that I get a 3d numpy array
> maintaining the xyz ordering of the data?
> 
> 
> my val looks like the following:
> 
> x,y,z, val
> 0,0,0,val1
> 0,0,1,val2
> 0,0,2,val3
> ...
> 
> Thanks a lot for your help,

I'm not sure I understand the question. Does

shape = [max(t) + 1 for t in [x, y, z]]
cube = myval.reshape(shape)

give what you want?

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: NumPy frombuffer giving nonsense values when reading C float array on Windows

2016-07-26 Thread sth
On Tuesday, 26 July 2016 15:21:14 UTC+1, Peter Otten  wrote:
> 
> > I'm using ctypes to interface with a binary which returns a void pointer
> > (ctypes c_void_p) to a nested 64-bit float array:
> > [[1.0, 2.0], [3.0, 4.0], … ]
> > then return the pointer so it can be freed
> > 
> > I'm using the following code to de-reference it:
> > 
> > # a 10-element array
> > shape = (10, 2)
> > array_size = np.prod(shape)
> > mem_size = 8 * array_size
> > array_str = ctypes.string_at(ptr, mem_size)
> > # convert to NumPy array,and copy to a list
> > ls = np.frombuffer(array_str, dtype="float64",
> > count=array_size).reshape(shape).tolist()
> > # return pointer so it can be freed
> > drop_array(ptr)
> > return ls
> > 
> > This works correctly and consistently on Linux and OSX using NumPy 1.11.0,
> > but fails on Windows 32 bit and 64-bit about 50% of the time, returning
> > nonsense values. Am I doing something wrong? Is there a better way to do
> > this?
> 
> I'd verify that the underlying memory has not been freed by the "binary" 
> when you are doing the ctypes/numpy processing. You might get the correct 
> values only when you are "lucky" and the memory has not yet been reused for 
> something else, and you are "lucky" on Linux/OSX more often than on 
> Windows...

I'm pretty sure the binary isn't freeing the memory prematurely; I wrote it and 
I'm testing it, and the Python tests run 10^6 loops of:
array retrieval -> numpy array allocation + copying to list -> passing original 
array back to be freed.

I'm not completely ruling it out (it's difficult to test a .dylib / .so using 
valgrind), but getting the ctypes / numpy side right would at least allow me to 
eliminate one potential source of problems.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python environment on mac

2016-07-26 Thread CFK
There are two variables you will need to set: PATH and PYTHONPATH. You set
your PYTHONPATH correctly, but for executables like pip, you need to set
the PATH as well. You MUST do that for each account! The reason it didn't
work as root is that once you su to root, it replaces your PYTHONPATH
and PATH (and all other environment variables) with root's. sudo shouldn't
have that problem.
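
A quick sanity check for any of these invocations (plain, sudo, system, or
MacPorts) is to ask the interpreter itself what it is using -- a tiny sketch:

import sys

print(sys.executable)   # which interpreter binary actually ran
for entry in sys.path:
    print(entry)        # where that interpreter looks for modules

Run it with each python/pip combination you care about and compare the output.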

BE VERY CAREFUL CHANGING THESE VARIABLES FOR ROOT! I managed to wedge a
system until I reverted my environment.

Thanks,
Cem Karan

On Jul 26, 2016 9:58 AM, "Crane Ugly"  wrote:

> Mac OS X comes with its own version of python and structure to support it.
> So far it was good enough for me. Then I started to use modules that
> distributed through MacPorts and this is where I get lost.
> I do not quite understand how Python environment is set. Or how to set it
> in a way of using, say MacPorts distribution alone.
> For example: standard location for pip utility is /usr/local/bin/pip.
> MacPorts structure has it too but as a link
> lrwxr-xr-x 1 root admin 67 May 23 22:32 /opt/local/bin/pip-2.7 ->
> /opt/local/Library/Frameworks/Python.framework/Versions/2.7/bin/pip
> Which means that the standard utility will be used.
> The things is that depending on a way I run pip I get different results:
> $ pip list|grep pep8
> pep8 (1.7.0)
> $ sudo pip list|grep pep8
> $
> pep8 was installed through macports.
> In second case pip is using stripped environment and pointing to standard
> Mac OS Python repository.
> But in a way to install anything with pip I have to use sudo.
> In my profile I have variable PYTHONPATH:
>
> PYTHONPATH=/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages
> It is pointing to macports structure. But when I use sudo (in case of
> using pip) it get stripped.
> How to setup and maintain python environment in a trustful way? So it is
> clear where all installed modules are?
>
> Leonid
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: NumPy frombuffer giving nonsense values when reading C float array on Windows

2016-07-26 Thread Christian Gollwitzer

On 26.07.16 at 17:09, sth wrote:

it's difficult to test a .dylib / .so using valgrind


Why is it difficult? If you have a python script such that

python mytests.py

loads the .so and runs the tests, then

valgrind --tool=memcheck python mytests.py

should work. This should immediately spit out an error in case that 
numpy accesses deleted memory.


Of course debug information should be enabled when compiling the .so for 
more useful output, and it could be helpful to have it for numpy/python, 
too, but that's not a requirement.


Christian
--
https://mail.python.org/mailman/listinfo/python-list


Re: Possible to capture cgitb style output in a try/except section?

2016-07-26 Thread Malcolm Greene
Hi Steven and Peter,

Steven: Interestingly (oddly???) enough, the output captured by hooking
the cgitb handler on my system appears to be shorter than the default
cgitb output. You can see this yourself via this tiny script:

import cgitb
cgitb.enable(format='text')
x = 1/0

The solution I came up with (Python 3.5.1) was to use your StringIO
technique with the Hook's 'file=' parameter.

import cgitb
import io

cgitb_buffer = io.StringIO()
cgitb.Hook(file=cgitb_buffer, format='text').handle()
return cgitb_buffer.getvalue()

Peter:  Your output was helpful in seeing the difference I mentioned
above.

Thank you both for your help!

Malcolm
-- 
https://mail.python.org/mailman/listinfo/python-list


Python 3.5 glob.glob() 2nd param (*) and how to detect files/folders beginning with "."?

2016-07-26 Thread Malcolm Greene
In reading Python 3.5.1's glob.glob documentation[1] I'm puzzled by the
following:
 
1. The signature for glob.glob() is "glob.glob(pathname, *,
   recursive=False)". What is the meaning of the 2nd parameter listed
   with an asterisk?
 
2. Is there a technique for using glob.glob() to recognize files and
   folders that begin with a period, eg. ".profile"? The documentation
   states: "If the directory contains files starting with . they won’t
   be matched by default.". Any suggestions on what the non-default
   approach is to match these type of files?
 
Thank you,
Malcolm
 
[1] https://docs.python.org/3/library/glob.html
 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.5 glob.glob() 2nd param (*) and how to detect files/folders beginning with "."?

2016-07-26 Thread Jussi Piitulainen
Malcolm Greene  writes:

> In reading Python 3.5.1's glob.glob documentation[1] I'm puzzled by the
> following:
>  
> 1. The signature for glob.glob() is "glob.glob(pathname, *,
>recursive=False)". What is the meaning of the 2nd parameter listed
>with an asterisk?

It's not a parameter. It's special syntax to indicate that the remaining
parameters are keyword-only.
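
For example, a toy function with the same kind of signature (not glob's real
code) behaves like this:

def find(pathname, *, recursive=False):
    # the bare * makes every following parameter keyword-only
    return pathname, recursive

find("*.py", recursive=True)   # fine
find("*.py", True)             # TypeError: find() takes 1 positional argument but 2 were given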

> 2. Is there a technique for using glob.glob() to recognize files and
>folders that begin with a period, eg. ".profile"? The documentation
>states: "If the directory contains files starting with . they won’t
>be matched by default.". Any suggestions on what the non-default
>approach is to match these type of files?

Glob with a pattern that starts with a dot. Glob twice if you want both
kinds. Or look into that fnmatch that is referenced from glob
documentation and said not to consider leading dots special.
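
Something along these lines (an untested sketch) picks up both kinds:

import glob

visible = glob.glob("*")    # names starting with "." are skipped by default
hidden = glob.glob(".*")    # explicitly ask for the dot-files
everything = sorted(hidden + visible)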

> [1] https://docs.python.org/3/library/glob.html

.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.5 glob.glob() 2nd param (*) and how to detect files/folders beginning with "."?

2016-07-26 Thread Peter Otten
Malcolm Greene wrote:

> 2. Is there a technique for using glob.glob() to recognize files and
>folders that begin with a period, eg. ".profile"? The documentation
>states: "If the directory contains files starting with . they won’t
>be matched by default.". Any suggestions on what the non-default
>approach is to match these type of files?

I don't think there is a clean way. You can monkey-patch if you don't need 
the default behaviour elsewhere in your application:


$ touch foo .bar baz
$ python3
Python 3.4.3 (default, Oct 14 2015, 20:28:29) 
[GCC 4.8.4] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import glob
>>> glob.glob("*")
['foo', 'baz']
>>> glob._ishidden = lambda path: False
>>> glob.glob("*")
['.bar', 'foo', 'baz']


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: NumPy frombuffer giving nonsense values when reading C float array on Windows

2016-07-26 Thread sth
On Tuesday, 26 July 2016 16:36:33 UTC+1, Christian Gollwitzer  wrote:
> On 26.07.16 at 17:09, sth wrote:
> > it's difficult to test a .dylib / .so using valgrind
> 
> Why is it difficult? If you have a python script such that
> 
>   python mytests.py
> 
> loads the .so and runs the tests, then
> 
>   valgrind --tool=memcheck python mytests.py
> 
> should work. This should immediately spit out an error in case that 
> numpy accesses deleted memory.
> 
> Of course debug information should be enabled when compiling the .so for 
> more useful output, and it could be helpful to have it for numpy/python, 
> too, but that's not a requirement.
> 
>   Christian

Valgrind isn't giving me any errors; it's reporting possibly-lost memory, but 
this is constant -- if I call the foreign functions once or 10^6 times, the 
number of bytes it reports possibly lost (and the number of bytes it reports 
still reachable) remain the same.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.5 glob.glob() 2nd param (*) and how to detect files/folders beginning with "."?

2016-07-26 Thread Malcolm Greene
Hi Jussi,

You answered my questions - thank you!
Malcolm


> 1. The signature for glob.glob() is "glob.glob(pathname, *,
>recursive=False)". What is the meaning of the 2nd parameter listed
>with an asterisk?

It's not a parameter. It's special syntax to indicate that the remaining
parameters are keyword-only.

>  2. Is there a technique for using glob.glob() to recognize files and
>folders that begin with a period, eg. ".profile"? The documentation
>states: "If the directory contains files starting with . they won’t
>be matched by default.". Any suggestions on what the non-default
>approach is to match these type of files?

Glob with a pattern that starts with a dot. Glob twice if you want both
kinds. Or look into that fnmatch that is referenced from glob
documentation and said not to consider leading dots special.

-- 
https://mail.python.org/mailman/listinfo/python-list


making executables smaller

2016-07-26 Thread Carter Temm
Hi,
I’m writing a couple different projects at the moment, and when I compile it 
into a single executable using pyinstaller, it becomes extremely large. I’m 
guessing this is because of the modules used. Because I’m not that skilled at 
python, I put stuff like for example, import sys. I imagine the final project 
could be made smaller by specifying from something import something_else. but 
the thing is, I don’t know what smaller I could import with these set of 
modules. Is there a program that could tell me this. Sorry if this question is 
really basic, but it’d be helpful.
-- 
https://mail.python.org/mailman/listinfo/python-list


FW: error in python IDLE

2016-07-26 Thread Nakirekanti Jahnavi


Sent from Mail for Windows 10

From: Nakirekanti Jahnavi
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: NumPy frombuffer giving nonsense values when reading C float array on Windows

2016-07-26 Thread eryk sun
 On Tue, Jul 26, 2016 at 12:06 PM,   wrote:
> I'm using ctypes to interface with a binary which returns a void pointer 
> (ctypes c_void_p) to a nested 64-bit float array:

If this comes from a function result, are you certain that its restype
is ctypes.c_void_p? I commonly see typos here such as setting
"restypes" instead of "restype".

> [[1.0, 2.0], [3.0, 4.0], … ]
> then return the pointer so it can be freed
>
> I'm using the following code to de-reference it:
>
> # a 10-element array
> shape = (10, 2)
> array_size = np.prod(shape)
> mem_size = 8 * array_size
> array_str = ctypes.string_at(ptr, mem_size)
> # convert to NumPy array,and copy to a list
> ls = np.frombuffer(array_str, dtype="float64", 
> count=array_size).reshape(shape).tolist()
> # return pointer so it can be freed
> drop_array(ptr)
> return ls
>
> This works correctly and consistently on Linux and OSX using NumPy 1.11.0, 
> but fails on
> Windows 32 bit and 64-bit about 50% of the time, returning nonsense values. 
> Am I doing
> something wrong? Is there a better way to do this?

numpy.ctypeslib facilitates working with ctypes functions, pointers
and arrays via the factory functions as_array, as_ctypes, and
ndpointer.

ndpointer creates a c_void_p subclass that overrides the default
from_param method to allow passing arrays as arguments to ctypes
functions and also implements the _check_retval_ hook to automatically
convert a pointer result to a numpy array.

The from_param method validates an array argument to ensure it has the
proper data type, shape, and memory layout. For example:

g = ctypes.CDLL(None) # Unix only
Base = np.ctypeslib.ndpointer(dtype='B', shape=(4,))

# strchr example
g.strchr.argtypes = (Base, ctypes.c_char)
g.strchr.restype = ctypes.c_char_p

d = np.array(list(b'012\0'), dtype='B')
e = np.array(list(b'0123\0'), dtype='B') # wrong shape

>>> g.strchr(d, b'0'[0])
b'012'
>>> g.strchr(e, b'0'[0])
Traceback (most recent call last):
  File "", line 1, in 
ctypes.ArgumentError: argument 1: :
array must have shape (4,)

The _check_retval_ hook of an ndpointer calls numpy.array on the
result of a function. Its __array_interface__ property is used to
create a copy with the defined data type and shape. For example:

g.strchr.restype = Base

>>> d.ctypes._as_parameter_ # source address
c_void_p(24657952)
>>> a = g.strchr(d, b'0'[0])
>>> a
array([48, 49, 50,  0], dtype=uint8)
>>> a.ctypes._as_parameter_ # it's a copy
c_void_p(19303504)

As a copy, the array owns its data:

>>> a.flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : True
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

You can subclass the ndpointer type to have _check_retval_ instead
return a view of the result (i.e. copy=False), which may be desirable
for a large result array but probably isn't worth it for small arrays.
For example:

class Result(Base):
@classmethod
def _check_retval_(cls, result):
return np.array(result, copy=False)

g.strchr.restype = Result

>>> a = g.strchr(d, b'0'[0])
>>> a.ctypes._as_parameter_ # it's NOT a copy
c_void_p(24657952)

Because it's not a copy, the array view doesn't own the data, but note
that it's not a read-only view:

>>> a.flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : True
  OWNDATA : False
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: NumPy frombuffer giving nonsense values when reading C float array on Windows

2016-07-26 Thread sth
On Tuesday, 26 July 2016 19:10:46 UTC+1, eryk sun  wrote:
> On Tue, Jul 26, 2016 at 12:06 PM,  sth wrote:
> > I'm using ctypes to interface with a binary which returns a void pointer 
> > (ctypes c_void_p) to a nested 64-bit float array:
> 
> If this comes from a function result, are you certain that its restype
> is ctypes.c_void_p? I commonly see typos here such as setting
> "restypes" instead of "restype".
> 
> > [[1.0, 2.0], [3.0, 4.0], … ]
> > then return the pointer so it can be freed
> >
> > I'm using the following code to de-reference it:
> >
> > # a 10-element array
> > shape = (10, 2)
> > array_size = np.prod(shape)
> > mem_size = 8 * array_size
> > array_str = ctypes.string_at(ptr, mem_size)
> > # convert to NumPy array,and copy to a list
> > ls = np.frombuffer(array_str, dtype="float64", 
> > count=array_size).reshape(shape).tolist()
> > # return pointer so it can be freed
> > drop_array(ptr)
> > return ls
> >
> > This works correctly and consistently on Linux and OSX using NumPy 1.11.0, 
> > but fails on
> > Windows 32 bit and 64-bit about 50% of the time, returning nonsense values. 
> > Am I doing
> > something wrong? Is there a better way to do this?
> 
> numpy.ctypeslib facilitates working with ctypes functions, pointers
> and arrays via the factory functions as_array, as_ctypes, and
> ndpointer.
> 
> ndpointer creates a c_void_p subclass that overrides the default
> from_param method to allow passing arrays as arguments to ctypes
> functions and also implements the _check_retval_ hook to automatically
> convert a pointer result to a numpy array.
> 
> The from_param method validates an array argument to ensure it has the
> proper data type, shape, and memory layout. For example:
> 
> g = ctypes.CDLL(None) # Unix only
> Base = np.ctypeslib.ndpointer(dtype='B', shape=(4,))
> 
> # strchr example
> g.strchr.argtypes = (Base, ctypes.c_char)
> g.strchr.restype = ctypes.c_char_p
> 
> d = np.array(list(b'012\0'), dtype='B')
> e = np.array(list(b'0123\0'), dtype='B') # wrong shape
> 
> >>> g.strchr(d, b'0'[0])
> b'012'
> >>> g.strchr(e, b'0'[0])
> Traceback (most recent call last):
>   File "", line 1, in 
> ctypes.ArgumentError: argument 1: :
> array must have shape (4,)
> 
> The _check_retval_ hook of an ndpointer calls numpy.array on the
> result of a function. Its __array_interface__ property is used to
> create a copy with the defined data type and shape. For example:
> 
> g.strchr.restype = Base
> 
> >>> d.ctypes._as_parameter_ # source address
> c_void_p(24657952)
> >>> a = g.strchr(d, b'0'[0])
> >>> a
> array([48, 49, 50,  0], dtype=uint8)
> >>> a.ctypes._as_parameter_ # it's a copy
> c_void_p(19303504)
> 
> As a copy, the array owns its data:
> 
> >>> a.flags
>   C_CONTIGUOUS : True
>   F_CONTIGUOUS : True
>   OWNDATA : True
>   WRITEABLE : True
>   ALIGNED : True
>   UPDATEIFCOPY : False
> 
> You can subclass the ndpointer type to have _check_retval_ instead
> return a view of the result (i.e. copy=False), which may be desirable
> for a large result array but probably isn't worth it for small arrays.
> For example:
> 
> class Result(Base):
> @classmethod
> def _check_retval_(cls, result):
> return np.array(result, copy=False)
> 
> g.strchr.restype = Result
> 
> >>> a = g.strchr(d, b'0'[0])
> >>> a.ctypes._as_parameter_ # it's NOT a copy
> c_void_p(24657952)
> 
> Because it's not a copy, the array view doesn't own the data, but note
> that it's not a read-only view:
> 
> >>> a.flags
>   C_CONTIGUOUS : True
>   F_CONTIGUOUS : True
>   OWNDATA : False
>   WRITEABLE : True
>   ALIGNED : True
>   UPDATEIFCOPY : False

The restype is a ctypes Structure instance with a single __fields__ entry 
(coords), which is a Structure with two fields (len and data) which are the FFI 
array's length and the void pointer to its memory:
https://github.com/urschrei/pypolyline/blob/master/pypolyline/util.py#L109-L117

I'm only half-following your explanation of how ctypeslib works, but it seems 
clear that I'm doing something wrong.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: making executables smaller

2016-07-26 Thread Wildman via Python-list
On Tue, 26 Jul 2016 12:22:16 -0500, Carter Temm wrote:

> Hi,
> I’m writing a couple different projects at the moment, and when I
> compile it into a single executable using pyinstaller, it becomes
> extremely large. I’m guessing this is because of the modules used.
> Because I’m not that skilled at python, I put stuff like for example,
> import sys. I imagine the final project could be made smaller by
> specifying from something import something_else. but the thing is,
> I don’t know what smaller I could import with these set of modules.
> Is there a program that could tell me this. Sorry if this question
> is really basic, but it’d be helpful.

Try importing only the modules you actually use.
For example, instead of:

import os

Try using:

import os.path
import os.environ
import os.whatever

-- 
 GNU/Linux user #557453
The cow died so I don't need your bull!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: FW: error in python IDLE

2016-07-26 Thread Terry Reedy

On 7/26/2016 1:51 PM, Nakirekanti Jahnavi wrote:

Sent from Mail for Windows 10

From: Nakirekanti Jahnavi


The above is all I see.
This is a text-only, no-attachment list.

--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


ANN: PyQt v5.7 Released

2016-07-26 Thread Phil Thompson
PyQt v5.7 has been released. These are the Python bindings for the Qt 
application toolkit and run on Linux, OS X, Windows, iOS and Android.

Also released for the first time under the GPL are PyQtChart, 
PyQtDataVisualization and PyQtPurchasing.

PyQtChart are the bindings for the Qt Charts library. This implements a set of 
classes for creating and manipulating 2D charts.

PyQtDataVisualization are the bindings for the Qt Data Visualization library. 
This implements a set of classes for representing data in 3D and allowing the 
user to interact with the view.

PyQtPurchasing are the bindings for the Qt Purchasing library. This implements 
a set of classes that allow applications to support in-app purchases from the 
Mac App Store on OS X, the App Store on iOS, and Google Play on Android.

Wheels are available from PyPI and include the relevant Qt libraries - nothing 
else needs to be installed.

Source packages and more information can be found at 
https://www.riverbankcomputing.com/.

Phil Thompson
-- 
https://mail.python.org/mailman/listinfo/python-list


Is there a documented pylibpd (pure data wrapper/library) API?

2016-07-26 Thread Michael Sperone
Hi everyone,

I'm starting to use the libpd wrapper/library for Python (pylibpd) to load
and use my Pure Data patches within Python.  I was wondering if there is
any online documentation for this Python version?  At the least I'm looking
for a list of classes and methods.

Thank you!
Mike
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: NumPy frombuffer giving nonsense values when reading C float array on Windows

2016-07-26 Thread eryk sun
 On Tue, Jul 26, 2016 at 6:31 PM, sth  wrote:
>
> The restype is a ctypes Structure instance with a single __fields__ entry 
> (coords), which

Watch the underscores with ctypes attributes. Your code spells it
correctly as "_fields_".

> is a Structure with two fields (len and data) which are the FFI array's 
> length and the void
> pointer to its memory:
> https://github.com/urschrei/pypolyline/blob/master/pypolyline/util.py#L109-L117

_FFIArray.__init__ isn't properly keeping a reference to the wrapped
numpy array:

def __init__(self, seq, data_type = c_double):
ptr = POINTER(data_type)
nparr = np.array(seq, dtype=np.float64)
arr = nparr.ctypes.data_as(ptr)
self.data = cast(arr, c_void_p)
self.len = len(seq)

arr doesn't have a reference to nparr, so self.data doesn't have a
reference to the numpy array when it goes out of scope. For example,
we can trigger a segfault here:

>>> nparr = np.array(range(2**24), dtype=np.float64)
>>> ptr = ctypes.POINTER(ctypes.c_double)
>>> arr = nparr.ctypes.data_as(ptr)
>>> arr[0], arr[2**24-1]
(0.0, 16777215.0)

>>> del nparr
>>> arr[0], arr[2**24-1]
Segmentation fault (core dumped)

> I'm only half-following your explanation of how ctypeslib works, but it seems 
> clear that I'm doing
> something wrong.

The library requires the data pointers wrapped in a struct with the
length, so I think what you're doing to wrap and unwrap arbitrary
sequences is generally fine. But you don't need the bytes copy from
string_at. You can cast to a double pointer and create a numpy array
from that, all without copying any data. The only copy made is for
tolist(). For example:

def _void_array_to_nested_list(res, func, args):
""" Dereference the FFI result to a list of coordinates """
try:
shape = res.coords.len, 2
ptr = cast(res.coords.data, POINTER(c_double))
array = np.ctypeslib.as_array(ptr, shape)
return array.tolist()
finally:
drop_array(res.coords)

If you're not hard coding the double data type, consider adding a
simple C type string to the _FFIArray struct. For example:

>>> ctypes.c_double._type_
'd'

You just need a dict to map type strings to simple C types.
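
A minimal sketch of such a mapping (keyed by the _type_ codes shown above):

import ctypes

SIMPLE_TYPES = {
    'd': ctypes.c_double,
    'f': ctypes.c_float,
    'i': ctypes.c_int,
    'q': ctypes.c_longlong,
}

def pointer_type(code):
    # e.g. pointer_type('d') -> ctypes.POINTER(ctypes.c_double)
    return ctypes.POINTER(SIMPLE_TYPES[code])
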
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Depending on enum34 from a library

2016-07-26 Thread Ethan Furman

On 07/24/2016 01:10 PM, Vasiliy Faronov wrote:


I'm building a Python library where I want to use Python 3.4-style
enums. Because I need to support Python 2.7, I'm considering using
enum34 [1]. But I'm not sure how to do this:

If I simply depend on enum34, it will install a module named `enum`
even in Python 3.4+ environments, and that could shadow the stdlib
module. Personally I would be very surprised if installing a library
caused an unrelated stdlib module to be replaced with a third-party
one, even if it's "just" a backport. However, in my environments,
`sys.path` is such that stdlib comes before site-packages -- perhaps
this is normal enough that I can rely on it?

Or I could change my setup.py to include enum34 in `install_requires`
only if running on Python 2.7. But that will make my distribution's
metadata dynamic, which doesn't sound like a good idea [2]. At the
very least, I think it will prevent me from building a universal
wheel.


enum34 is kept up-to-date, as far as I am able, with the current version of
enum in the stdlib.  As a happy side-effect, if it is found before the stdlib
version, Python will still run -- at least for Python 3.4 and 3.5; 3.6 is
adding some new features that I am not planning on adding to enum34.

For my own code I use the 'install_requires' option, plus some code that
checks the Python version to see if enum34 is needed.
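
Roughly like this (a sketch -- the package name and version are placeholders,
and as you note it does make the metadata dynamic):

# setup.py
import sys
from setuptools import setup

install_requires = []
if sys.version_info < (3, 4):
    install_requires.append('enum34')

setup(
    name='yourpackage',
    version='0.1',
    install_requires=install_requires,
)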

Your other option is to use aenum [1], my other package, instead; it does
not risk conflict with enum, but has the same basic behavior (plus lots of
advanced behavior you can opt in to).

--
~Ethan~

[1] https://pypi.python.org/pypi/aenum
--
https://mail.python.org/mailman/listinfo/python-list


Re: making executables smaller

2016-07-26 Thread Carter Temm
OK. So I guess the question should be, how can I make these executables smaller 
in general?

Sent from my iPhone

> On Jul 26, 2016, at 5:13 PM, Dennis Lee Bieber  wrote:
> 
> On Tue, 26 Jul 2016 12:22:16 -0500, Carter Temm 
> declaimed the following:
> 
>> Hi,
>> I’m writing a couple different projects at the moment, and when I compile it 
>> into a single executable using pyinstaller, it becomes extremely large. I’m 
>> guessing this is because of the modules used. Because I’m not that skilled 
>> at python, I put stuff like for example, import sys. I imagine the final 
>> project could be made smaller
> by specifying from something import something_else. but the thing is, I don’t 
> know what smaller I could import with these set of modules. Is there a 
> program that could tell me this. Sorry if this question is really basic, but 
> it’d be helpful.
> 
>"from module import name" still has to include the entire module --
> since that is the file. It is effectively the same as doing
> 
> import module
> name = module.name
> del module
> 
>Also -- anything that creates an executable file for Python will be
> including the Python interpreter ITSELF. That may be a lot of the bloat you
> see.
> -- 
>Wulfraed Dennis Lee Bieber AF6VN
>wlfr...@ix.netcom.com    HTTP://wlfraed.home.netcom.com/
> 
> -- 
> https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: pyinstaller

2016-07-26 Thread Tom Brown
I used pyinstaller quite a bit 3 years ago. I could brush off the cobwebs
and see if I can help if you have not solved it already.

What is the issue you are having?

-Tom

On Jun 21, 2016 16:57, "Larry Martell"  wrote:

> Anyone here have any experience with pyinstaller? I am trying to use
> it, but I'm not having much success. I tried posting to the
> pyinstaller ML but it said my post had to be approved first, and that
> hasn't happened in a while. I'll post details if someone here thinks
> they can help.
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: pyinstaller

2016-07-26 Thread Larry Martell
On Tue, Jul 26, 2016 at 8:49 PM, Tom Brown  wrote:
> I used pyinstaller quite a bit 3 years ago. I could brush off the cobwebs
> and see if I can help if you have not solved it already.
>
> What is the issue you are having?

If I import the requests module, then when I run the executable I get:

ImportError: No module named 'requests.packages.chardet'

I tried to post to the pyinstaller group, but it said my post had to
be approved by the moderator, and it apparently never was. I have no
idea who the moderator is, so there was no one I could contact about
that. I posted an issue to github
(https://github.com/pyinstaller/pyinstaller/issues/2060) and some
suggestions were made, but none fixed the problem. I am on RHEL 7.2
with Python 2.7.5, and it's reproducible, just by having a 1 line
script that has "import requests". Thanks for any help you could
provide.




>
> On Jun 21, 2016 16:57, "Larry Martell"  wrote:
>>
>> Anyone here have any experience with pyinstaller? I am trying to use
>> it, but I'm not having much success. I tried posting to the
>> pyinstaller ML but it said my post had to be approved first, and that
>> hasn't happened in a while. I'll post details if someone here thinks
>> they can help.
>> --
>> https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python environment on mac

2016-07-26 Thread Cameron Simpson

On 26Jul2016 06:52, Crane Ugly  wrote:

Mac OS X comes with its own version of python and structure to support it.
So far it was good enough for me. Then I started to use modules that 
distributed through MacPorts and this is where I get lost.
I do not quite understand how Python environment is set. Or how to set it in a 
way of using, say MacPorts distribution alone.
For example: standard location for pip utility is /usr/local/bin/pip. MacPorts 
structure has it too but as a link
lrwxr-xr-x 1 root admin 67 May 23 22:32 /opt/local/bin/pip-2.7 -> 
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/bin/pip
Which means that the standard utility will be used.


No, I think that means it uses the MacPorts one. Note: /opt/local vs 
/usr/local.



The things is that depending on a way I run pip I get different results:
$ pip list|grep pep8
pep8 (1.7.0)
$ sudo pip list|grep pep8
$
pep8 was installed through macports.
In second case pip is using stripped environment and pointing to standard Mac 
OS Python repository.
But in a way to install anything with pip I have to use sudo.
In my profile I have variable PYTHONPATH:
PYTHONPATH=/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages
It is pointing to macports structure. But when I use sudo (in case of using 
pip) it get stripped.
How to setup and maintain python environment in a trustful way? So it is clear 
where all installed modules are?


My personal habit is to use virtualenv. You could build a virtualenv based on 
the system Python or the MacPorts one (or make one of each). Then you can use 
pip (as yourself, no sudo - avoid that if possible) from the appropriate 
environment to install into that environment. Complete separation, and complete 
control for you.


The executables inside a virtualenv ("python", "pip" etc) are stubs that adjust 
PYTHONPATH etc themselves and then invoke the python on which that particular 
virtualenv was based. This means that by executing that _specific_ executable 
you automatically and correctly use that specific virtualenv, without having to 
hand-maintain your own $PYTHONPATH, and therefore without needing to adjust it 
depending on which setup you want to use.


Cheers,
Cameron Simpson 
--
https://mail.python.org/mailman/listinfo/python-list


Re: making executables smaller

2016-07-26 Thread Steven D'Aprano
On Wed, 27 Jul 2016 03:22 am, Carter Temm wrote:

> Hi,
> I’m writing a couple different projects at the moment, and when I compile
> it into a single executable using pyinstaller, it becomes extremely large.

What do you consider "extremely large"? Ten gigabytes? 500 kilobytes? Give
us a clue.


> I’m guessing this is because of the modules used.

Maybe yes, maybe no.

On my system, Python 3.3 is a little bit less than 6 MB, and the std lib
under 130MB:


[steve@ando ~]$ du -hs /usr/local/bin/python3.3
5.7M/usr/local/bin/python3.3
[steve@ando ~]$ du -hs /usr/local/lib/python3.3/
129M/usr/local/lib/python3.3/

But nearly 50MB of that is the test suite:

[steve@ando test]$ du -hs /usr/local/lib/python3.3/test/
48M /usr/local/lib/python3.3/test/

I expect that on Windows or Mac OS X the sizes will be roughly the same.

I'm not an expert on the pyinstaller internals, but I would expect that it
would be able to drop the test suite, but will need to include the rest of
the std lib as well as the interpreter, plus whatever files you have
written. So I expect that a frozen executable will be of the order of 80MB,
give or take. There's probably a bit of overhead needed to hold it all
together, so let's say 100MB in round figures.

If you're getting less than 100MB, that doesn't sound too bad to me, not for
an interpreted language like Python. Anything less than that sounds really
good to me. If you are getting under 20MB, that's *fantastic*. What's the
problem with that? You can fit close to forty of these on a 4GB DVD-RW or
USB stick. You're not hoping to fit your application on a floppy disk, are
you? It's 2016, not 1980.

If you're working with really constrained environments, like an embedded
device, then check out µPy or PyMite:

https://micropython.org/

http://deanandara.com/PyMite/

although I fear PyMite may not be under active development.



> Because I’m not that 
> skilled at python, I put stuff like for example, import sys. I imagine the
> final project could be made smaller by specifying from something import
> something_else. 

No, it doesn't work like that. For starters, sys is built into the
interpreter, so it's unlikely you'll be able to remove it. But more
generally, if you use anything from a module, the entire module must be
included.

Even if *you* don't use a module, perhaps one of the modules you do use will
in turn use that module. 
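
If you want to see what your script really drags in, the standard library's
modulefinder can list it (a rough sketch; 'myscript.py' is a placeholder for
your entry point):

from modulefinder import ModuleFinder

finder = ModuleFinder()
finder.run_script('myscript.py')
for name in sorted(finder.modules):
    print(name)  # every module the script ends up importing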


Of course, you can avoid all this overhead completely by *not* freezing the
executable down to a single file. Just distribute your Python code as a .py
file, or a .pyc file. There are other options as well, such as distributing
it bundled into a zip file. What benefit do you get from using PyInstaller?


> but the thing is, I don’t know what smaller I could import 
> with these set of modules. Is there a program that could tell me this.
> Sorry if this question is really basic, but it’d be helpful.



-- 
Steven
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: reshape with xyz ordering

2016-07-26 Thread Nobody
On Tue, 26 Jul 2016 07:10:18 -0700, Heli wrote:

> I sort a file with 4 columns (x,y,z, somevalue) and I sort it using
> numpy.lexsort.
> 
> ind=np.lexsort((val,z,y,x))
> 
> myval=val[ind]
> 
> myval is a 1d numpy array sorted by x,then y, then z and finally val.
> 
> how can I reshape correctly myval so that I get a 3d numpy array
> maintaining the xyz ordering of the data?

Is it guaranteed that the data actually *is* a 3-D array that's been
converted to a list of x,y,z,val tuples?

In other words, does every possible combination of x,y,z for 0<=x<=max(x),
0<=y<=max(y), 0<=z<=max(z) occur exactly once?

If so, then see Peter's answer. If not, then how do you wish to handle
a) (x,y,z) tuples which never occur (missing values), and
b) (x,y,z) tuples which occur more than once?

If the data "should" to be a 3-D array but you first wish to ensure that
it actually is, you can use e.g.

nx,ny,nz = max(x)+1,max(y)+1,max(z)+1
if val.shape != (nx*ny*nz,):
    raise ValueError
i = (x*ny+y)*nz+z
found = np.zeros(val.shape, dtype=bool)
found[i] = True
if not np.all(found):
    raise ValueError
ind = np.lexsort((val,z,y,x))
myval = val[ind].reshape((nx,ny,nz))

-- 
https://mail.python.org/mailman/listinfo/python-list


Python Print Error

2016-07-26 Thread Cai Gengyang
How to debug this error message?

print('You will be ' + str(int(myAge) + 1) + ' in a year.')
Traceback (most recent call last):
  File "", line 1, in 
print('You will be ' + str(int(myAge) + 1) + ' in a year.')
ValueError: invalid literal for int() with base 10: ''
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python Print Error

2016-07-26 Thread Jussi Piitulainen
Cai Gengyang writes:

> How to debug this error message ?
>
> print('You will be ' + str(int(myAge) + 1) + ' in a year.')
> Traceback (most recent call last):
>   File "", line 1, in 
> print('You will be ' + str(int(myAge) + 1) + ' in a year.')
> ValueError: invalid literal for int() with base 10: ''

The last line is the error message:

ValueError: invalid literal for int() with base 10: ''

Its first component, ValueError, names a class of errors. It gives you
an idea (once you get used to error messages) of what might be wrong.

The second component describes this particular error:

invalid literal for int() with base 10

This refers to the argument to int(), and says it's an invalid
"literal", which is a bit obscure term that refers to a piece in
programming language syntax; it also informs you that the argument is
invalid in base 10, but this turns out to be irrelevant.

You should suspect that myAge is not a string of digits that form a
written representation of an integer (in base 10).

The third component shows you the invalid literal:

''

So the error message is telling you that you tried to call int(''), and
'' was an invalid thing to pass to int().

(Aside: Do not take "int()" literally. It's not the name of the
function, nor is it the actual call that went wrong. It's just a
shorthand indication that something went wrong in calling int, and the
argument is shown as a separate component of the message.)

Next you launch the interpreter and try it out:

>>> int('')
Traceback (most recent call last):
  File "", line 1, in 
ValueError: invalid literal for int() with base 10: ''

You can be pretty confident that the error is, indeed, just this. - Try
also int('ace') and int('ace', 16). That's where "base 10" is relevant.

The lines before the last are the traceback. They attempt to give you an
indication of where in the program the offending piece of code occurs,
but they need to do it dynamically by showing what called what. Scan
backwards and only pay attention to lines that you recognize (because
you are not studying an obscure bug in Python itself but a trivial
incident in your own program).

Often it happens that the real error in your program is somewhere else,
and the exception is only a symptom. In this case, you need to find
where myAge gets that value, or fails to get the value that it should
have.
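
For example, assuming myAge comes from input() (Python 3), you can guard the
conversion instead of letting it blow up:

myAge = input('How old are you? ').strip()
if myAge.isdigit():
    print('You will be ' + str(int(myAge) + 1) + ' in a year.')
else:
    print('That was not a whole number: ' + repr(myAge))
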
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python Print Error

2016-07-26 Thread Frank Millman
"Cai Gengyang"  wrote in message 
news:c704cb09-ce62-4d83-ba72-c02583580...@googlegroups.com...



How to debug this error message ?



print('You will be ' + str(int(myAge) + 1) + ' in a year.')
Traceback (most recent call last):
  File "", line 1, in 
print('You will be ' + str(int(myAge) + 1) + ' in a year.')
ValueError: invalid literal for int() with base 10: ''


Very easily :-)

The traceback is telling you everything you need to know.

You are supplying an invalid literal for int().

There is only one place where you call int() - int(myAge) - so myAge must be 
an invalid literal.


You could print it out and see what it is, but the traceback is already 
giving you that information for free.


Can you see the '' at the end of the message? That is the contents of the 
invalid literal.


HTH

Frank Millman



--
https://mail.python.org/mailman/listinfo/python-list


Re: pyinstaller

2016-07-26 Thread Christian Gollwitzer

On 27.07.16 at 03:15, Larry Martell wrote:

On Tue, Jul 26, 2016 at 8:49 PM, Tom Brown  wrote:

I used pyinstaller quite a bit 3 years ago. I could brush off the cobwebs
and see if I can help if you have not solved it already.

What is the issue you are having?


If I import the requests module, then when I run the executable I get:

ImportError: No module named 'requests.packages.chardet'


That's a classic issue. pyinstaller does static analysis of the program 
to determine which modules must be included. If the code computes a module 
name dynamically, the analysis cannot always succeed. The solution is to tell 
pyinstaller to add the module explicitly. In previous versions, you could add 
these by "pyinstaller --hidden-import=requests.packages.chardet" or 
similar. If this doesn't work, you need to edit the spec file. See here:


https://pythonhosted.org/PyInstaller/when-things-go-wrong.html#listing-hidden-imports

Christian
--
https://mail.python.org/mailman/listinfo/python-list