exec() and locals() puzzle

2022-07-20 Thread george trojan
I wish I could understand the following behaviour:

1. This works as I expect it to work:

def f():
    i = 1
    print(locals())
    exec('y = i; print(y); print(locals())')
    print(locals())
    exec('y *= 2')
    print('ok:', eval('y'))
f()

{'i': 1}
1
{'i': 1, 'y': 1}
{'i': 1, 'y': 1}
ok: 2

2. I can access the value of y with eval() too:

def f():
    i = 1
    print(locals())
    exec('y = i; print(y); print(locals())')
    print(locals())
    u = eval('y')
    print(u)
f()

{'i': 1}
1
{'i': 1, 'y': 1}
{'i': 1, 'y': 1}
1

3. When I change variable name u -> y, somehow locals() in the body of
the function loses an entry:

def f():
    i = 1
    print(locals())
    exec('y = i; print(y); print(locals())')
    print(locals())
    y = eval('y')
    print(y)
f()

{'i': 1}
1
{'i': 1, 'y': 1}
{'i': 1}

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
Input In [1], in <cell line: 10>()
      7     print(y)
      8 # y = eval('y')
      9 # print('ok:', eval('y'))
---> 10 f()

Input In [1], in f()
      4 exec('y = i; print(y); print(locals())')
      5 print(locals())
----> 6 y = eval('y')
      7 print(y)

File <string>:1, in <module>

NameError: name 'y' is not defined

Another thing: within the first exec(), the print order seems
reversed. What is going on?

BTW, I am using python 3.8.13.

George
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: exec() and locals() puzzle

2022-07-21 Thread george trojan
Thanks. That cdef-locals concept is consistent with the following example:

def f():
    i = 1
    def g(): print('i' in globals(), 'i' in locals())
    def h(): print('i' in globals(), 'i' in locals()); i
    g()
    h()
f()

False False
False True

It is a mystery, which may be why the documentation for globals() and
locals() is only two lines long.
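For what it is worth, passing an explicit namespace to exec() and eval() sidesteps the function-locals quirk entirely. A minimal sketch (this workaround is an editorial addition, not from the thread):

```python
def f():
    ns = {'i': 1}
    # with a single dict argument, ns serves as the namespace for both
    # globals and locals, so the assignment is reliably visible later:
    exec('y = i * 2', ns)
    return eval('y', ns)

print(f())   # -> 2
```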


On Wed, Jul 20, 2022 at 7:31 PM, Martin Di Paola <
martinp.dipa...@gmail.com> wrote:

> I did a few tests
>
> # test 1
> def f():
>  i = 1
>  print(locals())
>  exec('y = i; print(y); print(locals())')
>  print(locals())
>  a = eval('y')
>  print(locals())
>  u = a
>  print(u)
> f()
>
> {'i': 1}
> 1
> {'i': 1, 'y': 1}
> {'i': 1, 'y': 1}
> {'i': 1, 'y': 1, 'a': 1}
> 1
>
> # test 2
> def f():
>  i = 1
>  print(locals())
>  exec('y = i; print(y); print(locals())')
>  print(locals())
>  a = eval('y')
>  print(locals())
>  y = a
>  print(y)
> f()
> {'i': 1}
> 1
> {'i': 1, 'y': 1}
> {'i': 1}
> Traceback (most recent call last):
> NameError: name 'y' is not defined
>
>
> So tests 1 and 2 are the same, except that in test 2 the variable 'y'
> appears in f's code and in test 1 it does not.
>
> When it is not present, exec() modifies f's locals and adds a 'y'
> to it; but when the variable 'y' is present in the code (even if not
> present in locals()), exec() does not add any 'y' (and the next
> eval() then fails).
>
> The interesting part is that if the 'y' variable is in f's code
> *and* it is defined in f's locals, no error occurs, but once again
> exec() does not modify f's locals:
>
> # test 3
> def f():
>  i = 1
>  y = 42
>  print(locals())
>  exec('y = i; print(y); print(locals())')
>  print(locals())
>  a = eval('y')
>  print(locals())
>  y = a
>  print(y)
> f()
> {'i': 1, 'y': 42}
> 1
> {'i': 1, 'y': 1}
> {'i': 1, 'y': 42}
> {'i': 1, 'y': 42, 'a': 42}
> 42
>
> Why does this happen? No idea.
>
> It may be related to this:
>
> # test 4
> def f():
>  i = 1
>  print(locals())
>  exec('y = i; print(y); print(locals())')
>  print(locals())
>  print(y)
> f()
> Traceback (most recent call last):
> NameError: name 'y' is not defined
>
> Even though exec() adds the 'y' variable to f's locals, the variable
> is not accessible/visible from f's code.
>
> So, a few observations (by no means this is how the vm works):
>
> 1) each function has a set of variables defined by the code (let's call
> this "code-defined locals" or "cdef-locals").
> 2) each function also has a set of "runtime locals" accessible from
> locals().
> 3) exec() can add variables to locals() (runtime) set but it cannot add
> any to cdef-locals.
> 4) locals() may be a superset of cdef-locals (but entries in cdef-locals
> whose value is still undefined are not shown in locals())
> 5) due to rule 4, exec() cannot add a variable to locals() if it is
>   already present in the cdef-locals.
> 6) when eval() runs, it uses locals() set for lookup
>
> Perhaps rule 5 exists to prevent exec() from modifying arbitrary
> variables of the caller...
>
> Anyways, nice food for our brains.
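Observation 1 above can in fact be checked directly: the "cdef-locals" are visible as the code object's co_varnames, fixed at compile time. A small demonstration (the co_varnames framing is an editorial addition matching the thread's observations):

```python
def f():
    i = 1
    y = eval('y')   # never executed; the mere assignment matters

# the assignment statement makes 'y' a compile-time local of f:
assert 'y' in f.__code__.co_varnames
assert 'i' in f.__code__.co_varnames

def g():
    i = 1
    exec('y = i')   # exec() is just a call; it cannot extend co_varnames

assert 'y' not in g.__code__.co_varnames
print('cdef-locals of f:', f.__code__.co_varnames)
```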
>
> On Wed, Jul 20, 2022 at 04:56:02PM +, george trojan wrote:
> [...]
-- 
https://mail.python.org/mailman/listinfo/python-list


How to make sphinx recognize functools.partial?

2016-04-03 Thread George Trojan

Yet another sphinx question. I am a beginner here.

I can't make sphinx recognize the following (abbreviated) code:

'''
module description

:func:`~pipe` and :func:`~spipe` read data passed by LDM's `pqact`.

'''

import functools

def _pipe(f, *args):
    '''doc string'''
    pass

def _get_msg_spipe():
    '''another doc'''
    pass

spipe = functools.partial(_pipe, _get_msg_spipe)
spipe.__doc__ = '''
Loop over data feed on `stdin` from LDM via SPIPE.
'''

The word spipe is rendered correctly in html, but the link is not created.

I did put a print statement in sphinx/util/inspect.py, it appears that 
spipe definition is not recognized. I am running sphinx 1.3.5, according 
to CHANGELOG functools.partial support was added in 1.2.1.
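If autodoc keeps skipping the partial, one workaround is a thin wrapper function, which Sphinx documents reliably. A sketch only (it works around rather than fixes partial support; the names mirror the snippet above):

```python
def _pipe(f, *args):
    '''doc string'''
    pass

def _get_msg_spipe():
    '''another doc'''
    pass

# A plain def is always picked up by autodoc, unlike a partial object:
def spipe(*args):
    '''Loop over data feed on `stdin` from LDM via SPIPE.'''
    return _pipe(_get_msg_spipe, *args)

print(spipe.__doc__)
```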


George
--
https://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing performance question

2019-02-20 Thread george trojan
def create_box(x_y):
    return geometry.box(x_y[0] - 1, x_y[1],  x_y[0], x_y[1] - 1)

x_range = range(1, 1001)
y_range = range(1, 801)
x_y_range = list(itertools.product(x_range, y_range))

grid = list(map(create_box, x_y_range))

Which creates and populates an 800x1000 “grid” (represented as a flat list
at this point) of “boxes”, where a box is a shapely.geometry.box(). This
takes about 10 seconds to run.

Looking at this, I am thinking it would lend itself well to
parallelization. Since the box at each “coordinate" is independent of all
others, it seems I should be able to simply split the list up into chunks
and process each chunk in parallel on a separate core. To that end, I
created a multiprocessing pool:

pool = multiprocessing.Pool()

And then called pool.map() rather than just “map”. Somewhat to my surprise,
the execution time was virtually identical. Given the simplicity of my
code, and the presumable ease with which it should be able to be
parallelized, what could explain why the performance did not improve at all
when moving from the single-process map() to the multiprocess map()?

I am aware that in python3, the map function doesn’t actually produce a
result until needed, but that’s why I wrapped everything in calls to
list(), at least for testing.


The reason multiprocessing does not speed things up is the overhead of
pickling/unpickling objects. Here are results on my machine, running
Jupyter notebook:

def create_box(xy):
    return geometry.box(xy[0]-1, xy[1], xy[0], xy[1]-1)

nx = 1000
ny = 800
xrange = range(1, nx+1)
yrange = range(1, ny+1)
xyrange = list(itertools.product(xrange, yrange))

%%time
grid1 = list(map(create_box, xyrange))

CPU times: user 9.88 s, sys: 2.09 s, total: 12 s
Wall time: 10 s

%%time
pool = multiprocessing.Pool()
grid2 = list(pool.map(create_box, xyrange))

CPU times: user 8.48 s, sys: 1.39 s, total: 9.87 s
Wall time: 10.6 s

Results exactly as yours. To see what is going on, I rolled out my own
chunking that allowed me to add some print statements.

%%time
def myfun(chunk):
    g = list(map(create_box, chunk))
    print('chunk', chunk[0], datetime.now().isoformat())
    return g

pool = multiprocessing.Pool()
chunks = [xyrange[i:i+100*ny] for i in range(0, nx*ny, 100*ny)]
print('starting', datetime.now().isoformat())
gridlist = list(pool.map(myfun, chunks))
grid3 = list(itertools.chain(*gridlist))
print('done', datetime.now().isoformat())

starting 2019-02-20T23:03:50.883180
chunk (1, 1) 2019-02-20T23:03:51.674046
chunk (701, 1) 2019-02-20T23:03:51.748765
chunk (201, 1) 2019-02-20T23:03:51.772458
chunk (401, 1) 2019-02-20T23:03:51.798917
chunk (601, 1) 2019-02-20T23:03:51.805113
chunk (501, 1) 2019-02-20T23:03:51.807163
chunk (301, 1) 2019-02-20T23:03:51.818911
chunk (801, 1) 2019-02-20T23:03:51.974715
chunk (101, 1) 2019-02-20T23:03:52.086421
chunk (901, 1) 2019-02-20T23:03:52.692573
done 2019-02-20T23:04:02.477317
CPU times: user 8.4 s, sys: 1.7 s, total: 10.1 s
Wall time: 12.9 s

All ten subprocesses finished within 2 seconds. It took about 10 seconds to
get back and assemble the partial results. The objects have to be packed,
sent through network and unpacked. Unpacking is done by the main (i.e.
single) process. This takes almost the same time as creating the objects
from scratch. Essentially the process does the following:

%%time
def f(b):
    g1 = b[0].__new__(b[0])
    g1.__setstate__(b[2])
    return g1
buf = [g.__reduce__() for g in grid1]
grid4 = [f(b) for b in buf]

CPU times: user 20 s, sys: 411 ms, total: 20.4 s
Wall time: 20.3 s

The first line creates the pickle (not exactly, as pickled data is a single
string, not a list). The second line is what pickle.loads() does.

I do not think numpy will help here. The Python function box() has to be
called 800k times, and that will take time. np.vectorize(), as the
documentation states, is provided only for convenience; it is implemented
with a for loop. IMO vectorization would have to be done at the C level.
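The serialize/deserialize round trip described above can be made concrete without shapely. A sketch (the Box class below is a stand-in for shapely.geometry.box, since only the pickling cost matters for the argument):

```python
import itertools
import pickle

# Hypothetical stand-in for shapely.geometry.box:
class Box:
    def __init__(self, x0, y0, x1, y1):
        self.bounds = (x0, y0, x1, y1)

def create_box(xy):
    return Box(xy[0] - 1, xy[1], xy[0], xy[1] - 1)

# a small 10x8 grid instead of 1000x800, same structure:
grid = [create_box(xy) for xy in itertools.product(range(1, 11), range(1, 9))]

# pool.map pays this serialize/deserialize round trip for every result
# object it sends back to the parent process:
restored = pickle.loads(pickle.dumps(grid))
assert restored[0].bounds == grid[0].bounds
```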

Greetings from Anchorage

George
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing performance question

2019-02-20 Thread george trojan
I don't know whether this is a toy example; grids of this size are not
uncommon. True, it would make more sense to distribute more work to each
box, if there were any. One has to find a proper balance, as with many
other things in life. I simply responded to a question by the OP.

George

On Thu, 21 Feb 2019 at 01:30, DL Neil 
wrote:

> George
>
> On 21/02/19 1:15 PM, george trojan wrote:
> > def create_box(x_y):
> >  return geometry.box(x_y[0] - 1, x_y[1],  x_y[0], x_y[1] - 1)
> >
> > x_range = range(1, 1001)
> > y_range = range(1, 801)
> > x_y_range = list(itertools.product(x_range, y_range))
> >
> > grid = list(map(create_box, x_y_range))
> >
> > Which creates and populates an 800x1000 “grid” (represented as a flat
> list
> > at this point) of “boxes”, where a box is a shapely.geometry.box(). This
> > takes about 10 seconds to run.
> >
> > Looking at this, I am thinking it would lend itself well to
> > parallelization. Since the box at each “coordinate" is independent of all
> > others, it seems I should be able to simply split the list up into chunks
> > and process each chunk in parallel on a separate core. To that end, I
> > created a multiprocessing pool:
>
>
> I recall a similar discussion when folk were being encouraged to move
> away from monolithic and straight-line processing to modular functions -
> it is more (CPU-time) efficient to run in a straight line; than it is to
> repeatedly call, set-up, execute, and return-from a function or
> sub-routine! ie there is an over-head to many/all constructs!
>
> Isn't the 'problem' that it is a 'toy example'? That the amount of
> computing within each parallel process is small in relation to the
> inherent 'overhead'.
>
> Thus, if the code performed a reasonable analytical task within each box
> after it had been defined (increased CPU load), would you then notice
> the expected difference between the single- and multi-process
> implementations?
>
>
>
>  From AKL to AK
> --
> Regards =dn
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: tuples in conditional assignment (Ben Finney)

2015-11-24 Thread George Trojan

Ben Finney wrote (11/24/2015 04:49 AM, to python-list@python.org):

George Trojan  writes:


The following code has bitten me recently:


t=(0,1)
x,y=t if t else 8, 9
print(x, y)

(0, 1) 9

You can simplify this by taking assignment out of the picture::

 >>> t = (0, 1)
 >>> t if t else 8, 9
 ((0, 1), 9)

So that's an “expression list” containing a comma. The reference for
expressions tells us::

 An expression list containing at least one comma yields a tuple. The
 length of the tuple is the number of expressions in the list.

 https://docs.python.org/3/reference/expressions.html#expression-lists


I was assuming that a comma has the highest order of evaluation

You were? The operator precedence rules don't even mention comma as an
operator, so why would you assume that?

 
https://docs.python.org/3/reference/expressions.html#operator-precedence
What threw me off was the right column in the table stating that binding
or tuple display has the highest precedence. Somewhere else there is a
statement that a comma makes a tuple, not the parentheses, so the
parentheses on the left did not register with me.
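The two parses can be put side by side (an editorial illustration of the point above):

```python
t = (0, 1)

# the conditional expression binds tighter than the comma, so this is
# parsed as: x, y = ((t if t else 8), 9)
x, y = t if t else 8, 9
assert (x, y) == ((0, 1), 9)

# parenthesizing the else-tuple gives the intended meaning:
x, y = t if t else (8, 9)
assert (x, y) == (0, 1)
```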



that is the expression 8, 9 should make a tuple. Why this is not the
case?

I'm not sure why it's the case that you assumed that

My practical advice: I don't bother trying to remember the complete
operator precedence rules. My simplified precedence rules are:

* ‘+’, ‘-’ have the same precedence.
* ‘*’, ‘/’, ‘//’ have the same precedence.
* For anything else: Use parentheses to explicitly declare the
   precedence I want.

Related: When an expression has enough clauses that it's not *completely
obvious* what's going on, break it up by assigning some sub-parts to
temporary well-chosen descriptive names (not ‘t’).
t was just to illustrate my problem, not the actual code. My lesson is 
to use parentheses in all cases, maybe with an exception for an obvious 
y, x = x, y. In my new C code I always write


if (a) {
f();
}

instead of a valid 2-liner

if (a)
   f();

Too many times I have added an indented g() call, and since I do more
Python than C, the error was not glaringly obvious.


-- 
 \      “It is far better to grasp the universe as it really is than
  `\     to persist in delusion, however satisfying and reassuring.”
_o__)                                                    —Carl Sagan
Ben Finney

George
--
https://mail.python.org/mailman/listinfo/python-list


Python 3.1 test issue

2015-12-16 Thread George Trojan
I installed Python 3.1 on RHEL 7.2.  The command make test hangs (or 
takes a lot of time) on test_subprocess.


[396/397] test_subprocess
^C
Test suite interrupted by signal SIGINT.
5 tests omitted:
test___all__ test_distutils test_site test_socket test_warnings
381 tests OK.
4 tests altered the execution environment:
test___all__ test_distutils test_site test_warnings
11 tests skipped:
test_devpoll test_kqueue test_msilib test_ossaudiodev
test_startfile test_tix test_tk test_ttk_guionly test_winreg
test_winsound test_zipfile64
make: *** [test] Error 1

CPU was at 100% all the time for process

gtrojan  15758  8907 94 17:29 pts/6 00:06:47 
/home/gtrojan/Downloads/Python-3.5.1/python -R -bb -E -Wdefault 
-Werror::BytesWarning -X faulthandler -m test.regrtest --slaveargs 
[["test_socket", 0, false], {"huntrleaks": false, "match_tests": null, 
"failfast": false, "output_on_failure": false, "use_resources": 
["curses", "network", "decimal", "cpu", "subprocess", "urlfetch"], 
"pgo": false, "timeout": null}]

gtrojan  22889   336  0 17:36 pts/11   00:00:00 grep --color=auto 15758

Is this a problem?

George
--
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.1 test issue

2015-12-18 Thread George Trojan

On 12/16/2015 8:07 PM, Terry Reedy wrote:

On 12/16/2015 1:22 PM, George Trojan wrote:

I installed Python 3.1 on RHEL 7.2.


According to the output below, you installed 3.5.1.  Much better than 
the years old 3.1.



This was not my only mistake. I ran the test on Fedora 19, not RHEL 7.2.

The command make test hangs (or
takes a lot of time) on test_subprocess
[396/397] test_subprocess



[["test_socket", 0, false],


This appears to pass arguments to a specific test, test_socket. I 
would guess this is done by setting sys.argv. This is the first I knew 
about this. What follows is a dict of options. Most could have been 
set with normal --option flags. Most are the defaults.


>  {"huntrleaks": false,
>   "match_tests": null,
>   "failfast": false,
>   "output_on_failure": false,
>   "use_resources":

["curses", "network", "decimal", "cpu", "subprocess", "urlfetch"],
  "pgo": false,

>   "timeout": null
>  }
> ]

The relevant non-default is 'use_resources'.  In particular, 'cpu' 
runs 'certain CPU-heavy tests', and 'subprocess' runs all subprocess 
tests. I ran both 'python -m test -usubprocess test_subprocess' and

'python -m test -ucpu -usubprocess test_subprocess',
and both took about the same time, less than a minute.

The only thing that puzzles me is that I don't see '"randomize": true' 
in the dict above, but test_subprocess is 317 in the default 
alphabetical order, not 396.


You might try re-running with defaults: python -m test.


Thanks, after several trials I settled on the command ("make test" hangs 
for Python 3.4.1 too):


 dilbert@gtrojan> /usr/local/src/Python-3.4.1/python 
Lib/test/test_socket.py


which was the culprit. After checking the code, I found that creating 
an RDS socket did not raise an exception.


>>> import socket
>>> s = socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0)
>>>

It does on other RH-based systems (CentOS, RHEL, Fedora != 19). FC 19 
has /proc/sys/net/rds, the other systems don't. So this is probably not 
a Python issue, but a buggy implementation of RDS.


George
--
https://mail.python.org/mailman/listinfo/python-list


when to use == and when to use is

2014-03-10 Thread George Trojan
I know this question has been answered: 
http://stackoverflow.com/questions/6570371/when-to-use-and-when-to-use-is , 
but I still have doubts. Consider the following code:


class A:
    def __init__(self, a):
        self._a = a
    #def __eq__(self, other):
    #    return self._a == other._a

obj_0 = A(0)
obj_1 = A(1)
obj_2 = A(2)

obj = obj_1

if obj == obj_0:
print(0)
elif obj == obj_1:
print(1)
elif obj == obj_2:
print(2)

if obj is obj_0:
print(0)
elif obj is obj_1:
print(1)
elif obj is obj_2:
print(2)

Both if statements work, of course. Which is more efficient? My use-case 
scenario are matplotlib objects, the __eq__ operator might involve a bit 
of work. The "if" statement is a selector in a callback. I know that obj 
is one of obj_0, ..., or none of them. I do not care if obj_1 is equal 
to obj_2.
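For this kind of selector, `is` is the better fit: identity is a constant-time pointer comparison and never invokes `__eq__`. A sketch making the calls visible (the call counter is an editorial illustration, not from the original code):

```python
class A:
    eq_calls = 0
    def __init__(self, a):
        self._a = a
    def __eq__(self, other):
        A.eq_calls += 1          # stand-in for an expensive comparison
        return isinstance(other, A) and self._a == other._a

obj_0, obj_1 = A(0), A(1)
obj = obj_1

assert obj is obj_1              # identity check: __eq__ never runs
assert A.eq_calls == 0

assert obj == obj_1              # equality: __eq__ runs once
assert A.eq_calls == 1
```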


George
--
https://mail.python.org/mailman/listinfo/python-list


anaconda bug?

2015-03-16 Thread George Trojan
I am not sure whether it is just me or there is a bug in anaconda. I 
installed miniconda from http://conda.pydata.org/miniconda.html, then 
several python 3.4.3 packages, then created a virtual environment. When 
I switch to that environment I cannot create a tk root window. Here is 
the traceback:


(venv-3.4.3) $ python
Python 3.4.3 |Continuum Analytics, Inc.| (default, Mar  6 2015, 12:03:53)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tkinter
>>> tkinter.Tk()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/gtrojan/miniconda3/lib/python3.4/tkinter/__init__.py", 
line 1851, in __init__
self.tk = _tkinter.create(screenName, baseName, className, 
interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: Can't find a usable init.tcl in the following 
directories:
/opt/anaconda1anaconda2anaconda3/lib/tcl8.5 
/home/gtrojan/venv-3.4.3/lib/tcl8.5 /home/gtrojan/lib/tcl8.5 
/home/gtrojan/venv-3.4.3/library /home/gtrojan/library 
/home/gtrojan/tcl8.5.18/library /home/tcl8.5.18/library


It looks like the search path for tcl/tk libraries is messed up. The 
same command works fine when I exit virtual environment though. I do not 
have a problem with python 3.4.2 built from source on the same system.


For a workaround, I set TCL_LIBRARY and TK_LIBRARY in activate:

# bug in anaconda?
_OLD_TCL_LIBRARY="$TCL_LIBRARY"
TCL_LIBRARY="/home/gtrojan/miniconda3/lib/tcl8.5"
export TCL_LIBRARY
_OLD_TK_LIBRARY="$TK_LIBRARY"
TK_LIBRARY="/home/gtrojan/miniconda3/lib/tk8.5"
export TK_LIBRARY

I have found somewhat similar bug report: 
https://github.com/conda/conda/issues/348.


George



--
https://mail.python.org/mailman/listinfo/python-list


Re: Re: anaconda bug?

2015-03-17 Thread George Trojan

On 03/16/2015 11:47 PM, memilanuk wrote:

Might be just you...

monte@machin-shin:~$ python
Python 3.4.3 |Continuum Analytics, Inc.| (default, Mar  6 2015, 12:03:53)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tkinter
>>> tkinter.Tk()

>>>


Just for the heck of it I created a new venv (using conda create -n 
test) and tried it again.  Same thing.


How are you creating your venv?

Monte




Hmm. I tried on a different system (Fedora 20), with Python 3.4.2. 
Same results:


dilbert@gtrojan> python
Python 3.4.2 |Continuum Analytics, Inc.| (default, Oct 21 2014, 17:16:37)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tkinter
>>> tkinter.Tk()

>>>
dilbert@gtrojan> which pyvenv
/usr/local/miniconda3/bin/pyvenv
dilbert@gtrojan> pyvenv --system-site-packages ~/test
dilbert@gtrojan> source ~/test/bin/activate
(test) dilbert@gtrojan> python
Python 3.4.2 |Continuum Analytics, Inc.| (default, Oct 21 2014, 17:16:37)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tkinter
>>> tkinter.Tk()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/miniconda3/lib/python3.4/tkinter/__init__.py", line 
1851, in __init__
self.tk = _tkinter.create(screenName, baseName, className, 
interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: Can't find a usable init.tcl in the following 
directories:
/opt/anaconda1anaconda2anaconda3/lib/tcl8.5 
/home/gtrojan/test/lib/tcl8.5 /home/gtrojan/lib/tcl8.5 
/home/gtrojan/test/library /home/gtrojan/library 
/home/gtrojan/tcl8.5.15/library /home/tcl8.5.15/library


This probably means that Tcl wasn't installed properly.
>>>

I suspect faulty logic: pyvenv does not copy/link the tcl/tk libraries 
into the newly created directory. When I run python directly, the second 
directory searched is /usr/local/miniconda3/lib/tcl8.5, where conda 
puts its tcl version. In the virtual environment, the path is replaced and 
tkinter fails. So the other fix would be to manually create symlinks 
after running pyvenv, or modify Continuum Analytics' pyvenv to do that.
There is no issue with pyvenv when Python is built from source: the 
first directory in the path is where tcl is found by configure, and that 
does not change in the virtual environment.
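The manual symlink fix can be sketched as follows (the paths are assumptions mirroring the traceback above; a scratch directory stands in for the venv so the sketch is safe to run as-is):

```shell
# CONDA_PREFIX would be the miniconda install, VENV the pyvenv directory.
CONDA_PREFIX="${CONDA_PREFIX:-$HOME/miniconda3}"
VENV="$(mktemp -d)"          # replace with your real venv path
mkdir -p "$VENV/lib"
# point the venv at conda's private tcl/tk trees:
ln -sfn "$CONDA_PREFIX/lib/tcl8.5" "$VENV/lib/tcl8.5"
ln -sfn "$CONDA_PREFIX/lib/tk8.5"  "$VENV/lib/tk8.5"
ls -l "$VENV/lib"
```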


I found another similar bug report here: 
https://groups.google.com/a/continuum.io/forum/#!topic/anaconda/Q9xvJT8khTs

Looks this has not been fixed.

George


--
https://mail.python.org/mailman/listinfo/python-list


problem with netCDF4 OpenDAP

2015-08-14 Thread George Trojan

Tom P wrote (08/13/2015 10:32 AM, to python-list@python.org):

I'm having a problem trying to access OPeNDAP files using netCDF4.
netCDF4 is installed from the Anaconda package. According to their 
changelog, OPeNDAP is supposed to be supported.


netCDF4.__version__
Out[7]:
'1.1.8'

Here's some code:

url = 
'http://www1.ncdc.noaa.gov/pub/data/cmb/ersst/v3b/netcdf/ersst.201507.nc'

nc = netCDF4.Dataset(url)

I get the error -
netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Dataset.__init__ 
(netCDF4/_netCDF4.c:9551)()


RuntimeError: NetCDF: file not found


However if I download the same file, it works -
url = '/home/tom/Downloads/ersst.201507.nc'
nc = netCDF4.Dataset(url)
print nc
 . . . .

Is it something I'm doing wrong?

Same thing here:

(devenv-3.4.1) dilbert@gtrojan> python
Python 3.4.1 (default, Jul  7 2014, 15:47:25)
[GCC 4.8.3 20140624 (Red Hat 4.8.3-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import netCDF4
>>> url = 
'http://www1.ncdc.noaa.gov/pub/data/cmb/ersst/v3b/netcdf/ersst.201507.nc'

>>> nc = netCDF4.Dataset(url)
syntax error, unexpected WORD_WORD, expecting SCAN_ATTR or SCAN_DATASET 
or SCAN_ERROR
context: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"><html><head>
<title>404 Not Found</title></head><body><h1>Not Found</h1><p>The requested 
URL /pub/data/cmb/ersst/v3b/netcdf/ersst.201507.nc.dds was not found on this 
server.</p></body></html>

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "netCDF4.pyx", line 1466, in netCDF4.Dataset.__init__ 
(netCDF4.c:19738)

RuntimeError: NetCDF: file not found
>>>

It looks like NCDC does not support OPeNDAP (note the .dds extension in 
the error message). It is not a Python/netCDF4 issue.


(devenv-3.4.1) dilbert@gtrojan> ncdump -h 
http://www1.ncdc.noaa.gov/pub/data/cmb/ersst/v3b/netcdf/ersst.201507.nc
syntax error, unexpected WORD_WORD, expecting SCAN_ATTR or SCAN_DATASET 
or SCAN_ERROR
context: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"><html><head>
<title>404 Not Found</title></head><body><h1>Not Found</h1><p>The requested 
URL /pub/data/cmb/ersst/v3b/netcdf/ersst.201507.nc.dds was not found on this 
server.</p></body></html>
ncdump: 
http://www1.ncdc.noaa.gov/pub/data/cmb/ersst/v3b/netcdf/ersst.201507.nc: 
NetCDF: file not found
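The .dds probe that ncdump performs can be done by hand to check whether a server speaks OPeNDAP at all. A hypothetical helper (the function name and the injectable opener parameter are editorial inventions for illustration and testability):

```python
import urllib.error
import urllib.request

def supports_opendap(url, opener=urllib.request.urlopen):
    """Return True if <url>.dds answers, i.e. the server serves a DAP
    dataset descriptor for this file (hypothetical helper)."""
    try:
        opener(url + '.dds').close()
        return True
    except urllib.error.HTTPError:
        # a 404 on the .dds URL is exactly the failure seen above
        return False
```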



George

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unbuffered stderr in Python 3

2015-11-03 Thread George Trojan

On 11/03/2015 05:00 PM, python-list-requ...@python.org wrote:

On Mon, 02 Nov 2015 18:52:55 +1100, Steven D'Aprano wrote:


In Python 2, stderr is unbuffered.

In most other environments (the shell, C...) stderr is unbuffered.

It is usually considered a bad, bad thing for stderr to be buffered. What
happens if your application is killed before the buffer fills up? The
errors in the buffer will be lost.

So how come Python 3 has line buffered stderr? And more importantly, how
can I turn buffering off?

It's probably related to the fact that std{in,out,err} are Unicode
streams.

> type(sys.stderr)
<class '_io.TextIOWrapper'>
> type(sys.stderr.buffer)
<class '_io.BufferedWriter'>
> type(sys.stderr.buffer.raw)
<class '_io.FileIO'>


It appears that you can turn it off with:

sys.stderr = io.TextIOWrapper(sys.stderr.buffer.raw)
or:
sys.stderr = io.TextIOWrapper(sys.stderr.detach().detach())

This results in a sys.stderr which appears to work and whose
.line_buffering property is False.



This does turn off line buffering, but does not change the behaviour:

(devenv-3.4.1) dilbert@gtrojan> cat x.py
import sys
import time
if sys.version>'3':
import io
sys.stderr = io.TextIOWrapper(sys.stderr.detach().detach())
#sys.stderr = io.TextIOWrapper(sys.stderr.buffer.raw)
print(sys.stderr.line_buffering)
sys.stderr.write('a')
time.sleep(10)

This is python 2.7.5: 'a' is printed before ^C.

(devenv-3.4.1) dilbert@gtrojan> /bin/python x.py
a^CTraceback (most recent call last):

Here the buffer is flushed on close, after typing ^C.

(devenv-3.4.1) dilbert@gtrojan> python x.py
False
^CaTraceback (most recent call last):

George
--
https://mail.python.org/mailman/listinfo/python-list


Re: Unbuffered stderr in Python 3

2015-11-03 Thread George Trojan




Found it. write_through must be set to True.

(devenv-3.4.1) dilbert@gtrojan> cat x.py
import sys
import time
if sys.version>'3':
import io
sys.stderr = io.TextIOWrapper(sys.stderr.detach().detach(), 
write_through=True)

#sys.stderr = io.TextIOWrapper(sys.stderr.buffer.raw)
print(sys.stderr.line_buffering)
sys.stderr.write('a')
time.sleep(10)
(devenv-3.4.1) dilbert@gtrojan> python x.py
False
a^CTraceback (most recent call last):
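The effect of write_through can be seen without touching sys.stderr by wrapping a plain BytesIO; and on Python 3.7+ sys.stderr.reconfigure(write_through=True) achieves the same result without detaching and rebuilding the stream (this self-contained demonstration is an editorial addition):

```python
import io

# without write_through, text sits in the wrapper until flushed:
w = io.TextIOWrapper(io.BytesIO(), write_through=False)
w.write('a')
assert w.buffer.getvalue() == b''

# with write_through, every write is pushed to the binary layer at once:
w = io.TextIOWrapper(io.BytesIO(), write_through=True)
w.write('a')
assert w.buffer.getvalue() == b'a'
```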

--
https://mail.python.org/mailman/listinfo/python-list


tuples in conditional assignment

2015-11-23 Thread George Trojan

The following code has bitten me recently:

>>> t=(0,1)
>>> x,y=t if t else 8, 9
>>> print(x, y)
(0, 1) 9

I was assuming that a comma has the highest order of evaluation, that is, 
the expression 8, 9 should make a tuple. Why is this not the case?


George


--
https://mail.python.org/mailman/listinfo/python-list


strange numpy behaviour

2014-10-08 Thread George Trojan

This does not look right:

dilbert@gtrojan> python3.4
Python 3.4.1 (default, Jul  7 2014, 15:47:25)
[GCC 4.8.3 20140624 (Red Hat 4.8.3-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> a=np.ma.array([0, 1], dtype=np.int8, mask=[1, 0])
>>> a
masked_array(data = [-- 1],
             mask = [ True False],
       fill_value = 999999)

>>> a.data
array([0, 1], dtype=int8)
>>> a.filled()
array([63,  1], dtype=int8) <--- Why 63?
>>>

What do you think?
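A guess at where 63 comes from (an editorial note, not from the thread): the default fill value for integer masked arrays is 999999, which does not fit in int8 and wraps to its low byte when cast, and 999999 % 256 == 63:

```python
import numpy as np

# the default fill value for integer masked arrays is 999999:
assert np.ma.default_fill_value(np.dtype('int64')) == 999999

# cast to int8, only the low byte survives:
assert 999999 % 256 == 63
assert np.array(999999).astype(np.int8) == 63
```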

George

--
https://mail.python.org/mailman/listinfo/python-list


python 2.5 build problem on AIX 5.3

2006-10-11 Thread George Trojan
Is there a known problem with the ctypes module on AIX? Google did not 
help me find anything. Here is part of the output from "make":

building '_ctypes' extension
xlc_r -q64 -DNDEBUG -O -I. 
-I/gpfs/m2/home/wx22gt/tools/src/Python-2.5-cc/./Include 
-Ibuild/temp.aix-5.3-2.5/libffi/include -Ibuild/temp.aix-5.3-2.5/libffi 
-I/gpfs/m2/home/wx22gt/tools/src/Python-2.5-cc/Modules/_ctypes/libffi/src 
-I/u/wx22gt/tools/Python-2.5-cc/include -I./Include -I. 
-I/gpfs/m2/home/wx22gt/tools/src/Python-2.5-cc/Include 
-I/gpfs/m2/home/wx22gt/tools/src/Python-2.5-cc -c 
/gpfs/m2/home/wx22gt/tools/src/Python-2.5-cc/Modules/_ctypes/libffi/src/powerpc/aix.S
 
-obuild/temp.aix-5.3-2.5/gpfs/m2/home/wx22gt/tools/src/Python-2.5-cc/Modules/_ctypes/libffi/src/powerpc/aix.o
Assembler:
line 1: 1252-162 Invalid -m flag assembly mode operand: PWR5X.
 Valid values are:COM PWR PWR2 PPC 601 603 604 PPC64 620 A35 
PWR4 PWR5 970 PPC970 PWR6 ANY

export OBJECT_MODE=64
cd $TOOLS/src/Python-2.5-cc
./configure --prefix=$TOOLS/Python-2.5-cc --with-gcc="xlc_r -q64" 
--with-cxx="xlC_r -q64" --disable-ipv6 AR="ar -X64"

George
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Compiling python2.5 on IBM AIX

2007-07-17 Thread George Trojan
[EMAIL PROTECTED] wrote:
>> I haven't compiled it myself, but I'm told that the installation I
>> work with was compiled with:
>>
>> export PATH=$PATH:/usr/vacpp/bin:/usr/vacpp/lib
>> ./configure --with-gcc="xlc_r -q64" --with-cxx="xlC_r -q64" --disable-
>> ipv6 AR="ar -X64"
>> make
>> make install
> 
> I've tried with the followong configuration :
> --
> export PATH=$PATH:/usr/vacpp/bin:/usr/vacpp/lib
> ./configure --prefix=${BASE} --with-gcc="xlc_r -q64" --with-cxx="xlC_r
> -q64" --disable-ipv6 AR="ar -X64" LDFLAGS="-L\${BASE}/lib/" PPFLAGS="-I
> \${BASE}/include/"
> 
> make
> -
> 
> but it doesn't compile either :
> 
> make
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Modules/python.o ./Modules/python.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Modules/_typesmodule.o Modules/_typesmodule.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Parser/acceler.o Parser/acceler.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Parser/grammar1.o Parser/grammar1.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Parser/listnode.o Parser/listnode.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Parser/node.o Parser/node.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Parser/parser.o Parser/parser.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Parser/parsetok.o Parser/parsetok.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Parser/bitset.o Parser/bitset.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Parser/metagrammar.o Parser/metagrammar.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Parser/firstsets.o Parser/firstsets.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Parser/grammar.o Parser/grammar.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Parser/pgen.o Parser/pgen.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Parser/myreadline.o Parser/myreadline.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Parser/tokenizer.o Parser/tokenizer.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Objects/abstract.o Objects/abstract.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Objects/boolobject.o Objects/boolobject.c
> xlc_r -q64 -c  -DNDEBUG -O  -I. -I./Include   -DPy_BUILD_CORE -
> o Objects/bufferobject.o Objects/bufferobject.c
> "Objects/bufferobject.c", line 22.15: 1506-275 (S) Unexpected text ','
> encountered.
> make: 1254-004 The error code from the last command is 1.
> 
> any idea ?
> 
> thanks
> 
> --
> BL
> 
It works fine with my compiler:
 > what $(which xlc)
/usr/vac/bin/xlc:
 61  1.14  src/bos/usr/ccs/lib/libc/__threads_init.c, 
libcthrd, bos510 7/11/00 12:04:14

  Licensed Materials - Property of IBM.
  IBM XL C/C++ Enterprise Edition V8.0 for AIX
  (5724-M12)
  IBM XL C Enterprise Edition V8.0 for AIX
  (5724-M11)
  (C) Copyright IBM Corp. 1991, 2005 and by others.
  All Rights Reserved.
  US Government Users Restricted Rights -
  Use, duplication or disclosure restricted by
  GSA ADP Schedule Contract with IBM Corp.
  -
  Version: 08.00..0010
  Intermediate Language 060405.07
  Driver 060518a
  Date: Thu May 18 22:08:53 EDT 2006
  -

I suspect the trailing comma is the issue. Googling for "xlc enumerator 
trailing comma" gave me 
http://sources.redhat.com/ml/gdb/1999-q1/msg00136.html
which says "AIX 4.2.0.0 xlc gives an error for trailing commas in enum 
declarations".

George
-- 
http://mail.python.org/mailman/listinfo/python-list


setting environment from shell file in python

2007-05-29 Thread George Trojan
Is there a utility to parse a Bourne shell environment file, such as 
.bashrc, and set the environment variables from within a Python 
script? What I mean is an equivalent of the shell dot command.
I don't think it would be too difficult to write a shell parser, but if 
such a thing already exists...
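I am not aware of a stdlib utility for this, but a common workaround is to let the shell do the parsing: source the file in a subshell, dump the resulting environment, and read it back. A minimal POSIX-only sketch (the helper name `source` is mine, and `env -0` is a GNU coreutils extension):

```python
import os
import subprocess

def source(path, shell='sh'):
    """Run `. path` in a subshell, capture the resulting environment
    via `env -0` (GNU coreutils), and copy it into os.environ."""
    out = subprocess.check_output(
        [shell, '-c', '. "$1" >/dev/null 2>&1; env -0', shell, path])
    for entry in out.split(b'\0'):
        if b'=' in entry:
            key, _, value = entry.partition(b'=')
            os.environ[key.decode()] = value.decode()
```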

George
-- 
http://mail.python.org/mailman/listinfo/python-list


frange() question

2007-09-20 Thread George Trojan
A while ago I found somewhere the following implementation of frange():

import math

def frange(limit1, limit2=None, increment=1.):
    """
    Range function that accepts floats (and integers).
    Usage:
    frange(-2, 2, 0.1)
    frange(10)
    frange(10, increment=0.5)
    The returned value is an iterator.  Use list(frange(...)) for a list.
    """
    if limit2 is None:
        limit2, limit1 = limit1, 0.
    else:
        limit1 = float(limit1)
    count = int(math.ceil((limit2 - limit1) / increment))
    return (limit1 + n * increment for n in range(count))

I am puzzled by the parentheses in the last line. Somehow they make 
frange to be a generator:
 >> print type(frange(1.0, increment=0.5))

But I always thought that generators need a keyword "yield". What is 
going on here?

George
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Dialog with a process via subprocess.Popen blocks forever

2007-03-01 Thread George Trojan
[EMAIL PROTECTED] wrote:
> Okay, here is what I want to do:
> 
> I have a C Program that I have the source for and want to hook with
> python into that. What I want to do is: run the C program as a
> subprocess.
> The C programm gets its "commands" from its stdin and sends its state
> to stdout. Thus I have some kind of dialog over stdin.
> 
> So, once I start the C Program from the shell, I immediately get its
> output in my terminal. If I start it from a subprocess in python and
> use python's sys.stdin/sys.stdout as the subprocess' stdout/stdin I
> also get it immediately.
> 
> BUT If I use PIPE for both (so I can .write() on the stdin and .read()
> from the subprocess' stdout stream (better: file descriptor)) reading
> from the subprocess stdout blocks forever. If I write something onto
> the subprocess' stdin that causes it to somehow proceed, I can read
> from its stdout.
> 
> Thus a useful dialogue is not possible.
> 
> Regards,
> -Justin
> 
> 
> 
Have you considered using pexpect: http://pexpect.sourceforge.net/ ?
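For context on why pexpect helps here: it runs the child on a pseudo-terminal, so the child's C library line-buffers stdout instead of block-buffering it as it does on a pipe -- which is what causes the blocking read described above. A bare-bones POSIX illustration with the stdlib pty module (the helper below is hypothetical, not pexpect's API):

```python
import os
import pty

def run_on_pty(argv):
    """Run argv with its stdio on a pseudo-terminal and return the
    first chunk of output."""
    pid, fd = pty.fork()
    if pid == 0:                      # child: exec with the pty as stdio
        os.execvp(argv[0], argv)
    try:
        data = os.read(fd, 1024)      # child line-buffers on a tty
    finally:
        os.close(fd)
        os.waitpid(pid, 0)
    return data

print(run_on_pty(['echo', 'hello']))
```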

George
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pexpect and telnet not communicating properly

2009-01-27 Thread George Trojan

David Anderson wrote:

I am trying to automate the following session - to talk to my router:

===
telnet speedtouch
Trying 192.168.1.254...
Connected to speedtouch.
Escape character is '^]'.
Username : Administrator
Password :


 __  SpeedTouch 780
 ___/_/\
/ /\  6.1.7.2
  _/__   /  \
_/   /\_/___ \  Copyright (c) 1999-2006, THOMSON
   //   /  \   /\ \
   ___//___/\ / _\/__
  /  / \   \// //\
   __/  /   \   \  // // _\__
  / /  / \___\// // /   /\
 /_/__/___/ // /___/  \
 \ \  \___\ \\ \   \  /
  \_\  \  /  /\\ \\ \___\/
 \  \/  /  \\ \\  /
  \_/  /\\ \\/
   /__/  \\  /
   \   _  \  /_\/
\ //\  \/___\/
 //  \  \  /
 \\  /___\/
  \\/


_{Administrator}=>?
Following commands are available :

help : Displays this help information
menu : Displays menu
?: Displays this help information
exit : Exits this shell.
..   : Exits group selection.
saveall  : Saves current configuration.
ping : Send ICMP ECHO_REQUEST packets.
traceroute   : Send ICMP/UDP packets to trace the ip path.

Following command groups are available :

firewallservice autopvc connection  cwmp
dhcpdns dsd dyndns  eth
adslatm config  debug   env
exprgrp hostmgr ids igmp
interface   ip  ipqos   label   language
mbusmemmmlp nat ppp
pptpscript  snmpsntpsoftware
system  systemlog   upgrade upnpuser
voice   wireless

{Administrator}=>exit



I am using the following code:

#!/usr/bin/env python

import pexpect
import sys

child = pexpect.spawn('telnet 192.168.1.254')
fout = file('mylog.txt','w')
child.logfile = fout

child.expect('sername : ')
child.sendline('Administrator')
child.expect('assword : ')
child.sendline('')


child.expect('_{Administrator}=>')
child.sendline('?')
child.expect('_{Administrator}=>')
child.expect('exit')



This times out after child.sendline('Administrator')

mylog.txt contains:
Trying 192.168.1.254...

Connected to 192.168.1.254.

Escape character is '^]'.

Username : Administrator
Administrator


Can anyone help me in finding out what I am doing wrong?

Regards
David


To debug, add lines

print child.before
print child.after

after each child.expect(). Also, add timeout=2 to the argument list of 
child.expect().


I wrote a thin wrapper that serves me well, at least till now.


class Connection(object):
'''Establishes connection to Cisco modem server.
A wrapper around fdexpect spawn.
'''
def __init__(self, host, port, user, passwd, **kwds):
self.pipe = None
self.socket = None
self.host = host
self.port = port
self.user = user
self.passwd = passwd
self.logger = kwds.get('logger')
self._last = ''

def __getattr__(self, name):
if name not in ['open', 'close', 'send', 'sendline', 'expect']:
return getattr(self.pipe, name)

def open(self):
self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.socket.connect((self.host, self.port))
self.pipe = pexpect.fdspawn(self.socket)
self.expect('Username:', timeout=2)
self.sendline(self.user)
self.expect('Password:', timeout=1)
self.sendline(self.passwd)
self.send('ATZ\r')
self.expect('OK', timeout=1)

def send(self, s):
self._last = s
return self.pipe.send(s)

def sendcr(self, s):
self._last = s
return self.pipe.send(s+'\r')

def sendline(self, s):
self._last = s
return self.pipe.sendline(s)

def expect(self, pattern, **kwds):
rc = self.pipe.expect(pattern, **kwds)
if self.logger:
self.logger.debug('sent "%s", received\n\t1. before "%s"\n\t' \
'2. match "%s"\n\t3. after "%s"\n', self._last,
self.before, self.match.group(0), self.after)
return rc

def close(self):
self.pipe.close()
self.pipe = None
self.socket.close()
 

supervisor 3.0a6 and Python2.6

2009-03-17 Thread George Trojan
Supervisor does not work with Python2.6. While running with the test 
configuration, supervisord prints traceback:

2009-03-17 15:12:31,927 CRIT Traceback (most recent call last):
  File 
"/usr/local/Python-2.6/lib/python2.6/site-packages/supervisor-3.0a6-py2.6.egg/supervisor/xmlrpc.py", 
line 367, in continue_request

pushproducer(DeferredXMLRPCResponse(request, value))
  File "/usr/local/Python-2.6/lib/python2.6/asynchat.py", line 190, in 
push_with_producer

self.initiate_send()
  File "/usr/local/Python-2.6/lib/python2.6/asynchat.py", line 227, in 
initiate_send

data = first.more()
AttributeError: class NOT_DONE_YET has no attribute 'more'

The last entry on http://supervisord.org/news/ is exactly one year ago, 
obviously it predates Python2.6. Two questions:

1. Is supervisor still developed?
2. Is there a quick fix to the above behaviour? If not, what would be a 
reasonable alternative to supervisor? I know of upstart and daemontools.


George
--
http://mail.python.org/mailman/listinfo/python-list


Re: supervisor 3.0a6 and Python2.6

2009-03-19 Thread George Trojan

Raymond Cote wrote:

George Trojan wrote:

1. Is supervisor still developed?
I note that, although the information on the site is pretty old, there 
have been some respository checkins in Feb and March of this year:

   <http://lists.supervisord.org/pipermail/supervisor-checkins/>
-r


I found answers to both questions (the other one was: Is there a quick 
fix to the above behaviour?) - accidentally the same day, on March 17 
there was a post on supervisor-users group: 
http://lists.supervisord.org/pipermail/supervisor-users/2009-March/000311.html. 
The answer by one of supervisor's developer led me to an interesting 
thread on python-dev: asyncore fixes in Python 2.6 broke Zope's version 
of medusa dated March 4: 
http://mail.python.org/pipermail/python-dev/2009-March/thread.html#86739


George
--
http://mail.python.org/mailman/listinfo/python-list


Re: iterating "by twos"

2008-07-29 Thread George Trojan

kj wrote:

Is there a special pythonic idiom for iterating over a list (or
tuple) two elements at a time?

I mean, other than

for i in range(0, len(a), 2):
frobnicate(a[i], a[i+1])

?

I think I once saw something like

for (x, y) in forgotten_expression_using(a):
frobnicate(x, y)

Or maybe I just dreamt it!  :)

Thanks!
I saw the same thing, forgot where though. But I put it in my library. 
Here it is:


# x.py
import itertools

def pairs(seq):
is1 = itertools.islice(iter(seq), 0, None, 2)
is2 = itertools.islice(iter(seq), 1, None, 2)
return itertools.izip(is1, is2)

s = range(9)
for x, y in pairs(s):
print x, y

[EMAIL PROTECTED]> python x.py
0 1
2 3
4 5
6 7
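In Python 3, itertools.izip became the builtin zip, and the same trick can be spelled even shorter by zipping one iterator with itself (both islices above walk the same underlying sequence anyway):

```python
def pairs(seq):
    # zip pulls alternately from the *same* iterator, so consecutive
    # elements pair up; a trailing odd element is silently dropped.
    it = iter(seq)
    return zip(it, it)

assert list(pairs(range(9))) == [(0, 1), (2, 3), (4, 5), (6, 7)]
```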
--
http://mail.python.org/mailman/listinfo/python-list


Re: tkinter: Round Button - Any idea?

2008-08-25 Thread George Trojan

akineko wrote:

Hello everyone,

I'm trying to implement a virtual instrument, which has buttons and
displays, using Tkinter+Pmw.
One of items on the virtual instrument is a round button.
This is imitating a tact switch.

Tkinter has a Button class, which I can assign a button image.
However, when the button is depressed, the rectangle is sunken (rather
than the round button).

Probably I need to prepare two images, normal one and depressed one,
and implement all button behaviours by myself.
But before reinventing everything by myself, I'm wondering if there is
an easy way to implement or there is existing widget example of such
button.
So far, I couldn't find any good examples.

Any suggestions are highly appreciated.

Best regards,
Aki Niimura


Try 
http://search.cpan.org/~darnold/Tk-StyledButton-0.10/lib/Tk/StyledButton.pod. 
This is Perl, but it might be worthwhile to take a look at implementation.


George
--
http://mail.python.org/mailman/listinfo/python-list


asyncore based port splitter code questions

2010-01-04 Thread George Trojan
The following code is an attempt at a port splitter: I want to forward data 
coming in on a tcp connection to several host/port addresses. It sort of 
works, but I am not happy with it. asyncore based code is supposed to be 
simple, but I need while loops and a lot of try/except clauses. Also, I 
had to add suspend/activate_channel methods in the Writer class that use 
variables with leading underscores. Otherwise the handle_write() method 
is called in a tight loop. I designed the code by looking at Python 2.3 
source for asyncore and originally wanted to use add_channel() and 
del_channel() methods. However in Python 2.6 del_channel() closes the 
socket in addition to deleting it from the map. I do not want to have 
one connection per message, the traffic may be high and there are no 
message delimiters. The purpose of this exercise is to split incoming 
operational data so I can test a new version of software.
Comments please - I have cognitive dissonance about the code, my little 
yellow rubber duck is of no help here.

The code is run as:

python2.6 afwdport.py 50002 localhost 50003 catbert 50001

where 50002 is the localhost incoming data port, (localhost, 50003) and 
(catbert, 50001) are destinations.


George

import asyncore, os, socket, sys, time

TMOUT = 10

#--
def log_msg(msg):
print >> sys.stderr, '%s: %s' % (time.ctime(), msg)

#--
class Reader(asyncore.dispatcher):
def __init__(self, sock, writers):
asyncore.dispatcher.__init__(self, sock)
self.writers = writers

def handle_read(self):
data = self.recv(1024)
for writer in self.writers:
writer.add_data(data)

def handle_expt(self):
self.handle_close()

def handle_close(self):
log_msg('closing reader connection')
self.close()

def writable(self):
return False

#--
class Writer(asyncore.dispatcher):
def __init__(self, address):
asyncore.dispatcher.__init__(self)
self.address = address
self.data = ''
self.mksocket()

def suspend_channel(self, map=None):
fd = self._fileno
if map is None:
map = self._map
if fd in map:
del map[fd]

def activate_channel(self):
if self._fileno not in self._map:
self._map[self._fileno] = self

def mksocket(self):
self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
self.set_reuse_addr()
self.connect(self.address)
log_msg('connected to %s' % str(self.address))

def add_data(self, data):
self.data += data
self.activate_channel()

def handle_write(self):
while self.data:
log_msg('sending data to %s' % str(self.address))
sent = self.send(self.data)
self.data = self.data[sent:]
self.suspend_channel()

def handle_expt(self):
err = self.socket.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
log_msg(asyncore._strerror(err))
self.handle_close()

def handle_close(self):
log_msg('closing writer connection')
self.close()
# try to reconnect
time.sleep(TMOUT)
self.mksocket()

def readable(self):
return False

#--
class Dispatcher(asyncore.dispatcher):
def __init__(self, port, destinations):
asyncore.dispatcher.__init__(self)
self.address = socket.gethostbyname(socket.gethostname()), port
self.writers = [Writer(_) for _ in destinations]
self.reader = None
self.handle_connect()

def handle_connect(self):
self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
self.bind(self.address)
self.listen(1)
log_msg('listening on %s' % str(self.address))

def handle_accept(self):
conn, addr = self.accept()
log_msg('connection from %s' % str(addr))
# current read connection not closed for some reason
if self.reader:
self.reader.close()
self.reader = Reader(conn, self.writers)

def cleanup(self):
try:
if self.reader:
self.reader.close()
except socket.error, e:
log_msg('error closing reader connection %s' % e)
# writer might have unwatched connections
for w in self.writers:
try:
w.close()
except socket.error, e:
log_msg('error closing writer connection %s' % e)

#--
def main(port, destinations):
disp = None
try:
# asyncore.loop() exits when input connection closes
while True:
try:
disp = Dispatcher(port, de

Re: asyncore based port splitter code questions

2010-01-08 Thread George Trojan

Thanks for your help. Some comments below.

George

Giampaolo Rodola' wrote:


On 4 Gen, 18:58, George Trojan  wrote:


Secondly, to temporarily "sleep" your connections *don't* remove
anything from your map.
The correct way of doing things here is to override readable() and
writable() methods and make them return False as long as you want your
connection to hang.


Good advice.


Now I'm going to comment some parts of your code.


class Reader(asyncore.dispatcher):
 def __init__(self, sock, writers):
 asyncore.dispatcher.__init__(self, sock)
 self.writers = writers

 def handle_read(self):
 data = self.recv(1024)
 for writer in self.writers:
 writer.add_data(data)

[...]

 def handle_write(self):
 while self.data:
 log_msg('sending data to %s' % str(self.address))
 sent = self.send(self.data)
 self.data = self.data[sent:]
 self.suspend_channel()



By looking at how you are appending data you want to send in a buffer,
it looks like you might want to use asynchat.async_chat rather than
asyncore.dispatcher.
async_chat.push() and async_chat.push_with_producer() methods already
take care of buffers logic and make sure that all the data gets sent
to the other peer without going lost.

Actually there's no reason to use asyncore.dispatcher class directly
except for creating a socket which listens on an interface and then
passes the connection to another class inheriting from
asynchat.async_chat which will actually handle that session.


My understanding is that asynchat is used for bi-directional connection, 
I don't see how it applies to my case (forwarding data). However I 
rewrote the Writer class following some of asynchat code.


So my recommendation is to use asynchat.async_chat whenever possible.



You really don't want to use time.sleep() there.
It blocks everything.


 while True:
 try:
 disp = Dispatcher(port, destinations)
asyncore.loop(timeout=TMOUT, use_poll=True)
 except socket.error, (errno, e):
 if errno == 98:
 log_msg('sleeping %d s: %s', (30, e))
 time.sleep(30)


Same as above.

I wanted to reconnect after the os cleans up half-closed sockets. 
Otherwise the program exits immediately with a message:

terminating - uncaught exception: [Errno 98] Address already in use



As a final note I would recommend to take a look at pyftpdlib code
which uses asyncore/asynchat as part of its core:
http://code.google.com/p/pyftpdlib
It can be of some help to figure out how things should be done.


Thanks for good example to study.



--- Giampaolo
http://code.google.com/p/pyftpdlib/

--
http://mail.python.org/mailman/listinfo/python-list


html code generation

2010-01-20 Thread George Trojan
I need an advice on table generation. The table is essentially a fifo, 
containing about 200 rows. The rows are inserted every few minutes or 
so. The simplest solution is to store row data per line and write 
directly html code:

line = '<tr><td>value1</td><td>value2</td> ... </tr>'
each run of the program would read the previous table into a list of 
lines, insert the first row and drop the last one, taking care of table 
header and trailer.

Is there a more classy solution?
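One tidier approach (a sketch, not the only way): keep the rows as data in a collections.deque with maxlen, so the fifo behaviour is automatic, and generate the HTML in a single place instead of re-parsing the previous file:

```python
from collections import deque

rows = deque(maxlen=200)             # old rows fall off the far end
rows.appendleft(('value1', 'value2'))

def render(rows):
    # Build the table body from data; header/trailer live in one spot.
    body = '\n'.join(
        '<tr>%s</tr>' % ''.join('<td>%s</td>' % c for c in row)
        for row in rows)
    return '<table>\n%s\n</table>' % body

html = render(rows)
assert '<td>value1</td>' in html
```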

George
--
http://mail.python.org/mailman/listinfo/python-list


site.py confusion

2010-01-25 Thread George Trojan
Inspired by the 'Default path for files' thread I tried to use 
sitecustomize in my code. What puzzles me is that the site.py's main() 
is not executed. My sitecustomize.py is

def main():
    print 'In Main()'
main()
and the test program is
import site
#site.main()
print 'Hi'
The output is
$ python try.py
Hi
When I uncomment the site.main() line the output is
$ python try.py
In Main()
Hi
If I change import site to import sitecustomize the output is as above. 
 What gives?
Adding to the confusion, I found 
http://code.activestate.com/recipes/552729/ which contradicts 
http://docs.python.org/library/site.html


George
--
http://mail.python.org/mailman/listinfo/python-list


Re: site.py confusion

2010-01-27 Thread George Trojan

Arnaud Delobelle wrote:

George Trojan  writes:


Inspired by the 'Default path for files' thread I tried to use
sitecustomize in my code. What puzzles me is that the site.py's main()
is not executed. My sitecustomize.py is
def main():
    print 'In Main()'
main()
and the test program is
import site
#site.main()
print 'Hi'
The output is
$ python try.py
Hi


That's normal as site.py is automatically imported on initialisation.
So when you do:

import site

the module is not re-executed as it already has been imported.
Try this:

 file: foo.py
print 'Foo'
 end

--- Interactive session
>>> import foo # First time, print statement executed
Foo
>>> import foo # Second time, print statement not executed
>>>


When I uncomment the site.main() line the output is
$ python try.py
In Main()
Hi


Now you explicitely call site.main(), so it executes it!


If I change import site to import sitecustomize the output is as
above. What gives?


It's normal, this time it's the first time you import it so its content
is executed.

HTH

I understand that importing a module repeatedly does nothing. Also, I 
made a typo in my previous posting - I meant sitecustomize.main(), not 
site.main(). My understanding of the code in site.py is that when the 
module is imported, main() is executed. main() calls execsitecustomize() 
that attempts to import sitecustomize. That action should trigger 
execution of code in sitecustomize.py, which is located in the current 
directory. But that does not work. I changed execsitecustomize() to


def execsitecustomize():
    """Run custom site specific code, if available."""
    try:
        import sitecustomize
    except ImportError:
        pass
        import sys
        print sys.path
        raise

That gave me the explanation why the above happens: when site is 
imported, the current directory is not yet prepended to sys.path.


$ python2.6 -v try.py
...
 ['/usr/local/Python-2.6.3/lib/python26.zip', 
'/usr/local/Python-2.6.3/lib/python2.6', 
'/usr/local/Python-2.6.3/lib/python2.6/plat-linux2', 
'/usr/local/Python-2.6.3/lib/python2.6/lib-tk', 
'/usr/local/Python-2.6.3/lib/python2.6/lib-old', 
'/usr/local/Python-2.6.3/lib/python2.6/lib-dynload', 
'/usr/local/Python-2.6.3/lib/python2.6/site-packages']

'import site' failed; traceback:
Traceback (most recent call last):
  File "/usr/local/Python-2.6.3/lib/python2.6/site.py", line 516, in <module>
    main()
  File "/usr/local/Python-2.6.3/lib/python2.6/site.py", line 507, in main
    execsitecustomize()
  File "/usr/local/Python-2.6.3/lib/python2.6/site.py", line 472, in execsitecustomize
    import sitecustomize
...

This also explains the recipe http://code.activestate.com/recipes/552729/

I wanted to have library location specific to application without having 
to explicitly change sys.path in all App-Top-Dir/bin/*.py. I thought 
creating bin/sitecustomize.py would do the trick.
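A workaround sketch for the original goal (the names are illustrative): a small bootstrap at the top of each App-Top-Dir/bin script, putting the script's own directory on sys.path explicitly instead of relying on an automatic sitecustomize import:

```python
import os
import sys

# Put the directory of the running script first on sys.path, so
# sibling library modules are importable without sitecustomize.
app_dir = os.path.abspath(os.path.dirname(sys.argv[0]) or os.curdir)
if app_dir not in sys.path:
    sys.path.insert(0, app_dir)
```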


I guess the documentation should mention this fact. The comment in 
recipe 552729 is:


Since Python 2.5 the automatic import of the module "sitecustomize.py" 
in the directory of the main program is not supported any more (even if 
the documentation says that it is).


George
--
http://mail.python.org/mailman/listinfo/python-list


numpy f2py question

2009-10-02 Thread George Trojan
I have a problem with numpy's vectorize class and f2py wrapped old 
FORTRAN code. I found that the function _get_nargs() in 
site-packages/numpy/lib/function_base.py tries to find the number of 
arguments for a function from an error message generated when the 
function is invoked with no arguments. However, with numpy 1.3.0 the 
error message is different that the code expects. Is there any known 
solution? I don';t know where the message is coming from, a cursory 
check of numpy files did not yield any hits. A ctypes issue?
I did find an  unanswered seven weeks old related posting 
http://article.gmane.org/gmane.comp.python.numeric.general/32168/match=f2py+number+arguments 
though I don't think this is gfortran related. Mine is 4.1.2.


George
--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy f2py question

2009-10-05 Thread George Trojan

sturlamolden wrote:

On 2 Okt, 22:41, George Trojan  wrote:

I have a problem with numpy's vectorize class and f2py wrapped old
FORTRAN code. I found that the function _get_nargs() in
site-packages/numpy/lib/function_base.py tries to find the number of
arguments for a function from an error message generated when the
function is invoked with no arguments. However, with numpy 1.3.0 the
error message is different that the code expects. Is there any known
solution? I don';t know where the message is coming from, a cursory
check of numpy files did not yield any hits. A ctypes issue?
I did find an unanswered seven weeks old related posting
http://article.gmane.org/gmane.comp.python.numeric.general/32168/matc...
though I don't think this is gfortran related. Mine is 4.1.2.


The wrappers generated by f2py just calls PyArg_ParseTupleAndKeywords
and return NULL if it fails. The error message from
PyArg_ParseTupleAndKeywords does not fit the regex used by _get_nargs:

re.compile(r'.*? takes exactly (?P<exargs>\d+) argument(s|) \((?P<gargs>\d+) given\)')

It should have said

re.compile(r'.*? takes (exactly|at least) (?P<exargs>\d+) argument(s|) \((?P<gargs>\d+) given\)')

It is not just an f2py issue. It affects any function that uses
keyword arguments. I'll file this as a bug on NumPy trac.

S.M.






That's not enough. f2py generated library ftest.so providing wrapper 
around Fortran function


integer function f2(a, b)
integer, intent(in) :: a, b

run from

try:
   ftest.f2()
except Exception, e:
   print e

produces traceback
Required argument 'a' (pos 1) not found
so the proposed fix will miss this case.

I tracked the issue to changes (from 2.5.2 to 2.6.2) of Python/getargs.c 
code. Still not sure what the proper fix should be.
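For illustration, the three error-message shapes side by side (the regex group names follow the proposed fix; the messages are the ones quoted in this thread):

```python
import re

old_pat = re.compile(r'.*? takes exactly (?P<exargs>\d+) argument(s|) '
                     r'\((?P<gargs>\d+) given\)')
new_pat = re.compile(r'.*? takes (exactly|at least) (?P<exargs>\d+) '
                     r'argument(s|) \((?P<gargs>\d+) given\)')

msg_old = 'f2() takes exactly 2 arguments (0 given)'
msg_kw = 'f2() takes at least 2 arguments (0 given)'
msg_pos = "Required argument 'a' (pos 1) not found"

assert old_pat.match(msg_old) and not old_pat.match(msg_kw)
assert new_pat.match(msg_kw)
assert not new_pat.match(msg_pos)   # the case the wider regex still misses
```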


George
--
http://mail.python.org/mailman/listinfo/python-list


a simple unicode question

2009-10-19 Thread George Trojan
A trivial one, this is the first time I have to deal with Unicode. I am 
trying to parse a string s='''48° 13' 16.80" N'''. I know the charset is 
"iso-8859-1". To get the degrees I did

>>> encoding='iso-8859-1'
>>> q=s.decode(encoding)
>>> q.split()
[u'48\xc2\xb0', u"13'", u'16.80"', u'N']
>>> r=q.split()[0]
>>> int(r[:r.find(unichr(ord('\xc2')))])
48

Is there a better way of getting the degrees?

George
--
http://mail.python.org/mailman/listinfo/python-list


Re: a simple unicode question

2009-10-20 Thread George Trojan
Thanks for all suggestions. It took me a while to find out how to 
configure my keyboard to be able to type the degree sign. I prefer to 
stick with pure ASCII if possible.
Where are the literals (i.e. u'\N{DEGREE SIGN}') defined? I found 
http://www.unicode.org/Public/5.1.0/ucd/UnicodeData.txt

Is that the place to look?

George

Scott David Daniels wrote:

Mark Tolonen wrote:

Is there a better way of getting the degrees?


It seems your string is UTF-8.  \xc2\xb0 is UTF-8 for DEGREE SIGN.  If 
you type non-ASCII characters in source code, make sure to declare the 
encoding the file is *actually* saved in:


# coding: utf-8

s = '''48° 13' 16.80" N'''
q = s.decode('utf-8')

# next line equivalent to previous two
q = u'''48° 13' 16.80" N'''

# couple ways to find the degrees
print int(q[:q.find(u'°')])
import re
print re.search(ur'(\d+)°',q).group(1)



Mark is right about the source, but you needn't write unicode source
to process unicode data.  Since nobody else mentioned my favorite way
of writing unicode in ASCII, try:

IDLE 2.6.3
 >>> s = '''48\xc2\xb0 13' 16.80" N'''
 >>> q = s.decode('utf-8')
 >>> degrees, rest = q.split(u'\N{DEGREE SIGN}')
 >>> print degrees
48
 >>> print rest
 13' 16.80" N

And if you are unsure of the name to use:
 >>> import unicodedata
 >>> unicodedata.name(u'\xb0')
'DEGREE SIGN'

--Scott David Daniels
scott.dani...@acm.org
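Putting the suggestions above together on Python 3, where str is already unicode (a sketch; the pattern assumes the exact degrees/minutes/seconds layout shown in the thread):

```python
import re

s = '48\xb0 13\' 16.80" N'          # \xb0 is the degree sign
m = re.match(r'(\d+)\xb0 (\d+)\' ([\d.]+)" ([NSEW])', s)
deg, minutes, seconds, hemi = m.groups()
assert (int(deg), int(minutes), float(seconds), hemi) == (48, 13, 16.8, 'N')
```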

--
http://mail.python.org/mailman/listinfo/python-list


Re: errno 107 socket.recv issue

2010-02-09 Thread George Trojan

Argument mismatch?

Jordan Apgar wrote:


servernegotiator = ServerNegotiator(host,HostID, port, key)

class ServerNegotiator:
def __init__(self, host, port, hostid, rsa_key, buf = 512):


--
http://mail.python.org/mailman/listinfo/python-list


a question on building MySQL-python

2010-02-19 Thread George Trojan
During installation of MySQL-python-1.2.3c1 I encountered the following 
error:


$ python2.6 setup.py build
running build
running build_py
copying MySQLdb/release.py -> build/lib.linux-x86_64-2.6/MySQLdb
running build_ext
building '_mysql' extension
creating build/temp.linux-x86_64-2.6
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall 
-Wstrict-prototypes -fPIC -Dversion_info=(1,2,3,'gamma',1) 
-D__version__=1.2.3c1 -I/usr/include/mysql 
-I/usr/local/Python-2.6.3/include/python2.6 -c _mysql.c -o 
build/temp.linux-x86_64-2.6/_mysql.o -g -pipe -Wp,-D_FORTIFY_SOURCE=2 
-fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 
-D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE 
-fno-strict-aliasing -fwrapv
gcc -pthread -shared build/temp.linux-x86_64-2.6/_mysql.o 
-L/usr/lib64/mysql -L/usr/lib64 -L. -lmysqlclient_r -lz -lpthread 
-lcrypt -lnsl -lm -lpthread -lssl -lcrypto -lpython2.6 -o 
build/lib.linux-x86_64-2.6/_mysql.so

/usr/bin/ld: cannot find -lpython2.6
collect2: ld returned 1 exit status
error: command 'gcc' failed with exit status 1

Linker could not find libpython2.6.so. Note that the compiler *did* find 
Python include file: -I/usr/local/Python-2.6.3/include/python2.6.

I am running CentOS5.3. Python 2.6 was configured as follows:

$ TARGET=/usr/local/Python-2.6.3
$ export LDFLAGS=-Wl,-rpath,$TARGET/lib
$ ./configure --prefix=$TARGET \
--with-cxx=g++ --with-threads --enable-shared

to avoid messing with LD_LIBRARY_PATH.
I managed to complete the installation by pasting the above link command 
and adding the proper -L option, but I would like to know what the 
proper fix would be.
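For reference, Python itself can report where its shared library was installed; a small sketch (on modern Pythons via sysconfig, on 2.6 the same variables live in distutils.sysconfig). Passing that directory to the linker with -L, or putting it in LDFLAGS, is the usual fix when -lpythonX.Y cannot be found:

```python
import sysconfig  # on Python 2.6: from distutils import sysconfig

# LIBDIR is where 'make install' put libpython; LDLIBRARY is its file name.
libdir = sysconfig.get_config_var('LIBDIR')
ldlib = sysconfig.get_config_var('LDLIBRARY')
print(libdir, ldlib)
```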


George
--
http://mail.python.org/mailman/listinfo/python-list


parmiko problem

2010-07-19 Thread George Trojan
I have a problem connecting to a host without specifying a password 
(but with ssh keys configured correctly). That is


[tina src]$ sftp alice
Connecting to alice...
sftp>

works, but the code

import paramiko
paramiko.util.log_to_file('/tmp/paramiko')
t = paramiko.Transport(('alice', 22))
t.connect(username='gtrojan') # , password='a-passwd'])
sftp = paramiko.SFTPClient.from_transport(t)
sftp.close()
t.close()

results in the following output in /tmp/paramiko:

DEB [20100719-19:58:22.497] thr=1   paramiko.transport: starting thread 
(client mode): 0xb81e1150L
INF [20100719-19:58:22.501] thr=1   paramiko.transport: Connected 
(version 2.0, client OpenSSH_4.3)
DEB [20100719-19:58:22.502] thr=1   paramiko.transport: kex 
algos:['diffie-hellman-group-exchange-sha1', 
'diffie-hellman-group14-sha1', 'diffie-hellman-group1-sha1'] server 
key:['ssh-rsa', 'ssh-dss'] client encrypt:['aes128-cbc', '3des-cbc', 
'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 
'aes192-cbc', 'aes256-cbc', 'rijndael-...@lysator.liu.se', 'aes128-ctr', 
'aes192-ctr', 'aes256-ctr'] server encrypt:['aes128-cbc', '3des-cbc', 
'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 
'aes192-cbc', 'aes256-cbc', 'rijndael-...@lysator.liu.se', 'aes128-ctr', 
'aes192-ctr', 'aes256-ctr'] client mac:['hmac-md5', 'hmac-sha1', 
'hmac-ripemd160', 'hmac-ripemd...@openssh.com', 'hmac-sha1-96', 
'hmac-md5-96'] server mac:['hmac-md5', 'hmac-sha1', 'hmac-ripemd160', 
'hmac-ripemd...@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] client 
compress:['none', 'z...@openssh.com'] server compress:['none', 
'z...@openssh.com'] client lang:[''] server lang:[''] kex follows?False
DEB [20100719-19:58:22.502] thr=1   paramiko.transport: Ciphers agreed: 
local=aes128-ctr, remote=aes128-ctr
DEB [20100719-19:58:22.502] thr=1   paramiko.transport: using kex 
diffie-hellman-group1-sha1; server key type ssh-rsa; cipher: local 
aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; 
compression: local none, remote none
DEB [20100719-19:58:22.571] thr=1   paramiko.transport: Switch to new 
keys ...
DEB [20100719-19:58:22.578] thr=2   paramiko.transport: [chan 1] Max 
packet in: 34816 bytes
WAR [20100719-19:58:22.611] thr=1   paramiko.transport: Oops, unhandled 
type 3
DEB [20100719-20:00:22.502] thr=1   paramiko.transport: EOF in transport 
thread


and a traceback in the terminal:

Traceback (most recent call last):
  File "./try.py", line 18, in <module>
sftp = paramiko.SFTPClient.from_transport(t)
  File "build/bdist.linux-x86_64/egg/paramiko/sftp_client.py", line 
102, in from_transport
  File "build/bdist.linux-x86_64/egg/paramiko/transport.py", line 655, 
in open_session
  File "build/bdist.linux-x86_64/egg/paramiko/transport.py", line 745, 
in open_channel

EOFError

When I remove the comment on the connect() line and specify the password 
the code works fine. Is this a bug, or am I missing something? I am 
running Python 2.6.3 on CentOS 5.4.


George
--
http://mail.python.org/mailman/listinfo/python-list


Re: parmiko problem

2010-07-19 Thread George Trojan

George Trojan wrote:
> I have a problem connecting to a host without specifying a password
> (but with ssh keys configured correctly). [...]
> When I remove the comment on the connect() line and specify the password
> the code works fine. Is this a bug, or am I missing something? I am
> running Python 2.6.3 on CentOS 5.4.
>
> George


Thanks for listening ;-) Found my problem - I had to pass the private key to 
connect():


paramiko.util.log_to_file('/tmp/paramiko')
t = paramiko.Transport(('alice', 22))
path = os.path.join(os.environ['HOME'], '.ssh', 'id_dsa')
key = paramiko.DSSKey.from_private_key_file(path)
t.connect(username='gtrojan', pkey=key)
sftp = paramiko.SFTPClient.from_transport(t)
print sftp.listdir()
sftp.close()
t.close()
--
http://mail.python.org/mailman/listinfo/python-list


Splitting text into lines

2016-12-13 Thread George Trojan - NOAA Federal
I have files containing ASCII text with lines separated by '\r\r\n'.
Example:

$ od -c FTAK31_PANC_131140.1481629265635
000   F   T   A   K   3   1   P   A   N   C   1   3   1   1
020   4   0  \r  \r  \n   T   A   F   A   B   E  \r  \r  \n   T   A
040   F  \r  \r  \n   P   A   B   E   1   3   1   1   4   0   Z
060   1   3   1   2   /   1   4   1   2   0   7   0   1   0
100   K   T   P   6   S   M   S   C   T   0   3   5   O
120   V   C   0   6   0  \r  \r  \n   F   M   1
140   3   2   1   0   0   1   0   0   1   2   G   2   0   K   T
160   P   6   S   M   B   K   N   1   0   0   W   S   0
200   1   5   /   1   8   0   3   5   K   T  \r  \r  \n
220   F   M   1   4   1   0   0   0   0   9   0   1   5
240   G   2   5   K   T   P   6   S   M   B   K   N   0   5
260   0   W   S   0   1   5   /   1   8   0   4   0   K   T   =
300  \r  \r  \n
303

What is the proper way of getting a list of lines?
Both
>>> open('FTAK31_PANC_131140.1481629265635').readlines()
['FTAK31 PANC 131140\n', '\n', 'TAFABE\n', '\n', 'TAF\n', '\n', 'PABE
131140Z 1312/1412 07010KT P6SM SCT035 OVC060\n', '\n', ' FM132100
10012G20KT P6SM BKN100 WS015/18035KT\n', '\n', ' FM141000 09015G25KT
P6SM BKN050 WS015/18040KT=\n', '\n']

and

>>> open('FTAK31_PANC_131140.1481629265635').read().splitlines()
['FTAK31 PANC 131140', '', 'TAFABE', '', 'TAF', '', 'PABE 131140Z 1312/1412
07010KT P6SM SCT035 OVC060', '', ' FM132100 10012G20KT P6SM BKN100
WS015/18035KT', '', ' FM141000 09015G25KT P6SM BKN050 WS015/18040KT=',
'']

introduce empty (or single character '\n') strings. I can do this:

>>> [x.rstrip() for x in open('FTAK31_PANC_131140.1481629265635',
'rb').read().decode().split('\n')]
['FTAK31 PANC 131140', 'TAFABE', 'TAF', 'PABE 131140Z 1312/1412 07010KT
P6SM SCT035 OVC060', ' FM132100 10012G20KT P6SM BKN100 WS015/18035KT',
' FM141000 09015G25KT P6SM BKN050 WS015/18040KT=', '']

but it looks cumbersome. In Python 2.x I stripped '\r' before passing the
string to split():

>>> open('FTAK31_PANC_131140.1481629265635').read().replace('\r', '')
'FTAK31 PANC 131140\nTAFABE\nTAF\nPABE 131140Z 1312/1412 07010KT P6SM
SCT035 OVC060\n FM132100 10012G20KT P6SM BKN100 WS015/18035KT\n
FM141000 09015G25KT P6SM BKN050 WS015/18040KT=\n'

but Python 3.x replaces '\r\r\n' by '\n\n' on read().

Ideally I'd like to have code that handles both '\r\r\n' and '\n' as the
split character.

George
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Splitting text into lines

2016-12-13 Thread George Trojan - NOAA Federal
>
> Are repeated newlines/carriage returns significant at all? What about
> just using re and just replacing any repeated instances of '\r' or '\n'
> with '\n'? I.e. something like
>  >>> # the_string is your file all read in
>  >>> import re
>  >>> re.sub("[\r\n]+", "\n", the_string)
> and then continuing as before (i.e. splitting by newlines, etc.)
> Does that work?
> Cheers,
> Thomas


The '\r\r\n' string is a line separator, though not used consistently in US
meteorological bulletins. I do not want to eliminate "real" empty lines.
I was hoping there is a way to prevent read() from making hidden changes to
the file content.

George
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Splitting text into lines

2016-12-13 Thread George Trojan - NOAA Federal
>
> Tell Python to keep the newline chars as seen with
>
> open(filename, newline="")
>
> For example:
>
> >>> open("odd-newlines.txt", "rb").read()
> b'alpha\nbeta\r\r\ngamma\r\r\ndelta\n'
> >>> open("odd-newlines.txt", "r", newline="").read().replace("\r",
> "").splitlines()
> ['alpha', 'beta', 'gamma', 'delta']


Thanks Peter. That's what I needed.
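For the archive, a self-contained version of that recipe, with the sample data written to a temporary file first so it can be run as-is:

```python
import os
import tempfile

# Sample data with the '\r\r\n' separators seen in US meteorological bulletins.
data = b"alpha\nbeta\r\r\ngamma\r\r\ndelta\n"
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)
    path = f.name

# newline='' turns off universal-newline translation, so read() keeps '\r'
# and the separators can be normalized by hand.
with open(path, newline="") as f:
    lines = f.read().replace("\r", "").splitlines()
os.unlink(path)

print(lines)   # ['alpha', 'beta', 'gamma', 'delta']
```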

George
-- 
https://mail.python.org/mailman/listinfo/python-list


Python3.6 tkinter bug?

2017-01-31 Thread George Trojan - NOAA Federal
The following program behaves differently under Python 3.6:

'''
checkbutton test
'''

import tkinter

class GUI(tkinter.Tk):
    def __init__(self):
        tkinter.Tk.__init__(self)
        frame = tkinter.Frame(self)
        for tag in ('A', 'B'):
            w = tkinter.Checkbutton(frame, text=tag)
            w.pack(side='top', padx=10, pady=10)
            print(w)
        frame.pack(side='top')
        frame = tkinter.Frame(self)
        for tag in ('C', 'D'):
            w = tkinter.Checkbutton(frame, text=tag)
            w.pack(side='top', padx=10, pady=10)
            print(w)
        frame.pack(side='top')

gui = GUI()
gui.mainloop()

Selection of button 'A' also selects button 'C'. Same goes for 'B' and 'D'.
I noticed that widget names have changed, which likely leads to the cause:

> /usr/local/Python-3.5.1/bin/python3 foo.py
.140182648425776.140182647743208
.140182648425776.140182647841848
.140182648424152.140182648282080
.140182648424152.140182648282136

> /usr/local/Python-3.6.0/bin/python3 foo.py
.!frame.!checkbutton
.!frame.!checkbutton2
.!frame2.!checkbutton
.!frame2.!checkbutton2

Is this a known issue?

George
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python3.6 tkinter bug

2017-01-31 Thread George Trojan - NOAA Federal
On 2017-01-31 18:02, MRAB wrote:
> On 2017-01-31 22:34, Christian Gollwitzer wrote:
>> On 31.01.17 at 20:18, George Trojan - NOAA Federal wrote:
>>> Selection of button 'A' also selects button 'C'. Same goes for 'B'
>>> and 'D'. I noticed that widget names have changed, which likely
>>> leads to the cause:
>>>
>>>> /usr/local/Python-3.5.1/bin/python3 foo.py
>>> .140182648425776.140182647743208
>>> .140182648425776.140182647841848
>>> .140182648424152.140182648282080
>>> .140182648424152.140182648282136
>>>
>>>> /usr/local/Python-3.6.0/bin/python3 foo.py
>>> .!frame.!checkbutton
>>> .!frame.!checkbutton2
>>> .!frame2.!checkbutton
>>> .!frame2.!checkbutton2
>>
>> The widget names look fine to me, and the 3.6 naming is way better,
>> because it can be used for debugging more easily. The behaviour you
>> describe can have two reasons: a) the same widget can be packed
>> twice, b) the widgets use the same variable. In Tk, a widget does not
>> need an associated variable - which can be done by setting the
>> variable to an empty string, or by leaving this option off.
>>
>> Presumably Python 3.6 passes anything else than an empty string to Tk
>> as the -variable option, maybe a mistranslated None? Can't test it
>> myself, but in your example you could, for instance, check the output
>> of self.call(w, 'configure'), which lists you the Tk side
>> configuration, or self.call(w, 'configure', '-variable') to get
>> specifically the bound variable.
>
> Perhaps someone who knows Tcl and tk can tell me, but I notice that in
> the first example, the second part of the widget names are unique,
> whereas in the second example, the second part of the widget names are
> reused (both "!checkbutton" and "!checkbutton2" occur twice). Is that
> the cause?
> Do the names need to be:
> .!frame.!checkbutton
> .!frame.!checkbutton2
> .!frame2.!checkbutton3
> .!frame2.!checkbutton4
> ?


Adding a dummy variable solves the issue. Following Christian's
suggestion I added code to print the Tk variable:

from functools import partial
import tkinter

class GUI(tkinter.Tk):
    def __init__(self):
        tkinter.Tk.__init__(self)
        frame = tkinter.Frame(self)
        for tag in ('A', 'B'):
            w = tkinter.Checkbutton(frame, text=tag, #
                                    variable=tkinter.IntVar(),
                                    command=partial(print, tag))
            w.pack(side='top', padx=10, pady=10)
            print(tag, self.call(w, 'configure', '-variable'))
            print(w)
        frame.pack(side='top')
        frame = tkinter.Frame(self)
        for tag in ('C', 'D'):
            w = tkinter.Checkbutton(frame, text=tag, #
                                    variable=tkinter.IntVar(),
                                    command=partial(print, tag))
            w.pack(side='top', padx=10, pady=10)
            print(tag, self.call(w, 'configure', '-variable'))
            print(w)
        frame.pack(side='top')

gui = GUI()
gui.mainloop()

The output is:

(venv-3.6.0) dilbert@gtrojan> python foo.py
A ('-variable', 'variable', 'Variable', '', )
.!frame.!checkbutton
B ('-variable', 'variable', 'Variable', '', )
.!frame.!checkbutton2
C ('-variable', 'variable', 'Variable', '', )
.!frame2.!checkbutton
D ('-variable', 'variable', 'Variable', '', )
.!frame2.!checkbutton2

It looks like the default name is the last component of the widget tree.

When the # sign is removed, the names are unique and the problem disappears:

(venv-3.6.0) dilbert@gtrojan> python foo.py
A ('-variable', 'variable', 'Variable', '', )
.!frame.!checkbutton
B ('-variable', 'variable', 'Variable', '', )
.!frame.!checkbutton2
C ('-variable', 'variable', 'Variable', '', )
.!frame2.!checkbutton
D ('-variable', 'variable', 'Variable', '', )
.!frame2.!checkbutton2
-- 
https://mail.python.org/mailman/listinfo/python-list


A tool to add diagrams to sphinx docs

2016-04-01 Thread George Trojan - NOAA Federal
What graphics editor would you recommend to create diagrams that can be
included in sphinx made documentation? In the past I used xfig, but was not
happy with font quality. My understanding is the diagrams would be saved in
a .png file and I should use an image directive in the relevant .rst file.

George
-- 
https://mail.python.org/mailman/listinfo/python-list


functools puzzle

2016-04-06 Thread George Trojan - NOAA Federal
Here is my test program:

''' generic test '''

import functools
import inspect

def f():
    '''I am f'''
    pass

g = functools.partial(f)
g.__doc__ = '''I am g'''
g.__name__ = 'g'

def partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        newkeywords = keywords.copy()
        newkeywords.update(fkeywords)
        return func(*(args + fargs), **newkeywords)
    newfunc.func = func
    newfunc.args = args
    newfunc.keywords = keywords
    return newfunc

h = partial(f)
functools.update_wrapper(h, f)
h.__doc__ = '''I am h'''
h.__name__ = 'h'

print(type(g), g.__name__, inspect.getdoc(g))
print(type(h), h.__name__, inspect.getdoc(h))
help(g)
help(h)

The output is what I expect:

(devenv-3.5.1) dilbert@gtrojan> python x.py
<class 'functools.partial'> g I am g
<class 'function'> h I am h

However the help screens for h and g are different. help(h) is again what I
expect:

Help on function h in module __main__:

h()
I am h

But help(g) is (almost) identical to help(functools.partial); the doc
string disappears:

Help on partial object:

g = class partial(builtins.object)
 |  partial(func, *args, **keywords) - new function with partial application
 |  of the given arguments and keywords.
...

The module functools has partial() defined as above, then overrides the
definition by importing partial from _functools. That would explain the
above behaviour. My question is why?

George
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: functools puzzle

2016-04-06 Thread George Trojan - NOAA Federal
True, but the pure Python and C implementations differ. Is that
intentional?  I find the current behaviour confusing. The doc states only
that partial object does not set the __doc__ attribute, not that the
attribute might be ignored. I had a peek at the pydoc module. It uses
inspect to determine the type of object in question through the
inspect.isXXX() functions. My h is a function, while g is not.

This is a follow-up on my previous problem with sphinx not recognizing the
doc string. I don't know whether this and sphinx issues are related. My
basic question is how to document functions created by functools.partial,
such that the documentation can be viewed not only by reading the code. Of
course, as the last resort, I could create my own implementation (i.e. copy
the pure Python code).
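A possible middle ground, sketched below (not a stdlib API, just a wrapper of my own): hide functools.partial inside a plain def, so the result satisfies inspect.isfunction() and pydoc renders the docstring like any other function's.

```python
import functools
import inspect

def documented_partial(func, *args, **keywords):
    """Like functools.partial, but returns a plain function so that
    pydoc/inspect treat it as one (and honour __doc__/__name__)."""
    inner = functools.partial(func, *args, **keywords)
    @functools.wraps(func)
    def newfunc(*fargs, **fkeywords):
        return inner(*fargs, **fkeywords)
    return newfunc

def f(x, y):
    '''I am f'''
    return x + y

g = documented_partial(f, 1)
g.__name__, g.__doc__ = 'g', 'I am g'
print(inspect.isfunction(g), g(2))   # True 3
```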

George

On Wed, Apr 6, 2016 at 6:39 PM, Michael Selik wrote:

>
> > On Apr 6, 2016, at 6:57 PM, George Trojan - NOAA Federal <
> george.tro...@noaa.gov> wrote:
> >
> > The module functools has partial() defined as above, then overrides the
> > definition by importing partial from _functools. That would explain the
> > above behaviour. My question is why?
>
> A couple speculations why an author might retain a vestigial Python
> implementation after re-implementing in C: to provide a backup in case the
> C fails to compile or to simply provide an easier-to-read example of what
> the C is doing.
>
>
-- 
https://mail.python.org/mailman/listinfo/python-list