Re: Python GUI toolkit

2008-02-04 Thread bockman

>
> Another toolkit you might look into is Tkinter. I think it is something
> like the "official" toolkit for python. I also think it is an adapter
> for other toolkits, so it will use gtk widgets on gnome, qt widgets on
> kde and some other strange widgets on windows.
>

Not so, AFAIK. Tkinter is the Python adapter for Tk, the toolkit
originally developed for the Tcl language.

The latest version of Tk (not yet integrated in Python, maybe in 2.6)
has themes, which emulate the look-and-feel of the native toolkit, at
least for XP and OS X. For Unix, the last time I checked, there was only
a theme that looked like a plainer version of the Gtk default theme. No
Gnome or KDE themes yet.

The latest version of Tk also increased the set of available widgets,
which is now similar to the set offered by Qt/Gtk.
However, how many of these widgets will be available through Tkinter
will depend on people stepping in and upgrading Tkinter beyond simply
ensuring that the old widgets still work. Given that many GUI-developing
Python programmers have moved to other toolkits, I'm not sure this will
ever happen.

Ciao

FB
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Exception on keypress

2008-02-14 Thread bockman
On 14 Feb, 14:27, Michael Goerz <[EMAIL PROTECTED]>
wrote:
> Hi,
>
> I'm writing a command line program that watches a file, and recompiles
> it when it changes. However, there should also be a possibility of doing
> a complete clean restart (cleaning up temp files, compiling some
> dependencies, etc.).
>
> Since the program is in an infinite loop, there are limited means of
> interacting with it. Right now, I'm using the Keyboard Interrupt: if the
> user presses CTRL+C once, a clean restart is done, if he presses it
> twice within a second, the program terminates. The stripped down code
> looks like this:
>
>      while True:
>          try:
>              time.sleep(1)
>              if watched_file_has_changed():
>                  compile_the_changed_file()
>          except KeyboardInterrupt: # user hits CTRL+C
>              try:
>                  print("Hit Ctrl+C again to quit")
>                  time.sleep(1)
>                  clean_restart()
>              except KeyboardInterrupt:
>                  do_some_cleanup()
>                  sys.exit(0)
>
> Is there another way of doing this? Ideally, there would be an exception
> every time any key at all is pressed while the code in the try block is
> being executed. That way, the user could just hit the 'R' key  to do a
> clean restart, and the 'E' key to exit the program. Is there any way to
> implement something like that?
>
> Right now, the CTRL+C solution works, but isn't very extensible (It
> wouldn't be easy to add another command, for example). Any suggestions?
>
> Thanks,
> Michael

I don't know any way to extend your solution. However, I would suggest
experimenting with the threading module. Threading in Python is quite
easy, as long as you stick with queues and events for communication
between threads.

Here is an example, where the main thread handles the console and the
background thread does the job. The assumption is that the background
thread can do the job in separate short steps, checking for new commands
between steps. This example uses events to signal commands to the
background thread and a queue to send synchronous messages from the
background thread to the main thread, to be displayed on the console. I
guess the example could be shorter if I used a command queue instead of
events, but I wanted to show how to use events.

The program works, but surely can be improved ...

Ciao

FB

#
# Example of program with two threads
# one of MMI, one background
#

import sys, traceback

import threading, time, Queue

class BackgroundThread(threading.Thread):
    TICK = 1.0

    def __init__(self, msg_queue):
        threading.Thread.__init__(self)
        self.reset_event = threading.Event()
        self.quit_event = threading.Event()
        self.msg_queue = msg_queue

    def do_job(self):
        pass # This should execute one short step at a time and return quickly

    def do_reset(self):
        pass

    def run(self):
        while not self.quit_event.isSet():
            self.do_job() # should be one short step only
            time.sleep(self.TICK)
            if self.reset_event.isSet():
                self.do_reset()
                self.reset_event.clear()
                self.msg_queue.put('Reset completed')


def main():
    msg_queue = Queue.Queue()
    print 'Starting background thread ...'
    b_thread = BackgroundThread(msg_queue)
    b_thread.start()
    while 1:
        print 'Type R to reset'
        print 'Type Q to quit'
        cmd = raw_input('Command=>')
        if cmd in ('R', 'r'):
            b_thread.reset_event.set()
            # wait for reset command completion
            print msg_queue.get()
        elif cmd in ('Q', 'q'):
            b_thread.quit_event.set()
            break

    print 'Waiting for the background thread to terminate ...'
    b_thread.join()
    print 'All done.'

if __name__ == '__main__':
    try:
        main()
        print 'Program completed normally.'
        raw_input('Type something to quit')
    except:
        err, detail, tb = sys.exc_info()
        print err, detail
        traceback.print_tb(tb)
        raw_input('Oops...')






-- 
http://mail.python.org/mailman/listinfo/python-list


Re: network programming: how does s.accept() work?

2008-02-25 Thread bockman
On 25 Feb, 09:51, 7stud <[EMAIL PROTECTED]> wrote:
> I have the following two identical clients
>
> #test1.py:---
> import socket
>
> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>
> host = 'localhost'
> port = 5052  #server port
>
> s.connect((host, port))
> print s.getsockname()
>
> response = []
> while 1:
>     piece = s.recv(1024)
>     if piece == '':
>         break
>
>     response.append(piece)
>
> #test3.py:
> import socket
>
> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>
> host = 'localhost'
> port = 5052  #server port
>
> s.connect((host, port))
> print s.getsockname()
>
> response = []
> while 1:
>     piece = s.recv(1024)
>     if piece == '':
>         break
>
>     response.append(piece)
>
> and this basic server:
>
> #test2.py:--
> import socket
>
> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>
> host = ''
> port = 5052
>
> s.bind((host, port))
> s.listen(5)
>
> while 1:
>     newsock, client_addr = s.accept()
>     print "orignal socket:", s.getsockname()
>
>     print "new socket:", newsock.getsockname()
>     print "new socket:", newsock.getpeername()
>     print
>
> I started the server, and then I started the clients one by one.  I
> expected both clients to hang since they don't get notified that the
> server is done sending data, and I expected the server output to show
> that accept() created two new sockets.  But this is the output I got
> from the server:
>
> original socket: ('0.0.0.0', 5052)
> new socket, self: ('127.0.0.1', 5052)
> new socket, peer: ('127.0.0.1', 50816)
>
> original socket: ('0.0.0.0', 5052)
> new socket, self: ('127.0.0.1', 5052)
> new socket, peer: ('127.0.0.1', 50818)
>
> The first client I started generated this output:
>
> ('127.0.0.1', 50816)
>
> And when I ran the second client, the first client disconnected, and
> the second client produced this output:
>
> ('127.0.0.1', 50818)
>
> and then the second client hung.  I expected the server output to be
> something like this:
>
> original socket: ('127.0.0.1', 5052)
> new socket, self: ('127.0.0.1', 5053)
> new socket, peer: ('127.0.0.1', 50816)
>
> original socket: ('0.0.0.0', 5052)
> new socket, self: ('127.0.0.1', 5054)
> new socket, peer: ('127.0.0.1', 50818)
>
> And I expected both clients to hang.  Can someone explain how accept()
> works?

I guess (but I did not try it) that the problem is not accept(), which
should work as you expect, but the fact that at the second connection
your code throws away the first connection by reusing the same variables
without storing the previous values. This can cause the Python garbage
collector to free the socket object created for the first connection,
thereby closing it.

If I'm right, your program should work as you expect if you, for
instance, collect the sockets returned by accept() in a list.
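
For instance, a minimal variant of your server that just keeps the
accepted sockets around (untested sketch, but it should leave both
clients hanging as you expected):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('', 5052))
s.listen(5)

connections = []   # keep a reference to every accepted socket
while 1:
    newsock, client_addr = s.accept()
    connections.append((newsock, client_addr))
    print "original socket:", s.getsockname()
    print "new socket:", newsock.getsockname()
    print "new socket:", newsock.getpeername()
    print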

Ciao

FB


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: network programming: how does s.accept() work?

2008-02-25 Thread bockman

>
> The question I'm really trying to answer is: if a client connects to a
> host at a specific port, but the server changes the port when it
> creates a new socket with accept(), how does data sent by the client
> arrive at the correct port?  Won't the client be sending data to the
> original port e.g. port 5052 in the client code above?
>

I'm not an expert - I have never used TCP/IP below the socket
abstraction level - but I imagine that after accept() the client side of
the connection is somehow 'rewired' to the new socket created on the
server side.

Anyhow, this is not Python-related, since the C socket library behaves
in exactly the same way.

Ciao
-
FB

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie: How can I use a string value for a keyword argument?

2008-02-25 Thread bockman
On 25 Feb, 12:42, Doug Morse <[EMAIL PROTECTED]> wrote:
> Hi,
>
> My apologies for troubling for what is probably an easy question... it's just
> that can't seem to find an answer to this anywhere (Googling, pydocs, etc.)...
>
> I have a class method, MyClass.foo(), that takes keyword arguments.  For
> example, I can say:
>
> x = MyClass()
> x.foo(trials=32)
>
> Works just fine.
>
> What I need to be able to do is call foo() with a string value specifying the
> keyword (or both the keyword and value would be fine), something along the
> lines of:
>
> x = MyClass()
> y = 'trials=32'
> x.foo(y)        # doesn't work
>
> or
>
> x.MyClass()
> y = 'trials'
> x.foo(y = 32)   # does the "wrong" thing
>

Try this:
y='trials'
x.foo( **{y:32} )
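
And if you really receive the whole 'trials=32' as a single string, one
way (assuming the value is always an integer - adapt the conversion to
your case) is to split it first:

y = 'trials=32'
key, value = y.split('=', 1)
x.foo(**{key: int(value)})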

Ciao
-
FB


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Import, how to change sys.path on Windows, and module naming?

2008-03-03 Thread bockman
On 1 Mar, 20:17, Steve Holden <[EMAIL PROTECTED]> wrote:
> Jeremy Nicoll - news posts wrote:
>
>
>
> > Jeremy Nicoll - news posts <[EMAIL PROTECTED]> wrote:
>
> >> If I understand correctly, when I import something under Windows, Python
> >> searches the directory that the executing script was loaded from, then
> >> other directories as specified in "sys.path".
>
> > Sorry to followup my own question, but I ran
>
> >  for p,q in enumerate(sys.path): print p, q
>
> > and got:
>
> > 0 C:\Documents and Settings\Laptop\My Documents\JN_PythonPgms
> > 1 C:\Program Files\~P-folder\Python25\Lib\idlelib
> > 2 C:\WINDOWS\system32\python25.zip
> > 3 C:\Program Files\~P-folder\Python25\DLLs
> > 4 C:\Program Files\~P-folder\Python25\lib
> > 5 C:\Program Files\~P-folder\Python25\lib\plat-win
> > 6 C:\Program Files\~P-folder\Python25\lib\lib-tk
> > 7 C:\Program Files\~P-folder\Python25
> > 8 C:\Program Files\~P-folder\Python25\lib\site-packages
> > 9 C:\Program Files\~P-folder\Python25\lib\site-packages\win32
> > 10 C:\Program Files\~P-folder\Python25\lib\site-packages\win32\lib
> > 11 C:\Program Files\~P-folder\Python25\lib\site-packages\Pythonwin
>
> > Does every Windows user have: 2 C:\WINDOWS\system32\python25.zip
> > in their sys.path?  What's the point of having a zip in the path?
>
> So that the files inside the zip can be imported as modules and
> packages, of course.
>
> > Also, looking in  C:\WINDOWS\system32\   I don't actually have a file called
> > python25.zip, but I do have one called  python25.dll - so has something gone
> > wrong in creation of sys.path?
>
> No. I'm not sure why the zip file is on there by default.
>
> regards
>   Steve
> --
> Steve Holden        +1 571 484 6266   +1 800 494 3119
> Holden Web LLC              http://www.holdenweb.com/

I believe the answer is in how the new import protocol (PEP 302) works:
when you install a new path hook in sys.path_hooks, it is called for
each element of sys.path; the path hook has two options: either raise
ImportError, which means that it cannot handle that specific path entry,
or return an object with the methods defined in the PEP, which means
that the returned object - and only that one - will be used to import
modules from that path entry.

This means that only one path hook can be used for each element of
sys.path. Therefore, if you want to add a path hook that does not
interfere with the other ones, one way to do it is to add something to
sys.path that is rejected by every path hook except yours. I believe
that python25.zip is that 'something' used by the zipimporter path hook,
which allows importing directly from zip files.
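
To make the idea concrete, here is a toy path hook (an untested sketch,
all names made up) that claims only one magic sys.path entry and leaves
everything else alone, precisely by raising ImportError for the entries
it does not recognize:

import sys, imp

MAGIC_ENTRY = '<my-virtual-modules>'

class MyImporter(object):
    def find_module(self, fullname, path=None):
        if fullname == 'virtual_hello':   # the only module we can build
            return self
        return None

    def load_module(self, fullname):
        mod = sys.modules.setdefault(fullname, imp.new_module(fullname))
        mod.__loader__ = self
        mod.greeting = 'hello from a virtual module'
        return mod

def my_path_hook(path_entry):
    if path_entry != MAGIC_ENTRY:
        raise ImportError   # tell Python: not my kind of path entry
    return MyImporter()

sys.path_hooks.append(my_path_hook)
sys.path.append(MAGIC_ENTRY)

import virtual_hello
print virtual_hello.greeting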

I did something similar in my toy experiment with the import hooks,
half-believing that there was something I had missed. Nice to see that
the 'big guys' did the same trick :-)
(unless of course I _did_ miss something and my guess is completely
wrong; I should have done some experiments before posting, but I'm too
lazy for that).

Ciao
---
FB
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Import, how to change sys.path on Windows, and module naming?

2008-03-03 Thread bockman
On 3 Mar, 17:12, [EMAIL PROTECTED] wrote:

> (unless of course I _did_ miss something and my guess is completely
> wrong; I should have done some experiment
> before posting, but I'm too lazy for that).
>
> Ciao
> ---
> FB

Oops... I tried removing python25.zip from sys.path, and I can still
import packages from zip files ... so my guess was wrong and my laziness
has been punished :-)

Ciao
-
FB
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Protocol for thread communication

2008-03-05 Thread bockman
On 5 Mar, 06:12, Michael Torrie <[EMAIL PROTECTED]> wrote:
> Does anyone have any recommended ideas/ways of implementing a proper
> control and status protocol for communicating with threads?  I have a
> program that spawns a few worker threads, and I'd like a good, clean way
> of communicating the status of these threads back to the main thread.
> Each thread (wrapped in a very simple class) has only a few states, and
> progress levels in those states.  And sometimes they can error out,
> although if the main thread knew about it, it could ask the thread to
> retry (start over).  How would any of you do this?  A callback method
> that the thread can call (synchronizing one-way variables isn't a
> problem)?  A queue?  How would the main thread check these things?
> Currently the main thread is polling some simple status variables.  This
> works, and polling will likely continue to be the simplest and easiest
> way, but simple status variables are very limited.  Are there any
> pythonic patterns people have developed for this.
>
> thanks.
>
> Michael

I've found that Queue.Queue objects are the easiest way to communicate
between threads in Python. So I'd suggest that you attach a Queue to
each thread: the main thread will use its queue to receive the status
messages from the other threads; the other threads will use their queues
to receive the retry command (or any other command that may be needed)
from the main thread.
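
A rough sketch of what I mean (Python 2.5-ish, all names invented; the
workers just report a fake progress value):

import threading, Queue

class Worker(threading.Thread):
    def __init__(self, name, status_queue):
        threading.Thread.__init__(self, name=name)
        self.status_queue = status_queue   # shared: worker -> main
        self.cmd_queue = Queue.Queue()     # private: main -> worker

    def run(self):
        while True:
            try:
                cmd = self.cmd_queue.get(timeout=0.5)
            except Queue.Empty:
                cmd = None
            if cmd == 'quit':
                break
            # ... do one short step of real work here ...
            self.status_queue.put((self.getName(), 'progress', 42))

status_queue = Queue.Queue()
workers = [Worker('w%d' % i, status_queue) for i in range(3)]
for w in workers:
    w.start()

# main thread: block on the status queue instead of polling variables
for _ in range(6):
    name, state, progress = status_queue.get()
    print '%s: %s %s%%' % (name, state, progress)

for w in workers:
    w.cmd_queue.put('quit')
for w in workers:
    w.join()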

Ciao
--
FB

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: draining pipes simultaneously

2008-03-05 Thread bockman
On 5 Mar, 10:33, "Dmitry Teslenko" <[EMAIL PROTECTED]> wrote:
> Hello!
> Here's my implementation of a function that executes some command and
> drains stdout/stderr invoking other functions for every line of
> command output:
>
> def __execute2_drain_pipe(queue, pipe):
>         for line in pipe:
>                 queue.put(line)
>         return
>
> def execute2(command, out_filter = None, err_filter = None):
>         p = subprocess.Popen(command , shell=True, stdin = subprocess.PIPE, \
>                 stdout = subprocess.PIPE, stderr = subprocess.PIPE, \
>                 env = os.environ)
>
>         qo = Queue.Queue()
>         qe = Queue.Queue()
>
>         to = threading.Thread(target = __execute2_drain_pipe, \
>                 args = (qo, p.stdout))
>         to.start()
>         time.sleep(0)
>         te = threading.Thread(target = __execute2_drain_pipe, \
>                 args = (qe, p.stderr))
>         te.start()
>
>         while to.isAlive() or te.isAlive():
>                 try:
>                         line = qo.get()
>                         if out_filter:
>                                 out_filter(line)
>                         qo.task_done()
>                 except Queue.Empty:
>                         pass
>
>                 try:
>                         line = qe.get()
>                         if err_filter:
>                                 err_filter(line)
>                         qe.task_done()
>                 except Queue.Empty:
>                         pass
>
>         to.join()
>         te.join()
>         return p.wait()
>
> Problem is my implementation is buggy and function hungs when there's
> empty stdout/stderr. Can I have your feedback?

The Queue.get method is blocking by default. The documentation is not
100% clear about that (maybe it should show the full Python signature of
the method, which would make the default values self-evident), but if
you do help(Queue.Queue) in a Python shell you will see it.

Hence, try using a timeout or a non-blocking get (but in the case of a
non-blocking get you should add a delay in the loop, or you will poll
the queues at maximum speed and maybe prevent the other threads from
accessing them).
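
For instance, the first of the two inner blocks in your loop could
become something like this (same names as in your code; the 0.2 s
timeout is arbitrary):

try:
    line = qo.get(True, 0.2)   # block at most 0.2 seconds
    if out_filter:
        out_filter(line)
    qo.task_done()
except Queue.Empty:
    pass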

Ciao
-
FB

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: draining pipes simultaneously

2008-03-05 Thread bockman

>
> Inserting delay in the beginning of the loop causes feeling of command
> taking long to start and delay at the end of the loop may cause of
> data loss when both thread became inactive during delay.

time.sleep() pauses only the thread that executes it, not the others.
And Queue objects can hold a large amount of data (if you have the RAM),
so unless your subprocess is outputting data very fast, you should not
have data loss.
Anyway, if it works for you ... :-)

Ciao
-
FB
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Sockets Help

2008-03-11 Thread bockman
On 10 Mar, 23:58, Mark M Manning <[EMAIL PROTECTED]> wrote:
> I need your expertise with a sockets question.
>
> Let me preface this by saying I don't have much experience with
> sockets in general so this question may be simple.
>
> I am playing with the mini dns server from a script I found 
> online:http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/491264/index_txt
>
> All I want to do is edit this script so that it records the IP
> address.  I've seen other examples use the accept() object which
> returns the data and the IP address it is receiving the data from.  I
> can't use that in this case but I'm wondering if someone could show me
> how.
>
> Here is the socket part of the script:
>
> if __name__ == '__main__':
>   ip='192.168.1.1'
>   print 'pyminifakeDNS:: dom.query. 60 IN A %s' % ip
>
>   udps = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>   udps.bind(('',53))
>
>   try:
>     while 1:
>       data, addr = udps.recvfrom(1024)
>       p=DNSQuery(data)
>       udps.sendto(p.respuesta(ip), addr)
>       print 'Respuesta: %s -> %s' % (p.dominio, ip)
>   except KeyboardInterrupt:
>     print 'Finalizando'
>     udps.close()
>
> Thanks to everyone in advance!
> ~Mark

You already have the address of the sender: it is in the 'addr'
variable, as returned by udps.recvfrom.
Change the print statement into something like:
  print 'Respuesta (%s): %s -> %s' % ( addr, p.dominio, ip)
and you will see the sender address in dotted notation (plus the sender
port) printed inside the parentheses.

Ciao
---
FB
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: do you fail at FizzBuzz? simple prog test

2008-05-12 Thread bockman
On 12 Mag, 09:00, "Gabriel Genellina" <[EMAIL PROTECTED]> wrote:
> En Sat, 10 May 2008 22:12:37 -0300, globalrev <[EMAIL PROTECTED]> escribió:
>
>
>
>
>
> >http://reddit.com/r/programming/info/18td4/comments
>
> > claims people take a lot of time to write a simple program like this:
>
> > "Write a program that prints the numbers from 1 to 100. But for
> > multiples of three print "Fizz" instead of the number and for the
> > multiples of five print "Buzz". For numbers which are multiples of
> > both three and five print "FizzBuzz".
>
> > for i in range(1,101):
> >     if i%3 == 0 and i%5 != 0:
> >         print "Fizz"
> >     elif i%5 == 0 and i%3 != 0:
> >         print "Buzz"
> >     elif i%5 == 0 and i%3 == 0:
> >         print "FizzBuzz"
> >     else:
> >         print i
>
> > is there a better way than my solution? is mine ok?
>
> Is it correct? Did you get at it in less than 15 minutes? If so, then it's OK.
> The original test was not "write the most convoluted algorithm you can think 
> of", nor "write the best program to solve this". It was a *practical* test: 
> if you can't get anything remotely working for such a simple problem in 15 
> minutes, we're not interested in your services.
>
> (We used this question last year - some people gave a sensible answer in less 
> than 5 minutes, but others did not even know how to start)
>
> --
> Gabriel Genellina

As a test, I would leave out the last sentence, and see how many people
(and how fast) figure out that a number can be a multiple of three _and_
five, and that the requirement is therefore incomplete ...

Ciao
-
FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: List behaviour

2008-05-15 Thread bockman
On 15 Mag, 12:08, Gabriel <[EMAIL PROTECTED]> wrote:
> Hi all
>
> Just wondering if someone could clarify this behaviour for me, please?
>
> >>> tasks = [[]]*6
> >>> tasks
>
> [[], [], [], [], [], []]>>> tasks[0].append(1)
> >>> tasks
>
> [[1], [1], [1], [1], [1], [1]]
>
> Well what I was expecting to end up with was something like:
> >>> tasks
> [[1], [], [], [], [], []]
>
> I got this example from page 38 of Beginning Python.
>
> Regards
>
> Gabriel

The reason is that
tasks = [[]]*6
creates a list with six elements pointing to *the same* list, so when
you change one, the change shows up six times.

In other words, your code is equivalent to this:
>>> a = []
>>> tasks = [a,a,a,a,a,a]
>>> a.append(1)
>>> tasks
[[1], [1], [1], [1], [1], [1]]


Instead, to create a list of lists, use a list comprehension:

>>> tasks = [ [] for x in xrange(6) ]
>>> tasks[0].append(1)
>>> tasks
[[1], [], [], [], [], []]
>>>

Ciao
-
FB
--
http://mail.python.org/mailman/listinfo/python-list

Re: about python modules

2008-05-21 Thread bockman
On 21 Mag, 14:31, srinivas <[EMAIL PROTECTED]> wrote:
> hi friends i am new to python programming.
> i am using Python 2.5 and IDLE as editor.
> i have developed some functions in python those will be calling
> frequently in my main method .
> now i want to know how to import my functions folder to python in
> sucha way that the functions in functions folder should work like
> python library modules .
>
> i have  python in folder C:\python25\..
> and functions folder D:\programs\Functions\
>
> pls help me friends how to do that.

You have two choices:

1. This way you can import single modules (files) in your folder:

import sys
sys.path.append(r'D:\programs\Functions')
import my_module_1
import my_module_2

and then use whatever you have in the modules:

my_module_1.my_function()
print my_module_1.my_variable


2.
If you add an empty Python module called __init__.py inside the folder
D:\programs\Functions\, then Python will handle the folder as a package
(i.e. a group of modules) and you can import them in this way:

sys.path.append(r'D:\programs')
import Functions # I'm not sure this is needed ...
from Functions import my_module_1, my_module_2

And then use whatever is in your modules as in case 1.

If you put any code in __init__.py, this code will be executed when the
import Functions statement is executed. This can be handy in some cases,
e.g. if you have subfolders of the Functions folder and want to extend
sys.path to include all of them.

For more details, read section 6 of the Python tutorial.

HTH

Ciao
--
FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: read file into list of lists

2008-07-11 Thread bockman
On 11 Lug, 15:15, antar2 <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I can not find out how to read a file into a list of lists. I know how
> to split a text into a list
>
> sentences = line.split(\n)
>
> following text for example should be considered as a list of lists (3
> columns and 3 rows), so that when I make the print statement list[0]
> [0], that the word pear appears
>
> pear noun singular
> books nouns plural
> table noun singular
>
> Can someone help me?
>
> Thanks


You can use split again, using ' ' or nothing (which defaults to any
whitespace) as the separator, like this:

>>> text = """pear noun singular
books nouns plural
table noun singular"""

>>> words = [ x.split() for x in text.split('\n') ]
>>> print words
[['pear', 'noun', 'singular'], ['books', 'nouns', 'plural'],
['table', 'noun', 'singular']]
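
and indexing the result the way you described gives the word you wanted:

>>> print words[0][0]
pear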


Ciao
-
FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: undo a dictionary

2008-07-30 Thread bockman
On 30 Lug, 16:51, mmm <[EMAIL PROTECTED]> wrote:
> I found code to undo a dictionary association.
>
> def undict(dd, name_space=globals()):
>     for key, value in dd.items():
>         exec "%s = %s" % (key, repr(value)) in name_space
>
> So if i run
>
> >>> dx= { 'a':1, 'b': 'B'}
> >>> undict(dx)
>
> I get>>> print A, B
>
> 1 B
>
> Here,  a=1 and b='B'
>
> This works well enough for simple tasks and I understand the role of
> globals() as the default names space, but creating local variables is
> a problem. Also having no output arguemtns to undict() seems
> counterintuitive.  Also, the function fails if the key has spaces or
> operand characters (-,$,/,%).  Finally I know I will have cases where
> not clearing (del(a,b)) each key-value pair might create problems in a
> loop.
>
> So I wonder if anyone has more elegant code to do the task that is
> basically the opposite of creating a dictionary from a set of
> globally assigned variables.  And for that matter a way to create a
> dictionary from a set of variables (local or global).  Note I am not
> simply doing and  undoing dict(zip(keys,values))


Maybe you can use objects as pseudo-namespaces and do something like
this:

>>> class Scope(object):
	def dict(self):
		res = dict()
		for k, v in self.__dict__.items(): res[k] = v
		return res
	def undict(self, dict):
		for k, v in dict.items():
			setattr(self, k, v)


>>> myscope = Scope()
>>> myscope.undict(dict(A=1, B=2))
>>> myscope.A
1
>>> myscope.B
2
>>> myscope.dict()
{'A': 1, 'B': 2}
>>>


Ciao
--
FB


--
http://mail.python.org/mailman/listinfo/python-list


Re: kill thread

2008-08-08 Thread bockman
On 8 Ago, 10:03, "Mathieu Prevot" <[EMAIL PROTECTED]> wrote:
> 2008/8/8 Miki <[EMAIL PROTECTED]>:
>
> > Hello,
>
> >> I have a threading.Thread class with a "for i in range(1,50)" loop
> >> within. When it runs and I do ^C, I have the error [1] as many as
> >> loops. I would like to catch this exception (and if possible do some
> >> cleanup like in C pthreads) so the program finishes cleanly. Where and
> >> how can I do this ? in __run__ ? __init__ ? a try/except stuff ?
> > You can have a try/except KeyboardException around the thread code.
>
> > HTH,
> > --
> > Miki
>
> Of course, but I don't know where. I placed this inside loop, within
> called functions from the loop, I still have the problem.
>
> Mathieu

Try this:

  loop_completed = True
  for i in range(1, 50):
      try:
          pass  # your code here
      except KeyboardInterrupt:
          loop_completed = False
          break  # this breaks the loop
  # end loop
  if loop_completed:
      pass  # code to be executed in case of normal completion
  else:
      pass  # code to be executed in case of interruption
  # code to be executed in both cases
--
http://mail.python.org/mailman/listinfo/python-list


Re: tkinter, event.widget, what do i get?

2008-04-16 Thread bockman
On 16 Apr, 01:45, [EMAIL PROTECTED] wrote:
> On 16 Apr, 00:24, "Gabriel Genellina" <[EMAIL PROTECTED]> wrote:
>
>
>
>
>
> > En Tue, 15 Apr 2008 17:45:08 -0300, <[EMAIL PROTECTED]> escribió:
>
> > > when calling function hmm here, what do i get? the widget i clicked
> > > on?
> > > if i have a canvs on wich i have a bitmap and i click on the bitmap,
> > > is the event.widget then the bitmap?
> > > can i get info about the bitmap then? like color of the pixel i
> > > clicked. if so, how?
>
> > > w.bind("", key)
> > > w.bind("", hmm)
>
> > > def hmm(event):
> > >     return event.widget
>
> > Why don't you try by yourself? You can use: print repr(something)
>
> > --
> > Gabriel Genellina
>
> i get 
>
> thing is i get that even though i click outside the image.
> and what can i do with this number anyway?

If your image is a canvas item (i.e. created with the canvas
create_image method), then you can use the tag_bind method to handle
events specific to that item.
In that case, the callback argument is a Tkinter.Event instance.
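
Something like this minimal sketch (the file name 'picture.gif' is just
a placeholder):

import Tkinter

root = Tkinter.Tk()
canvas = Tkinter.Canvas(root, width=200, height=200)
canvas.pack()

photo = Tkinter.PhotoImage(file='picture.gif')   # placeholder image file
item = canvas.create_image(100, 100, image=photo)

def on_click(event):
    # event.x / event.y are the click coordinates on the canvas
    print 'clicked the image at', event.x, event.y

canvas.tag_bind(item, '<Button-1>', on_click)
root.mainloop()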

Ciao
-
FB
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How is GUI programming in Python?

2008-04-16 Thread bockman
On 11 Apr, 20:19, Rune Strand <[EMAIL PROTECTED]> wrote:
> On Apr 10, 3:54 am, Chris Stewart <[EMAIL PROTECTED]> wrote:
> ...
>
>
>
> > Next, what would you say is the best framework I should look into?
> > I'm curious to hear opinions on that.
>
> GUI-programming in Python is a neanderthal experience. What one may
> love with console scripts is turned upside-down.  Projects like Boa
> Constructor seemed to be a remedy, but is not developed. The Iron-
> Pythonistas has a very promising RAD GUI-tool in the IronPython -
> Studio,http://www.codeplex.com/IronPythonStudio- but if you're non-
> Iron, only sorrow is left - unless you fancy creating GUI in a text-
> editor. Something I consider waste of life.

If you refer to the lack of a GUI designer, every toolkit usable from
Python - barring Tkinter - has a GUI designer which can be used:

pygtk -> Glade
pywx  -> wxDesigner, XRCed, ...
pyqt  -> Qt Designer, ...

All of them can generate Python code and/or generate files that a Python
program can use to create the whole GUI with a few function calls (e.g.
libglade).
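
For example, with pygtk and libglade the whole GUI can come straight
from the .glade file saved by Glade (the file name and widget/handler
names below are made up for illustration):

import gtk
import gtk.glade

xml = gtk.glade.XML('main_window.glade')   # GUI described in Glade
window = xml.get_widget('main_window')     # fetch widgets by name

def on_quit_clicked(button):
    gtk.main_quit()

xml.signal_autoconnect({'on_quit_clicked': on_quit_clicked})
window.show_all()
gtk.main()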

If you refer to the lack of visual programming a la Visual Studio or
Borland's IDEs, you might be right, but I personally found that visual
programming makes for very unmaintainable code, especially if you have
to fix something and you don't have the IDE with you (and this has
happened to me many times). Therefore I now prefer a clean separation
between the GUI (described in something like .glade files or .xrc files)
and my code.

BTW, once you have learned to use the right layout managers, even
building a GUI from scratch is not such a PITA, since you don't have to
manually place each widget anymore, but only define the structure of
packers and grids and then adjust borders and such with some - limited,
IME - experimentation. I know people who prefer this approach to any GUI
builder, having developed their own little library to help reduce the
boilerplate (and in Python you can do nice things with decorators and
such ...).

So maybe yes, in Python you might not have the fancy world of visual
programming, but neither are you deprived of tools that make your work
easier.

Ciao
-
FB

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python and stale file handles

2008-04-17 Thread bockman
On 17 Apr, 04:22, tgiles <[EMAIL PROTECTED]> wrote:
> Hi, All!
>
> I started back programming Python again after a hiatus of several
> years and run into a sticky problem that I can't seem to fix,
> regardless of how hard I try- it it starts with tailing a log file.
>
> Basically, I'm trying to tail a log file and send the contents
> elsewhere in the script (here, I call it processor()). My first
> iteration below works perfectly fine- as long as the log file itself
> (logfile.log) keeps getting written to.
>
> I have a shell script constantly writes to the logfile.log... If I
> happen to kill it off and restart it (overwriting the log file with
> more entries) then the python script will stop sending anything at all
> out.
>
> import time, os
>
> def processor(message,address):
>         #do something clever here
>
> #Set the filename and open the file
> filename = 'logfile.log'
> file = open(filename,'r')
>
> #Find the size of the file and move to the end
> st_results = os.stat(filename)
> st_size = st_results[6]
> file.seek(st_size)
>
> while 1:
>     where = file.tell()
>     line = file.readline()
>     if not line:
>         time.sleep(1)
>         file.seek(where)
>     else:
>         print line, # already has newline
>         data = line
>         if not data:
>             break
>         else:
>                 processor(data,addr)
>                 print "Sending message '",data,"'."
>
> someotherstuffhere()
>
> ===
>
> This is perfectly normal behavior since the same thing happens when I
> do a tail -f on the log file. However, I was hoping to build in a bit
> of cleverness in the python script- that it would note that there was
> a change in the log file and could compensate for it.
>
> So, I wrote up a new script that opens the file to begin with,
> attempts to do a quick file measurement of the file (to see if it's
> suddenly stuck) and then reopen the log file if there's something
> dodgy going on.
>
> However, it's not quite working the way that I really intended it to.
> It will either start reading the file from the beginning (instead of
> tailing from the end) or just sit there confuzzled until I kill it
> off.
>
> ===
>
> import time, os
>
> filename = logfile.log
>
> def processor(message):
>     # do something clever here
>
> def checkfile(filename):
>     file = open(filename,'r')
>     print "checking file, first pass"
>     pass1 = os.stat(filename)
>     pass1_size = pass1[6]
>
>     time.sleep(5)
>
>     print "file check, 2nd pass"
>     pass2 = os.stat(filename)
>     pass2_size = pass2[6]
>     if pass1_size == pass2_size:
>         print "reopening file"
>         file.close()
>         file = open(filename,'r')
>     else:
>         print "file is OK"
>         pass
>
> while 1:
>         checkfile(filename)
>     where = file.tell()
>     line = file.readline()
>     print "reading file", where
>     if not line:
>         print "sleeping here"
>         time.sleep(5)
>         print "seeking file here"
>         file.seek(where)
>     else:
>         # print line, # already has newline
>         data = line
>         print "readying line"
>         if not data:
>             print "no data, breaking here"
>             break
>         else:
>             print "sending line"
>             processor(data)
>
> So, have any thoughts on how to keep a Python script from bugging out
> after a tailed file has been refreshed? I'd love to hear any thoughts
> you my have on the matter, even if it's of the 'that's the way things
> work' variety.
>
> Cheers, and thanks in advance for any ideas on how to get around the
> issue.
>
> tom

Possibly, restarting the program that writes the log file creates a new
file rather than appending to the old one?

I think you should always reopen the file between the first and the
second pass of your checkfile function, and then:
- if the file has the same size, it is probably the same file (but it
would be better to check the modification time!), so seek to the end of
it
- otherwise, it's a new file, so start reading it from the beginning

To reduce the number of stat/seek operations, you could perform
checkfile only if you did not get any data for N cycles.
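
Something along these lines (rough, untested sketch; it checks both the
inode and the size, and only after a few quiet cycles):

import os, time

def tail(filename, handle_line):
    f = open(filename, 'r')
    f.seek(0, 2)                  # start at the end, like tail -f
    quiet = 0
    while True:
        line = f.readline()
        if line:
            quiet = 0
            handle_line(line)
            continue
        time.sleep(1)
        quiet += 1
        if quiet >= 5:            # no data for a while: check the file
            quiet = 0
            st = os.stat(filename)
            replaced = st.st_ino != os.fstat(f.fileno()).st_ino
            if replaced or st.st_size < f.tell():
                f.close()         # file was recreated or truncated
                f = open(filename, 'r')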

Ciao
-
FB
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to get inner exception traceback

2008-04-24 Thread bockman
On 24 Apr, 13:20, Thomas Guettler <[EMAIL PROTECTED]> wrote:
> Hi,
>
> How can you get the traceback of the inner exception?
>
> try:
>      try:
>          import does_not_exit
>      except ImportError:
>          raise Exception("something wrong")
> except:
>      ...
>
> Background: In Django some exceptions are caught and a new
> exception gets raised. Unfortunately the real error is hard
> to find. Sometimes I help myself and change (in this example)
> ImportError to e.g. IOError and then I can see the real root
> of the problem. But maybe there is a way to get the inner
> exception and its traceback. This could be displayed in the
> debug view.
>
>   Thomas
>
> --
> Thomas Guettler,http://www.thomas-guettler.de/
> E-Mail: guettli (*) thomas-guettler + de

I'm not sure it will work, since sys.exc_info() might not return a deep
copy of the traceback info, but you could try to store the inner
exception and its traceback as attributes of the outer exception:

class ReraisedException(Exception):
    def __init__(self, message, exc_info):
        Exception.__init__(self, message)
        self.inner_exception = exc_info

try:
    try:
        import does_not_exit
    except ImportError:
        raise ReraisedException("Something wrong", sys.exc_info())
except ReraisedException, e:
    ... # here you can use e.inner_exception
except:
    ...


Ciao
-
FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to get inner exception traceback

2008-04-24 Thread bockman
On 24 Apr, 15:00, Christian Heimes <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] schrieb:
>
>
> > class ReraisedException(Exception):
> >     def __init__(self, message, exc_info):
> >         Exception.__init__(self, message)
> >         self.inner_exception = exc_info
>
> >  try:
> >       try:
> >           import does_not_exit
> >       except ImportError:
> >            raise ReraisedException("Something wrong", sys.exc_info() )
> >  except ReraisedException, e:
> >      ... # here you can use e.inner_exception
> >  except:
>
> This may lead to reference cycles, please 
> readhttp://docs.python.org/dev/library/sys.html#sys.exc_info
>
> Christian

Thanks. I was not aware of that (last time I read that section, the
warning was not there).
I usually do something like this in my scripts:

try:
   do_something()
except:
   err, detail, tb = sys.exc_info()
   print err, detail
   traceback.print_tb(tb)

According to the document you linked to, this also creates a circular
reference, although in my case it does not matter, since I usually do it
only just before exiting the program after a fatal error.
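
For completeness, the workaround suggested there is to drop the
traceback reference in a finally clause, something like:

try:
    do_something()
except:
    err, detail, tb = sys.exc_info()
    try:
        print err, detail
        traceback.print_tb(tb)
    finally:
        del tb   # break the frame <-> traceback reference cycle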

However, this seems like a dark spot in the implementation of CPython.
Do you know if this has been / will be cleaned up in Python 3.x? I'd
like to see a 'print_tb' method on the exception class, so that I could
do something like this:

try:
   do_something()
except Exception, e: # I know, in Python 3.0 the syntax will be different
   print e
   e.print_tb()


Ciao
---
F.B.
--
http://mail.python.org/mailman/listinfo/python-list


Re: error: (10035, 'The socket operation...

2008-04-28 Thread bockman
On 28 Apr, 01:01, Don Hanlen <[EMAIL PROTECTED]> wrote:
> IDLE internal error in runcode()
> Traceback (most recent call last):
>   File "C:\PYTHON25\lib\idlelib\rpc.py", line 235, in asyncqueue
>     self.putmessage((seq, request))
>   File "C:\PYTHON25\lib\idlelib\rpc.py", line 332, in putmessage
>     n = self.sock.send(s[:BUFSIZE])
> error: (10035, 'The socket operation could not complete without
> blocking')
>
> Does this look familiar to anyone?  I can't figure out what to do
> about it.  Python 2.5, windoze.  I get it when I execute a Tkinter op
> that works elsewhere.
>
> changing this:
>
> t = self.b.create_text(
>     (point.baseX + 1)*self.checkerSize/2 + fudge,
>     y + fudge,
>     text = str(point.occupied),
>     width = self.checkerSize)
>
> to
>
> t = self.b.create_text(
>     (point.baseX + 1)*self.checkerSize/2 + fudge,
>     y + fudge,
>     text = str(point.occupied),
>     font=("Times", str(self.checkerSize/2), "bold"),
>     width = self.checkerSize)
>
> for example.  The same code works fine elsewhere.  I thought I'd ask
> here before I try (no clue) increasing BUFSIZE in rpc.py?  I'm not
> crazy about tinkering with code I have no clue about..
> --
> don

The error is EWOULDBLOCK, which you get when you configure a socket for
asynchronous I/O and then try an operation that cannot be completed
immediately. It is not an actual failure; it is part of the asynchronous
socket handling: it means that you have to wait and try again later.

AFAIK (which is almost nothing), IDLE runs two processes (probably the
front-end and the interpreter) which communicate through a socket. From
the traceback, I would say that the two processes communicate using an
RPC protocol implemented in idlelib's rpc module, and that its
asyncqueue code is what fails to handle the EWOULDBLOCK return code.

I don't think that increasing BUFSIZE would solve the problem, since you
would try to send more bytes in a single operation, and this would
probably still result in an EWOULDBLOCK return code.
Anyway, it looks like an IDLE problem, so if you can use another IDE
(Pythonwin?), you could just ignore it, and maybe submit a bug report?

Ciao
-
FB
--
http://mail.python.org/mailman/listinfo/python-list


Re: Question regarding Queue object

2008-04-29 Thread bockman
On 27 Apr, 12:27, Terry <[EMAIL PROTECTED]> wrote:
> Hello!
>
> I'm trying to implement a message queue among threads using Queue. The
> message queue has two operations:
> PutMsg(id, msg) #  this is simple, just combine the id and msg as one
> and put it into the Queue.
> WaitMsg(ids, msg) # this is the hard part
>
> WaitMsg will get only msg with certain ids, but this is not possible
> in Queue object, because Queue provides no method to peek into the
> message queue and fetch only matched item.
>
> Now I'm using an ugly solution, fetch all the messages and put the not
> used ones back to the queue. But I want a better performance. Is there
> any alternative out there?
>
> This is my current solution:
>
>     def _get_with_ids(self,wait, timeout, ids):
>         to = timeout
>         msg = None
>         saved = []
>         while True:
>             start = time.clock()
>             msg =self.q.get(wait, to)
>             if msg and msg['id'] in ids:
>                 break;
>             # not the expecting message, save it.
>             saved.append(msg)
>             to = to - (time.clock()-start)
>             if to <= 0:
>                 break
>         # put the saved messages back to the queue
>         for m in saved:
>             self.q.put(m, True)
>         return msg
>
> br, Terry

Why put them back in the queue?
You could have a defaultdict with the id as key and a list of
unprocessed messages with that id as the value. Your _get_with_ids
method could first look among the unprocessed messages for items with
those ids, and then look into the queue, parking any non-matching item
in the dictionary for later processing.
This should improve the performance, with a small complication of the
method code (but way simpler than implementing your own priority-based
queue).
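
A sketch of what I mean (Python 2.5, names invented, and the timeout
bookkeeping of your version left out for brevity):

from collections import defaultdict
import Queue

class MsgQueue(object):
    def __init__(self):
        self.q = Queue.Queue()
        self.parked = defaultdict(list)   # id -> messages fetched too early

    def put_msg(self, msg_id, msg):
        self.q.put({'id': msg_id, 'msg': msg})

    def wait_msg(self, ids, timeout=None):
        # first look among the messages parked by previous calls
        for msg_id in ids:
            if self.parked[msg_id]:
                return self.parked[msg_id].pop(0)
        # then read from the queue, parking whatever does not match
        while True:
            msg = self.q.get(True, timeout)   # may raise Queue.Empty
            if msg['id'] in ids:
                return msg
            self.parked[msg['id']].append(msg)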

Ciao
-
FB
--
http://mail.python.org/mailman/listinfo/python-list