PyGUI as a standard GUI API for Python?
As everyone knows, the state of Python GUI programming is a little fractured at this time, with many toolkits, wrappers and meta-wrappers dead and alive, with or without documentation.

I've come across two projects that have the appeal of striving for simple, pythonic APIs: PyGUI and wax. The latter is a wrapper around wxPython. It is lacking documentation but actually quite usable and concise. The other, PyGUI, has an even nicer API and more docs but has relatively few widgets implemented at this time. It also strives for compatibility with several toolkits (two at this time), which I think is the right idea.

So far, development of PyGUI seems to be a one-man effort, and it may be slowed down by the attempt to develop the API and the implementations concurrently. Could it be useful to decouple the two, such that the API would be specified ahead of the implementation? This might make it easier for people to contribute implementation code and maybe port the API to additional toolkits. It seems that this approach has been quite successful in the case of the Python database API. That API defines levels of compliance, which might be a way of accommodating different GUI toolkits as well.

I may be underestimating the difficulties of my proposed approach - I don't have much practical experience with GUI programming myself.

Best, Michael
--
http://mail.python.org/mailman/listinfo/python-list
Re: PyGUI as a standard GUI API for Python?
On Sep 3, 12:57 pm, "Diez B. Roggisch" <[EMAIL PROTECTED]> wrote: > Michael Palmer schrieb: > > > The other, PyGUI, has an even nicer API and more docs but has > > relatively few widgets implemented at this time. It also strives for > > compatibility with several toolkits (two at this time), which I think > > is the right idea. > > I disagree with that. Meta-wrappers like this will always suffer from > problems, as they have difficulties providing a consistent api. For > example wx is said to be very windows-toolkit-centric in it's API. Yes I > know that it works under Linux with GTK, but it does not come as natural . wax actually does a nice job at wrapping wxPython with a cleaner API. > > So far, development of PyGUI seems to be a one-man effort, and it may > > be slowed down by the attempt to develop the API and the > > implementations concurrently. Could it be useful to uncouple the two, > > such that the API would be specified ahead of the implementation? This > > might make it easier for people to contribute implementation code and > > maybe port the API to additional toolkits. It seems that this approach > > has been quite successful in case of the Python database API. That API > > defines levels of compliance, which might be a way of accommodating > > different GUI toolkits as well. > > > I may be underestimating the difficulties of my proposed approach - I > > don't have much practical experience with GUI programming myself. > > I think you do. The reason for the various toolkits is not because of > python - it's the proliferation of toolkits that exist out there. Right. But that is similar to the situation with relational databases. There are so many of them that it's impossible to include an adapter to each of them in the stdlib. The next best thing is to provide a high-level API that abstracts away the differences. 
> As long as none of these is "the winner" (and it doesn't look is if that's > to happen soon), I doubt that one API to rule them all will exist - they > all have their different strengths and weaknesses, and a python-API > should reflect these. I rather think that a standard API would cover a reasonable subset - it should NOT contain the idiosyncrasies of each individual toolkit. The anygui project, which has been dormant for a while, is another attempt at a high-level api. Apparently, it tried to implement backends for a lot of toolkits - which again may have been to ambitious an agenda. Maybe someone who was involved in that project might provide some insight. -- http://mail.python.org/mailman/listinfo/python-list
Re: Help needed to freeze a script.
On Sep 3, 1:30 pm, LB <[EMAIL PROTECTED]> wrote: > Hi, > > I would like to freeze a numpy based script in order to have an > application which could run without having to install numpy and cie. > > Indeed, I'm not root on the targeted computer and I can't easily > make a complete install of numpy and scipy. > > So I decided to test the freeze.py tool shipped with python2.5. > To complicate matters, I must say that I only have a local > installation > of python2.5 and numpy. > > I used the following command line : > > > python2.5 ../Python-2.5.1/Tools/freeze/freeze.py > > ~/Python/numpy/test_freeze.py > > At first sight, it seems to be fine, as I saw numpy in the liste of > frozen dependancies : > [...] > freezing numpy ... > freezing numpy.__config__ ... > freezing numpy._import_tools ... > freezing numpy.add_newdocs ... > freezing numpy.core ... > freezing numpy.core._internal ... > freezing numpy.core.arrayprint ... > freezing numpy.core.defchararray ... > freezing numpy.core.defmatrix ... > freezing numpy.core.fromnumeric ... > freezing numpy.core.info ... > freezing numpy.core.memmap ... > freezing numpy.core.numeric ... > freezing numpy.core.numerictypes ... > freezing numpy.core.records ... > freezing numpy.ctypeslib ... > [...] > freezing numpy.version ... > > But at the end I saw this message : > Warning: unknown modules remain: _bisect _csv _ctypes _curses _hashlib > _heapq > [...] > numpy.core._dotblas numpy.core._sort numpy.core.multiarray > numpy.core.scalarmath numpy.core.umath numpy.fft.fftpack_lite > numpy.lib._compiled_base numpy.linalg.lapack_lite numpy.random.mtrand > operator parser pyexpat readline > [...] > Now run "make" to build the target: test_weibull > > I runned make without any problem but the final application didn't > work : > % ./test_freeze > Traceback (most recent call last): > File "/home/loic/Python/numpy/test_freeze.py", line 8, in > import numpy as np > [...] 
> File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/numpy/ > core/__init__.py", line 5, in > import multiarray > ImportError: No module named multiarray > > Is there any known issue when freezing a numpy based script ? > I should add that I configured my PYTHONPATH to match my local > installation > > echo $PYTHONPATH > /home/loic/lib/python:/home/loic/tmp/bluelagoon/lib/python2.5:/home/ > loic/tmp/bluelagoon/lib/python2.5/site-packages/: > > and this local installation work fine : > > > python2.5 -c 'import numpy; print numpy.__version__; import > > numpy.core.multiarray; print "no pb"' > > 1.2.0b2 > no pb > > Have you got any recipe to freeze numpy based script ? > > Regards, > > -- > LB Did you try py2exe instead of freeze? On the page http://www.py2exe.org/index.cgi/WorkingWithVariousPackagesAndModules there is only one brief mention of numpy packaging troubles, suggesting that it might work better. I have used py2exe in the past without much trouble. -- http://mail.python.org/mailman/listinfo/python-list
Re: hashing an array - howto
On Sep 5, 11:18 am, [EMAIL PROTECTED] wrote:
> Helmut Jarausch:
> > I need to hash arrays of integers (from the hash module).
>
> One of the possible solutions is to hash the equivalent tuple, but it
> requires some memory (your sequence must not be tuples already):

Why can't it be a tuple already? Doesn't matter:

>>> from numpy import arange
>>> a = arange(5)
>>> a
array([0, 1, 2, 3, 4])
>>> hash(a)
Traceback (most recent call last):
  File "", line 1, in ?
TypeError: unhashable type
>>> b = tuple(a)
>>> b
(0, 1, 2, 3, 4)
>>> hash(b)
1286958229

You can discard the tuple afterwards, so the memory requirement is transient.
--
http://mail.python.org/mailman/listinfo/python-list
Re: running python as a dameon
On Sep 5, 9:56 pm, Sean Davis <[EMAIL PROTECTED]> wrote:
> > What I want
> > to do is to provide the python NLP program as a service to any other
> > PHP/Java/Ruby process request. So the mapping is
> >
> > http -> apache -> PHP/Java/Ruby/... -> Python NLP
>
> Why not use a simple CGI script or wsgi application? You could make
> the service online and interactive and with the same application and
> code make an XMLRPC web service. So, things would look more like:
>
> http -> apache -> Python (running NLP and serving requests)
>
> You can use apache to proxy requests to any one of a dozen or so
> python-based webservers. You could also use mod_wsgi to interface
> with a wsgi application.
>
> Sean

XML-RPC is the right idea, as it interfaces easily across languages.
--
http://mail.python.org/mailman/listinfo/python-list
Re: Read and write binary data
On Sep 7, 6:41 pm, Mars creature <[EMAIL PROTECTED]> wrote:
> Hi guys,
> I am new to Python, and thinking about migrating to it from matlab
> as it is a really cool language. Right now, I am trying to figure out
> how to control read and write binary data, like
> 'formatted','stream','big-endian','little-endian' etc.. as in fortran.
> I googled, but can not find a clear answer. Anyone has a clue where I
> can learn it? Thanks!!
> Jinbo

The struct module should be useful - it lets you pack and unpack binary records with explicit control over byte order.
--
http://mail.python.org/mailman/listinfo/python-list
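A minimal sketch of what struct offers for byte-order control (the format string and values here are just illustrative):

```python
import struct

# '>' forces big-endian, '<' little-endian; 'i' is a 4-byte int, 'd' an 8-byte double
big = struct.pack('>id', 42, 3.5)
little = struct.pack('<id', 42, 3.5)

# With an explicit byte-order prefix there is no padding: 4 + 8 = 12 bytes
assert len(big) == len(little) == 12
assert big != little  # same values, different byte order on the wire

# Unpacking with the matching format restores the original values exactly
assert struct.unpack('>id', big) == (42, 3.5)
assert struct.unpack('<id', little) == (42, 3.5)
```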
Re: setattr in class
On Sep 12, 11:08 am, Bojan Mihelac <[EMAIL PROTECTED]> wrote:
> Hi all - when trying to set some dynamic attributes in a class, for
> example:
>
> class A:
>     for lang in ['1', '2']:
>         exec('title_%s = lang' % lang)  # this works but is ugly
>         # setattr(A, "title_%s" % lang, lang)  # this won't work
>
> setattr(A, "title_1", "x")  # this works when outside the class
>
> print A.title_1
> print A.title_2
>
> I guess class A does not yet exist in line 4. Is it possible to achieve
> adding dynamic attributes without using exec?
>
> thanks,
> Bojan

Is it really worth it? If the names of the attributes are only known at runtime, why not just use a dict - that's what they are for. If you want a dict plus some special behaviour, just write a class that inherits from dict, or use UserDict.DictMixin.
--
http://mail.python.org/mailman/listinfo/python-list
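For completeness, setattr does work once the class object exists, i.e. after the class body has run - but a plain dict stays simpler. A small sketch:

```python
# setattr works once the class object exists, i.e. after the class body has run
class A(object):
    pass

for lang in ['1', '2']:
    setattr(A, 'title_%s' % lang, lang)

assert A.title_1 == '1'
assert A.title_2 == '2'

# ...but for runtime-named data a plain dict is usually the better fit
titles = dict(('title_%s' % lang, lang) for lang in ['1', '2'])
assert titles['title_2'] == '2'
```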
Re: how to exclude specific things when pickling?
On Sep 14, 10:53 am, "inhahe" <[EMAIL PROTECTED]> wrote:
> If I gather correctly pickling an object will pickle its entire hierarchy,
> but what if there are certain types of objects anywhere within the hierarchy
> that I don't want included in the serialization? What do I do to exclude
> them? Thanks.

If your class defines a __getstate__ method, pickle will use whatever it returns as the instance's state instead of self.__dict__. You can, for example, make a copy of self.__dict__, del the items that you don't want pickled, and return the pruned copy.
--
http://mail.python.org/mailman/listinfo/python-list
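A sketch of that approach (the class and attribute names are made up): copy the instance dict in __getstate__, drop the unwanted entry, and optionally restore a placeholder in __setstate__.

```python
import pickle

class Report(object):                  # made-up class for illustration
    def __init__(self):
        self.data = [1, 2, 3]
        self.renderer = lambda: None   # stand-in for an unpicklable member

    def __getstate__(self):
        state = self.__dict__.copy()   # work on a copy, not the live instance
        del state['renderer']          # exclude it from serialization
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.renderer = None           # restore a placeholder when loading

restored = pickle.loads(pickle.dumps(Report()))
assert restored.data == [1, 2, 3]
assert restored.renderer is None
```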
Re: Porting a pygtk app to Windows
On Sep 16, 12:30 pm, binaryjesus <[EMAIL PROTECTED]> wrote: > hi everyone, > first of all > I had written an app using pygtk module and created the GUI with > glade.All the development was done on a linux machine and the app was > working fine all this tme in linux. > > now, the thing is i have to change the development environment to > windows. So this means that i have to port the application to work in > windows. > > Initially i thought that porting an application written using a > platform independent language and cross-platform window frame work > would be a piece of cake. Well i guess all the assumptions end there. > unlike linux, in windows pygtk and the GTK frame work are not > installed by default. > > So, long story short. i installed GTK devel, pygtk, pygobject, pycaro, > glade ui. Also made a lot of path adjustments (os.path.absolutepath() > is not portable i guess) and finally got the app to at least start > without showing an error. > > The problem that i am now facing is that nothing shows up in the app. > No menu, buttons, frames or anything else is visible. When i move the > cursor over the window it changes into an hour-glass type icon. hoe > ever all c++ GTK programs seem to render well. 
> > here is a screen shot:http://i36.tinypic.com/x52uk9.jpg > > i have written below the startup code of the app: > > import pygtk > pygtk.require('2.0') > import gtk > import gtk.glade > from ConfigParser import ConfigParser > > class jDesk(object): > def __init__(self): > #self.seclstore.append(["0","Section1"]) > #self.catlstore.append(["cat 1"]) > self.synclstore = gtk.ListStore(str,str,str,str,str,int) > self.seclstore = gtk.ListStore(str,str) > self.catlstore = gtk.ListStore(str,str) > self.process_glade() > > def process_glade(self): > self.gladefile = "gui.glade" > self.glade = gtk.glade.XML(self.gladefile) > #windows > self.main_window = self.glade.get_widget('MainWindow') > #main window > self.templatefile = self.glade.get_widget('templatefile') > self.imageurl = self.glade.get_widget('imageurl') > self.posttitle = self.glade.get_widget('posttitle') > self.sectionbox = self.glade.get_widget('sectionbox') > self.categorybox = self.glade.get_widget('categorybox') > self.demolink = self.glade.get_widget('demolink') > self.posttext = self.glade.get_widget('posttext') > self.statusbar = self.glade.get_widget('statusbar') > > self.signal_autoconnect() > self.main_window.show() > print '===main wind created=' > def run(self): > try: > print "Entering GTK main now" > gtk.main() > print "Leaving GTK main" > except: > print "Exception in main" > > if __name__ == "__main__": > conf = ConfigParser() > conf.read('settings.cfg') > gtk.gdk.threads_init() > app = jDesk() > app.run() > > i have tried a lot of things, checked up paths, checked libcairo but > nothing seems to help.problem seems to be with pygtk since other c++ > GTK programs like pedgin and GTK demo rn fine. > So maybe is there any pygtk windows bugs that i coming from linux > background might not be knowing about or perhaps u have encountered > such a problem in the past before ? 
> Much thanks in advance
> BinaryJ

I haven't tried it myself, but I came across a blog post the other day that describes a way of building Windows installers for pyGTK applications at http://unpythonic.blogspot.com/2007/07/pygtk-py2exe-and-inno-setup-for-single.html
--
http://mail.python.org/mailman/listinfo/python-list
Re: find the path of a module
On Sep 16, 4:07 pm, [EMAIL PROTECTED] wrote:
> I'd like to know if I can somehow find the path for a module somewhere
> in the package hierarchy
> for instance if I import my module like so
> from spam.eggs import sausage
> my hypothetical method would return something like
> '/home/developer/projects/spam/eggs/sausage.py/c'
> given that module object.

The __file__ attribute is what you want:

>>> import pyPdf
>>> pyPdf.__file__
'/data/python/pyPdf/__init__.pyc'
>>>
--
http://mail.python.org/mailman/listinfo/python-list
Re: shelve file space always increase!
On Sep 17, 6:17 am, smalltalk <[EMAIL PROTECTED]> wrote:
> >>> import shelve
> >>> sf = shelve.open('e:/abc.db')
> >>> for i in range(1):
> ...     sf[str(i)] = i
> ...
> >>> sf.close()
> >>> sf = shelve.open('e:/abc.db')
> >>> sf.clear()
> >>> sf
> {}
>
> the abc.db is always 312k though i have used clear(), how can i shrink
> the space?

shelve doesn't have any way of doing that. The only option is to read all items from your shelve and write them to a new one.
--
http://mail.python.org/mailman/listinfo/python-list
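A sketch of that copy-to-a-fresh-shelve approach (the file names are made up):

```python
import os
import shelve
import tempfile

tmpdir = tempfile.mkdtemp()
old = shelve.open(os.path.join(tmpdir, 'old'))
for i in range(100):
    old[str(i)] = i
for i in range(1, 100):
    del old[str(i)]       # the file keeps its size; space is not reclaimed

new = shelve.open(os.path.join(tmpdir, 'new'))
for key in old:           # copy only the live items into a fresh shelve
    new[key] = old[key]
old.close()
new.close()

check = shelve.open(os.path.join(tmpdir, 'new'))
assert dict(check) == {'0': 0}
check.close()
```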
Re: ssl server
On Sep 17, 1:33 pm, Seb <[EMAIL PROTECTED]> wrote:
> I'm making a ssl server, but I'm not sure how I can verify the
> clients. What do I actually need to place in _verify to actually
> verify that the client cert is signed by me?
>
> 50     class SSLTCPServer(TCPServer):
> 51         keyFile = "sslcert/server.key"
> 52         certFile = "sslcert/server.crt"
> 53         def __init__(self, server_address, RequestHandlerClass):
> 54             ctx = SSL.Context(SSL.SSLv23_METHOD)
> 55             ctx.use_privatekey_file(self.keyFile)
> 56             ctx.use_certificate_file(self.certFile)
> 57             ctx.set_verify(SSL.VERIFY_PEER | SSL.VERIFY_FAIL_IF_NO_PEER_CERT | SSL.VERIFY_CLIENT_ONCE, self._verify)
> 58             ctx.set_verify_depth(10)
> 59             ctx.set_session_id('DFS')
> 60
> 61             self.server_address = server_address
> 62             self.RequestHandlerClass = RequestHandlerClass
> 63             self.socket = socket.socket(self.address_family, self.socket_type)
> 64             self.socket = SSL.Connection(ctx, self.socket)
> 65             self.socket.bind(self.server_address)
> 66             self.socket.listen(self.request_queue_size)
> 67
> 68         def _verify(self, conn, cert, errno, depth, retcode):
> 69             return not cert.has_expired() and cert.get_issuer().organizationName == 'DFS'

If I were you, I would just hide behind apache, nginx or another server that does SSL. Just have that server proxy locally to your python server over http, and firewall the python server port.
--
http://mail.python.org/mailman/listinfo/python-list
Re: generator exceptions
On Sep 19, 9:40 am, Alexandru Mosoi <[EMAIL PROTECTED]> wrote: > i have a generator that raises an exception when calling next(), > however if I try to catch the exception and print the traceback i get > only the line where next() was called > > while True: > try: > iterator.next() > except StopIteration: > break > except Exception, e: > traceback.print_exc() > > how do I get the traceback starting from where exception was raised in > first place? What happens if you simply remove the 'except Exception' clause? It should print the entire traceback by default. -- http://mail.python.org/mailman/listinfo/python-list
Re: Twisted vs Python Sockets
On Sep 18, 4:24 pm, Fredrik Lundh <[EMAIL PROTECTED]> wrote:
> James Matthews wrote:
> > I am wondering what are the major points of twisted over regular python
> > sockets. I am looking to write a TCP server and want to know the pros
> > and cons of using one over the other.
>
> Twisted is a communication framework with lots of ready-made components:
>
> http://twistedmatrix.com/trac/wiki/TwistedAdvantage
>
> Regular sockets are, well, regular sockets. No more, no less.
>
> Depends on what you want your TCP server to do.

Just to mention it, there is also the SocketServer module in the standard library, which already contains various server classes, including a TCP server.
--
http://mail.python.org/mailman/listinfo/python-list
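For reference, a minimal TCP server with SocketServer might look like this (the handler behaviour is made up for the demo; the module is named socketserver in Python 3):

```python
import socket
import threading

try:
    import socketserver                   # Python 3 name
except ImportError:
    import SocketServer as socketserver   # Python 2 name

class UpperHandler(socketserver.StreamRequestHandler):
    # Hypothetical handler: read one line, send it back uppercased
    def handle(self):
        line = self.rfile.readline()
        self.wfile.write(line.upper())

server = socketserver.TCPServer(('127.0.0.1', 0), UpperHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever).start()

conn = socket.create_connection(server.server_address)
conn.sendall(b'hello\n')
reply = conn.makefile('rb').readline()
conn.close()
server.shutdown()
server.server_close()

assert reply == b'HELLO\n'
```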
Re: Launching a subprocess without waiting around for the result?
On Sep 18, 5:33 pm, erikcw <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have a cgi script where users are uploading large files for
> processing. I want to launch a subprocess to process the file so the
> user doesn't have to wait for the page to load.
>
> What is the correct way to launch a subprocess without waiting for the
> result to return?
>
> Thanks!

Both the os.spawn* functions and the subprocess module can be used. I actually find subprocess hard to remember, so I usually prefer os.spawn*. For various examples and explanations, see http://effbot.org/librarybook/os.htm
--
http://mail.python.org/mailman/listinfo/python-list
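With subprocess, the key point is that Popen returns immediately; a sketch (the child command here is just a stand-in for the real processing job):

```python
import subprocess
import sys

# The child command is a stand-in for the real processing job.
child = subprocess.Popen([sys.executable, '-c', "print('processed')"],
                         stdout=subprocess.PIPE)

# Popen has already returned; the parent is free to respond to the user here.
# In a CGI setting you would typically redirect output to a log file and never wait.

out, _ = child.communicate()   # collect the result only when we actually want it
assert out.strip() == b'processed'
assert child.returncode == 0
```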
Re: matrix algebra
On Sep 22, 4:02 am, Al Kabaila <[EMAIL PROTECTED]> wrote: > This is a very active newsgroup that incudes such giants as Frederik Lundh He looks rather small to me in this picture: http://www.python.org/~guido/confpix/flundh-2.jpg -- http://mail.python.org/mailman/listinfo/python-list
Re: Why no tailcall-optimization?
On Sep 22, 9:13 pm, process <[EMAIL PROTECTED]> wrote: > Why doesn't Python optimize tailcalls? Are there plans for it? > > I know GvR dislikes some of the functional additions like reduce and > Python is supposedly about "one preferrable way of doing things" but > not being able to use recursion properly is just a big pain in the > a**. There are some attempts, see for example http://code.activestate.com/recipes/496691/ -- http://mail.python.org/mailman/listinfo/python-list
Re: gplt from scipy missing ?
On Sep 23, 7:44 am, Ivan Reborin <[EMAIL PROTECTED]> wrote: > On Tue, 23 Sep 2008 04:26:14 -0300, "Gabriel Genellina" > > <[EMAIL PROTECTED]> wrote: > > >I think scipy does not bundle plotting packages anymore - you may use > >whatever suits you, from other sources. > >Try matplotlib, see the wiki: > >http://wiki.python.org/moin/NumericAndScientific/Plotting > > Hello Gabriel, > thank you for answering. > > Unfortunatelly, I cannot change my plotting package, unless I indend > to change a lot of code that I'll be using in the future. I'm not a > programmer by trade, just a guy doing some calculations with already > written programms. > > Do you know, by any chance, where one could get gplt separately, or > for example, get older versions of scipy ? > I'm using python 5.2.2.. If I install scipy for python 2.3. for > example (let's assume that one still has gplt in it) will it work ? > > Best regards > Ivan Well, if you are using scipy, you must at least be doing some programming. Instead of using gplt, you could just write your data to a .csv file and feed that to gnuplot yourself. You can then use the full flexibility of gnuplot for formatting your output, without having to cross your fingers that the features you need will be covered by the gplt module. You also have your data in a readable format after calculation but before plotting - I find such intermediate data useful for debugging. -- http://mail.python.org/mailman/listinfo/python-list
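For example, dumping the numbers to a plain whitespace-separated file - gnuplot's native input format - takes only a few lines (the data values are made up):

```python
import os
import tempfile

# Made-up data standing in for the real calculation results
xs = range(5)
ys = [x * x for x in xs]

path = os.path.join(tempfile.mkdtemp(), 'results.dat')
out = open(path, 'w')
for x, y in zip(xs, ys):
    out.write('%d\t%d\n' % (x, y))   # whitespace-separated columns, which gnuplot reads natively
out.close()

assert open(path).read() == '0\t0\n1\t1\n2\t4\n3\t9\n4\t16\n'
```

gnuplot can then plot the file with e.g. plot 'results.dat' using 1:2 with lines, and the intermediate file doubles as a debugging aid.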
Re: Comparing float and decimal
> > This seems to break the rule that if A is equal to B and B is equal to C
> > then A is equal to C.
>
> I don't see why transitivity should apply to Python objects in general.

Well, for numbers it surely would be a nice touch, wouldn't it?

Maybe the reason that Decimal does not accept float arguments is that irrational numbers or very long rational numbers cannot be converted to a Decimal without rounding error, and Decimal doesn't want any part of that. Seems pointless to me, though.
--
http://mail.python.org/mailman/listinfo/python-list
Re: Comparing float and decimal
On Sep 23, 10:08 am, Michael Palmer <[EMAIL PROTECTED]> wrote: > May be the reason for Decimal to accept float arguments is that NOT to accept float arguments. -- http://mail.python.org/mailman/listinfo/python-list
Re: urllib error on urlopen
On Sep 24, 11:46 am, Mike Driscoll <[EMAIL PROTECTED]> wrote: > Hi, > > I have been using the following code for over a year in one of my > programs: > > f = urllib2.urlopen('https://www.companywebsite.com/somestring') > > It worked great until the middle of the afternoon yesterday. Now I get > the following traceback: > > Traceback (most recent call last): > File "", line 1, in > response = urllib2.urlopen(req).read().strip() > File "c:\python25\lib\urllib2.py", line 124, in urlopen > return _opener.open(url, data) > File "c:\python25\lib\urllib2.py", line 381, in open > response = self._open(req, data) > File "c:\python25\lib\urllib2.py", line 399, in _open > '_open', req) > File "c:\python25\lib\urllib2.py", line 360, in _call_chain > result = func(*args) > File "c:\python25\lib\urllib2.py", line 1115, in https_open > return self.do_open(httplib.HTTPSConnection, req) > File "c:\python25\lib\urllib2.py", line 1082, in do_open > raise URLError(err) > URLError: routines:SSL23_GET_SERVER_HELLO:unknown protocol')> > > I tried my Google Fu on this error, but there's not much out there. I > tried using a proxy in Python, but that returned the same traceback. > If I copy the URL into my browser, it resolves correctly. Does anyone > have any advice on how to troubleshoot this error? > > I am using Python 2.5.2 on Windows XP. > > Thanks, > > Mike Could it just be a misconfiguration at the other end? Can you open other https urls? -- http://mail.python.org/mailman/listinfo/python-list
Re: multiple processes, private working directories
On Sep 24, 9:27 pm, Tim Arnold <[EMAIL PROTECTED]> wrote:
> I have a bunch of processes to run and each one needs its own working
> directory. I'd also like to know when all of the processes are
> finished.
>
> (1) First thought was threads, until I saw that os.chdir was process-global.
> (2) Next thought was fork, but I don't know how to signal when each child is
> finished.
> (3) Current thought is to break the process from a method into an external
> script; call the script in separate threads. This is the only way I can see
> to give each process a separate dir (external process fixes that), and I can
> find out when each process is finished (thread fixes that).
>
> Am I missing something? Is there a better way? I hate to rewrite this method
> as a script since I've got a lot of object metadata that I'll have to
> regenerate with each call of the script.
>
> thanks for any suggestions,
> --Tim Arnold

1. Does the work in the different directories really have to be done concurrently? You say you'd like to know when each thread/process is finished, suggesting that they are not server processes but rather accomplish some limited task.

2. If the answer to 1. is yes: All that os.chdir gives you is an implicit global variable. Is that convenience really worth a multi-process architecture? Would it not be easier to just work with explicit path names instead? You could store the path of the per-thread working directory in an instance of threading.local - for example:

>>> import threading
>>> t = threading.local()
>>>
>>> class Worker(threading.Thread):
...     def __init__(self, path):
...         threading.Thread.__init__(self)
...         self._path = path
...     def run(self):
...         t.path = self._path  # assign inside run(), so the value lands in this thread's storage
...

The thread-specific value of t.path would then be available to all classes and functions running within that thread.
--
http://mail.python.org/mailman/listinfo/python-list
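A runnable sketch of that idea (the paths are made up). One caveat: the assignment to t.path has to happen inside the worker thread itself - e.g. in run() or the target function - because threading.local keeps a separate value per thread:

```python
import threading

t = threading.local()   # one independent 'path' slot per thread
results = {}

def work(path):
    t.path = path            # runs in the worker thread, so this is thread-private
    results[path] = t.path   # each thread sees only its own value

threads = [threading.Thread(target=work, args=(p,))
           for p in ('/tmp/job_a', '/tmp/job_b')]
for th in threads:
    th.start()
for th in threads:
    th.join()                # join() also answers "when is each worker finished?"

assert results == {'/tmp/job_a': '/tmp/job_a', '/tmp/job_b': '/tmp/job_b'}
```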
Re: multiple processes, private working directories
On Sep 25, 8:16 am, "Tim Arnold" <[EMAIL PROTECTED]> wrote: > "Tim Arnold" <[EMAIL PROTECTED]> wrote in message > > news:[EMAIL PROTECTED] > > >I have a bunch of processes to run and each one needs its own working > > directory. I'd also like to know when all of the processes are > > finished. > > Thanks for the ideas everyone--I now have some news tools in the toolbox. > The task is to use pdflatex to compile a bunch of (>100) chapters and know > when the book is complete (i.e. the book pdf is done and the separate > chapter pdfs are finished. I have to wait for that before I start some > postprocessing and reporting chores. > > My original scheme was to use a class to manage the builds with threads, > calling pdflatex within each thread. Since pdflatex really does need to be > in the directory with the source, I had a problem. > > I'm reading now about python's multiprocessing capabilty, but I think I can > use Karthik's suggestion to call pdflatex in subprocess with the cwd set. > That seems like the simple solution at this point, but I'm going to give > Cameron's pipes suggestion a go as well. > > In any case, it's clear I need to rethink the problem. Thanks to everyone > for helping me get past my brain-lock. > > --Tim Arnold I still don't see why this should be done concurrently? Do you have > 100 processors available? I also happen to be writing a book in Latex these days. I have one master document and pull in all chapters using \include, and pdflatex is only ever run on the master document. For a quick preview of the chapter I'm currently working on, I just use \includeonly - compiles in no time at all. How do you manage to get consistent page numbers and cross-referencing if you process all chapters separately, and even in _parallel_ ? That just doesn't look right to me. -- http://mail.python.org/mailman/listinfo/python-list