importing class objects from a pickled file
Hello,

I have an object of class X that I am writing to a pickled file. The pickling part goes fine, but I am having some problems reading the object back out: I get complaints about "unable to import module X". The only way I have found around it is to run the read-file code out of the same directory that contains the X.py file, but this is obviously not a portable way of doing things.

Even when I put statements into the code such as "from Y.X import X", where Y is the name of the Python package that contains the X.py file, the import statement works, but I am still unable to read the object from the pickled file, running into the same "unable to import module X" error.

Am I explaining myself properly? Why doesn't the code that loads the object from the pickled file work unless I am sitting in the same directory? The code that writes the pickled file has the "from Y.X import X" statement at the top, as does the reading code, but even though that import statement succeeds, the read still fails with the import error.

Thanks for any help,
Catherine
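A likely culprit, sketched below under the assumption of a package layout Y/X.py: pickle stores each object's class as a (module, class-name) pair and re-imports that module by name at load time. If the writer ran from inside the package directory, X.py was importable as a top-level module and the pickle records the module as plain "X" rather than "Y.X"; unpickling then fails anywhere "import X" fails, no matter what the reader imports. A minimal sketch of the consistent-import fix:

    # Writer and reader must both see the class under the same dotted path.
    # If type(obj).__module__ says 'X' instead of 'Y.X', the object was
    # pickled under the top-level name and only loads where 'import X' works.
    import pickle
    from Y.X import X           # always import through the package

    obj = X()
    print type(obj).__module__  # expect 'Y.X'

    f = open("obj.pkl", "wb")
    pickle.dump(obj, f)
    f.close()

    # Reader, run from any directory where package Y is on sys.path:
    f = open("obj.pkl", "rb")
    obj2 = pickle.load(f)
    f.close()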
using masks and numpy record arrays
Hello,

I am trying to work with a structured array and a mask, and am encountering some problems. For example:

>>> import numpy
>>> xtype = numpy.dtype([("n", numpy.int32), ("x", numpy.float32)])
>>> a = numpy.zeros((4), dtype=xtype)
>>> b = numpy.arange(0, 4)
>>> a2 = numpy.zeros((4), dtype=xtype)
>>> mask = numpy.where(b % 2 == 0)
>>> a2[:]["n"] += b          # this changes the values of a2
>>> a[mask]["n"] += b[mask]  # this does not change the values of a
>>> a2
array([(0, 0.0), (1, 0.0), (2, 0.0), (3, 0.0)],
      dtype=[('n', '<i4'), ('x', '<f4')])
>>> a
array([(0, 0.0), (0, 0.0), (0, 0.0), (0, 0.0)],
      dtype=[('n', '<i4'), ('x', '<f4')])

The same masked assignment works fine on a plain (non-record) array:

>>> a = numpy.zeros((4))
>>> a[mask] += b[mask]
>>> a
array([ 0.,  0.,  2.,  0.])

What is it about a numpy record array that prevents the mask statement from working, and how do I get around this?

Thanks,
Catherine
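For reference, the usual explanation is standard numpy fancy-indexing semantics: indexing with an index array (a[mask]) returns a copy, so the in-place += lands on a temporary that is immediately discarded. Selecting the field first returns a view into the original array, so masking that view sticks. A minimal sketch of the workaround:

    import numpy

    xtype = numpy.dtype([("n", numpy.int32), ("x", numpy.float32)])
    a = numpy.zeros(4, dtype=xtype)
    b = numpy.arange(0, 4)
    mask = numpy.where(b % 2 == 0)

    # a["n"] is a view of the n field, so the masked in-place add below
    # modifies a itself; a[mask]["n"] += ... modifies a copy of a[mask]
    # and throws it away.
    a["n"][mask] += b[mask]
    print a["n"]   # [0 0 2 0]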
fast copying of large files in python
Hello,

I'm working on an application that, as part of its processing, needs to copy 50 MB binary files from one NFS-mounted disk to another. The simple-minded approach of shutil.copyfile is very slow, and I'm guessing that this is due to the default 16K buffer size. Using shutil.copyfileobj is faster when I set a larger buffer size, but it's still slow compared to an os.system("cp %s %s" % (f1, f2)) call. Is this because of the overhead entailed in having to open the files in copyfileobj?

I'm not worried about portability, so should I just do the os.system call as described above and be done with it? Is that the fastest method in this case? Are there any "pure Python" ways of getting the same speed as os.system?

Thanks,
Catherine
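For comparison, a minimal sketch of the larger-buffer copyfileobj approach mentioned above; the 10 MB chunk size is an arbitrary starting point to tune, not a recommendation:

    import shutil

    def fast_copy(src, dst, buf_size=10 * 1024 * 1024):
        # shutil.copyfileobj's third argument is the chunk size; the 16 KB
        # default forces thousands of round trips for a 50 MB file on NFS.
        fsrc = open(src, "rb")
        fdst = open(dst, "wb")
        try:
            shutil.copyfileobj(fsrc, fdst, buf_size)
        finally:
            fsrc.close()
            fdst.close()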
Re: Python advanced course (preferably in NA)
I've taken two Python classes from David Beazley and can second Eric's recommendation. The "advanced" class is really advanced and goes into some pretty mind-blowing stuff. The class comes with lots of problems and solutions, and a book of all the slides, which is a great reference. Well worth the time and money!

Catherine

Eric Snow wrote:
> On Thu, Nov 3, 2011 at 12:13 PM, Behnam wrote:
>> Is anybody aware of any advanced course in Python, preferably in North
>> America? I've been partly coding in Python for a couple of years now
>> and have used PyQt. What I'd like to learn more is a kind of advanced
>> OOP in Python. Any idea?
>
> While I don't know specifically, check out the following link (from the
> Python site): http://wiki.python.org/moin/PythonTraining
>
> I have taken a class each (PyCon tutorial) from Raymond Hettinger,
> David Beazley, and Brian Jones, and found each of them to be
> outstanding courses. Only David is listed on that page to which I
> linked, though I know Raymond does courses at least from time to time.
> I've also heard a talk from Wesley Chun and found him to be fantastic.
>
> -eric
>
>> BTW, I'm not a computer engineer and have a mechanical background.
>> Thanks in advance!
tracking variable value changes
Hello,

Is there a way to create a C-style pointer in (pure) Python so the following code will reflect changes to the variable "a" in the dictionary "x"? For example:

>>> a = 1.0
>>> b = 2.0
>>> x = {"a": a, "b": b}
>>> x
{'a': 1.0, 'b': 2.0}
>>> a = 100.0
>>> x
{'a': 1.0, 'b': 2.0}   ## at this point, I would like the value
                       ## associated with the "a" key to be 100.0
                       ## rather than 1.0

If I make "a" and "b" numpy arrays, then changes that I make to the values of a and b show up in the dictionary x. My understanding is that when I redefine the value of "a", Python creates a brand-new float with the value of 100.0, whereas when I use numpy arrays I am merely assigning a new value to the same object.

Is there some way to rewrite the code above so the change of "a" from 1.0 to 100.0 is reflected in the dictionary? I would like to use simple datatypes such as floats, rather than numpy arrays or classes. I tried using weakrefs, but got the error that a weak reference cannot be created to a float.

Catherine
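A minimal sketch of the usual pure-Python workaround: since floats are immutable and rebinding a name never touches the dict, wrap each value in a small mutable container (a one-element list here) and mutate it in place rather than rebinding:

    a = [1.0]
    b = [2.0]
    x = {"a": a, "b": b}

    a[0] = 100.0   # mutates the shared list instead of rebinding the name
    print x        # {'a': [100.0], 'b': [2.0]} -- the change is visible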
executing multiple functions in background simultaneously
Hello everybody,

I know how to spawn a sub-process and then wait until it completes. I'm wondering if I can do the same thing with a Python function. I would like to spawn off multiple instances of a function, run them simultaneously, and then wait until they all complete. Currently I'm doing this by calling them as sub-processes executable from the command line. Is there a way of accomplishing the same thing without having to make command-line executables of the function call?

I'm primarily concerned about code readability and ease of programming. The code would look a lot prettier and be shorter to boot if I could spawn off function calls rather than subprocesses.

Thanks for any advice,
Catherine
Re: executing multiple functions in background simultaneously
James Mills wrote:
> On Wed, Jan 14, 2009 at 11:02 AM, Catherine Moroney wrote:
>> I would like to spawn off multiple instances of a function and run
>> them simultaneously and then wait until they all complete. Currently
>> I'm doing this by calling them as sub-processes executable from the
>> command-line. Is there a way of accomplishing the same thing without
>> having to make command-line executables of the function call?
>
> Try using the python standard threading module.
>
> Create multiple instances of Thread with target=your_function.
> Maintain a list of these new Thread instances.
> Join (wait) on them.
>
> pydoc threading.Thread
>
> cheers
> James

What is the proper syntax to use if I wish to return variables from a function run as a thread? For example, how do I implement the following code to return the variable "c" from MyFunc for later use in RunThreads? Trying to return anything from the threading.Thread call results in an "unpack non-sequence" error.

    import threading, sys

    def MyFunc(a, b):
        c = a + b
        print "c =", c
        return c

    def RunThreads():
        args = (1, 2)
        threading.Thread(target=MyFunc, args=(1, 2)).start()

    if __name__ == "__main__":
        RunThreads()
        sys.exit()
Re: executing multiple functions in background simultaneously
Cameron Simpson wrote:
> On 14Jan2009 15:50, Catherine Moroney wrote:
>> James Mills wrote:
>>> On Wed, Jan 14, 2009 at 11:02 AM, Catherine Moroney wrote:
>>>> I would like to spawn off multiple instances of a function and run
>>>> them simultaneously and then wait until they all complete. [...]
>>> Try using the python standard threading module. Create multiple
>>> instances of Thread with target=your_function. Maintain a list of
>>> these new Thread instances. Join (wait) on them.
>> What is the proper syntax to use if I wish to return variables from a
>> function run as a thread?
>
> The easy thing is to use a Queue object. The background thread uses
> .put() to place a computed result on the Queue, and the caller uses
> .get() to read from the queue. There's an assortment of other ways too.
> Cheers,

Thank you for this hint. This goes a long way to solving my problem. One question - is there any way to name the objects that get put on a queue? For my application, it's important to know which thread put a particular item on the queue.

Catherine
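A minimal sketch of the Queue approach Cameron describes, adapted to the MyFunc example from earlier in the thread (the four-thread fan-out is illustrative):

    import threading, Queue

    def MyFunc(a, b, results):
        results.put(a + b)    # hand the result back instead of returning it

    def RunThreads():
        results = Queue.Queue()
        threads = [threading.Thread(target=MyFunc, args=(i, i + 1, results))
                   for i in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()          # wait for all workers to finish
        while not results.empty():
            print results.get()

    if __name__ == "__main__":
        RunThreads()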
Re: executing multiple functions in background simultaneously
On Jan 14, 2009, at 5:20 PM, Jean-Paul Calderone wrote:
> On Wed, 14 Jan 2009 17:11:44 -0800, Catherine Moroney wrote:
>> [snip]
>>> The easy thing is to use a Queue object. The background thread uses
>>> .put() to place a computed result on the Queue, and the caller uses
>>> .get() to read from the queue. There's an assortment of other ways
>>> too. Cheers,
>> Thank you for this hint. This goes a long way to solving my problem.
>> One question - is there any way to name the objects that get put on a
>> queue? For my application, it's important to know which thread put a
>> particular item on the queue.
>
> There's lots and lots of ways. The simplest might be to put two-tuples
> of the thread identifier and some other object, e.g.:
>
>     queue.put((threadID, obj))
>
> Perhaps you can accomplish your goal that way, or perhaps a minor
> variation would be more suitable.
>
> Jean-Paul

I just came to that conclusion myself, and a short test shows that things are working. Thanks to all who contributed to this discussion. Even though I'm by no means an expert, at least I can work with threads and queues now! I love this language ...

Catherine
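A small sketch of the tagged put Jean-Paul suggests, using the worker thread's own name as the identifier (the Python 2 threading API spells it currentThread().getName()):

    import threading, Queue

    results = Queue.Queue()

    def MyFunc(a, b, results):
        # tag each result with the name of the thread that produced it
        results.put((threading.currentThread().getName(), a + b))

    t = threading.Thread(target=MyFunc, name="worker-1", args=(1, 2, results))
    t.start()
    t.join()

    name, value = results.get()
    print "%s produced %r" % (name, value)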
memory use with regard to large pickle files
I'm writing a Python program that reads in a very large pickled file (consisting of one large dictionary and one small one) and parses the results out to several binary and HDF files.

The program works fine, but the memory load is huge. The size of the pickle file on disk is about 900 Meg, so I would theoretically expect my program to consume about twice that (the dictionary contained in the pickle file plus its repackaging into other formats), but instead my program needs almost 5 Gig of memory to run. Am I being unrealistic in my memory expectations?

I'm running Python 2.5 on a Linux box (Fedora release 7). Is there a way to see how much memory is being consumed by a single data structure or variable? How can I go about debugging this problem?

Catherine
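One rough way to attribute memory to a single structure, sketched below. Caveats: it relies on sys.getsizeof, which arrived in Python 2.6, so it won't run on the 2.5 box described above, and it measures shallow sizes only, so nested containers would need recursion:

    import sys

    def approx_dict_size(d):
        # shallow size of the dict plus shallow sizes of keys and values
        total = sys.getsizeof(d)
        for key, value in d.iteritems():
            total += sys.getsizeof(key) + sys.getsizeof(value)
        return total

    print approx_dict_size({"a": 1.0, "b": "hello"}), "bytes (approximate)"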
calling python scripts as a sub-process
Hello,

I have one script (Match1) that calls a Fortran executable as a sub-process, and I want to write another script (Match4) that spawns off several instances of Match1 in parallel and then waits until they all finish running. The only way I can think of doing this is to call Match1 as a sub-process as well, rather than directly.

I'm able to get Match1 working correctly in isolation, using the subprocess.Popen command, but calling an instance of Match1 as a subprocess spawned from Match4 isn't working. The command (stored as a list of strings) that I'm executing is:

    ['python ../src_python/Match1.py ',
     '--file_ref=MISR_AM1_GRP_ELLIPSOID_GM_P228_O003571_BF_F03_0024.hdf ',
     '--file_cmp=MISR_AM1_GRP_ELLIPSOID_GM_P228_O003571_DF_F03_0024.hdf ',
     '--block_start=62 ', '--block_end=62 ', '--istep=16 ',
     "--chmetric='M2' ", "--use_textid='true '"]

and I'm calling it as:

    sub1 = subprocess.Popen(command)

I get the error below. Does anybody know what this error refers to and what I'm doing wrong? Is it even allowable to call another script as a sub-process rather than calling it directly?

      File "../src_python/Match4.py", line 24, in RunMatch4
        sub1 = subprocess.Popen(command1)
      File "/usr/lib64/python2.5/subprocess.py", line 593, in __init__
        errread, errwrite)
      File "/usr/lib64/python2.5/subprocess.py", line 1051, in _execute_child
        raise child_exception
    OSError: [Errno 2] No such file or directory

Thanks for any help,
Catherine
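A likely explanation, offered here as a sketch since the replies below take a different route: when Popen is given a list, each element must be exactly one argv token, with no trailing spaces and no shell-style quoting. The first element above, 'python ../src_python/Match1.py ', is therefore treated as one executable name, which is exactly what produces "[Errno 2] No such file or directory". Splitting the tokens cleanly:

    import subprocess

    # One token per list element; no trailing spaces, no nested quotes.
    command = ["python", "../src_python/Match1.py",
               "--file_ref=MISR_AM1_GRP_ELLIPSOID_GM_P228_O003571_BF_F03_0024.hdf",
               "--file_cmp=MISR_AM1_GRP_ELLIPSOID_GM_P228_O003571_DF_F03_0024.hdf",
               "--block_start=62", "--block_end=62", "--istep=16",
               "--chmetric=M2", "--use_textid=true"]
    sub1 = subprocess.Popen(command)   # no shell needed with a clean argv list
    sub1.wait()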
Re: calling python scripts as a sub-process
I just tried that, and I get the same error. Interestingly enough, a shorter (and incorrect) version of the command works well enough that it gets into the Match1 code and does the argument check there. The following code gets into Match1:

>>> command = ['python', '../src_python/Match1.py', '--filex="xyz"']
>>> sub1 = subprocess.Popen(command)

whereas this doesn't even get to call Match1:

    command = ['python',
     '/data/svn_workspace/cmm/sieglind/USC/EE569/tpaper/test/../src_python/Match1.py ',
     '--file_ref=MISR_AM1_GRP_ELLIPSOID_GM_P228_O003571_BF_F03_0024.hdf ',
     '--file_cmp=MISR_AM1_GRP_ELLIPSOID_GM_P228_O003571_DF_F03_0024.hdf ',
     '--block_start=62 ', '--block_end=62 ', '--istep=16 ',
     '--chmetric=M2 ', '--use_textid=true']
    sub1 = subprocess.Popen(command)

Can anybody see a reason why the abbreviated version works and the full-up one doesn't?

Catherine

Philip Semanchuk wrote:
> On Nov 19, 2008, at 2:03 PM, Catherine Moroney wrote:
>> The command (stored as an array of strings) that I'm executing is:
>> [snip]
>> I get the error below. Does anybody know what this error refers to and
>> what I'm doing wrong? Is it even allowable to call another script as a
>> sub-process rather than calling it directly?
>> [snip]
>> OSError: [Errno 2] No such file or directory
>
> Try supplying a fully-qualified path to your script, e.g.:
>
> ['python /home/catherine/src_python/Match1.py ',
>  '--file_ref=MISR_AM1_GRP_ELLIPSOID_GM_P228_O003571_BF_F03_0024.hdf ',
>  '--file_cmp=MISR_AM1_GRP_ELLIPSOID_GM_P228_O003571_DF_F03_0024.hdf ',
>  '--block_start=62 ', '--block_end=62 ', '--istep=16 ',
>  "--chmetric='M2' ", "--use_textid='true '"]
Re: calling python scripts as a sub-process
Dan Upton wrote:
> On Wed, Nov 19, 2008 at 2:13 PM, Philip Semanchuk <[EMAIL PROTECTED]> wrote:
>> On Nov 19, 2008, at 2:03 PM, Catherine Moroney wrote:
>>> The command (stored as an array of strings) that I'm executing is:
>>> [snip - same command list and traceback as quoted above]
>>
>> Try supplying a fully-qualified path to your script, e.g.:
>>
>> ['python /home/catherine/src_python/Match1.py ', ...]
>
> I think when I came across this error, I added shell=True, e.g.
>
>     sub1 = subprocess.Popen(command, shell=True)

I added the shell=True and this time it got into Match1 (hurrah!), but it then opened up an interactive Python session and didn't complete until I manually typed 'exit' in the interactive session. Match1 looks like:

    if __name__ == "__main__":
        <<< parse arguments >>>
        RunMatch1(file_ref, file_cmp, iblock_start, iblock_end, \
                  nlinep, nsmpp, mindispx, maxdispx, mindispl, \
                  maxdispl, istep, chmetric, use_textid)
        exit()

where the routine RunMatch1 does all the actual processing. How do I get Match1 to run and exit normally, without opening up an interactive session, when called as a subprocess from Match4?

Catherine
Re: calling python scripts as a sub-process
Dan Upton wrote:
> On Wed, Nov 19, 2008 at 2:38 PM, Catherine Moroney <[EMAIL PROTECTED]> wrote:
>> [snip - the shell=True exchange quoted in full above]
>>
>> I added the shell=True and this time it got into Match1 (hurrah!), but
>> it then opened up an interactive Python session and didn't complete
>> until I manually typed 'exit' in the interactive session.
>> [snip]
>> How do I get Match1 to run and exit normally, without opening up an
>> interactive session, when called as a subprocess from Match4?
>
> Alternately, rather than using a list of arguments, have you tried just
> using a string? (Again, that's the way I do it and I haven't been having
> any problems recently, although I'm running shell scripts or binaries
> with arguments rather than trying to invoke python on a script.)
>
> command = "python ../src_python/Match1.py
>     --file_ref=MISR_AM1_GRP_ELLIPSOID_GM_P228_O003571_BF_F03_0024.hdf
>     --file_cmp=MISR_AM1_GRP_ELLIPSOID_GM_P228_O003571_DF_F03_0024.hdf
>     --block_start=62 --block_end=62 --istep=16 --chmetric='M2'
>     --use_textid=true"
> proc = subprocess.Popen(command, shell=True)

Thanks - that did the trick. I just passed in one long string and everything actually works. Wow! I had no idea if this was even do-able. This is so cool, and saves me a lot of code duplication. I can spawn off half a dozen jobs at once and then just wait for them to finish. It's great that Python can function both as a scripting language and as a full-blown programming language at the same time.

Catherine
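A side note on the behavior seen above, plus a sketch that avoids the shell: on POSIX, Popen(list, shell=True) hands only the first list element to "sh -c", so the earlier list-plus-shell attempt effectively ran a bare "python", which is why an interactive session appeared. shlex.split gives the convenience of a string command with the safety of a plain argv list (the arguments below are from the thread, trimmed for brevity):

    import shlex, subprocess

    command = ("python ../src_python/Match1.py "
               "--block_start=62 --block_end=62 --istep=16 "
               "--chmetric=M2 --use_textid=true")
    args = shlex.split(command)    # ['python', '../src_python/Match1.py', ...]
    proc = subprocess.Popen(args)  # no shell involved
    proc.wait()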
uniqueness of temporary files generated by tempfile
Are the temporary filenames generated by the tempfile module guaranteed to be unique?

I have a need to generate temporary files within an application, and I will have many instances of this application running as sub-processes (so I can submit them to a batch queue). Is there any danger of my different sub-processes accidentally generating the same filename, or does tempfile check for the existence of a similarly-named file before generating a filename?

What's the recommended way of generating temporary filenames that are guaranteed to be unique even with multiple processes running simultaneously?

Catherine
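A minimal sketch with tempfile.mkstemp, which creates and opens the file atomically (O_CREAT | O_EXCL), so no two processes can ever be handed the same name; the prefix and suffix below are illustrative:

    import os, tempfile

    fd, path = tempfile.mkstemp(prefix="match_", suffix=".dat")
    try:
        os.write(fd, "intermediate results\n")
    finally:
        os.close(fd)
    print "wrote", path   # the caller is responsible for deleting the file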
order of importing modules
In what order does Python import modules on a Linux system? I have a package that is installed in /usr/lib64/python2.5/site-packages, and a newer version of the same module in a working directory.

I want to import the version from the working directory, but when I print module.__file__ in the interpreter after importing the module, I get the version that's in site-packages.

I've played with the PYTHONPATH environment variable by setting it to just the path of the working directory, but when I import the module I still pick up the version in site-packages. /usr/lib64 is in my PATH variable, but doesn't appear anywhere else. I don't want to remove /usr/lib64 from my PATH because that will break a lot of stuff.

Can I force Python to import from my PYTHONPATH first, before looking in the system directory?

Catherine
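A minimal sketch of forcing the working copy to win, by prepending its directory to sys.path before the import; the directory and module names are hypothetical:

    import sys
    sys.path.insert(0, "/home/catherine/work")  # working directory wins

    import mymodule
    print mymodule.__file__   # should now point at the working copy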
Re: order of importing modules
I've looked at my sys.path variable and I see that it has a whole bunch of site-packages directories, followed by the contents of my $PYTHONPATH variable, followed by a list of misc site-packages entries (see below).

I've verified that if I manually reverse the order of sys.path I can then import the proper version of the module that I want. But this is not a permanent solution for me, as it will mess up other people who are working with the same code.

But I'm curious as to where the first bunch of site-packages entries comes from. The /usr/lib64/python2.5/site-packages/pyhdfeos-1.0_r57_58-py2.5-linux-x86_64.egg is not present in any of my environment variables, yet it shows up as one of the first entries in sys.path. A colleague of mine is running on the same system as I am, and he does not have the problem of that egg showing up as one of the first entries in his sys.path.

Thanks for the education,
Catherine

Dan Stromberg wrote:
> On Tue, Jan 11, 2011 at 4:30 PM, Catherine Moroney wrote:
>> In what order does python import modules on a Linux system?
>> [snip - original question quoted above]
>
> Please import sys and inspect sys.path; this defines the search path
> for imports. By looking at sys.path, you can see where in the search
> order your $PYTHONPATH is going.
>
> It might actually be better to give your script a command line option
> that says "Throw the following directory at the beginning of sys.path".
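A sketch of Dan's command-line-option suggestion, using optparse since this is a Python 2.5 environment; the option name and module are illustrative:

    import sys
    from optparse import OptionParser

    parser = OptionParser()
    parser.add_option("--path", dest="path",
                      help="directory to prepend to sys.path")
    options, args = parser.parse_args()
    if options.path:
        sys.path.insert(0, options.path)   # this copy now shadows site-packages

    import mymodule   # hypothetical module, resolved against the new path
    print mymodule.__file__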
getting a string as the return value from a system command
Hello,

I want to call a system command (such as uname) that returns a string, and then store that output in a string variable in my Python program. What is the recommended/most-concise way of doing this?

I could always create a temporary file, call subprocess.Popen with the temporary file as the stdout argument, and then re-open that temporary file and read in its contents. This seems an awfully long way around, and I was wondering about alternate ways of accomplishing this task.

In pseudocode, I would like to be able to do something like:

    hostinfo = subprocess.Popen("uname -srvi")

and have hostinfo be a string containing the result of issuing the uname command.

Thanks for any tips,
Catherine
Re: getting a string as the return value from a system command
Robert Kern wrote:
> On 2010-04-16 14:06 PM, Catherine Moroney wrote:
>> Hello,
>> I want to call a system command (such as uname) that returns a string,
>> and then store that output in a string variable in my python program.
>> What is the recommended/most-concise way of doing this?
>> [snip]
>
> p = subprocess.Popen(['uname', '-srvi'], stdout=subprocess.PIPE,
>                      stderr=subprocess.PIPE)
> stdout, stderr = p.communicate()

Thanks, I knew there had to be a more elegant way of doing that.

Catherine
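As a usage note: on Python 2.7 and later the same capture is a built-in one-liner (not available in the 2.5/2.6 interpreters this thread dates from):

    import subprocess

    # check_output returns stdout as a string and raises
    # CalledProcessError if the command exits nonzero.
    hostinfo = subprocess.check_output(["uname", "-srvi"])
    print hostinfo.strip()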