Re: noob: subprocess clarification
On 14 Sep, 22:06, Dennis Lee Bieber <[EMAIL PROTECTED]> wrote:
> On Sun, 14 Sep 2008 02:29:52 -0700 (PDT), [EMAIL PROTECTED] declaimed
> the following in comp.lang.python:
>
> > Can somebody please clarify what the shell=True does, and whether I am
> > using it correctly.
>
> What part of:
>
> """
> On Unix, with shell=False (default): In this case, the Popen class uses
> os.execvp() to execute the child program. args should normally be a
> sequence. A string will be treated as a sequence with the string as the
> only item (the program to execute).
>
> On Unix, with shell=True: If args is a string, it specifies the command
> string to execute through the shell. If args is a sequence, the first
> item specifies the command string, and any additional items will be
> treated as additional shell arguments.
> """
>
> is giving problems?

I assume that this is sarcasm :-)

> For the most part, as I recall, "shell=True" allows you to invoke
> commands that are built-in/native to the default shell, or even a shell
> script. False requires the specified command to be a stand-alone
> executable program.

--
http://mail.python.org/mailman/listinfo/python-list
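The documented difference quoted above can be seen directly with a small sketch (Unix only; `echo` and `tr` are assumed to be on the PATH). With `shell=False`, args is a sequence and the program is executed directly via os.execvp(), so shell features are unavailable; with `shell=True`, a single string is handed to the shell, so pipes and built-ins work:

```python
import subprocess

# shell=False (the default): args is a sequence; the first item is the
# program, executed directly with os.execvp() -- no shell involved.
p1 = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
out1 = p1.communicate()[0]   # b"hello\n"

# shell=True: args may be a single string, which is passed to /bin/sh,
# so shell features such as pipes are available.
p2 = subprocess.Popen("echo hello | tr a-z A-Z", shell=True,
                      stdout=subprocess.PIPE)
out2 = p2.communicate()[0]   # b"HELLO\n"
```

Trying the piped string with `shell=False` would instead raise an error (or pass the whole string as a program name), which is the usual symptom of mixing up the two modes.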
Re: Serial I/O problem with pywin32 ?
Hi,

I have resolved my problem by checking the packets. It seems to be a
problem with the GPS (it's a very cheap GPS datalogger).

> Could be hardware flow control. See this sometimes on the bluetooth
> connections that are using Serial Port Protocol and the hardware flow
> control hasn't been physically implemented.

That seems to be the problem. The policy seems to be:

- ask the GPS for the data
- touch wood
- retry with the missing chunks

Even the official driver does this.

> Do you lose data after exactly the same amount of data has
> been received?

No. The losses occur at random positions, but always in chunks, e.g.:

300 consecutive bytes ok
30 consecutive bytes lost
250 bytes ok
40 bytes lost
800 bytes ok
50 bytes lost
...
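The "ask, then retry the missing chunks" policy described above can be sketched in pure Python. This is only an illustration of the idea, not the real GPS protocol: `read_chunk` is a hypothetical stand-in for whatever request the driver sends for one chunk, returning None when that chunk is dropped on the wire:

```python
def download(read_chunk, total_chunks):
    """Fetch all chunks, re-requesting any that were dropped.

    read_chunk(i) is a hypothetical callable that returns the bytes of
    chunk i, or None if that chunk was lost in transit.
    """
    data = {}
    missing = set(range(total_chunks))
    while missing:
        # One pass over everything still outstanding.
        for i in sorted(missing):
            chunk = read_chunk(i)
            if chunk is not None:
                data[i] = chunk
        # Drop whatever arrived this pass; loop again for the rest.
        missing -= set(data)
    return b"".join(data[i] for i in range(total_chunks))
```

A real driver would also want a retry limit and timeouts, since the loop above never gives up if a chunk is lost on every attempt.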
Re: dynamic allocation file buffer
On 12 Sep, 14:39, "Aaron \"Castironpi\" Brady" <[EMAIL PROTECTED]> wrote:
> > A consideration of other storage formats such as HDF5 might
> > be appropriate:
> >
> > http://hdf.ncsa.uiuc.edu/HDF5/whatishdf5.html
> >
> > There are, of course, HDF5 tools available for Python.
> > PyTables came up within the past few weeks on the list.
>
> "When the file is created, the metadata in the object tree is updated
> in memory while the actual data is saved to disk. When you close the
> file the object tree is no longer available. However, when you reopen
> this file the object tree will be reconstructed in memory from the
> metadata on disk"
>
> This is different from what I had in mind, but the extremity depends
> on how slow the 'reconstructed in memory' step is.
> (From http://www.pytables.org/docs/manual/ch01.html#id2506782). The
> counterexample would be needing random access into multiple data
> files, which don't all fit in memory at once, but the maturity of the
> package might outweigh that. Reconstruction will form a bottleneck
> anyway.

Hmm, this was a part of the documentation that needed to be updated.
Now, the object tree is reconstructed lazily (i.e. on demand), in
order to avoid the bottleneck that you mentioned. I have corrected
the docs in:

http://www.pytables.org/trac/changeset/3714/trunk

Thanks for (indirectly ;-) bringing this to my attention,

Francesc
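The lazy, on-demand reconstruction described here is a general pattern that can be shown with a toy sketch (this is NOT PyTables' actual code; `LazyNode` and its loader are purely illustrative). A node's children are read from "disk" only on first access, so opening a file does not walk the whole tree:

```python
class LazyNode:
    """Toy tree node whose children are loaded on first access."""

    def __init__(self, name, load_children):
        self.name = name
        self._load_children = load_children  # callable: name -> child names
        self._children = None                # None means: not loaded yet

    @property
    def children(self):
        if self._children is None:           # first access: hit "disk"
            self._children = {
                n: LazyNode(n, self._load_children)
                for n in self._load_children(self.name)
            }
        return self._children
```

Creating the root touches nothing on disk; each level of the tree is only materialized when a caller actually descends into it, which is how an eager open-time reconstruction bottleneck is avoided.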