On 11/14/2012 03:43 PM, Andrea Crotti wrote:
>
> Anyway, the only thing I wanted to understand is whether using pipes
> in subprocess is exactly the same as using a Linux pipe, or not.
It's not the same thing, but you can usually assume it's close. Other
effects will probably dominate any difference.
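For concreteness, a minimal sketch of building such a pipeline by hand
with subprocess; the concrete commands (ls, wc) are placeholders, not
anything from the thread:

import subprocess

# The shell pipeline "ls -lR /tmp | wc -l" built by hand.  The pipe
# connecting the two children is an ordinary kernel pipe, the same kind
# of object the shell would create.
p1 = subprocess.Popen(["ls", "-lR", "/tmp"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["wc", "-l"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()   # drop the parent's copy so p1 gets SIGPIPE if p2 exits
output = p2.communicate()[0]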
On 11/14/2012 04:33 PM, Dave Angel wrote:
Well, as I said, I don't see how the particular timing has anything to
do with the rest of the thread. If you want to do an ls within a Python
program, go ahead. But if all you need can be done with ls itself, then
it'll be slower to launch Python just to do the same thing.
On 11/14/2012 10:56 AM, andrea crotti wrote:
> Ok this is all very nice, but:
>
> [andrea@andreacrotti tar_baller]$ time python2 test_pipe.py > /dev/null
>
> real 0m21.215s
> user 0m0.750s
> sys 0m1.703s
>
> [andrea@andreacrotti tar_baller]$ time ls -lR /home/andrea | cat > /dev/null
>
> real
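test_pipe.py itself is not shown anywhere in the thread; a purely
hypothetical reconstruction, consistent with the discussion (every byte
is shuttled through the interpreter, which is what the timing punishes),
might look like:

import subprocess
import sys

# Hypothetical stand-in for test_pipe.py (the real script is not shown):
# run "ls -lR" and copy its output to stdout through Python, so every
# byte passes through the interpreter instead of flowing kernel-to-kernel.
p = subprocess.Popen(["ls", "-lR", "/home/andrea"], stdout=subprocess.PIPE)
for line in p.stdout:
    sys.stdout.write(line)
p.wait()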
2012/11/14 Kushal Kumaran :
>
> Well, well, I was wrong, clearly. I wonder if this is fixable.
>
But would it not be possible to use the pipe in memory, in theory?
That would be way faster, and since I have in theory
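The pipe itself is a kernel object, but its contents can be captured in
memory on the Python side by draining the pipe into a buffer; a minimal
sketch (the ls command is arbitrary):

import io
import shutil
import subprocess

# Drain an OS-level pipe into an in-memory buffer.
p = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
buf = io.BytesIO()
shutil.copyfileobj(p.stdout, buf)   # read the pipe until EOF
p.wait()
data = buf.getvalue()               # the command's output, in memory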
On Tue, Nov 13, 2012 at 11:05 PM, Kushal Kumaran
wrote:
> Or, you could just change p1's stderr to an io.BytesIO instance.
> Then call p2.communicate *first*.
This doesn't seem to work.
>>> b = io.BytesIO()
>>> p = subprocess.Popen(["ls", "-l"], stdout=b)
Traceback (most recent call last):
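The traceback is cut off above, but the cause is that Popen needs a real
OS file descriptor for the child's stdout, and io.BytesIO has none:

import io

# subprocess calls fileno() on whatever it is given for stdout/stderr;
# an in-memory BytesIO has no underlying file descriptor, so this
# raises io.UnsupportedOperation.
io.BytesIO().fileno()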
On Tue, Nov 13, 2012 at 9:25 AM, Ian Kelly wrote:
> Sorry, the example I gave above is wrong. If you're calling
> p1.communicate(), then you need to first remove the p1.stdout pipe
> from the Popen object. Otherwise, the communicate() call will try to
> read data from it and may "steal" input from p2.
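Put together, the corrected version might look like the sketch below;
the concrete commands here stand in for cmd1 and cmd2:

import subprocess

p1 = subprocess.Popen("ls -l /usr/bin", shell=True,
                      stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p2 = subprocess.Popen("wc -l", shell=True, stdin=p1.stdout,
                      stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Detach p1's stdout in the parent: only p2 should read that pipe.  If
# it stayed on the Popen object, p1.communicate() would read from it
# too and could steal data meant for p2.
p1.stdout.close()
p1.stdout = None
out2, err2 = p2.communicate()   # drain p2 first so the pipeline finishes
err1 = p1.communicate()[1]      # now this only collects p1's stderr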
On Tue, Nov 13, 2012 at 3:31 AM, andrea crotti
wrote:
> but it's a bit ugly. I wonder if I can use the subprocess PIPEs to do
> the same thing; is it going to be as fast, and work in the same way?
It'll look something like this:
>>> p1 = subprocess.Popen(cmd1, shell=True, stdout=subprocess.PIPE,
...                       stderr=subprocess.PIPE)
2012/11/8 andrea crotti :
>
> Yes yes I saw the answer, but now I was thinking that what I need is
> simply this:
> tar czpvf - /path/to/archive | split -d -b 100M - tardisk
>
> since it should run only on Linux it's probably way easier, my script
> will then only need to create the list of files.
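For reference, the same pipeline driven from Python rather than the
shell; a minimal sketch, keeping the flags and the placeholder path
from the post:

import subprocess

# tar czpvf - /path/to/archive | split -d -b 100M - tardisk
tar = subprocess.Popen(["tar", "czpvf", "-", "/path/to/archive"],
                       stdout=subprocess.PIPE)
split = subprocess.Popen(["split", "-d", "-b", "100M", "-", "tardisk"],
                         stdin=tar.stdout)
tar.stdout.close()   # the split process owns the read end now
split.wait()
tar.wait()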
2012/11/7 Oscar Benjamin :
>
> Correct. But if you read the rest of Alexander's post you'll find a
> suggestion that would work in this case and that guarantees files of
> the desired size.
>
> You just need to define your own class that implements a write()
> method and then distributes the data it receives across the output
> files.
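A minimal sketch of the kind of class Oscar describes, used as the
fileobj of a streaming tarfile; the class name, chunk naming scheme and
sizes are assumptions, not anything from the thread:

import tarfile

class SplittingWriter(object):
    """File-like object that spreads write()s across numbered chunks."""
    def __init__(self, prefix, chunk_size):
        self.prefix = prefix
        self.chunk_size = chunk_size
        self.index = 0
        self.written = 0
        self.current = open("%s.%03d" % (self.prefix, self.index), "wb")

    def _rotate(self):
        # Close the full chunk and start the next numbered one.
        self.current.close()
        self.index += 1
        self.written = 0
        self.current = open("%s.%03d" % (self.prefix, self.index), "wb")

    def write(self, data):
        while data:
            if self.written >= self.chunk_size:
                self._rotate()
            room = self.chunk_size - self.written
            self.current.write(data[:room])
            self.written += len(data[:room])
            data = data[room:]

    def close(self):
        self.current.close()

out = SplittingWriter("archive.tar.gz", 100 * 1024 * 1024)
tar = tarfile.open(fileobj=out, mode="w|gz")   # streaming gzip'd tar
tar.add("/path/to/archive")
tar.close()
out.close()

Every chunk except possibly the last is exactly chunk_size bytes, which
is the sense in which this approach can guarantee the desired size.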
In article <509ab0fa$0$6636$9b4e6...@newsspool2.arcor-online.net>,
Alexander Blinne wrote:
I don't know the best way to find the current size, I only have a
general remark.
This solution is not so good if you have to impose a hard limit on the
resulting file size. You could end up having a tar file of size "limit +
size of biggest file - 1 + overhead" in the worst case, if the tar is at
limit - 1 when the biggest file is added (e.g. with a 100M limit and a
60M biggest file, the result could approach 160M plus overhead).
On 2012-11-07, andrea crotti wrote:
> Simple problem: given a lot of data in many files/directories, I
> should create a tar file split into chunks <= a given size.
>
> The simplest way would be to compress the whole thing and then split.
>
> At the moment the actual script which I'm replacing is