On 15Apr2012 14:30, Amadeus W.M. wrote:
| > Look at this (completely untested) loop:
| >
| > # a little setup
| > cmd=`basename "$0"`
| > : ${TMPDIR:=/tmp}
| > tmppfx=$TMPDIR/$cmd.$$
| >
| > i=0
| > while read -r url
| > do
| > i=$((i+1))
| > out=$tmppfx.$i
| > if curl -s "$url" >"$out"
> I don't know what tracking your other program is doing...but couldn't
> you simply do
>
> curl -s $url | program &
>
Parsing can be done per url, but tracking/analyzing MUST be across urls.
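If the analysis has to span urls, a single analyzer instance has to see every page. A minimal sketch, assuming "program" reads HTML on stdin (as described above) and that the urls live in a file named myURLs; fetching sequentially keeps each page contiguous on the one pipe, at the cost of the parallelism the backgrounded version had:

    # one analyzer instance; pages arrive whole and in order
    while read -r url
    do
        curl -s "$url"
    done < myURLs | program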
>
> Look at this (completely untested) loop:
>
> # a little setup
> cmd=`basename "$0"`
> : ${TMPDIR:=/tmp}
> tmppfx=$TMPDIR/$cmd.$$
>
> i=0
> while read -r url
> do
> i=$((i+1))
> out=$tmppfx.$i
> if curl -s "$url" >"$out"
> then echo "$out"
> else echo "$cm
On 15Apr2012 05:52, Amadeus W.M. wrote:
| With this exact script, it works for FOO (probably because it's short).
| For FOO...(1000 Os) I see again fewer than 100 lines in "zot".
| This, if I iterate 100 times. If I iterate, say, 10-20 times only, I seem
| to get all the lines. Can it h
On 04/15/2012 01:52 PM, Amadeus W.M. wrote:
> The real code is like this:
>
> #!/bin/bash
>
> for url in $(cat myURLs)
> do
> curl -s $url &
> done
>
>
>
> I pipe the combined curl outputs to a program that parses the html and
> keeps track of something (I do pipe after all). I could do that
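A lightly cleaned-up sketch of that script, for comparison: reading the file line by line avoids the word splitting done by $(cat myURLs), quoting "$url" keeps odd characters intact, and the wait keeps the pipe open until every backgrounded curl has finished. It does not, on its own, stop two concurrent pages from interleaving on the shared pipe, which is the problem the rest of the thread digs into:

    #!/bin/bash
    # same structure as the script above, plus quoting and a wait
    (
        while read -r url
        do
            curl -s "$url" &
        done < myURLs
        wait
    ) | program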
> Definitely yes. But again, that only matters if you had lots of curl
> commands each with its own >> file open. But your code had one >> open
> and lots of echoes (well, curls) using it.
>
> Which is it? (Actually, both should work. But we need to know what is
> actually going on.)
It's the latter.
>
> if your script forks off lots of curls into a file and does not wait for
> them all, then you may get to run the grep before they all finish, hence
> the weird results.
If ioTest.sh is the original example I posted, I'm NOT doing this:
./ioTest.sh | grep ^A | wc -l
I am doing this:
./ioTest.sh > zot
grep ^A zot | wc -l
On 04/15/2012 10:36 AM, Amadeus W.M. wrote:
> It does seem to work, but can you explain why it does? Would it still
> work if each process outputs, say, 1Mb? 100Mb?
It works fine when i is set to 100.
I think I'll leave the explanations to Cameron, who is doing a *much* better
job in that regard.
On 15Apr2012 02:36, Amadeus W.M. wrote:
| > [egreshko@meimei test]$ mkfifo pipe
| > [egreshko@meimei test]$ ./io.sh > pipe
| > [egreshko@meimei test]$ cat pipe | grep ^A | wc
| > 100 100 600
| >
|
| It does seem to work, but can you explain why it does? Would it still
| work if each process outputs, say, 1Mb? 100Mb?
On 15Apr2012 02:32, Amadeus W.M. wrote:
| > | Multiple processes that open the same file for writing each maintain
| > | their own file positions, so they may overwrite the output of another
| > | process, unless the processes all open the file with the "O_APPEND"
| > | option.
| >
| > This only matters if the processes _independently_ opened the file.
>
> [egreshko@meimei test]$ mkfifo pipe
> [egreshko@meimei test]$ ./io.sh > pipe
>
> [egreshko@meimei test]$ cat pipe | grep ^A | wc
> 100 100 600
>
It does seem to work, but can you explain why it does? Would it still
work if each process outputs, say, 1Mb? 100Mb?
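For what it's worth, the size does matter on a pipe or FIFO: POSIX only guarantees that a single write() of at most PIPE_BUF bytes (4096 on Linux; 512 is the POSIX minimum) lands atomically. A short echo is one small write, so whole lines survive, but a process emitting 1Mb or 100Mb issues many writes, and those can interleave with other writers at arbitrary byte boundaries. The limit can be checked directly:

    # PIPE_BUF is a pathconf value; ask for it against any path
    getconf PIPE_BUF /        # typically prints 4096 on Linux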
>
> | Multiple processes that open the same file for writing each maintain
> | their own file positions, so they may overwrite the output of another
> | process, unless the processes all open the file with the "O_APPEND"
> | option.
>
> This only matters if the processes _independently_ opened the file.
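A small bash sketch of that distinction, using hypothetical writers and output names; the first loop has every writer open the file itself at offset 0 (so they clobber one another), the second has them open it with O_APPEND (which is what >> does), and the third has them all inherit one descriptor opened once by the parent shell, which is the shape of the test scripts in this thread:

    # 1. independent opens, no O_APPEND: the writers overwrite each other
    for p in A B C; do ( yes "$p" | head -100 > out.clobber ) & done; wait

    # 2. independent opens with O_APPEND (>>): every write goes to the end
    for p in A B C; do ( yes "$p" | head -100 >> out.append ) & done; wait

    # 3. one open, inherited by all the writers (one shared file offset)
    ( for p in A B C; do yes "$p" | head -100 & done; wait ) > out.shared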
On Sat, 14 Apr 2012 09:45:50 -0700, John Wendel wrote:
> On 04/14/2012 08:20 AM, Amadeus W.M. wrote:
>>> If you really would like to get output in sequence, write to a pipe,
>>> and have a reader process drain the pipe to a logfile. It's pretty
>>> easy; look at "mknod" with the 'p' option, or "mkfifo". I'd still suggest
>>> tagging each output line with an identifier and sequence number.
On 14Apr2012 09:45, John Wendel wrote:
| > I don't see how echoing into a pipe would change the problem.
| > Theoretically, if several processes (e.g. echo) are running in the
| > background, e.g. on a round robin basis, then potentially I could see
| > random sequences of As, Bs and Cs. It doesn't seem to be the case in
| > practice though. So which is it?
On 14 Apr 2012 at 9:45, John Wendel wrote:
Date sent: Sat, 14 Apr 2012 09:45:50 -0700
From: John Wendel
To: users@lists.fedoraproject.org
Subject: Re: off topic: combined output of concurrent processes
> On 04/14/2012 08:20 AM, Amadeus W.M. wrote:
On 04/14/2012 08:20 AM, Amadeus W.M. wrote:
If you really would like to get output in sequence, write to a pipe, and
have a reader process drain the pipe to a logfile. It's pretty easy;
look at "mknod" with the 'p' option, or "mkfifo". I'd still suggest
tagging each output line with an identifier and sequence number.
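A sketch of the tagging idea, with hypothetical names throughout ($url1, $url2 and the tag helper are not from the thread): each writer filters its own output through something that prefixes a tag and a per-writer sequence number, so the merged stream can be checked for missing or torn lines afterwards and regrouped per writer:

    # prefix each stdin line with a tag and a running counter
    tag() {
        awk -v t="$1" '{ printf "%s %06d %s\n", t, NR, $0 }'
    }

    { curl -s "$url1" | tag A &
      curl -s "$url2" | tag B &
      wait; } > out

    sort -k1,1 -k2,2n out     # regroup by writer, in per-writer order

If the counters in out come up short, or a line carries pieces of two tags, that is direct evidence of lost or torn writes.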
On 04/14/2012 11:20 PM, Amadeus W.M. wrote:
> Theoretically, if several processes (e.g. echo) are running in the
> background, e.g. on a round robin basis, then potentially I could see
> random sequences of As, Bs and Cs. It doesn't seem to be the case in
> practice though. So which is it?
Oh,
On 04/14/2012 11:20 PM, Amadeus W.M. wrote:
> For the sake of the argument, assume I echo 500 As, 500 Bs and 500 Cs.
>
> I don't care which process the output is coming from. It doesn't matter
> which order the As, Bs and Cs are output. All I care about is that I
> don't get 349As followed by 24
> If you really would like to get output in sequence, write to a pipe, and
> have a reader process drain the pipe to a logfile. It's pretty easy;
> look at "mknod" with the 'p' option, or "mkfifo". I'd still suggest
> tagging each output line with an identifier and sequence number.
>
For the sake of the argument, assume I echo 500 As, 500 Bs and 500 Cs.
On Fri, Apr 13, 2012 at 10:31 PM, Amadeus W.M. wrote:
>>
>> [egreshko@meimei test]$ grep ^A out | wc
>> 97 97 582
>> [egreshko@meimei test]$ grep ^B out | wc
>> 94 94 564
>> [egreshko@meimei test]$ grep ^C out | wc
>> 96 96 576
>
> I replicated this and indeed I don't get 100 lines of As, Bs and Cs.
On 04/14/2012 10:09 PM, Dave Ihnat wrote:
> On Sat, Apr 14, 2012 at 09:49:00PM +0800, Ed Greshko wrote:
>> The one problem would be if, as the OP's script is written,
>> backgrounding of processes is done, there is no way to control the
>> order of data being written to the pipe. And, as you pointed out,
>> you may want to write out sequence numbers.
On Sat, Apr 14, 2012 at 09:49:00PM +0800, Ed Greshko wrote:
> The one problem would be if, as the OP's script is written,
> backgrounding of processes is done, there is no way to control the
> order of data being written to the pipe. And, as you pointed out,
> you may want to write out sequence numbers.
On 04/14/2012 09:33 PM, Dave Ihnat wrote:
> On Sat, Apr 14, 2012 at 01:35:38AM +0000, Amadeus W.M. wrote:
>> So here is the question. Suppose I have several processes that run
>> concurrently and each outputs stuff to stdout. Can the combined output be
>> intermingled?
> If you just send the output to a file, you've no way of knowing exactly
On Sat, Apr 14, 2012 at 01:35:38AM +0000, Amadeus W.M. wrote:
> So here is the question. Suppose I have several processes that run
> concurrently and each outputs stuff to stdout. Can the combined output be
> intermingled?
If you just send the output to a file, you've no way of knowing exactly
>
> [egreshko@meimei test]$ grep ^A out | wc
> 97 97 582
> [egreshko@meimei test]$ grep ^B out | wc
> 94 94 564
> [egreshko@meimei test]$ grep ^C out | wc
> 96 96 576
>
I replicated this and indeed I don't get 100 lines of As, Bs and Cs.
That's a new p
On 04/14/2012 10:33 AM, Ed Greshko wrote:
> No. Not only that, you are placing the echo commands in the background.
> So, it is certainly possible that the script will finish before the echoes
> are completed. Not only that, there is no guarantee that all output from
> the echoes will be
>
On 04/14/2012 09:35 AM, Amadeus W.M. wrote:
> This is not Fedora specific, so apologies for posting it here. I used to
> post this kind of question in comp.linux.misc or the like, but I don't
> have access to usenet groups anymore.
>
> So here is the question. Suppose I have several processes that run
> concurrently and each outputs stuff to stdout. Can the combined output be
> intermingled?
This is not Fedora specific, so apologies for posting it here. I used to
post this kind of question in comp.linux.misc or the like, but I don't
have access to usenet groups anymore.
So here is the question. Suppose I have several processes that run
concurrently and each outputs stuff to stdout. Can the combined output be
intermingled?
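The test script itself does not appear in these excerpts; a minimal reconstruction consistent with what the thread measures (an i of 100 and grep ^A / ^B / ^C counts that should come to 100 lines each) might look like this (the exact strings echoed are a guess; the earlier wc output of "100 100 600" suggests one five-character word per line):

    #!/bin/bash
    # io.sh / ioTest.sh, reconstructed -- not the original posting
    i=100
    for n in $(seq $i)
    do
        echo AAAAA &
        echo BBBBB &
        echo CCCCC &
    done
    # note: no wait here, which is part of what the thread diagnoses --
    # the script can exit, and its reader can run, before the backgrounded
    # echoes have all written

It is run as ./io.sh > out (or into a fifo, as above) and counted with grep ^A out | wc, which is where the 97/94/96 line counts earlier in the thread come from.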