I often have data that can be processed in parallel. It would be great if split --filter could hand out every nth line instead of splitting the input into n contiguous chunks:

    cat bigfile | split --every-nth -n 8 --filter "grep foo"

The above should start 8 greps and give each one a line in round-robin fashion. Ideally it should be possible to do this non-blocking, so that if some lines take longer for one instance of grep, the other greps are not held up.

/Ole
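For reference, here is a minimal sketch of the requested behavior in Python (the function name split_round_robin is made up for illustration). It spawns n copies of a filter command and hands lines to them round robin. Note that writes can still block if one filter stops draining its pipe, so this is the simple blocking variant, not the non-blocking ideal described above:

```python
import subprocess

def split_round_robin(lines, n, filter_cmd):
    """Distribute lines round robin over n instances of filter_cmd.

    Hypothetical sketch: spawns n shell filters, writes line i to
    process i % n, then collects each filter's output.
    """
    procs = [
        subprocess.Popen(filter_cmd, shell=True,
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         text=True)
        for _ in range(n)
    ]
    # Round-robin dispatch: line i goes to worker i % n.
    # These writes block if a worker's pipe fills up; a truly
    # non-blocking version would need select()/poll() here.
    for i, line in enumerate(lines):
        procs[i % n].stdin.write(line)
    # Close each worker's stdin and gather its output.
    return [p.communicate()[0] for p in procs]
```

With n = 2 and "grep foo" as the filter, lines 0, 2, 4, ... go to the first grep and lines 1, 3, 5, ... to the second, which matches the round-robin distribution requested above.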