On Mon, 06 Apr 2009 07:09:47 EDT erik quanstrom <quans...@quanstro.net> wrote:
> > Nitpick: the output type of one command and the input type of
> > the next command in the pipeline have to match, not every
> > command.
>
> i think this is wrong.  there's no requirement
> that the programs participating in a pipeline are compatible
> at all; that's the beauty of pipes.
If program A outputs numbers in big-endian order and B expects input
in little-endian order, A|B won't do the "right thing".  Even programs
like wc have a concept of a 'character', and if the previous program
produces something else you will be counting something meaningless.
Perhaps it is impossible to capture such type compatibility in
anything but runtime IO routines, but the concept exists.

> you can do things
> that were not envisioned at the time the programs were
> written.

That comes from composability.

> > To go beyond simple char streams, one can for example build an
> > s-expr pipeline: a stream of self-identifying objects of a
> > few types (chars, numbers, symbols, lists, vectors).  In Q
> > (from kx.com) over an IPC connection you can send strings,
> > vectors, dictionaries, tables, or arbitrary Q expressions.  But
> > there the model is more of a client/server.
>
> or ntfs where files are databases.  not sure if streams
> can look the same way.

Maybe not.  What I was getting at was that one can do a lot with a
small number of IO types -- no need for "type profligacy"!  [Unless
your definition of profligacy is anything more than one.]  The nice
thing about s-exprs is that you have a syntax for structured IO, and
their printer and parser are already written for you.  Anyway, a typed
sh would be an interesting experiment.
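The endianness mismatch is easy to demonstrate.  A minimal sketch in
Python -- purely illustrative, with struct standing in for the two
programs on either side of the pipe:

```python
# Sketch of the A|B mismatch: "A" writes a 32-bit count big-endian,
# "B" (expecting little-endian) misreads the very same bytes.
import struct

n = 1000000
wire = struct.pack(">I", n)             # what A writes to the pipe
misread = struct.unpack("<I", wire)[0]  # what B thinks it received
print(n, "->", misread)                 # 1000000 -> 1078071040
```

Both sides are "compatible" as far as the pipe is concerned -- bytes
went in, bytes came out -- but the result is meaningless.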
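To make the s-expr idea concrete, here is a rough sketch (Python; all
function names are hypothetical, not from any real tool) of the shared
printer and parser that two pipeline stages could agree on:

```python
# Hypothetical sketch of an s-expr-typed pipe: stage A serializes a few
# self-identifying types (numbers, strings, lists), stage B parses them
# back.  Strings in this toy version must not contain spaces.

def to_sexpr(v):
    """Print a value as an s-expression (what stage A would write)."""
    if isinstance(v, list):
        return "(" + " ".join(to_sexpr(x) for x in v) + ")"
    if isinstance(v, str):
        return '"' + v + '"'
    return str(v)  # numbers

def tokenize(s):
    # crude tokenizer: pad parens, split on whitespace
    return s.replace("(", " ( ").replace(")", " ) ").split()

def parse_sexpr(tokens):
    """Parse one s-expression from a token list (what B would read)."""
    t = tokens.pop(0)
    if t == "(":
        out = []
        while tokens[0] != ")":
            out.append(parse_sexpr(tokens))
        tokens.pop(0)  # drop the ")"
        return out
    if t.startswith('"'):
        return t[1:-1]          # string
    try:
        return int(t)           # number
    except ValueError:
        return t                # bare word: a symbol, kept as str here

# A "wc" analogue can now count elements instead of guessing at bytes:
msg = to_sexpr([1, 2, ["three", 4]])  # written by stage A
value = parse_sexpr(tokenize(msg))    # read by stage B
print(msg, "->", len(value))          # (1 2 ("three" 4)) -> 3
```

The point is only that once the printer and parser exist, every stage
in the pipeline gets structured, typed IO for free.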