On Thu, 2009-08-13 at 09:42 +0100, roger peppe wrote:
> 2009/8/13 Roman Shaposhnik <r...@sun.com>:
> > Am I totally missing something or hasn't been the binary RPC
> > of that style been dead ever since SUNRPC? Hasn't the eulogy
> > been delivered by CORBA? Haven't folks realized that S-exprs
> > are really quite good for data serialization in the heterogeneous
> > environments (especially when they are called JSON)
> 
> i'm not familiar with Thrift, but i've done some stuff with google protobufs,
> from which i think Thrift is inspired.

I'd be very curious to know the details of the project where you
found protobufs useful. Can you share, please?

> speaking of protobufs, i don't think they're a bad idea.
> they're specifically designed to deal
> with forward- and backward-compatibility, which is something
> you don't automatically get with s-expressions, 

Not unless you can make the transport protocol take care of that
for you. HTTP most certainly can, and the same can be said about
9P. I truly believe that such a division of labor is a good thing.
Thrift and protobufs are doing RPC, I think we can agree on that.
So if we look at how local procedure calls deal with the
compatibility issue, we get dynamic linker symbol versioning. I
don't like those, but at least they are implemented in the right
place -- the dynamic linker. What Thrift feels like would be
analogous to pushing that machinery into my library.
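
To make that concrete, here is a rough sketch (mine, in Python,
with a made-up vendor media type -- nothing Thrift-specific) of
what I mean by letting the transport carry the compatibility story
while the payload stays plain, untagged JSON:

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # version negotiation happens here, in the transport headers
        if 'application/vnd.example.v2+json' in self.headers.get('Accept', ''):
            ctype = 'application/vnd.example.v2+json'
            body = json.dumps({'name': 'roman', 'uid': 1001})
        else:
            # old clients keep getting the v1 shape, untouched
            ctype = 'application/vnd.example.v1+json'
            body = json.dumps({'name': 'roman'})
        body = body.encode('utf-8')
        self.send_response(200)
        self.send_header('Content-Type', ctype)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    HTTPServer(('', 8080), Handler).serve_forever()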

And since we are on the subject of dynamic linking -- one of the
fallacies of introducing versioning was that it would somehow make
compatibility seamless. It didn't. In all practical cases it made
things much, much worse. Perhaps web services are different, but I
can't really pinpoint what makes them so: you are making calls to
symbols, the symbols are versioned, and they also happen to be
remote. Not much difference from calling your trustworthy
r...@version1 ;-)

> and if you're
> dealing with many identically-typed records, the fact that each field
> in each record is not string-tagged counts saves a lot of bandwidth
> (and makes them more compressible, too).

That's really a YMMV kind of argument. I can only speak from my
personal experience, where latency is much more of a problem than
bandwidth.
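
For what it's worth, if bandwidth ever were the concern, the cost
of the string tags is easy enough to measure on your own data --
a quick, made-up check along these lines:

import json, zlib

# 1000 identically-shaped records, every field name repeated in each one
records = [{'name': 'user%d' % i, 'uid': i, 'active': True}
           for i in range(1000)]
raw = json.dumps(records).encode('utf-8')
packed = zlib.compress(raw, 9)

print('raw JSON: %d bytes' % len(raw))
print('deflated: %d bytes' % len(packed))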

> we don't use text for 9p, do we?

No, we don't. But, as erik pointed out, we use 9P as a transport
protocol. My biggest beef with how Thrift was positioned (and
that's why I'm so curious to know the details of your project)
is that they seem to be pushing it as a better JSON. At that
level you already have a transport protocol, and it just doesn't
make a lot of sense to represent data in such an unfriendly
manner. And representation is important. After all, REST stands
for *representational* state transfer, doesn't it?

I certainly wouldn't object to using Thrift as a poor man's way
of implementing an equivalent of 9P (or any other protocol, for
that matter) cheaply.

Hm. Now that I've mentioned it, perhaps trying Thrift out as
an implementation mechanism for 9P and comparing the result
with the handwritten stuff would be a good way for me to 
really see how useful it might be in practice.
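
For reference, the handwritten side of that comparison is not much
code to begin with. Here is a sketch (mine, in Python) of packing
a single 9P2000 Tversion message -- size[4] type[1] tag[2] msize[4]
version[s], little-endian, with type Tversion = 100 and tag
NOTAG = 0xffff:

import struct

TVERSION = 100      # message type for Tversion
NOTAG = 0xFFFF      # version messages always carry NOTAG

def pack_tversion(msize, version):
    v = version.encode('utf-8')
    # type[1] tag[2] msize[4] version[s] (string = len[2] + bytes)
    body = struct.pack('<BHIH', TVERSION, NOTAG, msize, len(v)) + v
    # the leading size[4] field counts itself as well
    return struct.pack('<I', 4 + len(body)) + body

print(pack_tversion(8192, '9P2000').hex())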

> > and you
> > really shouldn't be made to figure out how large is the integer
> > on a host foo?

I firmly believe in self-descriptive data. If you have an integer,
you have an integer. You shouldn't burden the representation layer
with encoding issues. Thus:
    { "integer": 12312321...very long stream of digits...123123 }
is a perfectly good way to send the data. You might unpack it
into whatever makes sense on the receiving end, but please don't
make me suffer at the data representation layer.
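
The receiving end does not have to suffer either; a small sketch
(mine, in Python, where the native int already happens to be
arbitrary-precision):

import json

payload = '{"integer": 123123219999999999999999999999999123123}'
value = json.loads(payload)['integer']

# unpacked into whatever makes sense locally -- here a plain Python int
print(type(value).__name__, value.bit_length(), 'bits')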

Thanks,
Roman.

