Agreed. BSON was born out of implementations that either lacked arbitrary precision numbers or had a strong affinity to an int/floating point way of thinking about numbers. I believe that if BSON had an arbitrary precision number type, it would be a proper superset of JSON.
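To make the superset/subset point concrete, here is a minimal sketch (plain Python, no BSON library) showing a JSON number that BSON's widest numeric types cannot hold losslessly, since its integers top out at 64 bits and its only other numeric type is an IEEE double:

```python
import json

# JSON the text format places no bound on numeric precision; Python's
# json module parses integer literals into arbitrary-precision ints.
big = json.loads('{"n": 123456789012345678901234567890}')["n"]
assert big == 123456789012345678901234567890

# BSON's widest numeric types are int64 and double; this value fits in
# neither without loss.
INT64_MAX = 2**63 - 1
assert big > INT64_MAX         # overflows BSON's 64-bit integer
assert int(float(big)) != big  # and a double drops the low-order digits
```

So any value like this would be silently truncated or rejected on the way into a BSON-based internal form, which is the "dead in the water" concern below.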
As an aside, the max range of an int in BSON is 64 bits. Back to my original comment that BSON was "grown" instead of designed, it looks like both the 32-bit and 64-bit integers were added late in the game and that the original designers perhaps were just going to store all numbers as doubles. Perhaps we should enumerate the attributes of what would make a good binary encoding?

Terry

On Tue, Oct 19, 2010 at 8:57 AM, Andrew Dunstan <and...@dunslane.net> wrote:
>
> On 10/19/2010 10:44 AM, Robert Haas wrote:
>
>> On Sat, Oct 16, 2010 at 12:59 PM, Terry Laurenzo <t...@laurenzo.org> wrote:
>>
>>> - It is directly iterable without parsing and/or constructing an AST.
>>> - It is its own representation. If iterating and you want to tear off
>>> a value to be returned or used elsewhere, it's a simple buffer copy
>>> plus some bit twiddling.
>>> - It is conceivable that clients already know how to deal with BSON,
>>> allowing them to work with the internal form directly (a la MongoDB).
>>> - It stores a wider range of primitive types than JSON-text. The most
>>> important are Date and binary.
>>
>> When last I looked at that, it appeared to me that what BSON could
>> represent was a subset of what JSON could represent - in particular,
>> that it had things like a 32-bit limit on integers, or something along
>> those lines. Sounds like it may be neither a superset nor a subset,
>> in which case I think it's a poor choice for an internal
>> representation of JSON.
>
> Yeah, if it can't handle arbitrary precision numbers, as has previously
> been stated, it's dead in the water for our purposes, I think.
>
> cheers
>
> andrew
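For reference, the "directly iterable without an AST" property quoted above follows from BSON's wire layout: a little-endian int32 total length, then a flat run of (type byte, NUL-terminated key, fixed-or-length-prefixed value) elements, ending in 0x00. A minimal sketch (hand-encoding two elements per the published BSON layout; only the three numeric types are handled, everything else is an assumption left out):

```python
import struct

def iter_bson(doc: bytes):
    """Walk a BSON document's elements in place. No AST is built; each
    value is located by fixed-size reads and buffer offsets, so a
    tear-off is just a slice of the original buffer."""
    total, = struct.unpack_from("<i", doc, 0)  # total length, incl. itself
    pos = 4
    while doc[pos] != 0x00:                    # 0x00 terminates the document
        type_byte = doc[pos]; pos += 1
        end = doc.index(b"\x00", pos)          # key is a NUL-terminated cstring
        key = doc[pos:end].decode(); pos = end + 1
        if type_byte == 0x10:                  # int32
            val, = struct.unpack_from("<i", doc, pos); pos += 4
        elif type_byte == 0x12:                # int64
            val, = struct.unpack_from("<q", doc, pos); pos += 8
        elif type_byte == 0x01:                # IEEE 754 double
            val, = struct.unpack_from("<d", doc, pos); pos += 8
        else:
            raise ValueError(f"unhandled BSON type 0x{type_byte:02x}")
        yield key, val

# Hand-encode {"a": 1, "b": 2**40} as one int32 and one int64 element.
body = (b"\x10a\x00" + struct.pack("<i", 1) +
        b"\x12b\x00" + struct.pack("<q", 2**40))
doc = struct.pack("<i", 4 + len(body) + 1) + body + b"\x00"

assert dict(iter_bson(doc)) == {"a": 1, "b": 2**40}
```

Note this also makes the numeric limitation visible: the int32/int64/double type bytes are the only numeric slots, so an arbitrary-precision type would need a new element type added to the format.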