On 4/8/13, Richard Biener <richard.guent...@gmail.com> wrote: > I advocate the infinite precision signed representation as one > solution to avoid the issues that come up with your implementation > (as I currently have access to) which has a representation > with N bits of precision encoded with M <= N bits and no sign > information. That obviously leaves operations on numbers of > that representation with differing N undefined. You define it > by having coded the operations which as far as I can see simply > assume N is equal for any two operands and the effective sign for > extending the M-bits encoding to the common N-bits precision is > "available". A thorough specification of both the encoding scheme > and the operation semantics is missing. I can side-step both of > these issues nicely by simply using a infinite precision signed > representation and requiring the client to explicitely truncate / > extend to a specific precision when required. I also leave open > the possibility to have the _encoding_ be always the same as an > infinite precision signed representation but to always require > an explicitely specified target precision for each operation > (which rules out the use of operator overloading).
For efficiency, the machine representation of an infinite precision number should allow for a compact one-word representation. class infinite { int length; union representation { int inside_word; int *outside_words; } field; public: int mod_one_word() { if (length == 1) return field.inside_word; else return field.outside_word[0]; } }; Also for efficiency, you want to know the modulus at the time you do the last normal operation on it, not as a subsequent operation. > Citing your example: > > 8 * 10 / 4 > > and transforming it slightly into a commonly used pattern: > > (byte-size * 8 + bit-size) / 8 > > then I argue that what people want here is this carried out in > _infinite_ precision! But what people want isn't really relevant, what is relevant is what the language and/or compatiblity requires. Ideally, gcc should accurately represent languages with both finite size and infinite size. > Even if byte-size happens to come from > a sizetype TREE_INT_CST with 64bit precision. So either > choice - having a fixed-precision representation or an > infinite-precision representation - can and will lead to errors > done by the programmer. And as you can easily build a > finite precision wrapper around an infinite precision implementation > but not the other way around it's obvious to me what the > implementation should provide. IIUC, the issue here is not the logical chain of implementation, but the interface that is most helpful to the programmers in getting to performant, correct code. I expect we need the infinite precision forms, but also that having more concise coding for fixed-precision would be helpful. For mixed operations, all the languages that I know of promote smaller operands to larger operands, so I think a reasonable definition is possible here. -- Lawrence Crowl