John Machin wrote:
> [EMAIL PROTECTED] wrote:
> > "For each nibble n of x" means to take each 4-bit piece of the BCD
> > integer as a value from zero to fifteen (though only 0 through 9
> > will appear), from most significant to least significant.
>
> The OP's input, unvaryingly through the whole thread, even surviving to
> his Javacard implementation of add() etc, is a list/array of decimal
> digits (0 <= value <= 9). Extracting a nibble is so simple that
> mentioning a "subroutine" might make the gentle reader wonder whether
> there was something deeper that they had missed.
Yes, it's simple; that was the point. The most complex routine I assumed
is integer addition, and it's not really hard. I'll present an example
below.

> > "Adding"
> > integers and "shifting" binary integers is well-defined
> > terminology.
>
> Yes, but it's the *representation* of those integers that's been the
> problem throughout.

Right. To solve that problem, I give the high-level algorithm and deal
with the representation in the shift and add procedures.

> > I already posted the three-line algorithm. It
> > appeared immediately under the phrase "To turn BCD x to binary
> > integer y," and that is what it is intended to achieve.
>
> Oh, that "algorithm". The good ol' num = num * base + digit is an
> "algorithm"??? You lost me.

The algorithm I presented didn't use a multiply operator. It could have,
and of course it would still be an algorithm.

> The problem with that is that the OP has always maintained that he has
> no facility for handling a binary integer ("num") longer than 16 bits
> -- no 32-bit long, no bignum package that didn't need "long", ...

No problem. Here's an example of an add procedure he might use in C. It
adds modestly large integers, represented as big-endian sequences of
base-256 digits (one byte per digit). It doesn't need an int any larger
than 8 bits. Untested:

typedef unsigned char uint8;

#define SIZEOF_BIGINT 16

uint8 add(uint8* result, const uint8* a, const uint8* b)
/* Set result to a+b, returning carry out of MSB. */
{
    uint8 carry = 0;
    unsigned int i = SIZEOF_BIGINT;
    while (i > 0) {
        --i;
        result[i] = (a[i] + b[i] + carry) & 0xFF;
        carry = carry ? result[i] <= a[i] : result[i] < a[i];
    }
    return carry;
}

> Where I come from, a "normal binary integer" is base 2. It can be
> broken up into chunks of any size greater than 1 bit, but practically
> according to the wordsize of the CPU: 8, 16, 32, 64, ... bits. Since
> when is base 256 "normal" and in what sense of normal?

All the popular CPUs address storage in bytes. In C, all variable sizes
are measured in units of char/unsigned char, and an unsigned char must
hold at least zero through 255.

> The OP maintained the line that he has no facility for handling a
> base-256 number longer than 2 base-256 digits.

So he'll have to build what's needed. That's why I showed the problem
broken down to shifts and adds; they're easy to build. (A sketch of a
shift routine, and of the whole conversion built from shifts and adds,
appears at the end of this message.)

> The dialogue between Dennis and the OP wasn't the epitome of clarity:

Well, I found Dennis clear.

[...]

> I was merely wondering whether you did in fact
> have a method of converting from base b1 (e.g. 10) to base b2 (e.g. 16)
> without assembling the number in some much larger base b3 (e.g. 256).

I'm not sure what that means.

--
--Bryan
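
P.S. To back up "easy to build": here is a minimal, equally untested
sketch of the other two pieces, in the same style and using the same
add() and SIZEOF_BIGINT as above. It assumes the input is an array of
decimal digit values (0-9), most significant digit first, and the names
shift_left_1 and bcd_to_binary are just placeholders, not anything the
OP posted.

uint8 shift_left_1(uint8* result, const uint8* a)
/* Set result to a shifted left one bit, returning the bit shifted out
   of the MSB.  Works when result and a are the same array. */
{
    uint8 carry = 0;
    uint8 out;
    unsigned int i = SIZEOF_BIGINT;
    while (i > 0) {
        --i;
        out = (a[i] >> 7) & 1;                  /* bit leaving this byte */
        result[i] = ((a[i] << 1) | carry) & 0xFF;
        carry = out;                            /* feed the next, more
                                                   significant byte */
    }
    return carry;
}

void bcd_to_binary(uint8* y, const uint8* digits, unsigned int ndigits)
/* Set y to the value of the decimal digit string (most significant
   digit first): for each digit d, y = y*10 + d, with the *10 built
   from shifts and adds only.  Overflow past SIZEOF_BIGINT bytes is
   ignored. */
{
    uint8 y2[SIZEOF_BIGINT], y8[SIZEOF_BIGINT];
    uint8 t[SIZEOF_BIGINT], d[SIZEOF_BIGINT];
    unsigned int i, j;

    for (i = 0; i < SIZEOF_BIGINT; ++i)
        y[i] = 0;
    for (j = 0; j < ndigits; ++j) {
        shift_left_1(y2, y);                    /* y2 = y*2  */
        shift_left_1(y8, y2);                   /* y8 = y*4  */
        shift_left_1(y8, y8);                   /* y8 = y*8  */
        add(t, y2, y8);                         /* t  = y*10 */
        for (i = 0; i < SIZEOF_BIGINT; ++i)
            d[i] = 0;
        d[SIZEOF_BIGINT - 1] = digits[j];       /* the digit as a bigint */
        add(y, t, d);                           /* y  = y*10 + digit */
    }
}

One note on the temporary t: add() as written detects carry by comparing
result[i] against a[i], so it shouldn't be called with result aliasing
a; that's why the product goes into t before the final add into y.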