Thank you, Elliot. This solution is the one I was trying to come up
with. Thank you for your help, and thank you to everyone for their
suggestions.
Best regards,
Lorn
On May 28, 2005, at 2:52 PM, Lorn wrote:
> Yes, that would get rid of the decimals... but it wouldn't get rid of
> the extraneous precision. Unfortunately, the precision out to the ten
> thousandth is noise... I don't need to round it either as the numbers
> are artifacts of an integer to float conversion. [...]
"Lorn" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> I'm trying to work on a dataset that has its primary numbers saved as
> floats in string format. I'd like to work with them as integers with an
> implied decimal to the hundredth. The problem is that the current
> precision is variable. [...]
Yes, that would get rid of the decimals... but it wouldn't get rid of
the extraneous precision. Unfortunately, the precision out to the ten
thousandth is noise... I don't need to round it either as the numbers
are artifacts of an integer to float conversion. Basically, I need to
know how many decimal places [...]
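(For anyone reading the archive: the reply that actually solved this
isn't quoted above. As a rough sketch of the conversion being described,
assuming everything past the hundredths place really is noise, something
like the following would do it; the function name is just for
illustration.)

    def to_hundredths(s):
        # Parse the float-in-a-string, scale to hundredths, and round
        # away the conversion noise beyond the second decimal place.
        return int(round(float(s) * 100))

    # to_hundredths("12.3400000001") -> 1234  (implied decimal: 12.34)
    # to_hundredths("0.07")          -> 7     (implied decimal: 0.07)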
Multiply them by 1?
Lorn wrote:
> I'm trying to work on a dataset that has its primary numbers saved as
> floats in string format. I'd like to work with them as integers with an
> implied decimal to the hundredth. The problem is that the current
> precision is variable. For instance, some numbers [...]
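(A related sketch: since the values arrive as strings, the decimal
module available since Python 2.4 can quantize them without going
through binary floats at all. The function name and the half-up
rounding of the noise digits are assumptions for illustration, not
something settled in the thread.)

    from decimal import Decimal, ROUND_HALF_UP

    def string_to_hundredths(s):
        # Quantize the string to two decimal places, then shift the
        # decimal point two places right to get an integer with an
        # implied decimal at the hundredths.
        d = Decimal(s).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
        return int(d * 100)

    # string_to_hundredths("7.1299999999") -> 713  (implied: 7.13)
    # string_to_hundredths("5.1")          -> 510  (implied: 5.10)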