:*S

Yes... I see the point. Correct. 

For a constant value this is true: in the case of x = x + 1 (and I now see 
where the error was), where x is a float it makes no difference. 

However, for the OP's clarity, x = x + y, where y is an int, should be 
explicitly typecast as x = x + (float)y.

Personally, when I am writing code, I always write x = x + 1.0 for clarity as 
well.

Sorry for not understanding the rebuttal :*)


> Subject: Re: Confused about floats
> From: scott_r...@killerbytes.com
> Date: Tue, 5 Oct 2010 09:24:13 -0600
> CC: cocoa-dev@lists.apple.com
> To: shashan...@hotmail.com
> 
> On Oct 5, 2010, at 9:16 AM, Shawn Bakhtiar wrote:
> 
> > Did you just call typecasting "*completely* unnecessary and pointless"...
> 
> No, I called typecasting an int type to a floating type, in order to add it 
> to a floating type, unnecessary and pointless.
> 
> > You may be correct in that in Objective-C this may no longer be an issue, 
> > as the compiler does your work for you, but that was not an assumption I 
> > was making.
> 
> It's got nothing to do with Objective-C; C has always taken care of that 
> case. What exactly do you think is wrong with x = x + 100 where x is a 
> double? 
> 
> -- 
> Scott Ribe
> scott_r...@elevated-dev.com
> http://www.elevated-dev.com/
> (303) 722-0567 voice
> 
> 
> 
> 
_______________________________________________

Cocoa-dev mailing list (Cocoa-dev@lists.apple.com)