On Aug 8, 2013, at 15:29, Trygve Inda wrote:
> [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar]
>
> This is called from C (not Cocoa) so I am looking at the best way to do this
> once and pass the NSCalendar object to where it is needed.
A common thing for NSCalendar and NSDate
Fascinating (Spock - Star Trek, 1967)
I added just this to the loop
for (int jj = 0; jj<100;++jj)
{
T += (float)jj/1000.00;
……..
The time for 100 iterations is 0.033 sec., about ten times slower than before.
I guess some optimizations remain.
Still, that works out at 330 microseconds per iteration.
On Aug 8, 2013, at 10:26 AM, David Rowland wrote:
> One hundred times through this loop, on my iPhone 5, took about 0.0028
> seconds. Two hundred times took about 0.0056 sec.
Those times are way too small to be reliable. You're down at the level where OS
scheduler effects are significant.
Make
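A sketch of one way to make such a measurement sturdier (an illustration only, not code from this thread): run enough iterations that the total time dwarfs timer and scheduler noise, repeat the whole measurement a few times, and keep the fastest run. It times plain sin() with mach_absolute_time() as a stand-in for the real moon math.

#include <math.h>
#include <stdio.h>
#include <stdint.h>
#include <mach/mach_time.h>

static double ticks_to_seconds(uint64_t ticks)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    return (double)ticks * tb.numer / tb.denom / 1e9;
}

int main(void)
{
    const int iters = 1000000;      /* long enough to swamp timer noise */
    volatile double sink = 0.0;     /* consume the results so nothing is elided */
    double best = INFINITY;

    for (int run = 0; run < 5; ++run) {
        uint64_t t0 = mach_absolute_time();
        for (int i = 0; i < iters; ++i)
            sink += sin(i * 1e-6);
        double dt = ticks_to_seconds(mach_absolute_time() - t0);
        if (dt < best)              /* the minimum discards interrupted runs */
            best = dt;
    }
    (void)sink;
    printf("%.1f ns per sin() call\n", best / iters * 1e9);
    return 0;
}

Taking the minimum rather than the average throws away exactly the runs the scheduler disturbed.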
David,
Why don't you increment T by a little bit in each iteration, say by jj/1000.,
to prove that no optimization is occurring? I would do it but I develop for Mac
OS X only.
Tom Wetmore
I ran it in Debug mode which should turn off most optimizations. I ran the loop
100 times and then 200 times. The latter took almost exactly twice the time as
the former. The results are saved in instance variables of the C++ class this
belongs to.
On Aug 8, 2013, at 12:06 PM, Sandy McGuffog wrote:
Be careful using that code as a test; a good optimizing compiler could pick up
that sin is a library function without side effects, and no result is saved,
and optimize that loop to two calls to adjustValueRadians.
Sandy
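A hedged way to guard against that when hand-timing (not David's actual loop, just an illustration): make every result observable, e.g. by summing the values and printing the total, so the compiler cannot treat the sin() calls as dead code.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double sum = 0.0;                /* consumed below, so the loop must run */
    for (int jj = 0; jj < 100; ++jj)
        sum += sin(jj / 1000.0);
    printf("checksum: %f\n", sum);   /* any observable use of the results works */
    return 0;
}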
David,
Those are lightning speeds. So I agree with you wholeheartedly -- there is no
sense in working on a custom table-driven approach. The current approach must
already be table-based with speeds like that.
Tom Wetmore
On 2013-08-08, at 1:26 PM, David Rowland wrote:
> The functions are probably very carefully written and could not be improved
> by table lookups, vector libraries, etc. That is barking up the wrong tree.
"vForce is a library of highly optimized transcendental functions (e.g. sin,
cos, exp).
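For what it's worth, the batch-oriented shape of vForce is easy to try. A rough sketch, assuming the Accelerate framework is linked and using placeholder data:

#include <Accelerate/Accelerate.h>   /* vvsinf() comes from vForce; link Accelerate */
#include <stdio.h>

int main(void)
{
    const int n = 1024;
    float angles[1024], sines[1024];

    for (int i = 0; i < n; ++i)
        angles[i] = (float)i * 0.001f;   /* radians, placeholder values */

    vvsinf(sines, angles, &n);           /* one call computes all 1024 sines */

    printf("sin(%g) = %g\n", angles[10], sines[10]);
    return 0;
}

The win only appears if the angles can be gathered into arrays; calling vvsinf() once per value buys nothing.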
On Aug 8, 2013, at 11:50 AM, Jens Alfke wrote:
> I can’t quote you from the C language spec, but I’d be very surprised if this
> were true. The result of (type)+(type) is the same (type), for all numeric
> types; same for the other built-in operators.
>
On Aug 8, 2013, at 11:19 AM, John McCall wrote:
I wrote an app that calculates the positions of Sun and Moon and other
information as well. The heart of the Moon calculation is this. I added a loop
around it and called a stopwatch at the beginning and end.
startTime();
for (int jj = 0; jj<100;++jj)
{
//in radians
double lambda = 3.
On Aug 8, 2013, at 6:37 AM, Scott Ribe wrote:
> On Aug 8, 2013, at 6:37 AM, Roland King wrote:
>> shouldn't do that as long as one uses the correct functions, ie sinf() and
>> cosf().
>
> But */+- promote…
If all the operands to an operator are floats, the operation occurs in float,
and the t
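For anyone who would rather check than argue from memory, the small C99 sketch below prints the size of the expression types and the FLT_EVAL_METHOD macro. Note how an unsuffixed constant such as 1000.00 drags a float expression up to double even though float-only arithmetic stays float.

#include <stdio.h>
#include <float.h>

int main(void)
{
    float a = 1.5f, b = 2.5f;

    /* float op float has type float; no automatic promotion to double */
    printf("sizeof(a + b)       = %zu (float is %zu)\n", sizeof(a + b), sizeof(float));

    /* 1000.00 is a double constant, so this division happens in double */
    printf("sizeof(a / 1000.00) = %zu (double is %zu)\n", sizeof(a / 1000.00), sizeof(double));

    /* 0 means evaluate in the operand type; 1 or 2 mean wider evaluation */
    printf("FLT_EVAL_METHOD     = %d\n", FLT_EVAL_METHOD);
    return 0;
}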
Fritz,
I know you know that the accuracy of this approach goes far
beyond being accurate only on the half degrees. The interpolation,
which can be done by very simple linear interpolation, will
convey almost the same level of accuracy on all intervening
angle values. There are some places where t
And if half-degrees are too coarse for you, you can take advantage of the
cyclic nature of the derivatives of sine and cosine, and run the Taylor series
out as far as you like (though you'd probably lose out to a
professionally-crafted trig library pretty quickly). I guess that could be
vectorized
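Since every derivative of sin is just plus-or-minus sin or cos, a table that stores both functions at each node already holds all the derivatives such an expansion needs. A hedged sketch of a third-order expansion around a node (standalone here, with libm standing in for the table):

#include <math.h>
#include <stdio.h>

/* sin(x0 + h) from tabulated s = sin(x0) and c = cos(x0), to third order:
   s + h*c - h*h*s/2 - h*h*h*c/6, written in Horner form */
static double sin_taylor3(double s, double c, double h)
{
    return s + h * (c - h * (s / 2.0 + h * c / 6.0));
}

int main(void)
{
    double x0 = 0.7;     /* pretend this is the nearest table node         */
    double h  = 0.004;   /* offset from the node, well under half a degree */

    printf("taylor %.12f  libm %.12f\n",
           sin_taylor3(sin(x0), cos(x0), h), sin(x0 + h));
    return 0;
}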
p.s. Of course you don't have to call sin() and cos() for
every half degree when building these tables. You can take advantage
of how trig functions repeat in the four quadrants, and you can
take advantage of other inverse and Pythagorean relationships
that exist between them. Even the initial table
Returning strictly to the issue of trig performance. A solution I
have used in the past is to initialize tables of trig functions,
say by calling sin() and cos() for every half a degree, and then
interpolating those tables, never calling sin() or cos() again.
I did this 29 years ago on an Atari 520
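A minimal sketch of that scheme, assuming degree-valued inputs and a half-degree table (all names made up; cos can reuse the same table through a 90-degree shift):

#include <math.h>
#include <stdio.h>

#define STEPS_PER_DEGREE 2                 /* half-degree spacing */
#define TABLE_SIZE (360 * STEPS_PER_DEGREE)

static double sin_table[TABLE_SIZE + 1];

static void init_sin_table(void)           /* call sin() once per entry, once ever */
{
    for (int i = 0; i <= TABLE_SIZE; ++i)
        sin_table[i] = sin(i * M_PI / (180.0 * STEPS_PER_DEGREE));
}

static double fast_sin_deg(double degrees)
{
    double d = fmod(degrees, 360.0);
    if (d < 0.0) d += 360.0;               /* reduce to [0, 360) */
    double pos  = d * STEPS_PER_DEGREE;    /* position in table units */
    int    i    = (int)pos;
    double frac = pos - i;                 /* 0..1 between neighbouring entries */
    return sin_table[i] + frac * (sin_table[i + 1] - sin_table[i]);
}

int main(void)
{
    init_sin_table();
    printf("table %.6f  libm %.6f\n",
           fast_sin_deg(123.4), sin(123.4 * M_PI / 180.0));
    return 0;
}

With half-degree spacing the worst-case linear-interpolation error is about h*h/8 with h = 0.5 degree in radians, i.e. roughly 1e-5, which fits the accuracy point made upthread.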
I think something like this was mentioned recently, but if you suspect that the
performance bottleneck is the FP functions, write a small profiler that times
the actions and writes out the elapsed time to a log. Then try it in the
simulator and try it on your device(s).
It's a really easy way
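One possible shape for that mini-profiler (an illustration only, not anyone's posted code) is a start/stop pair around the suspect block; CFAbsoluteTimeGetCurrent() behaves the same in the simulator and on the device.

#include <CoreFoundation/CoreFoundation.h>   /* link the CoreFoundation framework */
#include <math.h>
#include <stdio.h>

static CFAbsoluteTime g_start;

static void start_timing(void)
{
    g_start = CFAbsoluteTimeGetCurrent();
}

static void stop_timing(const char *label)   /* print, or write to your log file */
{
    printf("%s: %.3f ms\n", label, (CFAbsoluteTimeGetCurrent() - g_start) * 1000.0);
}

int main(void)
{
    volatile double sum = 0.0;               /* stand-in for the real FP work */
    start_timing();
    for (int i = 0; i < 1000000; ++i)
        sum += sin(i * 1e-6);
    stop_timing("1,000,000 sin() calls");
    (void)sum;
    return 0;
}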
On Aug 8, 2013, at 6:37 AM, Roland King wrote:
> shouldn't do that as long as one uses the correct functions, ie sinf() and
> cosf().
But */+- promote…
Keeping it all as float is not easy.
--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
I could well be wrong as I'm working from ancient memory but I believe C
upgrades floats to doubles to perform calculations and then if the result is
stored in a float it will chuck away precision at time of assignment to the float.
I would repeat the recommendations of others to try using vlib.
I g
On Aug 7, 2013, at 1:50 PM, Trygve Inda wrote:
> What can I do to speed this up?
Did you use Instruments to CPU-profile the app? It would be a good idea, at
least to confirm that the slowness comes directly from the math functions and
not something else like memory allocation or method-dispatch.
I have written an app that does astronomical calculations like that, Sun and
Moon rise and set and location and….. I never saw a problem with speed. I was
very impressed with how much it can do. However, are you using Objective-C
methods for the calculations? The run time dispatch in Objective-C
> I'm a little surprised to see that veclib supports doubles. My instinct (based
> on imagining that you'll be striding through an array with vector registers
> that can hold two doubles or four floats)* is that floats could be much
> faster, and you should really think about whether you need doubles
On Aug 7, 2013, at 14:34, Trygve Inda wrote:
> I am currently doing it on an NSThread. I may try replacing all the doubles
> with floats in the algorithm and see how that goes (on a backup of course!).
Yes, floats should help (unless precision errors make it worse). You'll be
moving a lot less data
A few things:
- The little ARM in the iPad 3 is nothing compared to your desktop. Not only is
it not as fast, it doesn't have the memory bandwidth. Also, an iPad 4 will be
twice as fast.
- make sure you're running optimized code. You can set optimization flags on
individual files (that's what I
I have an app that is running slow. I have narrowed it down to several
functions which are trig-intensive (used to calculate the position of the
moon at a given moment and more specifically to calculate rise/set times).
To calculate the position and rise/set times for a month requires, on average: