Date: Mon, 24 Nov 2008 11:35:42 +0100
From: WT <[EMAIL PROTECTED]>

Hi Peter,

thanks for answering my post.

Happy to.  :)

[...]
While it may wind up making sense to put the simulation engine in a
separate thread, I would recommend _not_ trying to adjust the
simulation rate to account for the speed of the balls, precision, etc.
This sort of load-adjusting sounds good in theory, but it ultimately
fails because the performance characteristics of the code change
according to the settings.


I'm afraid I'll have to disagree.

It's your program, your prerogative.  :)  Still...

My concern is not balancing the CPU
load, but getting accurate results. To use the bouncing balls example
again, if the balls move fast enough and if I don't compensate by
decreasing the time-interval through which I'm advancing the motion,
the balls WILL overlap and may do so considerably. Even if they don't
overlap in a way that's visible to the human eye, overlaps may cause
other anomalies, such as energy non-conservation. These kinds of
problems are typical of numerical integration of differential
equations and have nothing to do with optimizing the CPU use.

Sure, they do. The only reason not to run the simulation at the smallest possible time-interval is to adjust CPU usage. My advice: just always run the simulation at the smallest time-interval possible.

You seem to have misunderstood my statement to mean that you should _reduce_ the precision always. I'm not saying that. I'm saying that you should run the simulation at maximum precision always.
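
To make the overlap problem concrete, here's a minimal sketch of a fixed-step update for one ball bouncing off a wall (every name and number here is mine, purely for illustration, not from your code):

    typedef struct { double x; double vx; } Ball;

    /* Advance one Euler step; reflect off a wall at x = 10. */
    void step(Ball *b, double dt)
    {
        b->x += b->vx * dt;        /* may overshoot the wall badly */
        if (b->x > 10.0) {
            b->x = 20.0 - b->x;    /* naive reflection about x = 10 */
            b->vx = -b->vx;
        }
    }

With vx = 100 and dt = 0.5, a ball starting at x = 0 lands at x = 50 before the wall test ever fires, and the "reflected" position is x = -30, nowhere near the wall. With dt = 0.001 the overshoot is at most 0.1 units. That's exactly why the time-interval has to stay small relative to the ball speeds, whatever the CPU load is.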

And while I appreciate that you feel I and others have gone a bit off-road with your question, the fact is that you posted the original message, and it doesn't make sense to then expect people to ignore aspects of what you wrote. You probably should have made the original message more concise and to the point, but given that you didn't, it doesn't make sense to object when people comment on parts you never intended them to care about. You posted the message; live with the replies. At worst, you can just ignore them.

[...]
Very true. Another of my concerns, though, is that having high enough
simulation refresh rates and frame update rates will cause a large
number of object allocations, which will compete for the resources
needed by the simulation itself. It seems to me that lots of
objects in Cocoa cannot be reused once created and have to be created
again. For instance, as far as I know, there is no way to change the
period of an NSTimer that's already in existence. Since both the
simulation update rate and the frame update rate are timer-driven and
dynamically changeable by the user, I expect lots of NSTimer objects
to be created.

I'm not sure I understand the NSTimer concern. Surely the rate for the timer should not change that frequently. A user can only provide their commands so quickly (or at worst, you could respond to them only so quickly...even if the user can provide a new timer interval every 1 ms, surely it would be fine to update the actual timer interval only every 100 ms or so...the user would never know the difference).

It is unfortunate that NSTimer doesn't appear to have a way to change the auto-repeat interval after creation, but it seems to me that creation of NSTimer objects should not be an activity the code would spend much of its time doing anyway.
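
For what it's worth, the usual workaround is to invalidate the old timer and schedule a replacement, coalescing rapid user changes so the rebuild happens at most every 0.1 s or so. A minimal sketch (the ivars and the stepSimulation: action are invented names, not from your code):

    - (void)setSimulationInterval:(NSTimeInterval)interval
    {
        pendingInterval = interval;   // remember the latest request
        NSTimeInterval now = [NSDate timeIntervalSinceReferenceDate];
        if (now - lastRebuild < 0.1)
            return;                   // coalesce rapid changes; a real
                                      // version would also schedule a
                                      // deferred rebuild so the final
                                      // request isn't dropped
        lastRebuild = now;

        [simulationTimer invalidate]; // intervals are fixed at creation...
        simulationTimer =             // ...so make a fresh timer instead
            [NSTimer scheduledTimerWithTimeInterval:pendingInterval
                                             target:self
                                           selector:@selector(stepSimulation:)
                                           userInfo:nil
                                            repeats:YES];
    }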

More important to you is probably this information from the timer docs (http://developer.apple.com/documentation/Cocoa/Conceptual/Timers/Articles/timerConcepts.html#//apple_ref/doc/uid/20000806-BAJFBAIH):

    Because of the various input sources a typical run loop
    manages, the effective resolution of the time interval
    for a timer is limited to on the order of 50-100 milliseconds.

In other words, the best you can hope for is a timer firing 10 to 20 times per second. This is yet another argument in favor of just running the simulation as fast as it will go. Doing so provides at least three benefits:

-- Fewer worries about missing corner-cases in the performance testing (mentioned in my previous message)
-- Higher-resolution simulation (with NSTimer, your simulation thread just isn't going to get notified that frequently)
-- No need to manage _any_ NSTimer objects (so no memory-management overhead for that at all)

If you insist on adjusting the simulation frequency, you may find that NSThread's sleepForTimeInterval: method is more appropriate. I don't know for sure, but I suspect that method uses the Unix thread-scheduling mechanism rather than relying on the NSRunLoop to manage the notification, which is likely to produce higher-resolution results.
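
A minimal sketch of that approach, with a dedicated simulation thread (running, throttleInterval, and stepSimulation are invented names; a real version would also synchronize access to the running flag):

    - (void)startSimulation
    {
        running = YES;
        [NSThread detachNewThreadSelector:@selector(simulationLoop:)
                                 toTarget:self
                               withObject:nil];
    }

    - (void)simulationLoop:(id)unused
    {
        while (running) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            [self stepSimulation];            // advance one time-step
            if (throttleInterval > 0.0)       // 0 means "run flat out"
                [NSThread sleepForTimeInterval:throttleInterval];
            [pool drain];                     // don't let autoreleased
        }                                     // objects pile up per step
    }

Setting throttleInterval to 0 gives you the run-as-fast-as-possible behavior I described above; any positive value throttles the loop without involving the run loop at all.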

As far as memory management goes, I think the more interesting question is the one about data objects used for inter-thread communication. One advantage of forgoing the advice to use immutable objects is the potential for reuse that mutable objects offer. I wouldn't bother with that optimization unless I found that my code was performing poorly due to memory-management overhead, but if you _do_ find that, it might be worth your while to use a sort of "inter-thread communication object pool", so that you only have to allocate a few such objects and reuse them as needed.
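
A minimal sketch of such a pool, assuming some SimulationState class holds the data being handed between threads (all the names here are invented for illustration):

    @interface SnapshotPool : NSObject
    {
        NSMutableArray *freeObjects;
    }
    - (SimulationState *)checkout;
    - (void)checkin:(SimulationState *)state;
    @end

    @implementation SnapshotPool

    - (id)init
    {
        if ((self = [super init]))
            freeObjects = [[NSMutableArray alloc] init];
        return self;
    }

    - (void)dealloc
    {
        [freeObjects release];
        [super dealloc];
    }

    - (SimulationState *)checkout
    {
        @synchronized (self) {
            if ([freeObjects count] > 0) {
                SimulationState *s =
                    [[[freeObjects lastObject] retain] autorelease];
                [freeObjects removeLastObject];
                return s;              // reuse an idle object
            }
        }
        // Pool is empty; allocate a fresh one (should become rare).
        return [[[SimulationState alloc] init] autorelease];
    }

    - (void)checkin:(SimulationState *)state
    {
        @synchronized (self) {
            [freeObjects addObject:state];  // make it available again
        }
    }

    @end

The writer thread checks out an object, fills it in, and hands it to the reader; when the reader is done, it checks the object back in. You only pay the allocation cost the first few times through.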

Still, as with the NSTimer, assuming code of any complexity, it seems likely that the cost of the other main components -- simulation and rendering -- will dwarf any overhead due to object maintenance or memory management. If it doesn't, then there may simply be a design problem with the way objects are used and managed in the code.

Pete