On 17.10.2009, at 17:38, BJ Homer wrote:

In the Garbage Collection Programming Guide: Architecture [1], an example is given of a set of non-GC getters and setters that use @synchronized (self) to control access to the ivar and, in the setter, to protect the releasing of the old object and the retaining of the new one. Then a GC example is given in which the getters and setters have no synchronization; the getter simply returns the value, and the setter simply assigns the new value.

I understand why the setter is so much simpler under GC; no retain/release fiddling is necessary. However, I'm confused about the lack of synchronization in the GC examples. I know there's a lot of skepticism as to whether synchronization at the getter/setter level is even useful (hence the general recommendation to make properties nonatomic). But assuming you wanted it there in the first place, why does the GC version not need the synchronization? Is it simply because the setter is doing less, and thus the getter could never catch the setter halfway through its work (after releasing the old value but before retaining the new one, for example)?

Basically yes.

Another way to look at it: assignment to a GC-managed pointer variable has to be thread-safe anyway (the collector runs on its own thread, and there are things like weak references), so there is no need to put extra synchronization around it.
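
To make the comparison concrete, the accessors in question look roughly like this (a sketch only; the class and ivar names here are made up, not the exact code from the guide). Under retain/release the setter has to do two things that a concurrent getter must not see half-finished, so both accessors take the same lock:

    #import <Foundation/Foundation.h>

    @interface Widget : NSObject {
        NSString *title;
    }
    - (NSString *)title;
    - (void)setTitle:(NSString *)newTitle;
    @end

    @implementation Widget

    // Non-GC: the getter must never observe the ivar between the
    // release of the old value and the retain of the new one, so
    // both accessors lock on the same object.
    - (NSString *)title
    {
        @synchronized (self) {
            return [[title retain] autorelease];
        }
    }

    - (void)setTitle:(NSString *)newTitle
    {
        @synchronized (self) {
            if (newTitle != title) {
                [title release];
                title = [newTitle retain];
            }
        }
    }

    @end

Under GC the same two methods reduce to a single pointer read and a single pointer write (the compiler emits a write barrier for the ivar assignment), so there is no intermediate state for another thread to catch:

    // GC: a plain read and a plain assignment; the write-barriered
    // pointer store is safe on its own, so no lock is needed for
    // simple get/set semantics.
    - (NSString *)title
    {
        return title;
    }

    - (void)setTitle:(NSString *)newTitle
    {
        title = newTitle;
    }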

By the way, this thread-safety guarantee is another big reason to prefer GC over the retain/release managed-memory model, beyond the automatic memory management itself.

Kai
