Rasmus,
<snip>

- Request 1 starts before the deploy and loads script A, B
- Deploy to a separate directory and the docroot symlink now points to here
- Request 2 starts and loads A, B, C
- Request 1 was a bit slow and only now loads C — the new version, alongside the old A and B
The issues that you raise about introducing atomic versioning into the script namespace do need to be addressed if material service disruption during an application version upgrade is to be avoided. However, another facet of the O+ architecture surely also frustrates this deployment model.
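For what it's worth, the symlink flip itself can be done atomically — a sketch, assuming a POSIX filesystem where rename-over-symlink is atomic; the function and temp-link names are mine, not anything in O+. What it cannot fix is exactly the mixing above: a request that resolved A and B before the flip still loads the new C afterwards.

```python
import os

def atomic_symlink_swap(target_dir, link_path):
    """Repoint link_path at target_dir in one atomic step.

    A new symlink is created under a temporary name and then renamed over
    the old one; readers always see either the old or the new docroot,
    never a missing or half-written link. (In-flight requests that already
    resolved scripts through the old link are, of course, unaffected.)
    """
    tmp = link_path + ".tmp"     # temporary name, assumed not otherwise in use
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(target_dir, tmp)
    os.replace(tmp, link_path)   # atomic rename over the existing symlink
```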

My reading is that O+ processes each new (cache-miss) compile request by first sizing the memory requirements of the compiled script and then allocating a single brick from (one of) the SMAs at its high-water mark. Stale cache entries are marked as corrupt and their storage is simply added to wasted_shared_memory, with no attempt to reuse it. SMA exhaustion, or the percentage wastage exceeding a threshold, ultimately triggers a process shutdown cascade. This strategy is lean and fast, but as far as I understand it, it effectively uses a process death cascade and population rebirth to implement garbage collection.
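To make sure I'm describing the same thing you are, here is a toy model of that strategy as I understand it — purely illustrative, not the Zend source; the class, field names, and the 25% threshold are my inventions:

```python
class ShmArena:
    """Toy model of an O+-style shared memory area: a bump allocator with
    no free list, where invalidated entries only ever accumulate as waste
    until a full restart throws the whole arena away."""

    def __init__(self, size, waste_threshold=0.25):
        self.size = size                  # total SMA bytes
        self.used = 0                     # the high-water mark
        self.wasted = 0                   # analogue of wasted_shared_memory
        self.waste_threshold = waste_threshold
        self.restart_pending = False

    def alloc(self, nbytes):
        """Bump-allocate one brick; exhaustion can only trigger a restart."""
        if self.used + nbytes > self.size:
            self.restart_pending = True   # SMA exhausted -> shutdown cascade
            return None
        offset = self.used
        self.used += nbytes
        return offset

    def invalidate(self, nbytes):
        """A stale entry's storage is written off, never reused."""
        self.wasted += nbytes
        if self.wasted / self.size > self.waste_threshold:
            self.restart_pending = True   # % wastage exceeded -> cascade

    def restart(self):
        """'Garbage collection' by population rebirth: discard everything."""
        self.used = self.wasted = 0
        self.restart_pending = False
```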

Wouldn't your non-stop models require a more stable reuse architecture, one which recycles wasted memory without the death cascade? Perhaps one of the Zend team could correct my inference if I've got it wrong again :-(
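By "stable reuse" I mean something of roughly this shape — a hypothetical sketch of mine, not a proposal for the actual implementation: freed bricks go onto size-bucketed free lists and get handed out again, so waste never ratchets toward a restart threshold.

```python
from collections import defaultdict

class RecyclingArena:
    """Hypothetical alternative: a fixed arena whose invalidated bricks
    are recycled through free lists instead of being written off."""

    def __init__(self, size):
        self.size = size
        self.used = 0
        self.free_lists = defaultdict(list)   # brick size -> [offsets]

    def alloc(self, nbytes):
        # Prefer recycling an exact-size freed brick over raising the mark.
        if self.free_lists[nbytes]:
            return self.free_lists[nbytes].pop()
        if self.used + nbytes > self.size:
            return None                       # genuinely full; no cascade
        offset = self.used
        self.used += nbytes
        return offset

    def free(self, offset, nbytes):
        # A stale entry's storage is returned for reuse, not wasted.
        self.free_lists[nbytes].append(offset)
```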

Regards
Terry
