On 11/25/2009 03:51 PM, Bryan Call wrote:
> We also need to worry about stability.  We have a lot of untested changes in
> the tree, and we don't have any automated testing to verify that the changes
> we have already made to the Apache tree don't contain any hidden gems (bugs).
> There has to be a balance between breaking compatibility moving forward and
> keeping a stable code base.

> APIs are something that should be very stable, since it requires real work
> (human time) to change them.  Files that can be blown away and recreated over
> time should be considered less important.  I don't think we need a major
> version number bump for file changes, but we do for breaking APIs.
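
Agreed on the versioning policy for APIs, by the way. To make it concrete: a break in the plugin API is the kind of thing that forces every consumer to recompile and adapt, roughly like this hypothetical, generic C sketch (the HOST_API_* and plugin_info names are made up for illustration and are not the real InkAPI):

    /* Hypothetical host/plugin API versioning check (not InkAPI).
     * The host bumps HOST_API_MAJOR only for incompatible API changes
     * and HOST_API_MINOR for backwards-compatible additions. */
    #include <stdio.h>

    #define HOST_API_MAJOR 2   /* bumped on incompatible API changes */
    #define HOST_API_MINOR 1   /* bumped on compatible additions */

    /* Version information a plugin bakes in at build time. */
    typedef struct {
        int api_major;
        int api_minor;
        const char *name;
    } plugin_info;

    static int host_can_load(const plugin_info *p)
    {
        /* A major mismatch means the API contract changed out from
         * under the plugin, so refuse to load it. */
        if (p->api_major != HOST_API_MAJOR)
            return 0;
        /* The plugin may depend on additions newer than this host has. */
        if (p->api_minor > HOST_API_MINOR)
            return 0;
        return 1;
    }

    int main(void)
    {
        plugin_info old_plugin = { 1, 4, "remap_rules.so" };
        plugin_info new_plugin = { 2, 0, "cache_scan.so" };

        printf("%s loadable: %d\n", old_plugin.name, host_can_load(&old_plugin));
        printf("%s loadable: %d\n", new_plugin.name, host_can_load(&new_plugin));
        return 0;
    }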

What do you mean by "files"? Like the hostdb? If so, then yes, that's not a huge deal. Making changes to the cache, however, could be very disruptive. It could be prohibitively expensive for someone to have to blow away TBs of cache just for doing a minor Traffic Server upgrade (I certainly would think twice before doing that myself).

> I haven't seen many requests for caches over 512GB per server.  I think it is
> important to have this change, mostly for partition sizes and the potential
> to reduce the number of partitions (which reduces seeking for writes).

Hmmm, internally at Y!, there is at least one very large deployment with 2.5TB of cache per machine (Wretch). I don't know what the "outside" world would do, but 512GB is not a huge cache, particularly in a forward proxy setup.
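
To make the partition angle concrete, a multi-TB cache under the current per-partition ceiling has to be sliced into several partitions, roughly like the snippet below. This is only an illustration -- the device paths are made up and the syntax is from memory of the 2.x storage.config/partition.config files, so treat it as approximate:

    # storage.config -- raw devices (or files) that make up the cache;
    # example paths only
    /dev/sdb
    /dev/sdc
    /dev/sdd

    # partition.config -- carve the cache into logical partitions;
    # if the ~512GB figure is the per-partition ceiling, a 2.5TB cache
    # needs at least five partitions, whereas larger partitions would
    # mean fewer of them (and, per Bryan's point, less seeking on writes)
    partition=1 scheme=http size=20%
    partition=2 scheme=http size=20%
    partition=3 scheme=http size=20%
    partition=4 scheme=http size=20%
    partition=5 scheme=http size=20%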

> For the first release I would like to see fewer changes, so we can move over
> to it more quickly within Yahoo!.  If we try to push too much into the
> release, we are going to have a harder time moving over to the Apache tree.

That's a "management" decision we have to make. I'd rather break things now that are difficult to break later, that would include:

* APIs (InkAPI, Remap API, any CLI APIs); in particular, newer APIs like the Cache plugin APIs need to be finalized.
    * ABIs (for the above)
* Disk cache format (RAM cache doesn't matter, since it's volatile)
* Any changes needed to complete the 64-bit support (e.g. objects > 2GB), possibly including API changes (see the sketch below).
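
To make the >2GB point concrete: anywhere an object size or offset still travels through a 32-bit signed integer, it stops working past 2GB. A minimal, generic C sketch (not actual Traffic Server code):

    /* Why objects > 2GB need 64-bit sizes/offsets end to end.
     * Generic illustration, not Traffic Server source. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* A 3GB object, e.g. a large download being cached. */
        int64_t object_bytes = 3LL * 1024 * 1024 * 1024;

        /* 32-bit signed length field: cannot represent anything past
         * 2GB - 1; the conversion is implementation-defined and
         * typically wraps negative. */
        int32_t len32 = (int32_t) object_bytes;

        /* 64-bit length field: holds the real size. */
        int64_t len64 = object_bytes;

        printf("32-bit length: %" PRId32 "\n", len32); /* e.g. -1073741824 */
        printf("64-bit length: %" PRId64 "\n", len64); /* 3221225472 */
        return 0;
    }

Any API that exposes content lengths or offsets as a plain 32-bit int has the same problem, which is why the 64-bit work possibly bleeds into the APIs.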


I'd also argue that if we don't make sure that upgrading within the 2.x "branch" can be done internally at Y! without major disturbance, we'll never get the ASF version successfully used internally. This is one thing we've been really good about at Y! so far; there have been very few changes that would make an upgrade disruptive for our users.

I'd suggest we come up with a list of requirements for what needs to be done in the "2.x" branch, and then freeze it. You'd have a hard time convincing me that addressing the issues listed above isn't one of those requirements :).

Cheers,

-- Leif
