I've been wrestling with the python dtests recently, and that led to some 
discussions with other contributors about whether we as a project should be 
writing new tests in the python dtest framework or the in-jvm framework. This 
question has come up tangentially on other topics as well, including the lack 
of documentation and expertise on the in-jvm framework disincentivizing some 
folks from authoring new tests there, vs. the difficulty of debugging and 
maintaining timer-based, sleep-based, non-deterministic python dtests.

I don't know of a place where we've formally discussed this and made a 
project-wide call on where we expect new distributed tests to be written; if 
I've missed an email about this, please link it on this thread (and stop 
reading! ;))

At this time, the "development/testing" section of our site and documentation 
doesn't specify a preference for where new multi-node distributed tests should 
be written: https://cassandra.apache.org/_/development/testing.html

The primary tradeoffs, as I understand them, of moving from python-based 
multi-node testing to the in-jvm framework are (see the example sketch after 
the list):
Pros:
 1. Better debugging functionality (breakpoints, IDE integration, etc.)
 2. Integration with simulator
 3. More deterministic runtime (anecdotally; python dtests _should_ be 
deterministic but in practice they prove to be very prone to environmental 
disruption)
 4. Test-time visibility into Cassandra internals
Cons:
 1. The framework is not as mature as the python dtest framework (some 
functionality missing)
 2. Labor and process around revving new releases of the in-jvm dtest API
 3. People aren't familiar with it yet and there's a learning curve
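
For anyone who hasn't seen one, here is roughly what a minimal two-node in-jvm 
dtest looks like. This is a sketch against the org.apache.cassandra.distributed 
API; exact class names and builder signatures vary a bit between branches, so 
treat it as illustrative rather than canonical:

import org.apache.cassandra.distributed.Cluster;
import org.apache.cassandra.distributed.api.ConsistencyLevel;
import org.junit.Assert;
import org.junit.Test;

public class ReadWriteInJvmTest // hypothetical test class name
{
    @Test
    public void writeOnOneNodeReadFromAnother() throws Throwable
    {
        // Both nodes run inside this JVM, so breakpoints and IDE integration
        // work; try-with-resources tears the cluster down at the end.
        try (Cluster cluster = Cluster.build(2).start())
        {
            cluster.schemaChange("CREATE KEYSPACE ks WITH replication = " +
                                 "{'class': 'SimpleStrategy', 'replication_factor': 2}");
            cluster.schemaChange("CREATE TABLE ks.tbl (pk int PRIMARY KEY, v int)");

            // Write through node 1 at ALL, then read the same row through node 2.
            cluster.coordinator(1).execute("INSERT INTO ks.tbl (pk, v) VALUES (1, 42)",
                                           ConsistencyLevel.ALL);
            Object[][] rows = cluster.coordinator(2).execute("SELECT v FROM ks.tbl WHERE pk = 1",
                                                             ConsistencyLevel.ALL);
            Assert.assertEquals(42, rows[0][0]);
        }
    }
}

The whole cluster lives in a single process, which is what buys us points 1 
and 4 above.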

So my bid here: I personally think we as a project should freeze writing new 
tests in the python dtest framework and all focus on getting the in-jvm 
framework robust enough to cover the edge cases that might still be driving 
new tests to the python framework. This will require documentation work from 
some of the original authors of the in-jvm framework, as well as from folks 
currently familiar with it, plus effort from those of us not yet intimately 
familiar with the API to get to know it. However, I believe the long-term 
benefits to the project will be well worth it.

We could institute a pre-commit check that warns when a commit increases our 
raw count of python dtests, to provide process-based visibility into this 
change in direction for the project's testing.
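
To make that concrete, here's one way the counting logic could work: scan a 
cassandra-dtest checkout for pytest test functions and compare against a 
recorded baseline. This is a hypothetical sketch, not existing project 
tooling; the paths, the baseline mechanism, and the class name are all made up 
for illustration, and a real check might instead use pytest --collect-only:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.List;
import java.util.stream.Stream;

public class PythonDtestCountCheck // hypothetical helper, not existing tooling
{
    public static void main(String[] args) throws IOException
    {
        Path dtestRepo = Paths.get(args[0]);     // path to a cassandra-dtest checkout
        long baseline = Long.parseLong(args[1]); // count recorded at the previous commit

        long count;
        try (Stream<Path> files = Files.walk(dtestRepo))
        {
            // pytest collects functions/methods named test_*; counting
            // "def test_" lines is a crude but serviceable proxy.
            count = files.filter(p -> p.toString().endsWith(".py"))
                         .flatMap(p -> readLines(p).stream())
                         .filter(line -> line.trim().startsWith("def test_"))
                         .count();
        }

        if (count > baseline)
        {
            System.err.printf("WARNING: python dtest count grew from %d to %d; " +
                              "please consider writing new distributed tests " +
                              "in the in-jvm framework instead.%n", baseline, count);
            System.exit(1); // or exit 0 if we want a warning rather than a gate
        }
    }

    private static List<String> readLines(Path p)
    {
        try { return Files.readAllLines(p); }
        catch (IOException e) { return Collections.emptyList(); }
    }
}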

So: what do we think?
