My logic here was that CQLTester tests would probably be the best candidates, as 
they are largely single-threaded and single-node. I'm sure there are background 
processes that might slow things down when serialised into a single execution 
thread, but my expectation is that the impact will not be as significant as with 
other tests, such as multinode in-jvm dtests.

On Thu, Dec 7, 2023, at 7:44 PM, Benedict wrote:
> 
> I think the biggest impediment to that is that most tests are probably not 
> sufficiently robust for simulation. If things happen in a surprising order, 
> many tests fail, as they implicitly rely on the normal timing of things.
> 
> Another issue is that the simulator does potentially slow things down a 
> little at the moment; I'm not sure what the overall impact would be.
> 
> It would be great to set up a JUnitRunner using the simulator and find out, 
> though.
> 
> 
>> On 7 Dec 2023, at 15:43, Alex Petrov <al...@coffeenco.de> wrote:
>> 
>> We have been extensively using the simulator for TCM, and I think we have to 
>> make simulator tests more approachable. I think many of the existing tests 
>> should be run under the simulator instead of CQLTester, for example. This will 
>> both strengthen the simulator and make things better in terms of determinism. 
>> Of course, that's not to say that CQLTester tests are the biggest beneficiary 
>> there.
>> 
>> On Thu, Dec 7, 2023, at 4:09 PM, Benedict wrote:
>>> To be fair, the lack of a coherent framework doesn’t mean we can’t merge them 
>>> from a naming perspective. I don’t mind losing one of burn or fuzz, and 
>>> merging them.
>>> 
>>> Today, simulator tests are kept under the simulator test tree, but that tree 
>>> exists primarily for the simulator itself and for testing it. It’s quite a 
>>> complex source tree, as you might expect, structured primarily to manage its 
>>> own complexity. It might make sense to bring the Paxos and Accord simulator 
>>> entry points out into the burn/fuzz trees, though I’m not sure it’s all that 
>>> important.
>>> 
>>> 
>>> > On 7 Dec 2023, at 15:05, Benedict <bened...@apache.org> wrote:
>>> > 
>>> > Yes, the only system/real-time timeout is a progress one, wherein if 
>>> > nothing happens for ten minutes we assume the simulation has locked up. 
>>> > Hitting this is indicative of a bug, and the timeout is so long that no 
>>> > realistic system variability could trigger it.
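The progress-timeout idea above can be sketched roughly as follows. All names here are invented for illustration; this is not the simulator's actual implementation, just the pattern of "the only real-time check is whether anything has happened recently":

```java
// Rough sketch of a progress timeout: the simulation loop records when any
// event executes, and a real-time thread checks whether progress has stalled.
// Invented names; not the Cassandra simulator's actual implementation.
import java.util.concurrent.atomic.AtomicLong;

final class ProgressWatchdog {
    private static final long TIMEOUT_NANOS = 10L * 60 * 1_000_000_000L; // ten minutes

    private final AtomicLong lastProgressNanos = new AtomicLong(System.nanoTime());

    // Called by the simulation loop whenever any simulated event executes.
    void onProgress() { lastProgressNanos.set(System.nanoTime()); }

    // Polled from a real-time thread; tripping this indicates a bug (e.g. a
    // lock-up), since ordinary system variability cannot stall for ten minutes.
    boolean lockedUp() {
        return System.nanoTime() - lastProgressNanos.get() > TIMEOUT_NANOS;
    }
}

public class WatchdogDemo {
    public static void main(String[] args) {
        ProgressWatchdog watchdog = new ProgressWatchdog();
        watchdog.onProgress();
        System.out.println(watchdog.lockedUp()); // false: progress just happened
    }
}
```

Because only this watchdog touches the wall clock, everything inside the simulation itself stays deterministic.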
>>> > 
>>> >> On 7 Dec 2023, at 14:56, Brandon Williams <dri...@gmail.com> wrote:
>>> >> 
>>> >> On Thu, Dec 7, 2023 at 8:50 AM Alex Petrov <al...@coffeenco.de> wrote:
>>> >>>> I've noticed many "sleeps" in the tests - is it possible with 
>>> >>>> simulation tests to artificially move the clock forward by, say, 5 
>>> >>>> seconds instead of sleeping, just to test, for example, whether TTL 
>>> >>>> works?
>>> >>> 
>>> >>> Yes, the simulator will skip the sleep and do a simulated sleep with a 
>>> >>> simulated clock instead.
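A minimal sketch of the simulated-sleep idea, assuming a hypothetical SimulatedClock class (not the simulator's real API): a "sleep" simply advances a virtual timestamp instead of blocking, so time-dependent behaviour like TTL expiry can be exercised instantly.

```java
// Hypothetical sketch of a simulated sleep: instead of blocking the thread,
// "sleeping" jumps a virtual clock forward. Not the Cassandra simulator's
// actual API; the names here are invented for illustration.
import java.util.concurrent.atomic.AtomicLong;

final class SimulatedClock {
    private final AtomicLong nowNanos = new AtomicLong();

    long nanoTime() { return nowNanos.get(); }

    // A simulated sleep returns immediately, advancing virtual time instead.
    void sleep(long millis) { nowNanos.addAndGet(millis * 1_000_000L); }
}

public class SimulatedSleepDemo {
    public static void main(String[] args) {
        SimulatedClock clock = new SimulatedClock();
        long before = clock.nanoTime();
        clock.sleep(5_000); // "sleep" five seconds; returns instantly
        System.out.println((clock.nanoTime() - before) / 1_000_000L); // prints 5000
    }
}
```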
>>> >> 
>>> >> Since it uses an artificial clock, does this mean that the simulator
>>> >> is also impervious to timeouts caused by the underlying environment?
>>> >> 
>>> >> Kind Regards,
>>> >> Brandon
>>> 
>>> 
>> 
