Ah, missed some other stuff.
The bug we've seen is one that's not always visible, particularly on busy
systems, because the event loop never waits very long. However, in the
particular use case here, the systems are sufficiently lightly loaded and
the latency requirements sufficiently strict that the extra wait becomes
visible.
For event-processing style programming, it's standard to have two event
scheduling "styles": an immediate "do as soon as possible" style, and a
"scheduled" style which is guaranteed to unwind the event processing stack
at least once. In a way, it's similar to "call" and "yield" in co-routines.
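A toy sketch of the distinction (my own illustration, not ATS code; the
names ToyEventLoop, run_immediate, and schedule_next are made up): the
immediate style is handled within the current pass over the queue, while
the scheduled style is always deferred to the next pass, so the processing
stack unwinds at least once before the handler runs again, much like call
vs. yield.

    #include <deque>
    #include <functional>
    #include <utility>

    // Toy event loop illustrating the two scheduling styles.
    class ToyEventLoop
    {
    public:
      using Handler = std::function<void()>;

      // "Do as soon as possible": handled before this pass returns.
      void run_immediate(Handler h) { current_.push_back(std::move(h)); }

      // "Scheduled": deferred to the next pass, so the stack unwinds first.
      void schedule_next(Handler h) { next_.push_back(std::move(h)); }

      // One iteration of the loop: drain the current queue, then promote
      // everything that was deferred to the next pass.
      void run_one_pass()
      {
        while (!current_.empty()) {
          Handler h = std::move(current_.front());
          current_.pop_front();
          h(); // may call run_immediate() or schedule_next() again
        }
        std::swap(current_, next_);
      }

    private:
      std::deque<Handler> current_;
      std::deque<Handler> next_;
    };

With a split like this, a handler that keeps calling run_immediate() on
itself never lets the current pass finish, while one that uses
schedule_next() always gives the loop a chance to get back to IO between
invocations.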
It's not a bug; it's how it was supposed to be from the beginning, but it
wasn't functioning correctly.
On Wed, Nov 20, 2019 at 2:27 PM Sudheer Vinukonda wrote:

> >>> that will cause an issue for devs that are using schedule with 0
> >>> timeouts in their code to schedule itself over and over again (see
> >>> test code in PR comment), which is bad programming on their part but
> >>> it might happen and is a valid concern.
>
> So, isn't this a new bug introduced with PR# 6103?
It's not a hole; it's how it should have been all along, but it has not
been working properly for a long time. schedule_imm is supposed to put
things in the queue and have the event loop process them asap without
waiting for IO or other stuff, but instead the event stays in the queue
until we wake up the thread, either because IO arrives or because the
sleep times out.
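To make that concrete, here's a simplified sketch of the intended behavior
(my own toy model, not the actual EThread/EventProcessor code; ToyWorker is
a made-up name): the loop sleeps with a bounded poll interval, and
schedule_imm both enqueues the event and signals the thread, so the event
is handled right away instead of waiting out the rest of the sleep.

    #include <chrono>
    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <mutex>

    // Simplified model of an event thread that sleeps waiting for work
    // (a stand-in for waiting on IO) with a bounded poll interval.
    class ToyWorker
    {
    public:
      using Event = std::function<void()>;

      // Enqueue the event *and* wake the thread. Without the notify, the
      // event would sit in the queue until the poll interval expires,
      // which is exactly the delay being discussed.
      void schedule_imm(Event ev)
      {
        {
          std::lock_guard<std::mutex> lock(mutex_);
          queue_.push_back(std::move(ev));
        }
        cv_.notify_one();
      }

      void run_loop()
      {
        for (;;) {
          std::unique_lock<std::mutex> lock(mutex_);
          // Wake up on new work, or after the poll interval at the latest.
          cv_.wait_for(lock, std::chrono::milliseconds(60),
                       [this] { return !queue_.empty(); });
          while (!queue_.empty()) {
            Event ev = std::move(queue_.front());
            queue_.pop_front();
            lock.unlock();
            ev();
            lock.lock();
          }
        }
      }

    private:
      std::mutex mutex_;
      std::condition_variable cv_;
      std::deque<Event> queue_;
    };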
Hmm.. why would the API need to expose the implementation details to users?
Alternatively, why would someone pick an API that may have a hole?
I haven't fully analyzed and understood the proposed changes, but having two
different APIs that only differ in how they are implemented under the hood
(and
Let me rephrase that: the new API behaves the same as TSContSchedule with a
0 timeout after PR#6103, which will handle events as soon as possible.
While this is good for avoiding delays, it also causes the situation scw00
brought up (the dead loop). And since there is no good way of
differentiating this behavior
> On Nov 20, 2019, at 05:52, Fei Deng wrote:
>
> Forgot to mention that this change would restore the old behavior of
> TSContSchedule minus the delay and dead loop.
>
>> On Tue, Nov 19, 2019 at 2:39 PM Fei Deng wrote:
>>
>> While PR#6103 (https://github.com/apache/trafficserver/pull/6103)
While PR#6103 (https://github.com/apache/trafficserver/pull/6103) solves
the problem of having the 60ms delay (caused by waiting in sleep), it also
creates an issue where the event loop ends up in a "dead loop" if the
scheduled event schedules itself with 0 timeout (see test code by scw00).
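As a stand-in for that test code (this is my own hypothetical minimal
plugin, not scw00's, and it assumes the ATS 8.x three-argument
TSContSchedule(contp, timeout_ms, pool) signature; newer releases split
this API up), the pattern that triggers the dead loop is simply a
continuation that reschedules itself with a 0 timeout every time it fires:

    #include <ts/ts.h>

    // Hypothetical plugin sketch: the continuation reschedules itself with
    // a 0 ms timeout on every callback.
    static int
    reschedule_handler(TSCont contp, TSEvent event, void *edata)
    {
      (void)event;
      (void)edata;
      TSDebug("dead-loop-example", "fired, rescheduling with 0 timeout");
      TSContSchedule(contp, 0, TS_THREAD_POOL_NET);
      return 0;
    }

    void
    TSPluginInit(int argc, const char *argv[])
    {
      (void)argc;
      (void)argv;

      TSPluginRegistrationInfo info;
      info.plugin_name   = "dead-loop-example";
      info.vendor_name   = "example";
      info.support_email = "dev@example.invalid";
      if (TSPluginRegister(&info) != TS_SUCCESS) {
        TSError("[dead-loop-example] registration failed");
        return;
      }

      TSCont contp = TSContCreate(reschedule_handler, TSMutexCreate());
      TSContSchedule(contp, 0, TS_THREAD_POOL_NET);
    }

With 0-timeout events handled in the same pass, the event thread never gets
back to its poll/sleep between callbacks, which is the dead loop described
above.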