On Fri, Jan 18, 2019 at 2:51 PM Ivan Kelly <iv...@apache.org> wrote:

> One thing missing from this discussion is details on the motivating
> use-case. How many delayed messages per second are we expecting? And
> what is the payload size?
>
> > If the consumer controls the delayed message's exact execution time, we
> > must trust the consumer's clock; this can cause a delayed message to be
> > processed ahead of time, which some applications cannot tolerate.
>
> This can be handled in a number of ways. Consumer clocks can be skewed
> with regard to other clocks, but it is generally safe to assume that
> clocks advance at the same rate, especially at the granularity of a
> couple of hours.
> So rather than specifying the absolute timestamp that the message
> should appear to the user, the dispatcher can specify the relative
> delay after dispatch that it should appear to the user.
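This relative-delay scheme could be sketched roughly as follows (the helper names are made up for illustration, not actual Pulsar APIs): the broker converts the absolute target timestamp into a delay against its own clock at dispatch time, and the consumer counts that delay down on its own monotonic clock, so any absolute offset between the two clocks cancels out and only the clock *rate* needs to agree.

```python
import time

def relative_delay_ms(deliver_at_ms: int, broker_now_ms: int) -> int:
    # Broker side at dispatch: turn the absolute delivery timestamp
    # into a delay relative to the broker's clock.
    return max(0, deliver_at_ms - broker_now_ms)

def deliver_after(delay_ms: int, handler) -> None:
    # Consumer side: count the delay down on the consumer's own
    # monotonic clock; skew in absolute wall-clock time is irrelevant.
    deadline = time.monotonic() + delay_ms / 1000.0
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        time.sleep(min(0.05, remaining))
    handler()
```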
>
> > > My concern with this category of approaches is "bandwidth" usage. It is
> > > basically trading bandwidth for complexity.
> >
> > @Sijie Guo <si...@apache.org> Agree with you, such a trade-off can put
> > more pressure on the broker's outgoing network.
>
> I don't think PIP-26's approach uses less bandwidth in this
> regard. With PIP-26, the msg ids are stored in a ledger, and when the
> timeout triggers it dispatches? Are all the delayed messages being
> cached at the broker? If so, that is using a lot of memory, and it's
> exactly the kind of memory usage pattern that is very bad for JVM
> garbage collection. If not, then you have to read the message back in
> from bookkeeper, so the bandwidth usage is the same, though on a
> different path.
>
> In the client side approach, the message could be cached to avoid a
> redispatch. I discussed this with Matteo as well. The redelivery
> logic has to be there in any case, as any cache (broker or client
> side) must have a limited size.
> Another option would be to skip sending the payload for delayed
> messages, and only send it when the client requests redelivery, but
> this has the same issue: the entry is likely to have fallen out of
> the cache at the broker side by then.
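The bounded-cache point above can be made concrete with a minimal sketch (class and method names are hypothetical, purely for illustration): once an entry has been evicted before its delay fires, the only remaining option is the redelivery path, i.e. a re-read from bookkeeper, which is why that logic has to exist regardless of caching.

```python
from collections import OrderedDict

class BoundedDelayCache:
    # Illustrative only: a size-bounded LRU cache of delayed payloads.
    def __init__(self, max_entries: int):
        self.max_entries = max_entries
        self._entries = OrderedDict()  # msg_id -> payload, LRU order

    def put(self, msg_id, payload):
        self._entries[msg_id] = payload
        self._entries.move_to_end(msg_id)
        while len(self._entries) > self.max_entries:
            self._entries.popitem(last=False)  # evict least recently used

    def take(self, msg_id):
        # None means "already evicted": the caller must fall back to
        # requesting redelivery (a re-read from bookkeeper).
        return self._entries.pop(msg_id, None)
```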


There is bandwidth usage in either approach, for sure. The main
difference between the broker-side and client-side approaches is which
part of the bandwidth is used.

In the broker-side approach, it uses the bookies' egress and the broker's
ingress bandwidth. In a typical Pulsar deployment, the bookies' egress is
mostly idle unless there are consumers falling behind.

In the client-side approach, it uses the broker's egress bandwidth and
potentially the bookies' egress bandwidth. The broker's egress is critical
since it is shared across consumers, so if the broker's egress is doubled,
that is a red flag.

I agree that the actual bandwidth usage depends on the workload. But in
theory, the broker-side approach is friendlier to resource usage and a
better way to use the resources in a multi-layered architecture, because
it uses less bandwidth at the broker side, while a client-side approach
can cause more bandwidth usage at the broker side.

Also, as penghui pointed out, clock skew can be another factor causing
more traffic in a fan-out case. In the broker-side approach, the deferral
is handled at a central point, so when the deferred time arrives, the
broker only needs to read the data once from the bookies. In a
client-side approach, however, the messages are requested by different
subscriptions, and each subscription can ask for a deferred message at
any time based on its own clock.
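To make the "read once from the bookies" point concrete, here is a rough sketch of the central-deferral idea (assumed mechanics, not PIP-26's actual implementation): the broker tracks only (deliver_at, msg_id) pairs in a min-heap, and when its timer fires, each due message would be read from the bookies a single time and fanned out to every subscription, instead of every subscription re-requesting it on its own clock.

```python
import heapq

class DelayedIndex:
    # Broker-side sketch: track only delivery times and ids, not payloads.
    def __init__(self):
        self._heap = []  # (deliver_at_ms, msg_id)

    def add(self, deliver_at_ms: int, msg_id: str) -> None:
        heapq.heappush(self._heap, (deliver_at_ms, msg_id))

    def pop_due(self, now_ms: int):
        # Collect every message whose time has arrived; each one would
        # be read from the bookies once, then dispatched to all
        # subscriptions from that single read.
        due = []
        while self._heap and self._heap[0][0] <= now_ms:
            due.append(heapq.heappop(self._heap)[1])
        return due
```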



>
> -Ivan
>
