OK, I'm convinced :)

I'll commit after Becket addresses other review points and adds some
comments on how our error handling works (to make sure we don't screw
it up later).

Gwen

On Wed, Jul 22, 2015 at 11:34 AM, Todd Palino <tpal...@gmail.com> wrote:
> Since I've been dealing with the fallout of this particular problem all
> week, I'll add a few thoughts...
>
>
> On Wed, Jul 22, 2015 at 10:51 AM, Gwen Shapira <gshap...@cloudera.com>
> wrote:
>>
>>
>>
>> > On July 21, 2015, 11:15 p.m., Gwen Shapira wrote:
>> > > core/src/main/scala/kafka/network/SocketServer.scala, line 465
>> > >
>> > > <https://reviews.apache.org/r/36664/diff/1/?file=1018238#file1018238line465>
>> > >
>> > >     Turns out that catching Throwable is a really bad idea:
>> > > https://www.sumologic.com/2014/05/05/why-you-should-never-catch-throwable-in-scala/
>> >
>> > Jiangjie Qin wrote:
>> >     Ah... Didn't know that before. I explicitly listed the exceptions.
>> >
>> > Guozhang Wang wrote:
>> >     Searching ": Throwable" gives me 180+ cases in the code base :P Though
>> > many of them are from unit tests (which is arguably OK), there are still
>> > a lot in the core package. I agree that we should avoid catching Throwable
>> > whenever possible, which will also help force developers to think
>> > about the possible checked exceptions in the call trace.
>>
>> I know :(
>> I'm not sure if going over and converting everything is worth the effort.
>> Although it can be a nice newbie jira.
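
(For reference, the conversion is mostly mechanical - the usual idiom is
scala.util.control.NonFatal, which deliberately does not match
ControlThrowable, OutOfMemoryError and friends. A minimal sketch, with a
made-up helper name, not anything from the patch:)

    import scala.util.control.NonFatal

    // `safely` is only illustrative; it runs a block of work and logs any
    // non-fatal exception instead of catching Throwable outright.
    def safely(work: => Unit): Unit =
      try work
      catch {
        // Fatal errors and control-flow throwables still propagate.
        case NonFatal(e) => println(s"Uncaught exception: $e")
      }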
>>
>>
>> > On July 21, 2015, 11:15 p.m., Gwen Shapira wrote:
>> > > core/src/main/scala/kafka/network/SocketServer.scala, line 400
>> > >
>> > > <https://reviews.apache.org/r/36664/diff/1/?file=1018238#file1018238line400>
>> > >
>> > >     So in the case of an unexpected exception, we log an error and keep
>> > > running?
>> > >
>> > >     Isn't it better to kill the processor, since we don't know what state
>> > > the system is in? If the acceptor keeps placing messages in the queue
>> > > for a dead processor, isn't that a separate issue?
>> >
>> > Jiangjie Qin wrote:
>> >     I'm not quite sure about this part. I am not very experienced with
>> > error handling in cases like this, so please correct me if I missed
>> > something.
>> >     Here is what I thought.
>> >
>> >     The way it currently works is that the acceptor will
>> >     1. accept a new connection request and create a new socket channel
>> >     2. choose a processor and put the socket channel into that
>> > processor's new connection queue
>> >
>> >     The processor just takes the socket channels from the queue and
>> > registers them with the selector.
>> >     If the processor hits an uncaught exception while running, there are
>> > several possibilities.
>> >     Case 1: The exception came from one socket channel.
>> >     Case 2: The exception was associated with a bad request.
>> >     In case 1, ideally we should just disconnect that socket channel
>> > without affecting the other socket channels.
>> >     In case 2, I think we should log the error and skip the message -
>> > assuming the client will retry sending data if no response is received
>> > within a given period of time.
>> >
>> >     I am not sure letting the processor exit is a good idea, because that
>> > would let a single badly behaving client take down the entire cluster -
>> > it might kill the processors one by one. Compared with that, I'm leaning
>> > towards keeping the processor running and serving the other normal TCP
>> > connections if possible, but logging the error so the monitoring system
>> > can detect it and we can see if human intervention is needed.
>> >
>> >     Also, I don't know how else to prevent the thread from exiting
>> > without catching all the throwables.
>> >     According to this blog post
>> > http://www.tzavellas.com/techblog/2010/09/20/catching-throwable-in-scala/
>> >     I guess I can rethrow all the ControlThrowables but intercept the
>> > rest?
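
(The pattern from that blog post is roughly the sketch below - rethrow
ControlThrowable so Scala's control flow still works, and handle everything
else without letting the thread die. The object and method names here are
made up for illustration; this is not what the patch does verbatim.)

    import scala.util.control.ControlThrowable

    object ProcessorSketch {
      @volatile private var isRunning = true

      // Stand-in for one pass over the new-connection queue and the selector.
      private def processOneIteration(): Unit = ()

      def runLoop(): Unit =
        while (isRunning) {
          try processOneIteration()
          catch {
            // Rethrow Scala's control-flow throwables (break, non-local return).
            case e: ControlThrowable => throw e
            // Anything else: log it and keep serving the other connections.
            case e: Throwable => println(s"Uncaught exception in processor: $e")
          }
        }
    }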
>> >
>> > Guozhang Wang wrote:
>> >     I would also prefer not to close the Processor thread upon
>> > exceptions, mainly to avoid one bad client killing a shared Kafka cluster.
>> > In the past we have seen issues like a DDoS of MetadataRequests killing
>> > the cluster and affecting all other clients, etc., and the quota work is
>> > aimed at preventing that. Since Processor threads are shared (8 by default
>> > on a broker), one should not be closed because of a single socket / bad
>> > client request.
>>
>> I like your thinking around cases #1 and #2. I think this should go in a
>> code comment somewhere, so that when people improve / extend SocketServer
>> they will keep this logic in mind. Maybe even note in the specific catch
>> clauses whether they are handling possible errors at the request level or
>> the channel level.
>>
>> My concern is with a possible case #3: each processor has an
>> o.a.k.common.network.Selector. I'm concerned about the possibility of
>> something going wrong in the state of the selector, which would potentially
>> be an issue for all channels. For example, a failure to register could be an
>> issue with the channel.register call, but perhaps also an issue with keys.put
>> (just an example - I'm not sure something can actually break the keys table).
>>
>> I'd like to be able to identify cases where the Selector state may have
>> gone wrong and close the processor in that case. Does that make any sense?
>> Or am I being too paranoid?
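
(To make the question concrete: roughly, I'd like channel-level work and
selector-level work wrapped separately, so a per-channel failure only closes
that channel while a selector failure escalates. A rough sketch with made-up
names, not a proposal for this exact patch:)

    import java.io.IOException
    import scala.util.control.ControlThrowable

    object ProcessorErrorScopes {
      // Channel-level failure: contain it, close only the offending channel.
      def guardChannelWork(work: => Unit): Unit =
        try work
        catch {
          case e: ControlThrowable => throw e
          case e: IOException      => println(s"Closing bad channel: $e")
        }

      // Selector-level failure: the shared Selector state is unknown, so log
      // and rethrow to let the processor shut down.
      def guardSelectorWork(work: => Unit): Unit =
        try work
        catch {
          case e: ControlThrowable => throw e
          case e: Throwable =>
            println(s"Selector failure, shutting processor down: $e")
            throw e
        }
    }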
>
>
> If there are error cases that are not associated with a specific connection
> or request, then I agree that we should handle them a little differently.
> What we need to keep in mind is that "close the processor" is effectively
> "terminate the broker", which means it's pretty damaging to do (both in
> terms of cluster health and on-call happiness).
>
> My gut feeling is that if we have a failure in the processor that affects
> all connections, we're going to see it through its symptoms (a drop in
> messages in/out, a buildup of connections that are not getting handled or
> closed, etc.). Since we're not even sure there's a real failure case there
> to worry about, does it make sense to implement the exception-handling fix
> we have now, and treat any strange error state that might come up as a
> separate issue to address later, once we have evidence of it?
>
> -Todd
>
>
>>
>>
>>
>> - Gwen
>>
>>
>> -----------------------------------------------------------
>> This is an automatically generated e-mail. To reply, visit:
>> https://reviews.apache.org/r/36664/#review92496
>> -----------------------------------------------------------
>>
>>
>> On July 22, 2015, 5:02 a.m., Jiangjie Qin wrote:
>> >
>> > -----------------------------------------------------------
>> > This is an automatically generated e-mail. To reply, visit:
>> > https://reviews.apache.org/r/36664/
>> > -----------------------------------------------------------
>> >
>> > (Updated July 22, 2015, 5:02 a.m.)
>> >
>> >
>> > Review request for kafka.
>> >
>> >
>> > Bugs: KAFKA-2353
>> >     https://issues.apache.org/jira/browse/KAFKA-2353
>> >
>> >
>> > Repository: kafka
>> >
>> >
>> > Description
>> > -------
>> >
>> > Addressed Gwen's comments
>> >
>> >
>> > Diffs
>> > -----
>> >
>> >   core/src/main/scala/kafka/network/SocketServer.scala
>> > 91319fa010b140cca632e5fa8050509bd2295fc9
>> >
>> > Diff: https://reviews.apache.org/r/36664/diff/
>> >
>> >
>> > Testing
>> > -------
>> >
>> >
>> > Thanks,
>> >
>> > Jiangjie Qin
>> >
>> >
>>
>
