I see. Thanks Neha.
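
For anyone landing on this thread later: the per-partition retry logic Neha describes can be sketched as below. This is a minimal, broker-free illustration — the plain map of error codes and the numeric constants stand in for what you would actually read per partition from the 0.8 javaapi FetchResponse (via hasError() and errorCode(topic, partition)); the class and helper names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FetchErrorScan {
    // Numeric values from the Kafka wire protocol's error-code table.
    static final short NO_ERROR = 0;             // ErrorMapping.NoError
    static final short LEADER_NOT_AVAILABLE = 5; // LeaderNotAvailableCode

    // Given the per-partition error codes from one fetch response,
    // collect the partitions whose fetch must be retried (possibly
    // against a new leader after refreshing metadata).
    static List<Integer> partitionsToRetry(Map<Integer, Short> errorByPartition) {
        List<Integer> retry = new ArrayList<>();
        for (Map.Entry<Integer, Short> e : errorByPartition.entrySet()) {
            if (e.getValue() != NO_ERROR) {
                retry.add(e.getKey());
            }
        }
        return retry;
    }

    public static void main(String[] args) {
        Map<Integer, Short> errors = new HashMap<>();
        errors.put(0, NO_ERROR);
        errors.put(1, LEADER_NOT_AVAILABLE); // leader moved; retry this one
        errors.put(2, NO_ERROR);
        System.out.println(partitionsToRetry(errors)); // prints [1]
    }
}
```

The point is that a single response can mix successes and failures, so the scan has to visit every partition rather than check one response-level flag.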

On Thursday, October 24, 2013, Neha Narkhede <neha.narkh...@gmail.com>
wrote:
> That was one of the examples I gave; it doesn't cover all cases. We have
> admin tools that can move the leader away selectively. In that case, the
> partitions would have different error codes.
>
> Thanks,
> Neha
>
>
> On Wed, Oct 23, 2013 at 1:36 AM, xingcan <xingc...@gmail.com> wrote:
>
>> Neha,
>>
>> Thanks for your prompt reply. I have two more questions. As I wrote
>> before, my approach is to add all partitions belonging to the same leader
>> broker to a single request, and then send these requests one by one, one
>> per broker. Is this necessary? And if so, all topics and partitions in one
>> FetchResponse should belong to the same broker. Will that broker going
>> down lead to a partial error, or necessarily a total error, for all topics
>> and partitions?
>>
>>
>>
>> On Tue, Oct 22, 2013 at 10:12 PM, Neha Narkhede <neha.narkh...@gmail.com
>> >wrote:
>>
>> > We need to return an error code per partition since your fetch request
>> > could've succeeded for some but not all partitions. For example, if one
>> > broker fails, some partitions might temporarily return a
>> > LeaderNotAvailable error code. So you have to go through the individual
>> > error codes to know which partitions you need to retry the operation for.
>> >
>> > Thanks,
>> > Neha
>> > On Oct 21, 2013 11:27 PM, "xingcan" <xingc...@gmail.com> wrote:
>> >
>> > > Hi,
>> > >
>> > > After migrating from 0.7.2 to 0.8, I still use SimpleConsumer to
>> > > construct my own consumer. With FetchRequestBuilder, I add all
>> > > partitions belonging to the same broker to a single request and get a
>> > > FetchResponse for all these partitions. However, I find the error code
>> > > in the FetchResponse a little hard to retrieve, since I must iterate
>> > > over all these partitions to check it. Are there any tips or
>> > > suggestions for dealing with this, or could the API provided by
>> > > FetchResponse be changed slightly?
>> > >
>> > > --
>> > > *Xingcan*
>> > >
>> >
>>
>>
>>
>> --
>> *Xingcan*
>>
>

-- 
*Xingcan*
