For a leader change yes, but this is partition reassignment which
completes when all the reassigned replicas are in sync with the
original replica(s). You can check the status of the command using the
option I mentioned earlier.
On Tue, Oct 15, 2013 at 7:02 PM, Kane Kane wrote:
> I thought if i h
Oh, I see. What is a better way to initiate the leader change? As I said,
somehow all my partitions have the same leader for some reason. I have 3
brokers and all partitions have their leader on a single one.
On Wed, Oct 16, 2013 at 12:04 AM, Joel Koshy wrote:
> For a leader change yes, but this is par
Did the reassignment complete? If the assigned replicas are in ISR and
the preferred replicas for the partitions are evenly distributed
across the brokers (which seems to be the case on a cursory glance at
your assignment), you can use this tool:
https://cwiki.apache.org/confluence/display/KAFKA/Repli
Yes, thanks, that looks like what I need. Do you know why it tends to
choose the leader for all partitions on a single broker, even though I have 3?
On Wed, Oct 16, 2013 at 12:19 AM, Joel Koshy wrote:
> Did the reassignment complete? If the assigned replicas are in ISR and
> the preferred repli
There is a ticket for auto-rebalancing; hopefully they'll add automatic
redistribution soon:
https://issues.apache.org/jira/browse/KAFKA-930
On Wed, Oct 16, 2013 at 12:29 AM, Kane Kane wrote:
> Yes, thanks, looks like that's what i need, do you know why it tends to
> choose the leader for all partitio
That only controls a time-based flush interval. There seems to be no option
to persist on every message.
On Wed, Oct 16, 2013 at 12:05 AM, Jun Rao wrote:
> In 0.8, we do have "log.flush.interval.ms.per.topic" (see
> http://kafka.apache.org/documentation.html#brokerconfigs for details).
>
> Thanks,
>
> Jun
>
>
> On
For manual offset commits, it will be useful to have some kind of API that
informs the client when a rebalance is going to happen. We can think about
this when we do the client rewrite.
Thanks,
Jun
On Tue, Oct 15, 2013 at 9:21 PM, Jason Rosenberg wrote:
> Jun,
>
> Yes, sorry, I think that was
Make sure that there are no under-replicated partitions (use the
--under-replicated option in the list topic command) before you run that
tool.
Thanks,
Jun
On Wed, Oct 16, 2013 at 12:29 AM, Kane Kane wrote:
> Yes, thanks, looks like that's what i need, do you know why it tends to
> choose the
Thanks for the advice!
On Wed, Oct 16, 2013 at 7:57 AM, Jun Rao wrote:
> Make sure that there are no under-replicated partitions (use the
> --under-replicated option in the list topic command) before you run that
> tool.
>
> Thanks,
>
> Jun
>
>
> On Wed, Oct 16, 2013 at 12:29 AM, Kane Kane wrote:
>
Hello, as I understand it, send is not atomic, i.e. I have something like this
in my code:

import scala.collection.mutable.ArrayBuffer
import kafka.producer.KeyedMessage

val requests = new ArrayBuffer[KeyedMessage[AnyRef, AnyRef]]
for (message <- messages) {
  requests += new KeyedMessage(topic, null, message, message)
}
producer.send(requests: _*) // the Scala producer's send takes varargs
That means ba
Yes, the change in trunk is that all log configurations are automatically
available at both the log level and the global default level and can be set
at topic creation time or changed later without bouncing any servers.
-Jay
On Tue, Oct 15, 2013 at 5:47 PM, Simon Hørup Eskildsen
wrote:
> Do you
That would be great. Additionally, in the new api, it would be awesome to
augment the default auto-commit functionality to allow client code to mark
a message for commit only after processing the message successfully!
On Wed, Oct 16, 2013 at 7:52 AM, Jun Rao wrote:
> For manual offset commits, it w
Hi Kane,
If the producer is async, the send(requests) call will not necessarily
trigger the actual send. The send is triggered either when enough time has
elapsed or when enough messages have been batched on the client side. One
batch of messages to each broker will be
either
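(For reference, a minimal sketch of how such an async 0.8 producer is
typically configured; the broker list, topic and values below are only
illustrative, not from this thread:)

import java.util.Properties
import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

val props = new Properties()
props.put("metadata.broker.list", "broker1:9092,broker2:9092") // illustrative brokers
props.put("serializer.class", "kafka.serializer.StringEncoder")
props.put("producer.type", "async")          // batch on the client side
props.put("queue.buffering.max.ms", "5000")  // send a batch after at most 5s...
props.put("batch.num.messages", "200")       // ...or once 200 messages are queued
props.put("message.send.max.retries", "3")   // failed per-broker requests are retried

val producer = new Producer[String, String](new ProducerConfig(props))
producer.send(new KeyedMessage[String, String]("test-topic", "key", "value"))
producer.close() // drains any queued batch before shutting down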
Hi, so yeah, as I see here:
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=blob;f=core/src/main/scala/kafka/producer/async/DefaultEventHandler.scala;h=c8326a8a991cdfebec0d86003d08ce8d2e2c6986;hb=HEAD#l94
it looks like a batch to a single broker is indeed atomic; what if I have
messages to all brok
The "atomicity" is per broker-request, hence one batch can be distributed
as produce requests to multiple brokers, and if one produce request failed
it will be retried but not the whole batch.
The produce does record which request were successfully sent in the logs,
but not returned in the send()
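As a rough illustration of that behavior (a sketch assuming the 0.8 Scala
producer; the batch-handling policy in the comments is up to the caller):

import kafka.common.FailedToSendMessageException
import kafka.producer.{KeyedMessage, Producer}

def sendBatch(producer: Producer[AnyRef, AnyRef],
              requests: Seq[KeyedMessage[AnyRef, AnyRef]]): Boolean = {
  try {
    producer.send(requests: _*) // fanned out as one produce request per broker
    true
  } catch {
    case _: FailedToSendMessageException =>
      // Some per-broker requests may already have succeeded before this was
      // thrown; the producer logs which ones, but send() does not report them,
      // so the caller can only re-send the whole batch (risking duplicates)
      // or give up.
      false
  }
}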
Btw, after we complete KAFKA-1000 (offset management in Kafka) it
should be reasonable to commit offsets on every message as long as the
optional metadata portion of the offset commit request is small/empty.
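For what it's worth, committing after each message is already possible with
the 0.8 high-level consumer, it is just heavy while offsets live in ZooKeeper.
A rough sketch (the ZooKeeper address, group, topic and the process() step are
placeholders):

import java.util.Properties
import kafka.consumer.{Consumer, ConsumerConfig}

val props = new Properties()
props.put("zookeeper.connect", "localhost:2181") // placeholder ZK address
props.put("group.id", "my-group")                // placeholder consumer group
props.put("auto.commit.enable", "false")         // commit explicitly below

val connector = Consumer.create(new ConsumerConfig(props))
val stream = connector.createMessageStreams(Map("my-topic" -> 1))("my-topic").head

def process(payload: Array[Byte]): Unit = () // placeholder for real processing

for (msgAndMeta <- stream) {
  process(msgAndMeta.message)  // handle the message first...
  connector.commitOffsets      // ...then checkpoint the consumed offset
}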
Thanks,
Joel
On Wed, Oct 16, 2013 at 10:35 AM, Jason Rosenberg wrote:
> That would be
Thanks Jun! The sample json returned from the -help of the script is out of
date.
On Sun, Oct 13, 2013 at 5:10 PM, Jun Rao wrote:
> Are you trying to feed the json file to the --manual-assignment-json-file
> option? If so, you need to specify the replicas (see the description of the
> option fo
This looks great. What is the time frame for this effort?
Jason
On Wed, Oct 16, 2013 at 2:19 PM, Joel Koshy wrote:
> Btw, after we complete KAFKA-1000 (offset management in Kafka) it
> should be reasonable to commit offsets on every message as long as the
> optional metadata portion of the o
Could you try the latest 0.8 branch? I think it's fixed there already.
Thanks,
Jun
On Wed, Oct 16, 2013 at 7:34 PM, Calvin Lei wrote:
> Thanks Jun! The sample json returned from the -help of the script is out of
> date.
>
>
> On Sun, Oct 13, 2013 at 5:10 PM, Jun Rao wrote:
>
> > Are you tryi