The log cleaner only runs every log.cleanup.interval.mins and the minimum
value is 1 minute. So, if you are completely out of space, this won't work.
You can bring down the server, delete some old segments and restart the
server. However, if the server is not out of space yet, you can reduce
log.re
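Assuming the setting cut off above is one of the log.retention.* configs, a minimal server.properties sketch (values here are placeholders, not recommendations):

# server.properties -- lower these to expire segments sooner
log.retention.hours=24
# or bound retention by size per partition instead:
log.retention.bytes=1073741824
# how often the cleaner wakes up (minimum 1, as noted above):
log.cleanup.interval.mins=1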
The lock statement at the Log.scala level is required to make
truncation/deletion thread-safe. My understanding of AtomicReference vs
volatile is that the former is useful if you want to use compound actions
like getAndSet/compareAndSet. Since we don't use those, using volatile
might suffice.
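A minimal illustration of that distinction (a hypothetical field, not the actual Log.scala code):

import java.util.concurrent.atomic.AtomicReference;

class Sketch {
    // volatile is enough when every write is independent of the current value:
    private volatile String state = "active";

    void close() {
        state = "closed";  // plain write; immediately visible to all readers
    }

    // AtomicReference earns its keep only for compound read-modify-write actions:
    private final AtomicReference<String> stateRef = new AtomicReference<String>("active");

    boolean closeIfActive() {
        // atomic check-then-act; a volatile field cannot do this without a lock
        return stateRef.compareAndSet("active", "closed");
    }
}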
Thanks,
You need to set up the zookeeper service as a "quorum" across Z1, Z2, Z3
and have the brokers and consumers connect to the resulting zookeeper
cluster.
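For example (hostnames are placeholders), every broker and consumer would point at the same connect string:

# goes in server.properties on the brokers and in the consumer's config alike:
zookeeper.connect=Z1:2181,Z2:2181,Z3:2181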
Thanks,
Neha
On Mon, Nov 25, 2013 at 1:24 AM, Arjun wrote:
> Hi,
>
> I am new to Kafka. Was looking at kafka 0.8. What we need is a system
>
Actually, I saw this line in the log: can't rebalance after 4 retries.
What should I expect in this case? Did all consumer threads fail, or only
some of them?
If I increase the number of retries or delay between retries, will that help?
Regards,
Libo
-----Original Message-----
From: Jun Rao [ma
Thanks for confirming that, Jun.
Regards,
Libo
-----Original Message-----
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Friday, November 29, 2013 8:28 PM
To: users@kafka.apache.org
Subject: Re: upgrade from 0.8-beta1 to 0.8
You should be able to do an in-place upgrade.
Thanks,
Jun
On Fri,
Is the failure on the last rebalance? If so, some partitions will not have
any consumers. A common reason for rebalance failure is a conflict in
partition ownership among different consumers in the same group.
Increasing the number of retries and the amount of backoff time between
retries should help.
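If it helps to see them in context, these are the consumer settings in question (hosts, group, and values here are illustrative, not recommendations):

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

Properties props = new Properties();
props.put("zookeeper.connect", "zk1:2181");  // placeholder host
props.put("group.id", "my-group");           // placeholder group
props.put("rebalance.max.retries", "10");    // raise from the default of 4
props.put("rebalance.backoff.ms", "5000");   // wait longer between attempts
ConsumerConnector consumer =
    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));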
Hi,
Using 0.8 beta1, how can I go about changing the replication factor for a
topic that already exists?
Thanks,
Ryan
Currently, we haven't added that feature to 0.8 or 0.8.1, but most probably
it will be added to 0.8.1, which is scheduled for release early next year.
Thanks,
Neha
On Mon, Dec 2, 2013 at 10:28 AM, Ryan Berdeen wrote:
> Hi,
>
> Using 0.8 beta1, how can I go about changing the replication factor for a
> to
Thanks for your insights, Jun. That is really helpful. I forgot to mention the
cause of the issue in my previous email. We have three brokers. I noticed from
the log that all three brokers re-registered themselves with ZooKeeper.
That means all of them were somehow offline for a short time and then
aut
Hi,
Last week we set up a new Kafka 0.8 cluster, using the beta1 release
available here: http://kafka.apache.org/downloads.html
It worked fine until we tried to replace a node in this cluster.
We shut down a node, then brought up a new one. The new node is registered in
zookeeper, but it doesn't g
You can keep the broker.id of the new node same as the old node. Then it
will start up and copy everything from the leader for the partitions it is
assigned to. After it is caught up, you can run the
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-2.PreferredRep
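Spelled out (the broker id and ZooKeeper address below are placeholders):

# server.properties on the replacement node -- reuse the old node's id
broker.id=3

# once it has caught up, move leadership back with the tool from that page:
bin/kafka-preferred-replica-election.sh --zookeeper zk1:2181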
Hello - I am trying to understand my trouble passing an Avro message through
Kafka (0.8)
From what I see, the class tries to create an instance of the encoder but
fails as it cannot find the constructor, although it is there.
Here's the code and subsequent error. Appreciate any help!
Thank y
Ok, thanks.
And now, let's say that I want to add two new nodes to increase the
capacity of our cluster. Is that possible with the current beta1 release?
2013/12/2 Neha Narkhede
> You can keep the broker.id of the new node same as the old node. Then it
> will start up and copy everything from
We fixed a few bugs in the reassignment logic and the 0.8 final release is
in progress. Recommend you use 0.8 final and see how that goes. We are
fixing it further in 0.8.1, so if you feel adventurous you can try using
trunk :)
Thanks
Neha
On Mon, Dec 2, 2013 at 2:12 PM, Maxime Nay wrote:
> Ok
Hi Brendan,
I would try using the built-in DefaultEncoder rather than your custom
encoder class, and setting:
props.put("serializer.class", "kafka.serializer.DefaultEncoder");
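That way the producer just passes bytes through and the Avro encoding happens in your own code. A minimal sketch of the whole path (broker list, topic name, and schema are placeholders):

import java.io.ByteArrayOutputStream;
import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

Properties props = new Properties();
props.put("metadata.broker.list", "broker1:9092");  // placeholder
props.put("serializer.class", "kafka.serializer.DefaultEncoder");
Producer<String, byte[]> producer =
    new Producer<String, byte[]>(new ProducerConfig(props));

// build a record (this schema is a stand-in for your real one)
Schema schema = new Schema.Parser().parse(
    "{\"type\":\"record\",\"name\":\"Msg\",\"fields\":" +
    "[{\"name\":\"body\",\"type\":\"string\"}]}");
GenericRecord record = new GenericData.Record(schema);
record.put("body", "hello");

// do the Avro encoding yourself, then hand Kafka the raw bytes
ByteArrayOutputStream out = new ByteArrayOutputStream();
GenericDatumWriter<GenericRecord> writer =
    new GenericDatumWriter<GenericRecord>(schema);
BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
writer.write(record, encoder);
encoder.flush();

producer.send(new KeyedMessage<String, byte[]>("some-topic", out.toByteArray()));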
Cheers
Rob.
On 2 December 2013 22:01, Brenden Cobb wrote:
> Hello - I am trying to understand my trouble passing an Avro message
> through Kafka (0.8)
> From what
So, just to make sure (since switching to another release might be painful):
- There is no way to do that using the beta1 release?
- By 0.8 final, you mean the code in this branch :
https://github.com/apache/kafka/tree/0.8 ?
Thanks
Maxime
2013/12/2 Neha Narkhede
> We fixed a few bugs in the
Interesting. So Twitter Storm is used to basically process the messages on
Kafka? I'll have to read up on Storm because I always thought the use case
was a bit different.
On Sun, Dec 1, 2013 at 9:59 PM, Joe Stein wrote:
> Awesome Philip, thanks for sharing!
>
> On Sun, Dec 1, 2013 at 9:17 PM, P
S Ahmed,
This combination of Kafka and Storm to process streaming data is becoming
pretty common. Definitely worth looking at.
The throughput will vary depending on your workload (cpu usage, etc.) and
if you're talking to a backend, of course. But it scales very well.
-Suren
On Mon, Dec 2, 20
Rob- Thanks so much. I've made progress with your suggestions. Hopefully
things will go more smoothly now :)
-Brenden
On 12/2/13 5:28 PM, "Robert Turner" wrote:
>Hi Brendan,
>
>I would try using the built-in DefaultEncoder rather than your custom
>encoder class, and setting:
>
>props.put("serializer.class", "kafka.serializer.DefaultEncoder");
The default value for "controlled.shutdown.enable" is false.
Does that mean that stopping a broker without a controlled shutdown, e.g. with
a "kill -9", might lead to an "UnderReplicatedPartitions" state?
Using "controlled shut down" means Kafka will try first to migrate the
partition leaderships from the broker being shut down before really shut it
down so that the partitions will not be unavailable. Disabling it would
mean that during the time when the broker is done until the controller
noticed i
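For reference, enabling it is a one-line broker setting (a server.properties sketch):

# server.properties
controlled.shutdown.enable=true
# related knobs: controlled.shutdown.max.retries, controlled.shutdown.retry.backoff.ms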
Philip, this is definitely useful.
> On Dec 2, 2013, at 14:55, Surendranauth Hiraman
> wrote:
>
> S Ahmed,
>
> This combination of Kafka and Storm to process streaming data is becoming
> pretty common. Definitely worth looking at.
>
> The throughput will vary depending on your workload (cpu usa
rebalance.backoff.ms
Thanks,
Jun
On Mon, Dec 2, 2013 at 11:31 AM, Yu, Libo wrote:
> Thanks for your insights, Jun. That is really helpful. I forgot to mention
> the cause of the issue in my previous
> Email. We have three brokers. I notice from the log that all three brokers
> re-registered t