The failure could mean that the reassignment is still in progress. If you
have lots of data, it may take some time to move the data to new brokers.
You could observe the max lag in each broker to see how far behind new
replicas are (see http://kafka.apache.org/documentation.html#monitoring).
Thanks,
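For reference, a quick way to see which replicas are still catching up, alongside the
MaxLag MBean mentioned above, is the topics tool's under-replicated report; a minimal
sketch, assuming the 0.8.1 topics tool and with the ZooKeeper address as a placeholder
(the exact MaxLag MBean name varies by broker version, so check the monitoring page):

  ./bin/kafka-topics.sh --zookeeper zkhost:2181 --describe --under-replicated-partitions

Once the new replicas have fully caught up they should drop out of this list and the
reassignment completes.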
Have you looked at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whycan'tmyconsumers/producersconnecttothebrokers
?
Thanks,
Jun
On Mon, Jul 7, 2014 at 9:18 AM, Kalpa 1977 wrote:
> Hi all,
> I am using Kafka 0.8.1.
>
> I have created a simple topic "test" with a single partition and no replication,
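For reference, the FAQ entry above usually comes down to the broker registering a
hostname the remote client cannot resolve; a minimal server.properties sketch, assuming
0.8.1 brokers and with the hostname as a placeholder:

  # advertise an address the remote producer/consumer can actually reach
  advertised.host.name=broker1.example.com
  advertised.port=9092

Restart the broker after changing these so the new address is re-registered in ZooKeeper.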
Most of the testing predates me; however, from archaeological expeditions
into old test servers, it appears we used JBOD configurations at some point in
the past for at least some purposes. I assume that RAID 10 was chosen
specifically for the redundancy (previously, deployments and maintenance
had been muc
Thanks for the updated deck. I had not seen that one yet. I noticed in
the preso you are running RAID 10 in prod. Any thoughts on going JBOD? In
our testing we saw significant performance improvements. This of course
comes with the trade-off of manual recovery steps if a broker fails.
Bert
On Monday, July 7
Using the preferred replica election tool can rebalance leadership, but if the
ISR is empty then the leader is just -1. How can I recover the leader?
Thanks,
Lax
> Date: Mon, 7 Jul 2014 08:06:16 -0700
> Subject: Re: How recover leader when broker restart
> From: wangg...@gmail.com
> To: users@kafka.apache.org
>
> You
The two biggest features in 0.8.2 are Kafka-based offset management and the new
producer. We are in the final stage of testing them. We also haven't fully
tested the delete topic feature. So, we are probably 4-6 weeks away from
releasing 0.8.2.
For KAFKA-1180, the patch hasn't been applied yet and we
When I run the tool with the --verify option, it says failed for some
partitions.
The problem is I do not know if it is a ZooKeeper issue or if the tool
really failed.
I ran into the ZooKeeper issue once
(https://issues.apache.org/jira/browse/KAFKA-1382), and by killing the
responsible Kafka
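For reference, the check in question looks roughly like this, with the ZooKeeper
address and file name as placeholders:

  ./bin/kafka-reassign-partitions.sh --zookeeper zkhost:2181 \
      --reassignment-json-file reassignment.json --verify

If --verify keeps reporting failures long after the data should have moved, comparing
the replica lists in ZooKeeper (e.g. get /brokers/topics/<topic> in the ZooKeeper shell)
against the JSON file helps distinguish a still-running or stuck reassignment from a
tool/ZooKeeper glitch like the one in KAFKA-1382.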
How does it get stuck?
-Clark
Clark Elliott Haskins III
LinkedIn DDS Site Reliability Engineer
Kafka, Zookeeper, Samza SRE
Mobile: 505.385.1484
BlueJeans: https://www.bluejeans.com/chaskins
chask...@linkedin.com
https://www.linkedin.com/in/clarkhaskins
There is no place like 127.0.0.1
On 7/
By setting this property
log.retention.mins=10
in the server.properties file, which is passed as an argument when starting
the broker.
Virendra
On 7/7/14, 3:31 PM, "Guozhang Wang" wrote:
>How do you set the retention.minutes property? Is it through zk-based
>topics tool?
>
>Guozhang
>
>
>On Mon, J
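For reference, retention can be set in two places; a minimal sketch, with the values
and names below as examples only (and note that, if I remember correctly, a 0.8.0
broker only understands log.retention.hours, which matters for the mixed-version
cluster in this thread):

  # broker-wide default, in server.properties on the 0.8.1.x broker
  log.retention.mins=10

  # per-topic override via the ZK-based topics tool (0.8.1.x)
  ./bin/kafka-topics.sh --zookeeper zkhost:2181 --alter --topic mytopic \
      --config retention.ms=600000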
Hi,
I am trying to add new brokers to an existing 8-node Kafka cluster. We
have around 10 topics and the number of partitions is set to 50. In order to
test the reassign-partitions script, I tried the following steps on a
sandbox cluster.
I developed a script which is able to parse the reassignm
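For reference, the stock tool can also generate the plan; a minimal sketch of the
expand flow, with the topic name, broker ids and ZooKeeper address as placeholders:

  # topics-to-move.json: {"version":1,"topics":[{"topic":"mytopic"}]}
  ./bin/kafka-reassign-partitions.sh --zookeeper zkhost:2181 \
      --topics-to-move-json-file topics-to-move.json \
      --broker-list "0,1,2,3,4,5,6,7,8,9" --generate

  # save the proposed assignment it prints as expand.json, then:
  ./bin/kafka-reassign-partitions.sh --zookeeper zkhost:2181 \
      --reassignment-json-file expand.json --execute
  ./bin/kafka-reassign-partitions.sh --zookeeper zkhost:2181 \
      --reassignment-json-file expand.json --verify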
How do you set the retention.minutes property? Is it through zk-based
topics tool?
Guozhang
On Mon, Jul 7, 2014 at 3:07 PM, Virendra Pratap Singh <
vpsi...@yahoo-inc.com.invalid> wrote:
> I am running a mixed cluster as I mentioned earlier. 1 broker 0.8.0 and
> the other 0.8.1.1. Should the ret
I am running a mixed cluster, as I mentioned earlier: one broker on 0.8.0 and
the other on 0.8.1.1. Shouldn't the retention of topics, for the partitions
owned/replicated by the broker running 0.8.1.1, enforce the server.properties
settings as defined for that server?
So this brings an interesting question, in
Hi,
I'm late to the thread... but that "...we intercept log4j..." caught my
attention. Why intercept, especially if it's causing trouble?
Could you use the log4j syslog appender and get logs routed to wherever you
want them via syslog, for example?
Or you can have syslog tail log4j log files (e.g. r
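For reference, a minimal log4j.properties sketch for the syslog route suggested above,
with the host, facility and pattern as placeholders:

  log4j.rootLogger=INFO, SYSLOG
  log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
  log4j.appender.SYSLOG.syslogHost=localhost
  log4j.appender.SYSLOG.facility=LOCAL1
  log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
  log4j.appender.SYSLOG.layout.ConversionPattern=%d{ISO8601} %-5p %c{1}: %m%n

From there syslog (rsyslog, syslog-ng, etc.) can forward the broker logs wherever they
need to go, without intercepting log4j inside the process.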
We plan to have a working prototype ready end of September.
Guozhang
On Mon, Jul 7, 2014 at 11:05 AM, Jason Rosenberg wrote:
> Great, that's reassuring!
>
> What's the time frame for having a more or less stable version to try out?
>
> Jason
>
>
> On Mon, Jul 7, 2014 at 12:59 PM, Guozhang Wang
Great, that's reassuring!
What's the time frame for having a more or less stable version to try out?
Jason
On Mon, Jul 7, 2014 at 12:59 PM, Guozhang Wang wrote:
> I see your point now. The old consumer does have a hard-coded
> "round-robin-per-topic" logic which have this issue. In the new co
You're out of date, Jun. We're up to 20 now :)
Our ops presentation on Kafka is a little more up to date on numbers:
http://www.slideshare.net/ToddPalino/enterprise-kafka-kafka-as-a-service
-Todd
On 7/7/14, 7:21 AM, "Jun Rao" wrote:
>LinkedIn's largest Kafka cluster has 16 nodes now. More det
I see your point now. The old consumer does have a hard-coded
"round-robin-per-topic" logic which have this issue. In the new consumer,
we will make the assignment logic customizable so that people can specify
different rebalance algorithms they like.
Also I will soon send out a new consumer desig
Hi all,
I am using Kafka 0.8.1.
I have created a simple topic "test" with a single partition and no replication;
producer and consumer both work fine running on the same system.
However, if I call from a remote system with the command below:
./bin/kafka-producer-perf-test.sh --broker-lis
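For what it's worth, the usual shape of that command, with the broker host as a
placeholder (and, per the FAQ linked earlier in the thread, remote runs generally also
need the broker to advertise a hostname the remote machine can resolve):

  ./bin/kafka-producer-perf-test.sh --broker-list broker1.example.com:9092 \
      --topics test --messages 100000 --message-size 100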
Guozhang,
I'm not suggesting we parallelize within a partition.
The problem with the current high-level consumer is that if you use a regex to
select multiple topics, and then have multiple consumers in the same group,
usually the first consumer will 'own' all the topics, and no amount of
sub-sequ
Hi Jason,
In the new design the consumption is still at the per-partition
granularity. The main rationale for doing this is ordering: within a
partition we want to preserve the ordering so that a message B produced
after message A will also be consumed and processed after message A. And
producers c
Hello Janos,
The approach we took at LinkedIn is the first option, i.e. using different
clusters in different DCs and mirroring data asynchronously. For the offset
inconsistency issue, our applications usually use the offset request with
the timestamp of when the primary DC went down and conservatively ge
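For reference, a rough sketch of that timestamp-based offset lookup using the stock
tool, with the broker host, topic and timestamp as placeholders (in 0.8 the lookup
resolves to log-segment boundaries, hence the need to be conservative):

  ./bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
      --broker-list broker1.example.com:9092 --topic mytopic \
      --time 1404738000000 --offsets 1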
You can use the preferred leader election tool to move the leadership.
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-2.PreferredReplicaLeaderElectionTool
Guozhang
On Mon, Jul 7, 2014 at 7:56 AM, 鞠大升 wrote:
> you can use the preferred leader election tool
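For reference, a minimal invocation of that tool, with the ZooKeeper address as a
placeholder (without --path-to-json-file it runs the election for all partitions):

  ./bin/kafka-preferred-replica-election.sh --zookeeper zkhost:2181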
you can use the preferred leader election tool to reset leaders to
preferred replicas.
On Jul 7, 2014, at 10:37 PM, "François Langelier" wrote:
> AFAIK, the simplest way will be to shutdown your 2 others brokers after you
> restarted your broker 1, which will force your topics to have your broker 1
> as leader
Dear Kafka Users,
I would like to use Kafka 0.8.x in a multi-cluster environment so that when my
primary cluster fails, producers and consumers could switch to the secondary
cluster. Clusters would be hosted in different data centers.
A possibility would be mirroring topics (similar to Kafka 0.
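For reference, the asynchronous mirroring discussed in this thread is typically done
with the stock MirrorMaker; a minimal sketch, with the config file names and whitelist
as placeholders:

  ./bin/kafka-run-class.sh kafka.tools.MirrorMaker \
      --consumer.config source-cluster-consumer.properties \
      --producer.config target-cluster-producer.properties \
      --whitelist=".*" --num.streams 2

The consumer config points at the source cluster's ZooKeeper and the producer config at
the target cluster's brokers; offsets are not carried over between clusters, which is
where the timestamp-based offset lookup mentioned elsewhere in the thread comes in.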
I've been looking at the new consumer api outlined here:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Consumer+Rewrite+Design
One issue in the current high-level consumer is that it does not do a good
job of distributing a set of topics between multiple consumers, unless each
topic
AFAIK, the simplest way would be to shut down your 2 other brokers after you
restart broker 1, which will force your topics to have broker 1
as leader since it's the only one available, and then restart brokers
2 and 3.
But I can't really see why you want your leaders on broker 1...
What's the status of the 0.8.2 release? We are currently using 0.8.0, and
would like to upgrade to take advantage of some of the per-topic retention
options available now in 0.8.1.
However, we'd also like to take advantage of some fixes coming in 0.8.2
(e.g. deleting topics).
Also, we have been
LinkedIn's largest Kafka cluster has 16 nodes now. More detailed info can
be found in
http://www.slideshare.net/Hadoop_Summit/building-a-realtime-data-pipeline-apache-kafka-at-linkedin?from_search=5
Thanks,
Jun
On Mon, Jul 7, 2014 at 3:33 AM, Ersin Er wrote:
> Hi,
>
> LinkedIn has 8 node Kafk
I have 3 brokers. When I restart broker 1, it cannot become a leader again. I
want to know how I can recover broker 1 as a leader.
thanks,
lax
I mean brokers in particular, but others are also welcome.
On Jul 7, 2014 3:36 PM, "Otis Gospodnetic"
wrote:
> Hi,
>
> I think it depends on what you mean by largest? Most brokers? Producers?
> Consumers? Messages? Bytes?
>
> Otis
> --
> Performance Monitoring * Log Analytics * Search Analytics
Hi,
I think it depends on what you mean by "largest": most brokers? Producers?
Consumers? Messages? Bytes?
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
On Mon, Jul 7, 2014 at 6:33 AM, Ersin Er wrote:
> Hi,
>
> LinkedIn
Hi,
LinkedIn has 8-node Kafka clusters AFAIK, right? I guess there are larger
deployments than LinkedIn's. What are the largest Kafka deployments you
know of? Any public performance and scalability data published for such
clusters?
Any pointers would be interesting and helpful.
Regards,
--
E