Hi guys,
Nowadays, all Kafka administration work (adding and tearing down nodes, topic
management, throughput monitoring) is done by a variety of different tools
talking to the brokers, ZooKeeper, etc. Is there a plan for the core team to
build a central, universal server providing a webservice API for all the admin
work?
Best
That is written up here:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements
Initially there will be a client API for the command line, but since it works
on the wire protocol, anyone can either wrap the API or go directly to the
wire protocol and implement it however they like.
Which Kafka version are you using?
On Fri, Nov 21, 2014 at 5:29 PM, Allen Wang
wrote:
> Yes, we have consumers fetching from the topic at the same time, and we can
> see their BytesPerSec metric. So we find it hard to explain why
> BytesInPerSec would be greater than BytesOutPerSec on the broker.
Hello,
First, is there a limit to how many Kafka brokers you can have?
Second, if a Kafka broker node fails and I start a new broker on a new node, is
it correct to assume that the cluster will copy data to that node to satisfy
the replication factor specified for a given topic?
Hi Casey,
1. There's some limit based on the size of the ZooKeeper nodes; not sure
exactly where it is, though. We've seen 30-node clusters running in production.
2. For your scenario to work, the new broker will need to have the same
broker id as the old one, or you'll need to manually re-assign partitions.
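For the replacement-broker scenario, that means the new node's server.properties reuses the failed broker's id. The values below are illustrative, not taken from any real deployment:

```
# server.properties on the replacement node (values are illustrative)
broker.id=3                  # must match the id of the failed broker
log.dirs=/var/kafka-logs
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
```

Once the broker registers with the same id, replication catches it up on the partitions it previously hosted.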
Gwen,
Thanks.
1. I had a feeling about ZooKeeper being the potential bottleneck, but I
wasn't sure.
2. Good to know.
From: Gwen Shapira [gshap...@cloudera.com]
Sent: Monday, November 24, 2014 2:47 PM
To: users@kafka.apache.org
Subject: Re: Two Kafka Questions
Hello Amazing Kafka Creators & Users,
I have learned and used Kafka in our production system, so you can count my
understanding as intermediate.
With the statement that "Kafka has solved the scalability and availability
needs for a large-scale message publish/subscribe system", I understand
that hav
Hi, Everyone,
I'd like to start a discussion on whether it makes sense to add the
serializer API back to the new Java producer. Currently, the new Java
producer takes a byte array for both the key and the value. While this API
is simple, it pushes the serialization logic into the application. This
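For context, the kind of pluggable serializer the discussion is about could be sketched like this. The interface and class names below are illustrative assumptions, not the actual API being proposed:

```java
import java.nio.charset.StandardCharsets;

// Illustrative sketch only: these names are assumptions, not the
// actual API under discussion on the list.
interface Serializer<T> {
    byte[] serialize(String topic, T data);
}

class StringSerializer implements Serializer<String> {
    @Override
    public byte[] serialize(String topic, String data) {
        // With a byte-array-only producer, every application has to do
        // this conversion itself before calling send().
        return data.getBytes(StandardCharsets.UTF_8);
    }
}
```

Moving this step behind a configurable interface is what "adding the serializer API back" would amount to; applications would then hand the producer typed objects instead of raw bytes.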
Which web console are you using?
Thanks,
Jun
On Fri, Nov 21, 2014 at 8:34 AM, Sa Li wrote:
> Hi, all
>
> I am trying to get the Kafka web console to work, but it seems it only works
> a few hours and fails afterwards; below are the error messages on the screen.
> I am assuming something is wrong with the DB,
Which version of Kafka are you using? Any error in the controller and the
state-change log?
Thanks,
Jun
On Fri, Nov 21, 2014 at 5:59 PM, Shangan Chen
wrote:
> In the initial state all replicas are in the isr list, but sometimes when I
> check the topic state, the replica can never become isr even
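To see which replicas are currently in sync, the topic can be described from the command line; the ZooKeeper address and topic name below are placeholders, and the command is only meaningful against a running cluster:

```
# Shows leader, replica list, and isr list per partition
bin/kafka-topics.sh --describe --zookeeper zk1:2181 --topic my-topic
```

Comparing the Replicas column against the Isr column shows which replicas have fallen behind.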
1. The new producer takes only the new producer configs.
2. There is no longer a pluggable partitioner. By default, if a key is
provided, the producer hashes the bytes to get the partition. There is an
interface for the client to explicitly specify a partition, if it wants to.
3. Currently, the n
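As an illustration of point 2, hash-based partition selection amounts to mapping the key bytes onto a partition index. This sketch uses `Arrays.hashCode` purely for illustration; the producer's actual hash function may differ:

```java
import java.util.Arrays;

// Sketch of default-style partitioning: hash the key bytes and map
// the result onto [0, numPartitions). Arrays.hashCode is only an
// illustrative stand-in for the producer's real hash function.
class PartitionSketch {
    static int partitionFor(byte[] keyBytes, int numPartitions) {
        // Mask the sign bit so the index is always non-negative.
        return (Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;
    }
}
```

The important property is that the mapping is deterministic: the same key always lands on the same partition, which is what gives per-key ordering.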
It seems you hit an exception when instantiating the slf4j logger inside
the producer.
Thanks,
Jun
On Sun, Nov 23, 2014 at 9:29 PM, Haoming Zhang
wrote:
> Hi all,
>
> Basically I used a lot of code from this project,
> https://github.com/stealthly/scala-kafka ; my idea is to send a key/v
Do you see the error message "Too many open files"? If so, it's a sign you
should raise the nofile limit.
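Concretely, raising the limit usually means editing the PAM limits file on the broker host; the user name and values below are illustrative:

```
# /etc/security/limits.conf (user and values are illustrative)
kafka  soft  nofile  65536
kafka  hard  nofile  65536
```

You can check the current limit for the broker's user with `ulimit -n` in that user's shell.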
On Tue, Nov 25, 2014 at 1:26 PM, Jun Rao wrote:
> Which web console are you using?
>
> Thanks,
>
> Jun
>
> On Fri, Nov 21, 2014 at 8:34 AM, Sa Li wrote:
>
> > Hi, all
> >
> > I am trying to get kafka web console work,
All clear, thank you.
I guess an example will be available when the version is released.
Shlomi
On Tue, Nov 25, 2014 at 7:33 AM, Jun Rao wrote:
> 1. The new producer takes only the new producer configs.
>
> 2. There is no longer a pluggable partitioner. By default, if a key is
> provided, the