Thanks Jun.
On Wed, Dec 18, 2013 at 10:47 AM, Jun Rao wrote:
> You can run kafka-producer-perf-test.sh and kafka-consumer-perf-test.sh.
>
> Thanks,
>
> Jun
>
>
> On Tue, Dec 17, 2013 at 8:44 PM, pushkar priyadarshi <
> priyadarshi.push...@gmail.com> wrote:
>
> > I am not able to find run-simulator.sh in 0.8 even after building perf. If
> > this tool has been deprecated, what are other alternatives available now
> > for perf testing?
What's consumer trackingGroup_prod-storm-sup-trk007 doing at the same time?
It's the one that caused the conflict in ZK.
Thanks,
Jun
On Tue, Dec 17, 2013 at 9:19 PM, Drew Goya wrote:
> I explored that possibility but I'm not seeing any ZK session expirations
> in the logs and it doesn't look like these rebalances complete. They fail
> due to conflicts in the zookeeper data.
I explored that possibility but I'm not seeing any ZK session expirations
in the logs and it doesn't look like these rebalances complete.
They fail due to conflicts in the zookeeper data
On Tue, Dec 17, 2013 at 8:53 PM, Jun Rao wrote:
> Have you looked at
>
> > https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyaretheremanyrebalancesinmyconsumerlog
RHEL 6.4 64bit
Java 6u35
On Tue, Dec 17, 2013 at 10:57 PM, Jun Rao wrote:
> Which OS are you on?
>
> Thanks,
>
> Jun
>
>
> On Tue, Dec 17, 2013 at 11:15 AM, Bryan Baugher wrote:
>
> > Hi,
> >
> > We have been trying out the kafka 0.8.0 beta1 for awhile and recently
> > attempted to upgrade to 0.8.0 but noticed that the stop server script
> > doesn't seem to stop the broker anymore.
You can run kafka-producer-perf-test.sh and kafka-consumer-perf-test.sh.
Thanks,
Jun
On Tue, Dec 17, 2013 at 8:44 PM, pushkar priyadarshi <
priyadarshi.push...@gmail.com> wrote:
> I am not able to find run-simulator.sh in 0.8 even after building perf. If
> this tool has been deprecated, what are other alternatives available now
> for perf testing?
What's the replication factor of the topic? Is it larger than 1? You can
find out using the list topic command.
Thanks,
Jun
On Tue, Dec 17, 2013 at 2:39 PM, Francois Langelier <
francois.langel...@mate1inc.com> wrote:
> Hi,
>
> I installed zookeeper and kafka 0.8.0 following the quick start (
>
If a broker never joins an ISR, it could be that the fetcher died
unexpectedly. Did you see any "Error due to " in the log of broker 4?
Another thing to check is the max lag and the per partition lag in jmx.
Thanks,
Jun
On Tue, Dec 17, 2013 at 4:09 PM, Ryan Berdeen wrote:
> Sorry it's taken so long to reply, the issue went away after I reassigned
> partitions. Now it's back.
Actually, hasNext() only returns false when the consumer connector is
shutdown. Typically, you either set consumer.timeout.ms to -1 or a value
larger than 0. If it's set to 0, my guess is that it throws a timeout
exception immediately if there are no more messages.
Thanks,
Jun
On Tue, Dec 17, 201
Did you change fetch.wait.max.ms in the consumer config? If so, did you
make sure that it is smaller than socket.timeout.ms? Also, if you look at
the request log, how long does it take to complete the timed out fetch
request?
Thanks,
Jun
On Tue, Dec 17, 2013 at 2:30 PM, Tom Amon wrote:
> It a
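Jun's check above can be expressed as a small sanity test over the two consumer properties he names. This is a plain-Python sketch, not Kafka code; the keys are the 0.8 consumer config names and the values shown are hypothetical:

```python
# The broker may hold a fetch request open for up to fetch.wait.max.ms while
# waiting for data, so the consumer's socket timeout must be strictly larger,
# or the client will time out before the broker responds.
consumer_config = {
    "fetch.wait.max.ms": 100,      # hypothetical value for illustration
    "socket.timeout.ms": 30000,    # hypothetical value for illustration
}

def check_fetch_timeouts(config):
    fetch_wait = config["fetch.wait.max.ms"]
    socket_timeout = config["socket.timeout.ms"]
    if fetch_wait >= socket_timeout:
        raise ValueError(
            "fetch.wait.max.ms must be smaller than socket.timeout.ms")
    return True

print(check_fetch_timeouts(consumer_config))  # True
```

A config that raises here matches the symptom in this thread: fetches that legitimately wait on the broker get killed by the client's socket timeout.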
Which OS are you on?
Thanks,
Jun
On Tue, Dec 17, 2013 at 11:15 AM, Bryan Baugher wrote:
> Hi,
>
> We have been trying out the kafka 0.8.0 beta1 for awhile and recently
> attempted to upgrade to 0.8.0 but noticed that the stop server script
> doesn't seem to stop the broker anymore. I noticed
Have you looked at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyaretheremanyrebalancesinmyconsumerlog
?
Thanks,
Jun
On Tue, Dec 17, 2013 at 9:24 AM, Drew Goya wrote:
> Hey all,
>
> I've recently been having problems with consumer groups rebalancing. I'm
> using several high level consumers which all belong to the same group.
I am not able to find run-simulator.sh in 0.8 even after building perf. If
this tool has been deprecated, what are other alternatives available now for
perf testing?
Regards,
Pushkar
It is worth mentioning that you can reduce the likelihood of losing messages
by running the controlled shutdown before killing the broker.
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-Whatiscontrolledshutdown?
The connection refused is a bit surprising though. T
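For reference, the controlled-shutdown tool mentioned above is invoked via kafka-run-class.sh. The sketch below only builds the command line; the class name and flags are taken from the replication-tools wiki page linked above, so treat them as assumptions and check them against your Kafka version:

```python
# Build the 0.8 controlled-shutdown command for a broker. This does not run
# anything; it just assembles the argv list. The tool class and flag names
# (kafka.admin.ShutdownBroker, --zookeeper, --broker) are assumptions based
# on the wiki page, not verified against every release.
def controlled_shutdown_command(zookeeper, broker_id):
    return [
        "bin/kafka-run-class.sh", "kafka.admin.ShutdownBroker",
        "--zookeeper", zookeeper,
        "--broker", str(broker_id),
    ]

cmd = controlled_shutdown_command("zkhost:2181", 1)
print(" ".join(cmd))
```

Running this before killing the process lets leadership migrate off the broker first, which is what reduces the chance of message loss.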
Hi Ryan, can you help reproduce the issue on virtual machines? If so, I
added two more brokers (so five in total now) in a vagrant file
https://github.com/stealthly/kafka/tree/0.8_hubspot_testing_1
git clone https://github.com/stealthly/kafka/tree/0.8_hubspot_testing_1
cd 0.8_hubspot_testing_1
If there are no more messages, hasNext will return false instead of throwing
an exception.
Guozhang
On Tue, Dec 17, 2013 at 11:53 AM, Yu, Libo wrote:
> Sorry, a typo. Correct my question. When consumer.timeout.ms is set to 0,
> if there is no message available, hasNext() throws a timeout exception,
> otherwise it returns true. Is that the right behavior?
Hello Francois,
What is the producer ack value in your console producer? If it is equal to
1, then when a leader is down it is possible to lose data, which hence never
gets consumed by the consumer.
Guozhang
On Tue, Dec 17, 2013 at 2:39 PM, Francois Langelier <
francois.langel...@mate1inc.com> wrote:
>
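Guozhang's point is about the 0.8 producer setting request.required.acks. The summary below is a plain-Python illustration of the documented trade-off, not Kafka code; the three ack levels are the 0.8 producer values:

```python
# request.required.acks semantics in the Kafka 0.8 producer (illustration):
#   0  -> producer does not wait for any acknowledgement
#   1  -> leader acks after writing locally; data not yet copied to
#         followers is lost if the leader dies
#  -1  -> leader acks only after all in-sync replicas have the message
ACK_MEANINGS = {
    0: "no acknowledgement; lowest latency, weakest durability",
    1: "leader-only acknowledgement; may lose data on leader failure",
    -1: "acknowledged by all in-sync replicas; strongest durability",
}

def can_lose_acked_messages_on_leader_failure(required_acks):
    """True if a message the producer considers sent can still be lost
    when the leader dies before followers copy it."""
    return required_acks in (0, 1)

print(can_lose_acked_messages_on_leader_failure(1))   # True
print(can_lose_acked_messages_on_leader_failure(-1))  # False
```

So for Francois's test of killing the leader, acks of -1 is the setting that avoids losing produced messages (at the cost of latency).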
That error comes from calling create message streams twice, or from the
container you are running in causing it to be called twice:
https://github.com/apache/kafka/blob/0.8/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala#L133
Don't do that and you won't get the error.
You can do this indirectly by monitoring the avg/max latency of operations
on zookeeper. There is no direct way of measuring the requests/sec to
zookeeper since they don't expose the relevant jmx metrics.
Thanks,
Neha
On Tue, Dec 17, 2013 at 11:13 AM, S Ahmed wrote:
> Interesting, wasn't aware
Sorry it's taken so long to reply, the issue went away after I reassigned
partitions. Now it's back.
I haven't checked JMX, because the brokers and zookeeper have been
reporting the same ISR for several hours.
Some more details:
The cluster/topic has
5 brokers (1, 4, 5, 7, 8)
15 partitions (
Hi,
I installed zookeeper and kafka 0.8.0 following the quick start (
https://kafka.apache.org/documentation.html#quickstart) and when i try to
kill my leader, i got a lot of exception in my producer and consumer
consoles.
Then, after the exceptions stop printing, some of the messages I produce in
It appears that consumers that do not get messages regularly are timing out
every 30 seconds. This interval coincides with the default setting for "
socket.timeout.ms" at the consumer. When the timeout happens it looks like
the broker socket hangs for a few seconds, causing all other connected
cons
Hi,
We have been trying out the kafka 0.8.0 beta1 for awhile and recently
attempted to upgrade to 0.8.0 but noticed that the stop server script
doesn't seem to stop the broker anymore. I noticed here[1] that a commit
was made before the release to change the signal sent to stop the broker
from SIG
Interesting, wasn't aware of that.
Can you comment on how you go about monitoring your ZK cluster in terms of
throughput and if it is reaching its limits? Or is it even possible to do
this?
On Tue, Dec 17, 2013 at 2:01 PM, Benjamin Black wrote:
> ZK was designed from the start as a clustered,
When you say it pauses, do you mean producing and consuming? Can you get
metrics from before that is happening, during, and after?
Could be gc pauses ... are you using this
http://kafka.apache.org/documentation.html#java or defaults?
/***
Joe Stein
Founde
There are no compatibility issues. You can roll upgrades through the
cluster one node at a time.
Thanks
Neha
On Tue, Dec 17, 2013 at 9:15 AM, Drew Goya wrote:
> So I'm going to be going through the process of upgrading a cluster from
> 0.8.0 to the trunk (0.8.1).
>
> I'm going to be expanding this cluster several times and the problems with
> reassigning partitions in 0.8.0 mean I have to move to trunk (0.8.1) asap.
Sorry, a typo. Correct my question: when consumer.timeout.ms is set to 0,
if there is no message available, hasNext() throws a timeout exception;
otherwise it returns true.
Is that the right behavior?
Regards,
Libo
-Original Message-
From: Jun Rao [mailto:jun...@gmail.com]
Sent: T
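The consumer.timeout.ms behavior being debated in this thread can be modeled in a few lines. This is plain Python mimicking the documented 0.8 ConsumerIterator contract, not the Kafka consumer itself: -1 blocks until a message arrives, while a value >= 0 raises a timeout error if no message shows up within that many milliseconds.

```python
import queue

class ConsumerTimeout(Exception):
    """Stands in for Kafka's ConsumerTimeoutException in this toy model."""

class ToyIterator:
    """Toy model of the 0.8 ConsumerIterator.hasNext() contract."""
    def __init__(self, messages, timeout_ms):
        self._queue = queue.Queue()
        for m in messages:
            self._queue.put(m)
        self._timeout_ms = timeout_ms
        self._peeked = None

    def has_next(self):
        if self._peeked is not None:
            return True
        try:
            if self._timeout_ms < 0:
                self._peeked = self._queue.get()  # -1: block until a message arrives
            else:
                # >= 0: wait at most timeout_ms, then raise instead of returning False
                self._peeked = self._queue.get(timeout=self._timeout_ms / 1000.0)
            return True
        except queue.Empty:
            raise ConsumerTimeout("no message within %d ms" % self._timeout_ms)

it = ToyIterator(["m1"], timeout_ms=0)
print(it.has_next())  # True: a message is available
try:
    ToyIterator([], timeout_ms=0).has_next()
except ConsumerTimeout:
    print("timeout")  # no message -> timeout exception, not False
```

Under this model, hasNext() never returns false on an empty stream while the connector is running, matching Jun's description that false only happens at shutdown.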
I'm on Kafka 0.8 final. Both brokers are up. The behavior is my producer
produces messages just fine, then it pauses for a few seconds. Then it
continues. The brokers are not stopping and starting. The broker logs show
that another producer/consumer has a connection error at the same time my
produc
ZK was designed from the start as a clustered, consistent, highly available
store for this sort of data and it works extremely well. Redis wasn't and I
don't know anyone using Redis in production, including me, who doesn't have
stories of Redis losing data. I'm sticking with ZK.
On Tue, Dec 17, 2
I am leaning towards using redis to track consumer offsets etc., but I see
how using zookeeper makes sense since it is already part of the kafka infra.
One thing which bothers me is, how are you guys keeping track of the load
on zookeeper? How do you get an idea when your zookeeper cluster is
underp
Hi Bryan,
For the release, the broker is only built against Scala 2.8.0; the primary
purpose of the bin distro is the broker. Consumers & producers are all API /
library code / wire protocol, which is supported in Scala 2.8.0, 2.9.1, 2.9.2
and 2.10, along with Java from the A
Ok makes sense, thank you!
On Tue, Dec 17, 2013 at 12:16 PM, Joe Stein wrote:
> Hi Bryan,
>
> The broker is meant to only be built for 0.8.0 in 2.8.0 Scala version for
> release. The primary purpose of the bin distro is the broker. Consumers &
> Producers are all API / library code / wire pr
Hi everyone,
So I see in maven[1] there are a number of options for Kafka 0.8.0 with
different scala versions, but on the downloads page[2] and here[3] I only
see scala 2.8.0 available for binary (deploy/install) download. Will other
binary downloads be made available?
[1] - http://mvnrepository.
Hello,
This issue is known as in this JIRA:
https://issues.apache.org/jira/browse/KAFKA-1067
Guozhang
On Tue, Dec 17, 2013 at 8:48 AM, Gerrit Jansen van Vuuren <
gerrit...@gmail.com> wrote:
> hi,
>
> I've had the same issue with the kafka producer.
>
> you need to use a different partitioner
Hey all,
I've recently been having problems with consumer groups rebalancing. I'm
using several high level consumers which all belong to the same group.
Occasionally one or two of them will get stuck in a rebalance loop. They
attempt to rebalance, but the partitions they try to claim are owned.
So I'm going to be going through the process of upgrading a cluster from
0.8.0 to the trunk (0.8.1).
I'm going to be expanding this cluster several times and the problems with
reassigning partitions in 0.8.0 mean I have to move to trunk (0.8.1) asap.
Will it be safe to roll upgrades through the cl
hi,
I've had the same issue with the kafka producer.
you need to use a different partitioner than the default one provided for
kafka.
I've created a round robin partitioner that works well for equally
distributing data across partitions.
https://github.com/gerritjvv/pseidon/blob/master/pseidon-k
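The idea behind Gerrit's round-robin partitioner can be sketched standalone. This is plain Python illustrating the technique, not the linked Pseidon code: instead of hashing a possibly-null key, keep a counter and cycle through the partitions so data spreads evenly.

```python
import itertools

class RoundRobinPartitioner:
    """Cycle through partition ids regardless of the message key, so data
    is spread evenly even when keys are null or badly skewed."""
    def __init__(self):
        self._counter = itertools.count()

    def partition(self, key, num_partitions):
        # The key is ignored on purpose; next(counter) % n cycles 0..n-1.
        return next(self._counter) % num_partitions

p = RoundRobinPartitioner()
print([p.partition(None, 3) for _ in range(6)])  # [0, 1, 2, 0, 1, 2]
```

Note this deliberately gives up key-based locality: messages with the same key land on different partitions, which is fine for the even-distribution goal discussed here.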
Hi All,
We have a Kafka cluster of 2 nodes (using the 0.8.0 final release).
Replication Factor: 2
Number of partitions: 2
I have created a topic "test-topic1" in Kafka.
When I list the status of that topic using bin/kafka-list-topic.sh, the
status is:
topic: test-topic1  partition: 0  lea
Hi Jason, I just replied on the ticket. Whether it ends up as an update to
create a new filter or as a bug fix, the process is the same.
Can you post some code to help reproduce the problem, so we can compare
apples to apples? Thanks!
/***
Joe Stein
Founder, Principal Consultant
Big Dat
Thanks for sharing.
Best Regards
Jerry
-Original Message-
From: "Jay Kreps"
To: "users@kafka.apache.org";
;
Cc:
Sent: 2013-12-17 (Tuesday) 06:00:17
Subject: Logs and distributed systems
For anyone that's interes