Delete topic doesn't work yet. We plan to fix it in trunk.
Thanks,
Jun
On Fri, Nov 8, 2013 at 6:30 PM, hsy...@gmail.com wrote:
I mean, I assume the messages not yet consumed before delete-topic will be
delivered before I create the same topic again, correct?
On Fri, Nov 8, 2013 at 6:30 PM, hsy...@gmail.com wrote:
It's in the branch, cool, I'll wait for its release. Actually, I find I can
use ./kafka-delete-topic.sh and ./kafka-create-topic.sh with the same topic
name and keep the broker running. It's interesting that delete topic doesn't
actually remove the data from the brokers. So what I understand is as long
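For reference, the sequence being described might look like the following (script names as given in this thread; the option flags are assumptions and may differ across Kafka versions):

```shell
# Delete then re-create the same topic while the broker keeps running
# (script names from the thread; option flags are assumed, not verified).
./kafka-delete-topic.sh --topic mytopic --zookeeper localhost:2181
./kafka-create-topic.sh --topic mytopic --partition 1 --zookeeper localhost:2181
```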
Hello,
Please check the add-partition tool:
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-5.AddPartitionTool
Guozhang
On Fri, Nov 8, 2013 at 5:32 PM, hsy...@gmail.com wrote:
Hi guys, since Kafka is able to add a new broker into the cluster at runtime,
I'm wondering: is there a way to add a new partition for a specific topic at
runtime as well? If not, what would you do if you wanted to add more
partitions to a topic? Thanks!
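One reason partition counts matter to existing data: with a hash-style partitioner, a keyed message lands on hash(key) % numPartitions, so growing the partition count can remap existing keys. A minimal sketch (the hash function here is made up for illustration and is NOT Kafka's actual partitioner):

```python
# Illustrative only: a toy stand-in for a hash partitioner, showing
# how adding partitions can move keys to different partitions.

def pick_partition(key: str, num_partitions: int) -> int:
    # Toy hash: sum of character codes (Kafka's real hashing differs).
    return sum(ord(c) for c in key) % num_partitions

keys = ["user-1", "user-2", "user-3"]
before = {k: pick_partition(k, 3) for k in keys}  # cluster with 3 partitions
after = {k: pick_partition(k, 4) for k in keys}   # after adding a partition

moved = [k for k in keys if before[k] != after[k]]
print(moved)  # prints ['user-3']
```

This is why adding partitions to a topic with keyed messages needs care: any per-key ordering guarantee only holds within a partition.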
Actually from your original mail you do seem to have logs (somewhere -
either in a file or stdout). Do you see zookeeper session expirations
in there prior to the rebalances?
On Fri, Nov 08, 2013 at 04:11:15PM -0500, Ahmed H. wrote:
So I think this was a copy-paste error where the quote symbol (") was being
pasted as a different character, which messed up the JSON file used for the
assignment. After trying to replicate this issue, I was unsuccessful unless I
had the bad character in place.
I had some trouble with Maven dependencies when I tried to get a simple
round-trip test going. I worked past those and made my test available
here: https://github.com/buildlackey/cep/tree/master/kafka
It should run out of the box.
-cb
On Thu, Nov 7, 2013 at 5:35 PM, S L wrote:
Thanks for the input. Yes that directory is open for all users (rwx).
I don't think that the lack of logging is related to my consumer dying, but
it doesn't help that I have no logs when trying to debug.
I am struggling to find a reason behind this. I deployed the same code, and
same version of K
Marc - thanks again for doing this. Couple of suggestions:
- I would suggest removing the disclaimer and email quotes since this
can become a stand-alone clean document on what the purgatory is and
how it works.
- A diagram would be helpful - it could say, show the watcher map and
the expir
Do you have write permissions in /kafka-log4j? Your logs should be
going there (at least per your log4j config) - and you may want to use
a different log4j config for your consumer so it doesn't collide with
the broker's.
I doubt the consumer thread dying issue is related to yours - again,
logs wo
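As a sketch of what a separate consumer log4j config might look like (file and path names here are hypothetical; adjust them for your setup so the consumer doesn't write to the broker's log files):

```properties
# hypothetical consumer-log4j.properties (log4j 1.x)
log4j.rootLogger=INFO, consumerFile
log4j.appender.consumerFile=org.apache.log4j.FileAppender
log4j.appender.consumerFile.File=/kafka-log4j/consumer.log
log4j.appender.consumerFile.layout=org.apache.log4j.PatternLayout
log4j.appender.consumerFile.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
```

Point the consumer at it with -Dlog4j.configuration=file:consumer-log4j.properties on its JVM command line.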
copy-jars.sh did not copy the hadoop consumer and kafka jars. I have copied
them to HDFS manually, but I am still getting the same error.
Looks like I have all the reqd jars now:
[root@idh251-0 test]# hadoop fs -ls /tmp/kafka/lib
Warning: $HADOOP_HOME is deprecated.
Found 7 items
-rw-r--r-- 3 root
Hello,
I am using the beta right now.
I'm not sure if it's GC or something else at this point. To be honest I've
never really fiddled with any GC settings before. The system can run for as
long as a day without failing, or as little as a few hours. The lack of
pattern makes it a little harder to
Thanks Marc! I will also go through it and suggest some edits today.
Guozhang
On Fri, Nov 8, 2013 at 7:50 AM, Marc Labbe wrote:
Still get the same error:
[root@idh251-0 hadoop-consumer]# ./run-class.sh
kafka.etl.impl.SimpleKafkaETLJob test/test.properties
:./../../core/target/scala_2.8.0/kafka-*.jar:./../../contrib/hadoop-consumer/lib_managed/scala_2.8.0/compile/*.jar:./../../contrib/hadoop-consumer/target/scala_2.8.0/*.jar
Ok, sorry, missed a step, ran the copy_jars.sh and now retrying
On Fri, Nov 8, 2013 at 8:55 AM, Abhi Basu <9000r...@gmail.com> wrote:
Hi Neha:
I was following the directions outlined here -
https://github.com/apache/kafka/tree/0.8/contrib/hadoop-consumer. It does
not mention anything about registering jars. Can you please provide more
details?
Thanks,
Abhi
On Fri, Nov 8, 2013 at 8:48 AM, Neha Narkhede wrote:
ClassNotFound means the Hadoop job is not able to find the related jar.
Have you made sure the related jars are registered in the distributed cache?
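One way to register the jars is Hadoop's standard -libjars mechanism; a sketch (jar names and paths are hypothetical, and -libjars only takes effect if the job parses its arguments via Hadoop's ToolRunner/GenericOptionsParser):

```shell
# Hypothetical jar names/paths; adjust for your build output.
hadoop fs -mkdir /tmp/kafka/lib
hadoop fs -put kafka-0.8.jar hadoop-consumer.jar /tmp/kafka/lib/
hadoop jar hadoop-consumer.jar kafka.etl.impl.SimpleKafkaETLJob \
  -libjars kafka-0.8.jar test/test.properties
```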
On Fri, Nov 8, 2013 at 8:40 AM, Abhi Basu <9000r...@gmail.com> wrote:
Can anyone help me with this issue? I feel like I am very close and am
probably making some silly config error.
Kafka team, please provide more detailed notes on how to make this
component work.
Thanks.
On Fri, Nov 8, 2013 at 5:23 AM, Abhi Basu <9000r...@gmail.com> wrote:
Currently, the only way to send compressed data to Kafka is by enabling
compression on the producer side. To move compression to the server side, we
have filed https://issues.apache.org/jira/browse/KAFKA-595
Thanks,
Neha
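For reference, a producer-side sketch of the 0.8 compression properties (the property names are from the 0.8 producer config; the topic names are examples):

```properties
# 0.8 producer config sketch: compress messages with GZIP
compression.codec=gzip
# optionally restrict compression to specific topics (example names)
compressed.topics=topicA,topicB
```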
On Fri, Nov 8, 2013 at 8:23 AM, arathi maddula wrote:
Hi,
We have a cluster of Kafka servers. We want the data of all topics on these
servers to be compressed. Is there some configuration to achieve this?
I was able to compress data by using the compression.codec property in
ProducerConfig in the Kafka producer.
But I wanted to know if there is a way of enablin
So, what would that offset commit API be in the 0.8 version? Are there any
docs about it?
On Thu, Nov 7, 2013 at 10:56 PM, 小宇 wrote:
> offset commit API could solve your problem, it's for 0.8 version.
>
> ---Sent from Boxer | http://getboxer.com
>
> Thanks Neha! I guess auto-commit it is for no
Thx for the feedback. It is true I never mention anything about the impact on
users or the fact that this is mostly internal business in Kafka. I will try
to rephrase some of this.
Marc
On Nov 8, 2013 10:10 AM, "Yu, Libo" wrote:
I read it and tried to understand it. It would be great to add a summary
at the beginning about what it is and how it may impact a user.
Regards,
Libo
-Original Message-
From: Joel Koshy [mailto:jjkosh...@gmail.com]
Sent: Friday, November 08, 2013 2:01 AM
To: users@kafka.apache.org
Thanks for your reply, Joel.
Regards,
Libo
-Original Message-
From: Joel Koshy [mailto:jjkosh...@gmail.com]
Sent: Thursday, November 07, 2013 5:00 PM
To: users@kafka.apache.org
Subject: Re: add partition tool in 0.8
>
> kafka-add-partitions.sh is in 0.8 but not in 0.8-beta1. Therefor
Thanks. Good to know that it will come with the formal 0.8.0 release soon.
I am really looking for the public Maven repo for porting Spark onto it
instead of running a local version ;) Probably I can make do with a beta1
one first.
Best Regards,
Raymond Liu
-Original Message-
From: Joe Stein
The SimpleKafkaETLJob class, as mentioned in the post.
Thanks
Abhi
From Samsung Galaxy S4
On Nov 7, 2013 8:34 PM, "Jun Rao" wrote:
> Which class is not found?
>
> Thanks,
>
> Jun
>
>
> On Thu, Nov 7, 2013 at 11:56 AM, Abhi Basu <9000r...@gmail.com> wrote:
>
> > Let me describe my environment. Wo
0.8.0 is in the process of being released, and when that is done, Scala 2.10
builds will be in Maven Central.
Until then you can do
./sbt "++2.10 publish-local"
from a checkout of the Kafka source, as Victor just said, yup.
You will be prompted to sign the jars, which you can do with a PGP key, or
remove the pg
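Putting the steps above together, a local-publish sketch (the repo URL is an assumption from the Apache git hosting of that era; check http://kafka.apache.org/code.html for the current checkout instructions):

```shell
# Check out the 0.8 source and publish a Scala 2.10 build locally.
git clone https://git-wip-us.apache.org/repos/asf/kafka.git
cd kafka
git checkout 0.8
./sbt update
./sbt "++2.10 publish-local"   # prompts to sign the jars (PGP key)
```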
Are you looking for maven repo?
You can always checkout sources from http://kafka.apache.org/code.html
and build it yourself.
2013/11/8 Liu, Raymond :
> If I want to use kafka_2.10 0.8.0-beta1, which repo should I go to? It
> seems the Apache repo doesn't have it. While there are com.sksamuel.kafka and