Are you using a ZK-based producer? If so, those watches could have been left
by the producers if they haven't been restarted. Could you also use zkCli.sh
to see whether the deleted topics are still present in ZK?
Thanks,
Jun
On Fri, Mar 15, 2013 at 2:19 PM, Jason Rosenberg wrote:
> Jun,
>
> So, I connected to zookeeper
Murtaza,
Thanks for sharing. This looks very interesting.
Jun
On Fri, Mar 15, 2013 at 6:41 PM, Murtaza Doctor
wrote:
> One more application of Kafka & our HBase consumer:
> http://hadoopsummit2013.uservoice.com/forums/196821-enterprise-data-architecture/suggestions/3714756-events-to-insights
One more application of Kafka & our HBase consumer:
http://hadoopsummit2013.uservoice.com/forums/196821-enterprise-data-architecture/suggestions/3714756-events-to-insights-in-real-time
Thanks,
murtaza
On 3/11/13 2:17 PM, "Neha Narkhede" wrote:
>http://hadoopsummit2013.uservoice.com/forums/196
Jun,
So, I connected to ZooKeeper just using telnet and the 4-letter commands.
If I do a dump, I do not see anything but valid topics and valid
consumer/owner mappings.
If I check watches, I see all the thousands of bogus topics, e.g.:
wchc:
/brokers/topics/
or
wchp
/bro
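As an illustration (not from the original mail), here is a minimal Java
sketch of issuing one of those 4-letter commands, e.g. wchc, over a raw
socket; host and port are placeholders:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;

public class ZkWatchDump {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; point this at one of your ZK servers.
        try (Socket sock = new Socket("localhost", 2181)) {
            sock.getOutputStream().write("wchc".getBytes("US-ASCII"));
            sock.getOutputStream().flush();
            BufferedReader in = new BufferedReader(
                new InputStreamReader(sock.getInputStream(), "US-ASCII"));
            // ZK replies with the watch dump, then closes the connection.
            for (String line; (line = in.readLine()) != null; )
                System.out.println(line);  // e.g. paths under /brokers/topics/
        }
    }
}

This is equivalent to what telnet does interactively; swap in wchp to group
the dump by path instead of by session.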
You're welcome. Done, patch included in the bug's description:
https://issues.apache.org/jira/browse/KAFKA-809
-Dragos
On 3/15/13 8:12 AM, "Neha Narkhede" wrote:
>Thanks for looking into this, Dragos. We should remove
>"com.github.sgroschupf" % "zkclient" % "0.1" from the build.sbt files
We're successfully using Camus to move data from Kafka 0.7.x into CDH 4.x.
I didn't hit any particular problems getting that to work; I only had to
tweak the pom.xml files.
Craig
On Fri, Mar 15, 2013 at 12:20 PM, Matthew Rathbone
wrote:
> @david, we use a subset of the KafkaETLJob in cdh4 with g
@david, we use a subset of the KafkaETLJob in cdh4 with great success. Just
make sure to compile your MapReduce job against CDH4.
On Thu, Mar 14, 2013 at 10:28 PM, David Arthur wrote:
> I have used KafkaETLJob to write a job that consumes from Kafka and writes
> to HDFS. Kafka version 0.7.2 rc5 and
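To make the "compile against CDH4" advice concrete, a hypothetical pom.xml
fragment (the repository URL is Cloudera's public repo; the version is a
placeholder, pin whatever your cluster actually runs):

<repositories>
  <repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
  </repository>
</repositories>
<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <!-- placeholder CDH4 version -->
    <version>2.0.0-cdh4.2.0</version>
  </dependency>
</dependencies>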
Could you check if the following path for a deleted topic exists in ZK? It
should have no children.
/brokers/topics/[topic]
If this is the case, try manually removing those paths from ZK (when the
brokers and the consumers are down).
Thanks,
Jun
On Thu, Mar 14, 2013 at 2:03 PM, Jason Rosenberg
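A minimal sketch of that check-and-remove step, assuming the org.I0Itec
ZkClient library; the connect string and topic name below are placeholders:

import org.I0Itec.zkclient.ZkClient;

public class RemoveDeletedTopicPath {
    public static void main(String[] args) {
        // Placeholder connect string and topic.
        ZkClient zk = new ZkClient("localhost:2181", 5000);
        String path = "/brokers/topics/my-deleted-topic";
        if (zk.exists(path)) {
            // Should print an empty list for a properly deleted topic.
            System.out.println("children: " + zk.getChildren(path));
            // Only run this while brokers and consumers are down.
            zk.deleteRecursive(path);
        }
        zk.close();
    }
}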
Thanks for looking into this, Dragos. We should remove
"com.github.sgroschupf" % "zkclient" % "0.1" from the build.sbt files.
Would you like to create a JIRA and/or attach a patch?
-Neha
On Thu, Mar 14, 2013 at 5:21 PM, Dragos Manolescu <
dragos.manole...@servicenow.com> wrote:
> I dug i
Are you using the right version of ZkClient? The version of ZkClient used
in Kafka exposes Stat in writeData().
Thanks,
Jun
On Thu, Mar 14, 2013 at 11:20 AM, Dragos Manolescu <
dragos.manole...@servicenow.com> wrote:
> Hi --
>
> For the last couple of days I've been going through the 0.8 code a
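To make the version difference concrete, a hedged sketch: the newer ZkClient
that Kafka 0.8 builds against exposes a write that returns a Stat (the method
name writeDataReturnStat below is an assumption based on the newer releases;
the old sgroschupf 0.1 artifact has no such API):

import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.serialize.BytesPushThroughSerializer;
import org.apache.zookeeper.data.Stat;

public class ZkClientStatCheck {
    public static void main(String[] args) {
        // Placeholder connect string; raw-bytes serializer for simplicity.
        ZkClient zk = new ZkClient("localhost:2181", 5000, 5000,
                new BytesPushThroughSerializer());
        zk.createPersistent("/demo", true);  // ensure the node exists
        // Returns the node's Stat, which the 0.1 zkclient cannot do.
        Stat stat = zk.writeDataReturnStat("/demo", "x".getBytes(), -1);
        System.out.println("version after write: " + stat.getVersion());
        zk.close();
    }
}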
https://issues.apache.org/jira/browse/KAFKA-156 is filed for that feature.
Thanks,
Neha
On Fri, Mar 15, 2013 at 12:55 AM, Ryan Chan wrote:
> Hi,
>
> On Fri, Mar 15, 2013 at 1:18 PM, Neha Narkhede
> wrote:
> > Ryan,
> >
> > Have you tried configuring the num.retries option on the log4j produce
If the consumer's fetch offset is not present on the Kafka server, it will
send back the OffsetOutOfRange error code in the response to the consumer.
Then the consumer can issue an OffsetRequest to the server to get the
earliest/latest offset for the partitions. Once the consumer receives the
Offse
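Sketching what that looks like against the 0.7.x SimpleConsumer Java API
(host, port, topic, and partition below are placeholders):

import kafka.javaapi.consumer.SimpleConsumer;

public class FindValidOffset {
    public static void main(String[] args) {
        // Placeholder broker host/port, socket timeout, and buffer size.
        SimpleConsumer consumer =
                new SimpleConsumer("localhost", 9092, 10000, 64 * 1024);
        // -2L asks for the earliest available offset, -1L for the latest
        // (kafka.api.OffsetRequest.EarliestTime / LatestTime).
        long[] offsets = consumer.getOffsetsBefore("my-topic", 0, -2L, 1);
        System.out.println("earliest valid offset: " + offsets[0]);
        consumer.close();
    }
}

After an OffsetOutOfRange error, restarting the fetch from offsets[0] (or
from the latest offset, if skipping ahead is acceptable) is the usual
recovery.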
I would appreciate it if someone could provide some guidance on how to handle
a consumer offset reset. I know this feature is expected to be baked into
0.8.0 (I'm using 0.7.2). Although I'm in a local development environment, such
an exercise would allow me to understand Kafka better and build a tro
Nevermind, we pushed a config which started storing data to a different
directory.
On Fri, Mar 15, 2013 at 1:06 AM, Premal Shah wrote:
> Hi,
> We changed the default log retention policy from 168 hrs (7 days) to 750
> hrs (approx. a month) and restarted the Kafka servers. After the restart,
> when
Hi,
We changed the default log retention policy from 168 hrs (7 days) to 750 hrs
(approx. a month) and restarted the Kafka servers. After the restart, when
we ran the console consumer with the --from-beginning flag, it did not get
messages from the beginning. Only messages added after the restart wer
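For reference, the broker setting being described (in server.properties; this
is the 0.7/0.8 retention knob):

# retain log segments for ~31 days instead of the default 7
log.retention.hours=750

Note that segments already deleted by the old policy are gone for good;
raising retention only affects data still on disk or written afterwards.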
Hi,
On Fri, Mar 15, 2013 at 1:18 PM, Neha Narkhede wrote:
> Ryan,
>
> Have you tried configuring the num.retries option on the log4j producer?
> When there is a temporary network glitch, it will retry sending the
> messages instead of losing them.
>
Memory is limited and this is our concern. (W
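For context, a minimal 0.7-style Java producer sketch with the num.retries
option Neha mentions (connect string and topic are placeholders):

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.javaapi.producer.ProducerData;
import kafka.producer.ProducerConfig;

public class RetryingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zk.connect", "localhost:2181");  // placeholder
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // Retry sends on transient network glitches instead of dropping.
        props.put("num.retries", "3");
        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        producer.send(new ProducerData<String, String>("my-topic", "hello"));
        producer.close();
    }
}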