Ryan,
Have you tried configuring the num.retries option on the log4j producer?
When there is a temporary network glitch, it will retry sending the
messages instead of losing them.
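For illustration, a minimal 0.7-style producer sketch (the appender drives a
producer like this internally) showing the retry setting; the ZooKeeper address
and topic below are hypothetical, and whether the appender forwards this
property through depends on the version:

  import java.util.Properties
  import kafka.producer.{Producer, ProducerConfig, ProducerData}

  object RetryingProducerSketch {
    def main(args: Array[String]) {
      val props = new Properties()
      props.put("zk.connect", "zk.other-dc.example.com:2181") // hypothetical
      props.put("serializer.class", "kafka.serializer.StringEncoder")
      // Retry transient send failures instead of dropping the messages
      props.put("num.retries", "3")
      val producer = new Producer[String, String](new ProducerConfig(props))
      producer.send(new ProducerData[String, String]("app-logs", "hello"))
      producer.close()
    }
  }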
Thanks,
Neha
On Thursday, March 14, 2013, Ryan Chan wrote:
Hi,
We are currently using the Log4j appender to write to Kafka (on a machine in
another datacenter), but problems occur when the network is interrupted, as
messages are only buffered in memory.
Are there any local forwarder/client or similar solutions that can be
installed on localhost and take care of buffering and forwarding the messages?
I have used KafkaETLJob to write a job that consumes from Kafka and
writes to HDFS. Kafka version 0.7.2 rc5 and CDH 4.1.2.
Is anything in particular not working?
-David
On 3/14/13 5:31 PM, Matt Lieber wrote:
Just curious, were you able to make Camus work with CDH4 then ?
Cheers,
Matt
I got the same problem. After I replaced the zkclient dependency with 0.2,
it compiles.
On 15 Mar 2013, at 8:21 AM, Dragos Manolescu wrote:
I dug into this and found a problem. The kafka build files show
dependencies on two different versions of the zkclient code:
In core/build.sbt:
libraryDependencies ++= Seq(
  "org.apache.zookeeper"  % "zookeeper"   % "3.3.4",
  "com.github.sgroschupf" % "zkclient"    % "0.1",
  "org.xerial.snappy"     % "snappy-java" % "1.0.4.1",
  ...
)
This could be a bug with the topic discovery logic in the wildcard
consumer. Can you please file a bug and attach your consumer logs there?
Thanks,
Neha
On Thu, Mar 14, 2013 at 3:20 PM, Jason Rosenberg wrote:
Yes, your description matches what I did. And the brokers have been
bounced many times since then (they are auto-deployed many times a day,
etc.). And the consumers have also been restarted many times since then.
Could it be related to using the white-list topic selector, etc.?
Let me know if you need more details.
Jason,
Let me see if I understood what you did here. In Kafka 0.7.2, you deleted
the Kafka log files from the server and bounced the broker. This ideally
should've deleted those topics from ZooKeeper; the consumer reads the same
ZooKeeper paths that the broker writes. Doing this should cause rebalancing.
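To double-check, you can list what is still registered; a minimal sketch
using the zkclient library Kafka ships with (the connect string below is
hypothetical):

  import scala.collection.JavaConversions._
  import org.I0Itec.zkclient.ZkClient

  object ListZkTopics {
    def main(args: Array[String]) {
      val zk = new ZkClient("localhost:2181", 30000, 30000)
      try {
        // Brokers register topics here; wildcard topic discovery reads the same path
        zk.getChildren("/brokers/topics").foreach(println)
      } finally {
        zk.close()
      }
    }
  }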
Just curious, were you able to make Camus work with CDH4 then ?
Cheers,
Matt
Also,
I see a bazillion consecutive log lines like this:
2013-03-14 19:54:13,306 INFO [Thread-4] consumer.ConsumerIterator -
Clearing the current data chunk for this consumer iterator
With the same content (not sure how useful that is!).
Jason
On Thu, Mar 14, 2013 at 2:03 PM, Jason Rosenberg wrote:
Hi Neha,
So I did this, but I still see the full list of topics (most of which have
been deleted) in the consumer logs, e.g.:
consumer.ZookeeperConsumerConnector -
samsa-consumer-graphite_alg2.sjc1.square-1363290849309-2816c1cb Topics to
consume = List()
I select topics using the white-list topic filter.
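For reference, roughly how the consumer is wired up (a sketch against the
0.7.2 wildcard API; the group name and regex here are hypothetical):

  import java.util.Properties
  import kafka.consumer.{Consumer, ConsumerConfig, Whitelist}

  object WhitelistConsumerSketch {
    def main(args: Array[String]) {
      val props = new Properties()
      props.put("zk.connect", "localhost:2181")
      props.put("groupid", "samsa-consumer") // 0.7-style property name
      val connector = Consumer.create(new ConsumerConfig(props))
      // Topic discovery for the filter runs against the ZooKeeper state above
      val streams = connector.createMessageStreamsByFilter(new Whitelist("alg.*"), 1)
      // ... iterate streams.head to consume messages from matching topics ...
    }
  }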
Hi Neha --
Thanks for the prompt answer. Yes, that's what I did to open the project in
IntelliJ.
-Dragos
On 3/14/13 11:22 AM, "Neha Narkhede" wrote:
You can do ./sbt gen-idea to build the IntelliJ project .iml files.
Thanks,
Neha
On Thu, Mar 14, 2013 at 11:20 AM, Dragos Manolescu <
dragos.manole...@servicenow.com> wrote:
Hi --
For the last couple of days I've been going through the 0.8 code as I'm porting
some 0.7.2 producers and consumers to the 0.8 API. While sbt compiles the
sources and indicates that 196 tests pass (I use Scala 2.9.2), I haven't been
able to successfully build Kafka in IntelliJ (after generating the project files).
OK, I re-reviewed the Kafka design doc and looked at the topic file mytopic-0.
It definitely isn't 562949953452239 in size (just 293476). Since I am in a
local test configuration, how should I resolve the offset drift and where:
1. In ZK by wiping a snapshot.XXX file? This would also affect anot
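For context, my understanding of the 0.7 layout: segment files under
<log.dir>/<topic>-<partition> are named by the byte offset of their first
message, so the largest valid fetch offset should be the last file's name plus
its length. A sketch (the path below is hypothetical):

  import java.io.File

  object LatestOffset {
    def main(args: Array[String]) {
      val dir = new File("/tmp/kafka-logs/mytopic-0")
      // Zero-padded names sort lexicographically in offset order
      val segments = dir.listFiles.filter(_.getName.endsWith(".kafka")).sortBy(_.getName)
      val last = segments.last
      val latest = last.getName.stripSuffix(".kafka").toLong + last.length
      println("latest valid offset: " + latest)
    }
  }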
If you are never able to commit the offset, it will always try to consume
from the initial fetch offset. Eventually, that offset will be garbage-collected
from the broker. So it will automatically reset its fetch offset to the earliest
or latest offset available on the broker. The choice of resetting to the
earliest or the latest offset is controlled by the consumer's autooffset.reset
property.
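A sketch of that property on the consumer config (0.7 property names; the
group name is hypothetical):

  import java.util.Properties
  import kafka.consumer.ConsumerConfig

  object ResetConfigSketch {
    def main(args: Array[String]) {
      val props = new Properties()
      props.put("zk.connect", "localhost:2181")
      props.put("groupid", "test-group")
      // "smallest" jumps to the earliest available offset on OffsetOutOfRange,
      // "largest" to the latest
      props.put("autooffset.reset", "smallest")
      val config = new ConsumerConfig(props)
    }
  }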
Thanks Jun,
I don't mean to be obtuse, but could you please provide an example? Which
file's size should I check?
Thanks,
Chris
- Original Message -
From: "Jun Rao"
To: users@kafka.apache.org
Sent: Thursday, March 14, 2013 12:18:31 AM
Subject: Re: kafka.common.OffsetOutOfRangeException