A quick search for "kafka s3 consumer" brings up a bunch of GitHub projects.
If you don't like any of them, I would write a Kafka consumer in Java that writes
to S3. Probably less than 200 lines of code.
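A minimal sketch of what such a consumer could look like, assuming the 0.8
high-level consumer API and the AWS SDK for Java (the ZooKeeper address, group
id, topic and bucket names below are just placeholders, not anything from this
thread):

import java.io.ByteArrayInputStream;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ObjectMetadata;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class S3LogConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181");   // placeholder
        props.put("group.id", "s3-log-writer");       // placeholder

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            connector.createMessageStreams(Collections.singletonMap("logs", 1));
        ConsumerIterator<byte[], byte[]> it = streams.get("logs").get(0).iterator();

        AmazonS3 s3 = new AmazonS3Client();  // credentials from the default chain

        while (it.hasNext()) {
            MessageAndMetadata<byte[], byte[]> msg = it.next();
            // One S3 object per message keeps the sketch short; in practice you
            // would batch messages into larger files before uploading.
            String key = "logs/" + msg.partition() + "-" + msg.offset();
            ObjectMetadata meta = new ObjectMetadata();
            meta.setContentLength(msg.message().length);
            s3.putObject("my-log-bucket", key,
                new ByteArrayInputStream(msg.message()), meta);
        }
    }
}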
F.
On 6/25/13 1:50 PM, "Alan Everdeen" wrote:
>In my application, I have logs that are sent as kafka
The exception is likely due to a race condition between the logic in the ZK watcher
and the closing of the ZK connection. It's harmless, except for the weird
exception.
Thanks,
Jun
On Tue, Jun 25, 2013 at 10:07 AM, Hargett, Phil <
phil.harg...@mirror-image.com> wrote:
> Possibly.
>
> I see evidence that i
For that, you should take a look at the controlled shutdown tool:
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#
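If it is the tool I am remembering correctly, the 0.8 invocation is roughly the
following (broker id and ZooKeeper address are placeholders; check the wiki page
above for the exact options):

bin/kafka-run-class.sh kafka.admin.ShutdownBroker --zookeeper zk1:2181 --broker 1

That moves partition leadership off the broker before you stop it, so you can
bounce the brokers one at a time for a rolling upgrade.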
Thanks,
Jun
On Tue, Jun 25, 2013 at 1:27 AM, Ankit Jain wrote:
> Hi All,
>
> I would like to know about the ways to upgrade or apply new patches in
> Kafka wit
In my application, I have logs that are sent as Kafka messages, and I need
a way to save these logs to an S3 bucket using a Kafka sink.
I was wondering what the recommended approach would be to accomplish this
task, assuming I am using Kafka 0.8.
Thank you for your time,
Alan Everdeen
To debug this I would leave only one broker in the list and look at the
broker log on that machine and the producer machine. Make sure that the
log4j config file is in your classpath, otherwise it will not initialize
properly. Did you try having the broker on the same machine with the
producer? Doe
Possibly.
I see evidence that it's being stopped / started every 30 seconds in some cases
(due to my code). It's entirely possible that I have a race, too, in that 2
separate pieces of code could be triggering such a stop / start.
Gives me something to track down. Thank you!!
On Jun 25, 2013,
This typically only happens when the consumerConnector is shut down. Are
you restarting the consumerConnector often?
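If the connector is being recreated in a loop, a sketch of the intended
lifecycle (assuming the 0.8 high-level consumer API) is to build it once and
shut it down exactly once, e.g.:

// Create the ConsumerConnector once for the life of the process.
final ConsumerConnector connector =
    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

// Shut it down exactly once, e.g. from a JVM shutdown hook.
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        connector.shutdown();
    }
});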
Thanks,
Jun
On Tue, Jun 25, 2013 at 9:40 AM, Hargett, Phil <
phil.harg...@mirror-image.com> wrote:
> Seeing this exception a LOT (3-4 times per second, same log topic).
>
> I'm
Do you see any "Failed to send" in WARN? If so, resend could introduce
duplicates. A common cause of "Failed to send" is socket timeout. In the
beta1 release, we have increased the default request timeout from 1.5 sec
to 10 sec.
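If you want to set it explicitly, the producer property involved should be
request.timeout.ms; a sketch (host names and values below are placeholders):

Properties props = new Properties();
props.put("metadata.broker.list", "broker1:9092,broker2:9092");
props.put("request.timeout.ms", "10000");  // produce request timeout in ms
props.put("request.required.acks", "1");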
Thanks,
Jun
On Mon, Jun 24, 2013 at 10:55 PM, Markus Roder wrote:
Seeing this exception a LOT (3-4 times per second, same log topic).
I'm using external code to feed data to about 50 different log topics over a
cluster of 3 Kafka 0.8 brokers. There are 3 ZooKeeper instances as well; all
of this is running on EC2. My application creates a high-level consumer
You should see the cause in WARN. It seems that your log4j is not set up
properly.
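If log4j is not configured at all, a minimal log4j.properties on the client
classpath is usually enough to surface the WARN output, for example:

log4j.rootLogger=WARN, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n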
Thanks,
Jun
On Mon, Jun 24, 2013 at 10:18 PM, Yogesh Sangvikar <
yogesh.sangvi...@gmail.com> wrote:
> Hi Jun,
>
> The stack trace we found is as follow,
>
> log4j:WARN No appenders could be found for logger
> (ka
Hi Team,
It is interesting to note that the same code is working fine with the Kafka
0.8 release (earlier than kafka-0.8.0-beta1-candidate1) with properties like
"broker.list" and "props.put("producer.type", "sync");" OR
"props.put("producer.type", "async");".
I suppose the producer.type=sync is wor
We are able to telnet to each of the Kafka nodes from the producer so it
doesn't appear to be a connectivity issue.
DNVCOML-2D3FFT3:~ uhodgjo$ telnet x.x.x.168 9092
Trying x.x.x.168...
Connected to x.x.x.168.
Escape character is '^]'.
^CConnection closed by foreign host.
DNVCOML-2D3FFT3:~ uhodgjo$
Hi Florin,
I work with Yogesh, so it is interesting that you mention the
'metadata.broker.list' property, as this was the first error message we saw.
Consider the following producer code.
Properties props = new Properties();
props.put("broker.list", "x.x.x.x:9092, x.x.x.x :9092, x.x.x.x :9092,
x.x.x.x
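For what it's worth, in 0.8.0-beta1 the producer expects the property to be
named metadata.broker.list rather than broker.list. A sketch of the beta1-style
config, with the IPs masked the same way and assuming the
kafka.javaapi.producer.Producer:

Properties props = new Properties();
props.put("metadata.broker.list", "x.x.x.x:9092,x.x.x.x:9092,x.x.x.x:9092");
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("request.required.acks", "1");
Producer<String, String> producer =
    new Producer<String, String>(new ProducerConfig(props));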
Hi All,
I would like to know about the ways to upgrade or apply new patches in Kafka
with zero downtime.
Thanks,
Ankit Jain
I got the same error but I think I had a different issue than yours: my code
was written for Kafka 0.7, and when I switched to 0.8 I changed the
"zk.connect" property to "metadata.broker.list" but left it with the same
value (which was of course the ZooKeeper host and port). In other words
a "pilot