Jay,
What is the plan for applying this patch? I wanted to use this feature.
On Tue, Mar 3, 2015 at 11:22 PM, Jay Kreps wrote:
> Broker replication is available now and fully documented in the docs. This
> approach to availability has a lot of advantages discussed in that ticket
> and the one below. P
SPM only works for Java consumers or, I guess, consumers using the built-in
offset management in Kafka.
On Tue, Mar 17, 2015 at 11:44 PM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:
> Mathias,
>
> SPM for Kafka will give you Consumer Offsets by Host, Consumer Id, Topic,
> and Partition, a
I have a .tgz file. I want to extract it, read the contents of each file,
and load them into Kafka using Java.
Can anybody help me with this?
--
Thanks,
Kishore.
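For the extraction half of the question, here is a minimal sketch in plain Java (no external tar library). The `TgzReader` class name and `readTarGz` helper are hypothetical; it handles only simple ustar entries, and the Kafka `producer.send` step is only indicated in a comment since it needs a running broker. A real tool would more likely use Apache commons-compress.

```java
import java.io.*;
import java.util.*;
import java.util.zip.GZIPInputStream;

public class TgzReader {
    // Reads a .tgz (gzip-compressed tar) stream and returns file name -> contents.
    // Tar entries are a 512-byte header (name at offset 0, octal size at offset
    // 124, type flag at 156) followed by the data, padded to 512 bytes.
    public static Map<String, byte[]> readTarGz(InputStream raw) throws IOException {
        Map<String, byte[]> files = new LinkedHashMap<>();
        DataInputStream in = new DataInputStream(new GZIPInputStream(raw));
        byte[] header = new byte[512];
        while (true) {
            in.readFully(header);
            if (header[0] == 0) break;                       // all-zero block marks end of archive
            String name = new String(header, 0, 100).trim(); // NUL-padded entry name
            long size = Long.parseLong(new String(header, 124, 12).trim(), 8);
            byte[] data = new byte[(int) size];
            in.readFully(data);
            int pad = (int) ((512 - size % 512) % 512);      // data is padded to a 512-byte boundary
            if (pad > 0) in.readFully(new byte[pad]);
            if (header[156] != '5') files.put(name, data);   // '5' = directory entry, skip those
        }
        return files;
    }

    public static void main(String[] args) throws IOException {
        Map<String, byte[]> files = readTarGz(new FileInputStream(args[0]));
        for (Map.Entry<String, byte[]> e : files.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue().length + " bytes");
            // producer.send(new ProducerRecord<>("my-topic", e.getKey(), new String(e.getValue())));
        }
    }
}
```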
If you start the first application with 3 threads (high-level consumers),
it'll consume all 6 partitions. When you start one more application (same
group id) with 3 threads, all 6 consumer threads will be rebalanced and each
thread will consume one partition.
Dzmitry
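The arithmetic behind this can be shown with a toy simulation (this is not Kafka's actual rebalancing code; it just deals partitions out as evenly as possible, in order, across the group's threads):

```java
import java.util.*;

public class RebalanceDemo {
    // Toy even-split assignment over the threads of one consumer group.
    public static Map<String, List<Integer>> assign(int partitions, List<String> threads) {
        Map<String, List<Integer>> out = new LinkedHashMap<>();
        int per = partitions / threads.size();      // partitions every thread gets
        int extra = partitions % threads.size();    // first `extra` threads get one more
        int p = 0;
        for (int i = 0; i < threads.size(); i++) {
            int n = per + (i < extra ? 1 : 0);
            List<Integer> owned = new ArrayList<>();
            for (int j = 0; j < n; j++) owned.add(p++);
            out.put(threads.get(i), owned);
        }
        return out;
    }

    public static void main(String[] args) {
        // One application, 3 threads: each thread owns 2 of the 6 partitions.
        System.out.println(assign(6, Arrays.asList("app1-t0", "app1-t1", "app1-t2")));
        // A second application joins the same group: 6 threads, 1 partition each.
        System.out.println(assign(6, Arrays.asList("app1-t0", "app1-t1", "app1-t2",
                                                   "app2-t0", "app2-t1", "app2-t2")));
    }
}
```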
You are correct. The consumer threads will be rebalanced.
liuyiming@foxmail.com
From: rmka rmka
Date: 2015-03-19 14:12
To: users
Subject: High level consumer group
If you start first application with 3 threads (high level consumers) it’ll
consume all 6 partitions. When you start one more
If you are looking for sample code, below are the links to producer and
consumer examples using the Kafka Java APIs:
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+Producer+Example
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
As far as extracting tgz file, I guess
in your callback impl object, you can save a reference to the actual
message.
On Wed, Mar 18, 2015 at 10:45 PM, sunil kalva wrote:
> Hi
> How do I access the actual message which failed to send to the cluster
> using the Callback interface and onCompletion method?
>
> Basically if the sender is faile
Also, you can use the other API that returns a Future, save those futures
in a list, and call get() on them to check which messages were sent and
which returned an error, so that the failed ones can be retried.
Thanks,
Mayuresh
On Thu, Mar 19, 2015 at 9:19 AM, Steven Wu wrote:
> in your callback impl
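The "save the futures, then get() them" pattern can be sketched with plain java.util.concurrent futures standing in for the Future&lt;RecordMetadata&gt; that KafkaProducer.send() returns (the method and message names here are hypothetical):

```java
import java.util.*;
import java.util.concurrent.*;

public class SendAndCheck {
    // Walk the saved futures; any send whose get() throws is collected for retry.
    public static List<String> failedMessages(Map<String, Future<?>> sent) {
        List<String> toRetry = new ArrayList<>();
        for (Map.Entry<String, Future<?>> e : sent.entrySet()) {
            try {
                e.getValue().get();           // blocks until this send is acked or fails
            } catch (Exception ex) {
                toRetry.add(e.getKey());      // remember the message key, retry it later
            }
        }
        return toRetry;
    }

    public static void main(String[] args) {
        Map<String, Future<?>> sent = new LinkedHashMap<>();
        sent.put("m1", CompletableFuture.completedFuture(null));   // successful send
        CompletableFuture<Object> failedSend = new CompletableFuture<>();
        failedSend.completeExceptionally(new RuntimeException("broker unreachable"));
        sent.put("m2", failedSend);                                // failed send
        System.out.println(failedMessages(sent));                  // only m2 needs a retry
    }
}
```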
I suppose sending is controlled by the linger time and the max batch size of
the queue. The messages are sent to Kafka when either of these conditions is
met.
The new KafkaProducer returns a Future, so it's the responsibility of the
application to call get() on it to see the success or failure.
Thanks,
Mayuresh
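The two knobs mentioned above are the producer configs `batch.size` and `linger.ms`. A minimal sketch of setting them (the values are illustrative, not recommendations):

```java
import java.util.Properties;

public class ProducerTuning {
    // Builds the config map a KafkaProducer would be constructed with.
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // A per-partition batch is sent when it reaches batch.size bytes OR
        // linger.ms elapses, whichever happens first.
        props.put("batch.size", "16384");   // max bytes per batch
        props.put("linger.ms", "5");        // max wait before sending a partial batch
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }
}
```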
The future returns a RecordMetadata object, which contains only metadata, not
the actual message.
But I think *Steven* had a point about saving the reference in the impl class
and retrying if there is an exception in the callback method.
On Thu, Mar 19, 2015 at 10:27 PM, Mayuresh Gharat <
gharatmayures...@gmail.com> w
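Steven's suggestion can be sketched in plain Java. The Callback shape below mirrors the producer API, but it is stubbed here so the sketch is self-contained and the names (`RecordHolder`, the metadata `Object`) are stand-ins, not the real kafka-clients types:

```java
import java.util.*;
import java.util.concurrent.ConcurrentLinkedQueue;

public class RetryingCallback {
    // Stand-in for org.apache.kafka.clients.producer.Callback.
    interface Callback { void onCompletion(Object metadata, Exception exception); }

    // Each callback instance closes over the message it was created for,
    // so the actual payload is available when onCompletion reports a failure.
    static class RecordHolder implements Callback {
        private final String record;
        private final Queue<String> retryQueue;

        RecordHolder(String record, Queue<String> retryQueue) {
            this.record = record;
            this.retryQueue = retryQueue;
        }

        @Override
        public void onCompletion(Object metadata, Exception exception) {
            if (exception != null) {
                retryQueue.add(record);   // failed send: queue the saved message for retry
            }
        }
    }

    public static void main(String[] args) {
        Queue<String> retries = new ConcurrentLinkedQueue<>();
        Callback cb = new RecordHolder("payload-1", retries);
        // producer.send(record, cb) would invoke this on failure:
        cb.onCompletion(null, new RuntimeException("send failed"));
        System.out.println(retries);   // payload-1 is available for retry
    }
}
```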
Yes, that's right. I misunderstood, my bad.
Thanks,
Mayuresh
On Thu, Mar 19, 2015 at 11:05 AM, sunil kalva wrote:
> future returns RecordMetadata class which contains only metadata not the
> actual message.
> But i think *steven* had a point like saving the reference in impl class
> and retry i
We load from Kafka into HDFS using Spark in batch mode, once a day. It's
very simple (74 lines of code) and works fine.
On Fri, Mar 13, 2015 at 4:11 PM, Gwen Shapira wrote:
> Camus uses MapReduce though.
> If Alberto uses Spark exclusively, I can see why installing MapReduce
> cluster (with or w
Koert
I am very new to Spark; would it be OK for you to share the code for
dumping data into HDFS from Kafka using Spark?
On Fri, Mar 20, 2015 at 12:20 AM, Koert Kuipers wrote:
> we load from kafka into hdfs using spark in batch mode, once a day. it's
> very simple (74 lines of code) and works f
What kinds of exceptions are caught and sent to the callback method? I think
the callback is not called when there is an IOException.
In the NetworkClient.java class, from the following code snippet I don't
think the callback is called for this exception:
try {
this.selector.poll(Math.min(timeout, metadataTimeo
I cannot just share this. Take a look at KafkaRDD from our spark-kafka
library, or, starting with Spark 1.3.0, you can use the KafkaRDD that is
included with Spark.
On Thu, Mar 19, 2015 at 2:58 PM, sunil kalva wrote:
>
> Koert
> I am very new to spark, is it ok to you to share the code base for
Hi, Everyone,
Our new Java producer in 0.8.2 now exposes the message offset to the client.
If you are using the returned offset, you will need to make sure that your
broker is on 0.8.2 too. This is because in 0.8.1 we had a bug in the
broker that returned an inconsistent offset in the response to a p
So,
I've run into an issue migrating a consumer to use the new 'roundrobin'
partition.assignment.strategy. It turns out that several of our consumers
use the same group id, but instantiate several different consumer instances
(with different topic selectors and thread counts). Often, this is don
Hi,
Is there a plan to update the producer documentation on the wiki located
here:
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+Producer+Example
This would be helpful for people working on implementing the new
producer class deployed in 0.8.2.x. If there are any patches available
for
Hi Jason,
The round-robin strategy first takes the partitions of all the topics a
consumer is consuming from, then distributes them across all the consumers.
If different consumers are consuming from different topics, the assignment
algorithm will generate different answers on different consumers.
Hi Becket,
Can you give an example of this? It would be easier to understand :)
Thanks,
Mayuresh
On Thu, Mar 19, 2015 at 4:46 PM, Jiangjie Qin
wrote:
> Hi Jason,
>
> The round-robin strategy first takes the partitions of all the topics a
> consumer is consuming from, then distributed th
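Becket's point can be shown with a worked example. This is a toy round-robin, not Kafka's actual assignor: each consumer sorts the partitions of the topics *it* knows about and deals them out in turn. With identical subscriptions every consumer computes the same answer; if consumer C2 also subscribes to an extra topic X, the assignment C2 computes no longer matches C1's.

```java
import java.util.*;

public class RoundRobinDemo {
    // Toy round-robin: sort the visible partitions, then deal them out
    // to the consumers in turn (not Kafka's real assignment code).
    public static Map<String, List<String>> assign(List<String> consumers,
                                                   List<String> partitions) {
        Map<String, List<String>> out = new LinkedHashMap<>();
        for (String c : consumers) out.put(c, new ArrayList<>());
        List<String> sorted = new ArrayList<>(partitions);
        Collections.sort(sorted);
        for (int i = 0; i < sorted.size(); i++)
            out.get(consumers.get(i % consumers.size())).add(sorted.get(i));
        return out;
    }

    public static void main(String[] args) {
        List<String> consumers = Arrays.asList("C1", "C2");
        // Identical subscriptions: both consumers derive the same partition
        // list, so each independently computes the same assignment.
        List<String> shared = Arrays.asList("A-0", "A-1", "B-0", "B-1");
        System.out.println(assign(consumers, shared));
        // C2 also subscribes to topic X: its view of the partition list
        // differs, so the layout it computes disagrees with C1's.
        List<String> c2View = Arrays.asList("A-0", "A-1", "B-0", "B-1", "X-0");
        System.out.println(assign(consumers, c2View));
    }
}
```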
Hello,
What's the best strategy for failover when using MirrorMaker to replicate
across datacenters? As I understand it, offsets in the two datacenters will
be different, so how should consumers be reconfigured to continue reading
from the same point where they stopped, without data loss and/or duplication?
Is there a link to the proposed new consumer non-blocking API?
Thanks,
Rajiv
Err, here:
http://kafka.apache.org/083/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html
-Jay
On Thu, Mar 19, 2015 at 9:40 PM, Jay Kreps wrote:
> The current work in progress is documented here:
>
>
> On Thu, Mar 19, 2015 at 7:18 PM, Rajiv Kurian
> wrote:
>
>> Is there a
The current work in progress is documented here:
On Thu, Mar 19, 2015 at 7:18 PM, Rajiv Kurian wrote:
> Is there a link to the proposed new consumer non-blocking API?
>
> Thanks,
> Rajiv
>
Those are pretty much the best javadocs I've ever seen. :)
Nice job, Kafka team.
-James
> On Mar 19, 2015, at 9:40 PM, Jay Kreps wrote:
>
> Err, here:
> http://kafka.apache.org/083/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html
>
> -Jay
>
> On Thu, Mar 19, 2015 at 9:
:-)
On Thursday, March 19, 2015, James Cheng wrote:
> Those are pretty much the best javadocs I've ever seen. :)
>
> Nice job, Kafka team.
>
> -James
>
> > On Mar 19, 2015, at 9:40 PM, Jay Kreps > wrote:
> >
> > Err, here:
> >
> http://kafka.apache.org/083/javadoc/index.html?org/apache/kafka/cl
Hi,
We are using 0.8.2.1 currently.
- How do we get the consumer offsets from the offsets topic?
- Is there any built-in function we could use (like in AdminUtils.scala)?
- Is it OK to start a simple consumer and read the offsets from the topic?
We used to read the offsets from ZooKeeper pre