Hi James,
The thread was different from the one I was asking about. When I issue an
OffsetFetchRequest to the channel, I am getting a ClosedChannelException,
and all of them seem to come from a single broker. I don't see anything
problematic with the broker (config, exceptions, etc.) but requests for
Offse
Also, Flume 1.6 has Kafka source, sink and channel. Using Kafka source or
channel with Dataset sink should work.
Gwen
On Tue, May 12, 2015 at 6:16 AM, Warren Henning
wrote:
> You could start by looking at Linkedin's Camus and go from there?
>
> On Mon, May 11, 2015 at 8:10 PM, Rajesh Datla
> w
Hi Kafka-Users,
We have been using kafka 2.8.0-0.8.1.1 in our cluster of 21 brokers with a
replication factor of 2. When one of the brokers underwent a complete shutdown,
the partitions of a topic that had an in-sync replica on the broker that died
were not able to create a new ISR in a healthy no
You could start by looking at Linkedin's Camus and go from there?
On Mon, May 11, 2015 at 8:10 PM, Rajesh Datla
wrote:
> Hi All,
>
> How to integrate Kafka with Hadoop ecosystem.
>
> How to store Kafka messages into HDFS in parquet format
>
> Regards
> Raj
>
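Camus essentially runs batch jobs that group Kafka messages into time-partitioned files on HDFS. A minimal stdlib-only sketch of that batching step, with hypothetical names, local JSON files standing in for Parquet, and a plain directory standing in for HDFS:

```python
import json
import os
import tempfile
from datetime import datetime, timezone

def write_batch(messages, base_dir):
    """Group (timestamp, payload) messages by event hour and write one
    file per hour-partition directory, Camus-style.

    Local JSON files stand in for Parquet here; a real pipeline (Camus,
    or a Flume Kafka source feeding a Dataset sink) does the same
    grouping with columnar output on HDFS.
    """
    by_hour = {}
    for ts, payload in messages:
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y/%m/%d/%H")
        by_hour.setdefault(hour, []).append(payload)
    paths = []
    for hour, batch in sorted(by_hour.items()):
        part_dir = os.path.join(base_dir, hour)
        os.makedirs(part_dir, exist_ok=True)
        path = os.path.join(part_dir, "part-00000.json")
        with open(path, "w") as f:
            for payload in batch:
                f.write(json.dumps(payload) + "\n")  # one record per line
        paths.append(path)
    return paths

base = tempfile.mkdtemp()
msgs = [(3600, {"id": 1}), (3600, {"id": 2}), (7200, {"id": 3})]
files = write_batch(msgs, base)
print([p[len(base) + 1:] for p in files])
```

The hour-based directory layout is what makes downstream Hive/Impala partition pruning cheap; swapping the JSON writer for a Parquet writer is the only Kafka-independent change a real job needs.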
Hi All,
How to integrate Kafka with Hadoop ecosystem.
How to store Kafka messages into HDFS in parquet format
Regards
Raj
I don't think calling allDone() will cause hasNext() to exit. The new consumer
has a timed poll() function in its API, I think.
With the old consumer, interrupting the thread calling hasNext() might work.
Have you tried that?
Aditya
From: Gomathivinayagam Muthu
I am using the following code to help kafka stream listener threads exit
the blocking call of hasNext() on the ConsumerIterator. But the threads
never exit when they receive the allDone() signal. I am not sure whether
I am making a mistake. Please let me know whether this is the right approach.
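The timed-poll pattern Aditya describes can be sketched without Kafka at all. In this stand-in (all names hypothetical), a queue.Queue plays the consumer stream and a timed get() plays the role of a timed poll(), so the thread can observe a shutdown signal instead of parking forever inside a blocking hasNext():

```python
import queue
import threading

def consume(stream, shutdown, out):
    """Drain `stream` until `shutdown` is set, never blocking forever.

    The timed get() is the key: the loop wakes up periodically to
    re-check the shutdown flag, which a blocking hasNext() cannot do.
    """
    while not shutdown.is_set():
        try:
            msg = stream.get(timeout=0.1)  # timed poll instead of blocking wait
        except queue.Empty:
            continue  # no message yet; loop and re-check the flag
        out.append(msg)
        stream.task_done()

stream, shutdown, received = queue.Queue(), threading.Event(), []
t = threading.Thread(target=consume, args=(stream, shutdown, received))
t.start()
for i in range(3):
    stream.put(i)
stream.join()    # wait until every message has been handled
shutdown.set()   # the "allDone" signal: the loop exits on its next check
t.join(timeout=5)
print(received)  # → [0, 1, 2]
```

With the old high-level consumer the analogous knob is a consumer timeout, which turns the blocking iterator into a timed one; the loop structure stays the same.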
Hi Meghana,
Let me try this out on my cluster that has latest trunk deployed.
Thanks,
Mayuresh
On Mon, May 11, 2015 at 1:53 PM, Meghana Narasimhan <
mnarasim...@bandwidth.com> wrote:
> Hi Mayuresh,
> A small update. The Kafka version I'm currently using is 2.10-0.8.2.1 (not
> 2.11 as previous
Thanks Gwen/Ewen.
I have posted to kafka-clients google group too.
On Mon, May 11, 2015 at 1:40 PM Gwen Shapira wrote:
> You may want to announce this at kafka-clie...@googlegroups.com, a mailing
> list specifically for Kafka clients.
> I'm sure they'll be thrilled to hear about it. It is also
Hi Mayuresh,
A small update. The Kafka version I'm currently using is 2.10-0.8.2.1 (not
2.11 as previously mentioned). The cluster looks fine. Not sure why the
consumer offset checker does not return a valid output and gets stuck.
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic
You may want to announce this at kafka-clie...@googlegroups.com, a mailing
list specifically for Kafka clients.
I'm sure they'll be thrilled to hear about it. It is also a good place for
questions on client development, if you ever need help.
On Mon, May 11, 2015 at 4:57 AM, Yousuf Fauzan
wrote:
After a recent 0.8.2.1 upgrade we noticed a significant increase in used
filesystem space for our Kafka log data. We have another Kafka cluster still on
0.8.1.1 whose Kafka data is being copied over to the upgraded cluster, and it
is clear that the disk consumption is higher on 0.8.2.1 for the s
Vamsi,
There is another thread going on right now about this exact topic:
"Is there a way to know when I've reached the end of a partition (consumed all
messages) when using the high-level consumer?"
http://search-hadoop.com/m/uyzND1Eb3e42NMCWl
-James
On May 10, 2015, at 11:48 PM, Achanta Vams
Thanks everyone.
To answer Charlie's question:
I'm doing some simple stream processing. I have Topics A, B, and C, all using
log compaction and all records having primary keys. The data in Topic A is
essentially a routing table that tells me which primary keys in Topics B and C
I should pay
Hey Guozhang,
Thanks for the heads up!
Best
On Thu, May 7, 2015 at 1:26 AM, Guozhang Wang wrote:
> The metrics for checking that would better be "buffer-available-bytes"
> instead of "bufferpool-wait-ratio", checking on its value approaching 0.
>
> Guozhang
>
> On Wed, May 6, 2015 at 3:02 AM,
Hi,
with Kafka 0.8 it is possible to add new partitions on newly added brokers
and supply a partition assignment to put the new partitions mainly on the
new brokers (4 and 5 are the new brokers):
bin/kafka-add-partitions.sh --topic scale-test-001 \
--partition 14 \
--replica-assignment-list
4:5,4
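The replica-assignment-list format shown above (one comma-separated entry per new partition, colon-separated broker ids within an entry, leader first) can be generated rather than typed by hand. A hypothetical helper that round-robins leaders over the new brokers:

```python
from itertools import cycle

def assignment_for_new_partitions(num_new, replication_factor, preferred_brokers):
    """Build a replica-assignment-list string for kafka-add-partitions.sh.

    Leaders are rotated across `preferred_brokers` (the newly added
    brokers) so each new broker gets a fair share of leadership.
    """
    brokers = list(preferred_brokers)
    leaders = cycle(range(len(brokers)))
    entries = []
    for _ in range(num_new):
        i = next(leaders)
        replicas = [brokers[(i + r) % len(brokers)]
                    for r in range(replication_factor)]
        entries.append(":".join(str(b) for b in replicas))
    return ",".join(entries)

print(assignment_for_new_partitions(2, 2, [4, 5]))  # → 4:5,5:4
```

Pasting the returned string after --replica-assignment-list keeps the new partitions (and their followers) pinned to brokers 4 and 5, as in the example above.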
Thanks Manikumar for your super-fast replies. Let me go through the docs and
I will raise my questions, if any.
Thanks,
Dinesh
On 11 May 2015 at 11:46, Manikumar Reddy wrote:
> All the consumers in the same consumer group will share the load across
> given topic/partitions.
> So for any consumer f
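The load sharing Manikumar describes comes from assigning each group member a disjoint subset of the partitions. A simplified, hypothetical sketch of a range-style split (not the broker's exact algorithm):

```python
def range_assign(partitions, consumers):
    """Give each consumer in the group a contiguous slice of the sorted
    partition list, so load is shared and no partition is consumed twice
    within the group.
    """
    parts = sorted(partitions)
    members = sorted(consumers)
    per, extra = divmod(len(parts), len(members))
    out, start = {}, 0
    for i, c in enumerate(members):
        n = per + (1 if i < extra else 0)  # first `extra` members get one more
        out[c] = parts[start:start + n]
        start += n
    return out

print(range_assign(range(5), ["c1", "c2"]))  # → {'c1': [0, 1, 2], 'c2': [3, 4]}
```

Note the corollary: with more consumers than partitions, the surplus consumers receive nothing and sit idle.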