Hi,
In a Kafka cluster, how do brokers find other brokers? Is there a
gossip-style protocol, or does Kafka use ZooKeeper ephemeral nodes to
figure out which brokers are live?
Thanks,
Unmesh
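(Classic ZooKeeper-based Kafka, as in this thread, does not gossip: every
live broker registers itself as an ephemeral znode under /brokers/ids, and
that registry is how the cluster knows which brokers are up. As a minimal
sketch, here is what reading that registry directly might look like; the
connect string is an assumption, and a production client would wait for the
connection event before issuing requests.)

import org.apache.zookeeper.ZooKeeper;

import java.util.List;

public class ListLiveBrokers {
    public static void main(String[] args) throws Exception {
        // Connect to the ensemble the brokers use; the address is an assumption.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10000, event -> { });

        // Each live broker registers an ephemeral znode under /brokers/ids;
        // the znode disappears automatically when the broker's session ends.
        List<String> brokerIds = zk.getChildren("/brokers/ids", false);
        for (String id : brokerIds) {
            // The znode data is a small JSON blob with the broker's host, port and endpoints.
            byte[] data = zk.getData("/brokers/ids/" + id, false, null);
            System.out.println("broker " + id + ": " + new String(data, "UTF-8"));
        }
        zk.close();
    }
}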
I do not see why this is a limitation. Any data storage application you
use will be limited by the physical capacity of its nodes.
Distributed applications like Kafka (a distributed message broker), HDFS
(a distributed file system) and Cassandra (a distributed key-value DB) by
design allow you to store huge amounts of data.
are wrong, Unmesh. Kafka's design forces a partition to be on a single
node only.
My question is about the scalability of the partition itself.
How do you overcome the restriction of a single node for a partition?
Any clues, anyone...
On Wed, Jun 1, 2016 at 5:24 PM, Unmesh Joshi wrote:
> I do not see
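(On the question above: a single partition cannot span brokers, so the
usual approach is to create the topic with enough partitions that the data
is spread across the cluster, while each individual partition, and each of
its replicas, still fits on one node. A minimal sketch using the newer
AdminClient API, which post-dates this thread; the topic name, partition
count and replication factor are made up for illustration.)

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class CreatePartitionedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 12 partitions let the topic be spread over the brokers; each single
            // partition still lives wholly on one broker (plus its replicas).
            NewTopic topic = new NewTopic("location-views", 12, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}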
Hi,
I was trying to experiment with Kafka Streams, and had the following code:
KTable<String, Integer> aggregated = locationViews
    .map((key, value) -> {
        GenericRecord parsedRecord = parse(value);
        String parsedKey = parsedRecord.get("region").toString() +
            parsedRecord.get("location").toString();
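(The snippet above is cut off in the archive. Below is a minimal sketch of
how the whole pipeline might look, written against a current Kafka Streams
API, which differs from the 0.10.x API used in this thread. The topic
names, serdes and the parse() placeholder are assumptions, not the
original code.)

import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class LocationViewCounts {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "location-views-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();

        // Raw Avro-encoded views; topic name and serdes are assumptions.
        KStream<String, byte[]> locationViews =
                builder.stream("location-views", Consumed.with(Serdes.String(), Serdes.ByteArray()));

        // Re-key each record by region + location, then aggregate per key without windowing.
        KTable<String, Integer> aggregated = locationViews
                .map((key, value) -> {
                    GenericRecord parsedRecord = parse(value);
                    String parsedKey = parsedRecord.get("region").toString()
                            + parsedRecord.get("location").toString();
                    return new KeyValue<>(parsedKey, 1);
                })
                .groupByKey(Grouped.with(Serdes.String(), Serdes.Integer()))
                .reduce(Integer::sum);

        // Write the running counts to an output topic so they can be consumed elsewhere.
        aggregated.toStream().to("location-view-counts",
                Produced.with(Serdes.String(), Serdes.Integer()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }

    // Placeholder for the poster's Avro deserialization of the raw bytes.
    private static GenericRecord parse(byte[] value) {
        throw new UnsupportedOperationException("Avro parsing omitted in this sketch");
    }
}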
> parsedRecord.get("region").toString() + parsedRecord.get("location").toString();") to be not null, but it has been resolved in the recent fixes.
>
> Guozhang
>
> On Mon, Jun 20, 2016 at 3:41 AM, Unmesh Joshi wrote:
>
> > Hi,
> >
>
...(RequestFuture.java:107)
    at org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
On Wed, Jun 22, 2016 at 1:12 AM, Guozhang Wang wrote:
the logs from
Kafka nodes and all is working just fine.
Thanks,
Unmesh
On Wed, Jun 22, 2016 at 9:29 AM, Unmesh Joshi wrote:
> I could reproduce it with the following steps. Stack trace added at the end.
>
> 1. Create a stream and consume it without windowing (a sketch follows after the quoted steps).
>
> KTable aggregation
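(Since the quoted steps are cut off, here is a minimal sketch of the
"consume it" part of step 1: reading the non-windowed aggregate back with a
plain consumer. The topic name, group id and deserializers are assumptions
matching the Streams sketch earlier in the thread, not anything from the
original messages.)

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.IntegerDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ReadLocationViewCounts {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "location-view-counts-reader");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, Integer> consumer = new KafkaConsumer<>(props)) {
            // The topic the Streams sketch above writes the running counts to.
            consumer.subscribe(Collections.singleton("location-view-counts"));
            while (true) {
                ConsumerRecords<String, Integer> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, Integer> record : records) {
                    System.out.println(record.key() + " -> " + record.value());
                }
            }
        }
    }
}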