...physical instance.
> Having connectivity issues to your brokers is, IMO,
> a problem with the deployment and not at all
> with how Kafka Streams is designed and works.
>
> Kafka Streams moves hundreds of GB per day for us.
>
> Hope this helps.
>
> Best Jan
>
> ...is if Kafka Streams should be reimplemented as
> Apache Storm?
> -wim
>
> On Wed, 29 Nov 2017 at 15:10 Adrienne Kole
> wrote:
>
> > Hi,
> >
> > The purpose of this email is to get an overall intuition about the
> > future plans for the Streams library.
> >
>
Hi,
The purpose of this email is to get an overall intuition about the future
plans for the Streams library.
The main question is: will it remain a single-threaded application in the
long run, serving microservice use cases, or are there any plans to extend
it into a multi-node execution framework with les...
Hi,
Is there a way to limit the consumer rate from Kafka (say, to a maximum of
400K records/s from the whole topic across its n partitions)?
By default, I think it is not limited by any parameter, only by network
and machine performance.
Cheers
Adrienne
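[Kafka has no record-rate consumer setting: max.poll.records only caps the
batch size per poll, and broker-side quotas (consumer_byte_rate) are
byte-based, not record-based. One client-side workaround is to throttle the
poll loop yourself. A minimal sketch, with the bootstrap server, group id,
topic name, and the 400K target all illustrative:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ThrottledConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // illustrative
        props.put("group.id", "throttled-reader");         // illustrative
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        final long maxRecordsPerSecond = 400_000L;  // illustrative target

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("t1"));  // illustrative topic
            long windowStart = System.currentTimeMillis();
            long consumed = 0;
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(100));
                consumed += records.count();
                // ... process records here ...
                long elapsed = System.currentTimeMillis() - windowStart;
                if (elapsed >= 1000) {
                    // one-second budget window elapsed: reset the counter
                    windowStart = System.currentTimeMillis();
                    consumed = 0;
                } else if (consumed >= maxRecordsPerSecond) {
                    // budget spent early: sleep out the rest of the window
                    Thread.sleep(1000 - elapsed);
                    windowStart = System.currentTimeMillis();
                    consumed = 0;
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

This throttles the average rate in one-second windows; it does not give a
hard per-record guarantee.]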
you do "aggregate by key" Streams will
> automatically separate the keys and compute an aggregate per key.
> Thus, you do not need to worry about which keys is hashed to what
> partition.
>
> -Matthias
>
> On 10/5/16 1:37 PM, Adrienne Kole wrote:
> > Hi,
>
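[A minimal sketch of the per-key aggregation described above, written
against the current groupByKey().aggregate() form of the DSL; topic name
and value types are illustrative:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, Long> input =
        builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.Long()));

// One aggregate is maintained per key; Streams routes all records with the
// same key to the same partition/task, so no manual hashing is needed.
KTable<String, Long> sums = input
        .groupByKey()
        .aggregate(
                () -> 0L,                           // initializer
                (key, value, agg) -> agg + value,   // adder
                Materialized.with(Serdes.String(), Serdes.Long()));
]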
> > ...ed 10 partitions). Also, as a general recommendation: it's often a
> > good idea to over-partition your topics. For example, even if today
> > 10 machines (and thus 10 partitions) would be sufficient, pick a
> > higher number of partitions (say, 50) so you have...
> ...itions to determine which partition each record goes to, so records
> with the same key will be assigned to the same partition. Would that be OK
> for your case?
>
>
> Guozhang
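[For context on the key-to-partition mapping: the default producer
partitioner derives the partition from a murmur2 hash of the serialized
key, along the lines of this sketch (Utils is Kafka's own
org.apache.kafka.common.utils.Utils; treat the snippet as illustrative
rather than the exact shipped code):

import org.apache.kafka.common.utils.Utils;

// Same key bytes -> same hash -> same partition,
// as long as the partition count stays fixed.
int partitionFor(byte[] keyBytes, int numPartitions) {
    return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
}
]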
>
>
> On Tue, Oct 4, 2016 at 3:00 PM, Adrienne Kole
> wrote:
>
> > Hi,
> >
> > F
Hi,
From the Streams documentation, I can see that each Streams instance
processes data independently (of other instances), reads from topic
partition(s), and writes to a specified topic.
So here, the partitions of the topic should be determined beforehand and
should remain static.
In my use case I w...
Hi,
I am trying to implement a simple scenario with the Kafka Streams library.
I insert data into a Kafka topic at 1 tuple/second.
The Streams application is connected to that topic, and what it does is:
1. construct 8-second windows with a 4-second sliding interval,
2. sum the values of tuples (p...
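[The snippet is cut off, but a minimal sketch of the windowing it describes
(8-second hopping windows advancing every 4 seconds, summing Long values),
written against a recent Streams DSL; topic name and serdes are
illustrative:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.TimeWindows;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, Long> input =
        builder.stream("tuples", Consumed.with(Serdes.String(), Serdes.Long()));

// 8-second windows that advance every 4 seconds, so each record falls into
// two overlapping windows; values are summed per key per window.
input.groupByKey()
     .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofSeconds(8))
                            .advanceBy(Duration.ofSeconds(4)))
     .reduce(Long::sum);
]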
Hi,
How can I measure the latency and throughput in Kafka Streams?
Cheers
Adrienne
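[One way to answer this is through the metrics a running KafkaStreams
instance exposes via streams.metrics(), which include per-thread process
rate and latency sensors. A minimal sketch; the exact metric names vary
across versions, so the name filter below is only illustrative:

import java.util.Map;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.streams.KafkaStreams;

// Assuming `streams` is a running KafkaStreams instance:
void dumpThroughputAndLatency(KafkaStreams streams) {
    Map<MetricName, ? extends Metric> metrics = streams.metrics();
    for (Map.Entry<MetricName, ? extends Metric> e : metrics.entrySet()) {
        String name = e.getKey().name();
        // e.g. "process-rate" (records/s) and "process-latency-avg" (ms)
        if (name.contains("process-rate") || name.contains("process-latency")) {
            System.out.println(e.getKey().group() + " " + name + " = "
                    + e.getValue().metricValue());
        }
    }
}
]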
> ...(see
>
> http://docs.confluent.io/3.0.0/streams/developer-guide.html#streams-developer-guide-serdes
> )
>
>
> -Matthias
>
> On 06/19/2016 03:06 PM, Adrienne Kole wrote:
> > Hi,
> >
> > I want to implement the wordcount example with a reduce function in KTable...
Hi,
I want to implement the wordcount example with a reduce function on a
KTable. However, I get the error:
Exception in thread "StreamThread-1"
org.apache.kafka.common.errors.SerializationException: Size of data
received by LongDeserializer is not 8
Here is my code:
KTable source = builder.t...
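[This error usually means a LongDeserializer is being applied to bytes that
were not written by a LongSerializer, i.e. a serde mismatch at the point
where the value type changes from String to Long. A sketch of a wordcount
that declares the serdes explicitly, written against a recent Streams DSL;
topic names are illustrative, and a reduce over mapped 1L values behaves
the same way as count() here:

import java.util.Arrays;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> text =
        builder.stream("text-input", Consumed.with(Serdes.String(), Serdes.String()));

KTable<String, Long> counts = text
        .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\W+")))
        .groupBy((key, word) -> word, Grouped.with(Serdes.String(), Serdes.String()))
        .count();  // count() produces Long values

// The value serde must now be Long, not String; a mismatch here is what
// triggers "Size of data received by LongDeserializer is not 8".
counts.toStream().to("word-counts", Produced.with(Serdes.String(), Serdes.Long()));
]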
> ...http://www.shayne.me/blog/2015/2015-06-25-everything-about-kafka-part-2/
> >
> > Thanks
> > Eno
> >
> >> On 15 Jun 2016, at 17:21, Adrienne Kole
> wrote:
> >>
> >> Hi community,
> >>
> >> Probably it is...
Hi community,
Probably this is a very basic question, as I am new to Kafka Streams.
I am trying to initialize a KTable or KStream from a Kafka topic. However, I
don't know how to avoid getting null keys. So:
KStream<String, String> source =
    builder.stream(Serdes.String(), Serdes.String(), "t1");
source.print();
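[Note that builder.stream(...) returns a KStream, not a KTable, which is
corrected above. If the producer wrote the records without keys, the key
really is null in the topic, and no stream/table definition will change
that; one common fix is to derive a key inside the topology before any
key-based operation. A minimal sketch against a recent Streams DSL; the
key-extraction logic (first comma-separated field) is purely illustrative:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> source =
        builder.stream("t1", Consumed.with(Serdes.String(), Serdes.String()));

// Records produced without a key arrive with key == null; derive one from
// the value before grouping, joining, or converting to a KTable.
KStream<String, String> keyed =
        source.selectKey((nullKey, value) -> value.split(",")[0]);
]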