Short answer: it seems my Kafka log retention time was short enough that the
metrics I was writing were being purged from Kafka during the test.
Dropped metrics.
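The relevant knob is the broker (or per-topic) log retention; a sketch with an
illustrative value, since retention just needs to comfortably exceed the test
duration:
```
# server.properties (broker-wide default); the per-topic override is retention.ms
log.retention.hours=168
```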
On Thu, Jun 15, 2017 at 1:32 PM, Caleb Welton wrote:
I have encapsulated the repro into a small, self-contained project:
https://github.com/cwelton/kstreams-repro
Thanks,
Caleb
On Thu, Jun 15, 2017 at 11:30 AM, Caleb Welton wrote:
> I do have a TimestampExtractor set up, and for the 10-second windows that
> are emitted all the values expected [...]
> > [...] aggregating whatever showed up in that 10-second
> > window.
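For reference, a custom TimestampExtractor under the 0.10.2-era API looks
roughly like this (a sketch only: the `Timestamped` interface and its accessor
are hypothetical, and the exact `extract` signature varies across Streams
versions):
```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

// Minimal sketch: derive event time from the record payload when possible,
// falling back to the record's own timestamp.
public class PayloadTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp) {
        Object value = record.value();
        if (value instanceof Timestamped) {            // hypothetical interface
            return ((Timestamped) value).timestampMs();
        }
        return record.timestamp();                     // fall back to record time
    }

    // Hypothetical marker interface for payloads carrying their own event time.
    public interface Timestamped {
        long timestampMs();
    }
}
```
It is registered through the `timestamp.extractor` Streams config property (the
exact constant name differs across versions).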
> > On Wed, Jun 14, 2017 at 8:43 PM, Caleb Welton wrote:
> >
> >> Disabling the cache with:
> >>
> >> ```
> >> streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
> >> ```
> -Matthias
>
>
>
> On 6/14/17 4:08 PM, Caleb Welton wrote:
> > Update: if I set `StreamsConfig.NUM_STREAM_THREADS_CONFIG=1`, the
> > problem does not manifest; at `StreamsConfig.NUM_STREAM_THREADS_CONFIG=2`
> > or higher, the problem shows up.
>
[...] considerably, but it seems to slow down to the speed of the output,
which may be the key.
That said, changing the number of stream threads should not impact data
correctness. Seems like a bug someplace in Kafka.
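For concreteness, a minimal sketch of the Streams configuration being toggled
here (the property constants are real StreamsConfig keys; the application id
and bootstrap servers are hypothetical placeholders):
```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class ReproConfig {
    // Builds the Streams settings discussed above; only numThreads is varied
    // between the working (1) and broken (>= 2) runs.
    static Properties streamsConfig(int numThreads) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kstreams-repro");    // hypothetical id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical servers
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, numThreads);      // 1 works; >= 2 shows the issue
        props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);        // cache disabled, as tried above
        return props;
    }
}
```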
On Wed, Jun 14, 2017 at 2:53 PM, Caleb Welton wrote:
I have a topology of
KStream -> KTable -> KStream
```
final KStreamBuilder builder = new KStreamBuilder();
final KStream<String, MyThing> metricStream = builder.stream(ingestTopic);
final KTable<Windowed<String>, MyThing> myTable = metricStream
    .groupByKey(stringSerde, mySerde)
    .reduce(MyThing::merge,
            TimeWindows.of(...
```
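Filling in the truncated snippet, here is roughly what that topology looks like
under the 0.10.x API. This is a reconstruction, not the project's actual code:
`MyThing` is stubbed, the serdes, topic, and store names are placeholders, and
the 10-second window size comes from earlier in this thread.
```java
import java.util.concurrent.TimeUnit;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class TopologySketch {
    // Stand-in for the real value type; only the merge signature matters here.
    static class MyThing {
        MyThing merge(MyThing other) { return this; }
    }

    public static void main(String[] args) {
        final Serde<String> stringSerde = Serdes.String();
        final Serde<MyThing> mySerde = null; // placeholder: a real Serde<MyThing> is required

        final KStreamBuilder builder = new KStreamBuilder();
        final KStream<String, MyThing> metricStream =
                builder.stream(stringSerde, mySerde, "ingest-topic");  // hypothetical topic
        final KTable<Windowed<String>, MyThing> myTable = metricStream
                .groupByKey(stringSerde, mySerde)
                .reduce(MyThing::merge,
                        TimeWindows.of(TimeUnit.SECONDS.toMillis(10)), // 10-second windows
                        "metrics-store");                              // hypothetical store name
        myTable.toStream().print(); // inspect the windowed aggregates
        // (KafkaStreams construction/start omitted; see the config sketch above.)
    }
}
```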
You need to raise your OS open-file limits; something like this should work:
# /etc/security/limits.conf
* - nofile 65536
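After editing limits.conf, log in again (or restart the broker process) and
verify that the new limit actually took effect:
```
# prints the open-file limit for the current shell/user
ulimit -n
```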
On Fri, May 12, 2017 at 6:34 PM, Yang Cui wrote:
> Our Kafka cluster has been brought down by “java.io.IOException: Too
> many open files” three times in three weeks.
Hello,
I'm having trouble getting `vagrant/vagrant-up.sh --aws` to work properly.
The issue I'm having is as follows:
1. The Vagrant install and provisioning complete successfully.
2. SSHing into the cluster locally works, and running from there works fine.
3. Connecting to the cluster can be made to [...]
> [...] broker to be running. The benchmark will report
> various numbers; among them are the performance of the consumer and the
> performance of Streams reading.
>
> Thanks
> Eno
>
> > On 9 Sep 2016, at 18:19, Caleb Welton wrote:
> >
> > Same in both cases: [...]
Could you share your Kafka Streams configuration (i.e., StreamsConfig
properties you might have set before the test)?
Thanks
Eno
On Thu, Sep 8, 2016 at 12:46 AM, Caleb Welton wrote:
I have a question with respect to the KafkaStreams API.
I noticed during my prototyping work that my KafkaStreams application was
not able to keep up with the input on the stream, so I dug into it a bit and
found that it was spending an inordinate amount of time in
org.apache.kafka.common.network.S[...]
Hello,
I'm trying to understand best practices related to joining streams using
the Kafka Streams API.
I can configure the topology such that two sources feed into a single
processor:
topologyBuilder
    .addSource("A", stringDeserializer, itemDeserializer, "a-topic")
    .addSource("B", stringDeserializer, [...]