I remember observing this issue: whenever you received a new value and
tried to get the old aggregate value from the store, it always returned
null, and hence you never aggregated the old value with the new record.
I remember fixing that issue but cannot remember how. So I would
recommend:

1) Try compiling your app with the latest trunk of Apache Kafka (and
hence Kafka Streams) and see if this issue goes away. Note that there
are slight operator API changes: stream.countByKey() becomes
stream.groupBy().count().
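
For reference, here is a minimal sketch of that rename for a simple
count; the topic and store names are placeholders, and groupByKey() is
the grouping variant used when the key itself is not re-selected:

    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;
    import org.apache.kafka.streams.kstream.KTable;

    KStreamBuilder builder = new KStreamBuilder();
    KStream<String, String> stream = builder.stream("input-topic");

    // 0.10.0.x API:
    // KTable<String, Long> counts = stream.countByKey("Counts");

    // Equivalent on trunk, where grouping is an explicit step:
    KTable<String, Long> counts = stream.groupByKey().count("Counts");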

2) If that does not solve the problem, and if your counting is windowed,
i.e. countByKey(Windows.of(...)), then double-check the input records'
extracted timestamps and see whether they actually fall into different
window buckets on AWS (e.g. if they are based on wall-clock time)
whereas they fall into the same window when running locally.
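
One way to verify this is to plug in a logging timestamp extractor and
compare what it prints on AWS against your local run. A minimal sketch
against the 0.10.0.x TimestampExtractor interface (the class name is
hypothetical):

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.streams.processor.TimestampExtractor;

    // Hypothetical debugging extractor: logs the timestamp that Streams
    // will use for window assignment before returning it unchanged.
    public class LoggingTimestampExtractor implements TimestampExtractor {
        @Override
        public long extract(ConsumerRecord<Object, Object> record) {
            long ts = record.timestamp(); // producer/broker-assigned
            System.out.println("key=" + record.key() + " ts=" + ts);
            return ts;
        }
    }

    // Registered via the Streams config, e.g.:
    //   props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
    //             LoggingTimestampExtractor.class.getName());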


Guozhang




On Wed, Aug 3, 2016 at 4:39 PM, Srinidhi Muppalla <srinid...@trulia.com>
wrote:

> Hi Guozhang,
>
> I believe we are using RocksDB. We are not using the Processor API, just
> simple map and countByKey functions, so it is using the default KeyValue
> store.
>
> Thanks!
>
> Srinidhi
>
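
(For context, the setup described above, a map followed by countByKey
on the default RocksDB-backed key-value store, would look roughly like
the sketch below against the 0.10.0.x API; the topic names and the
re-keying logic are placeholders.)

    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;
    import org.apache.kafka.streams.kstream.KTable;

    KStreamBuilder builder = new KStreamBuilder();
    KStream<String, String> events = builder.stream("events-topic");
    KTable<String, Long> counts = events
        // Re-key each record by its status string (hypothetical logic).
        .map((k, v) -> new KeyValue<>(
                v.contains("SUCCESS") ? "SUCCESS" : "FAILURE", v))
        // countByKey() materializes the counts in the default
        // RocksDB-backed key-value store.
        .countByKey("StatusCounts");
    // Sink assumes a Long value serde is configured for this topic.
    counts.toStream().to("status-counts");
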
> Hello Srinidhi,
>
> Are you using RocksDB as well, like in the WordCountDemo, for your
> aggregation operator?
>
> Guozhang
>
>
> On Tue, Aug 2, 2016 at 5:20 PM, Srinidhi Muppalla <srinid...@trulia.com>
> wrote:
>
> > Hey All,
> >
> > We are having issues successfully storing and accessing a KTable on our
> > cluster, which happens to be on AWS. We are trying to store a KTable of
> > counts of 'success' and 'failure' strings, similar to the WordCountDemo
> > in the documentation. The Kafka Streams application that creates the
> > KTable works locally, but doesn't appear to be storing the state on our
> > cluster. Does anyone have any experience working with KTables on AWS, or
> > know what configs related to our Kafka brokers or Streams setup could be
> > causing this failure on our server but not on my local machine? Any
> > insight into what could be causing this issue would be helpful.
> >
> > Here is what the output topic we are writing the Ktable to looks like
> > locally:
> >
> > SUCCESS 1
> > SUCCESS 2
> > FAILURE 1
> > SUCCESS 3
> > FAILURE 2
> > FAILURE 3
> > FAILURE 4
> >
> > Here is what it looks like on our cluster:
> >
> > SUCCESS 1
> > SUCCESS 1
> > FAILURE 1
> > SUCCESS 1
> > FAILURE 1
> > FAILURE 1
> > FAILURE 1
> >
> > Thanks,
> > Srinidhi
> >
> >
>
>
> --
> -- Guozhang
>



-- 
-- Guozhang
