I was able to make this error disappear by upgrading my client library from
0.10.0.0 to 0.10.0.1
On Wed, Jan 18, 2017 at 10:40 AM, Ryan Thompson
wrote:
> Hello,
>
> I'm attempting to upgrade an application from 0.8 to 0.10 broker / client
> libs, and integrate kafka streams
Hello,
I'm attempting to upgrade an application from 0.8 to 0.10 broker / client
libs, and integrate kafka streams. I am currently using the following
producer / consumer configs:
Producer:
Properties props = new Properties();
props.put("bootstrap.servers", brokerList);
leted, but I'm not sure
yet.
Thanks,
Ryan
On Thu, Aug 11, 2016 at 9:23 PM, Ryan Thompson
wrote:
> Hello,
>
> I've implemented something quite similar to the SimpleConsumer example on
> https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
>
Hello,
I've implemented something quite similar to the SimpleConsumer example on
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
I'm using it to traverse a specific range of offsets.
I find that sometimes, in the middle of this traversal, I end up hitting an
"Offse
Hello,
Say I have a stream, and want to determine whether or not a given "density"
of records matches a given condition. For example, let's say I want to know
how many of the last 10 records have a numerical value greater than 100.
Does the kafka streams DSL (or processor API) provide a way to do this?
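As far as I know the DSL's windows are time based rather than count based, so "last 10 records" isn't something the DSL gives you directly, but the Processor API can do it if you keep that small window yourself. Below is a rough sketch along those lines; the class name and types are made up for the example, and the in-memory deque is kept per processor task (not per key) and is not fault tolerant, so a state store would be needed for that.

import java.util.ArrayDeque;
import java.util.Deque;

import org.apache.kafka.streams.processor.AbstractProcessor;

// Forwards, for every incoming record, how many of the last 10 values seen by
// this task were greater than 100.
public class DensityProcessor extends AbstractProcessor<String, Long> {

    private static final int WINDOW_SIZE = 10;
    private static final long THRESHOLD = 100L;

    private final Deque<Long> lastValues = new ArrayDeque<>();

    @Override
    public void process(String key, Long value) {
        lastValues.addLast(value);
        if (lastValues.size() > WINDOW_SIZE) {
            lastValues.removeFirst();   // drop the oldest value, keeping only the last 10
        }
        long matches = lastValues.stream().filter(v -> v > THRESHOLD).count();
        context().forward(key, matches);
    }
}

A processor like this could be wired into a topology with TopologyBuilder#addProcessor, or attached to a DSL stream via KStream#process.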
Hello,
I'm wondering if fault tolerant state management with kafka streams works
seamlessly if partitions are scaled up. My understanding is that this is
indeed a problem that stateful stream processing frameworks need to solve,
and that:
with samza, this is not a solved problem (though I also u