It is impossible to tell what is happening without the full logs.
>
> Thanks,
> Damian
>
>> On Mon, 7 Aug 2017 at 22:46 Shekar Tippur wrote:
>>
>> Damian,
>>
>> Thanks for pointing out the error. I had tried a different version of
>> initializing the
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:527)
On Fri, Aug 4, 2017 at 4:16 PM, Shekar Tippur wrote:
> Damian,
>
> I am getting a syntax error. I have responded on gist.
> Appreciate any inputs.
>
> - Shekar
>
> On Sat, Jul 29, 2017 at 1:57 AM, Damian Guy wrote:
>
>> Hi,
Damian,
I am getting a syntax error. I have responded on gist.
Appreciate any inputs.
- Shekar
On Sat, Jul 29, 2017 at 1:57 AM, Damian Guy wrote:
> Hi,
>
> I left a comment on your gist.
>
> Thanks,
> Damian
>
> On Fri, 28 Jul 2017 at 21:50 Shekar Tippur wrote:
>
> Cheers,
> Damian
> On Fri, 28 Jul 2017 at 19:22, Shekar Tippur wrote:
>
> > Thanks a lot, Damian.
> > I am able to see whether the join worked (using foreach). I tried to add
> > the logic to query the store after starting the streams:
> > Looks like the
The state stores won't have been created until the application is up and running, and
> they are dependent on the underlying partitions.
>
> To check that a stateful operation has produced a result you would normally
> add another operation after the join, i.e.,
> stream.join(other,...).foreach(..) or stream.join(other,...).
One more thing: how do we check whether the stateful join operation resulted in
a KStream with some values in it (the size of the KStream)? How do we check the
contents of a KStream?
- S
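For reference, a minimal sketch of the pattern Damian describes: chain a terminal foreach onto the join so every joined record is printed as it is produced. The helper below is only illustrative; the generic types stand in for whatever key/value classes are actually used.

/ CODE /
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.ValueJoiner;

class JoinInspection {
    // If nothing is printed here, the join emitted no records
    // (e.g. the keys never matched, or the serdes are wrong).
    static <K, V1, V2, R> void joinAndPrint(KStream<K, V1> stream,
                                            KTable<K, V2> table,
                                            ValueJoiner<V1, V2, R> joiner) {
        stream.join(table, joiner)
              .foreach((key, value) -> System.out.println(key + " -> " + value));
    }
}
/ END CODE /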
On Thu, Jul 27, 2017 at 2:06 PM, Shekar Tippur wrote:
> Damian,
>
> Thanks a lot for pointing out.
>
>
/ END CODE /
- S
On Thu, Jul 27, 2017 at 10:05 AM, Damian Guy wrote:
>
> It is part of the ReadOnlyKeyValueStore interface:
>
>
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlyKeyValueStore.java
>
> On Thu, 2
> This would return an iterator containing all of the values (inclusive) from
> "test_host" -> "test_hosu".
>
>> On Thu, 27 Jul 2017 at 14:48 Shekar Tippur wrote:
>>
>> Can you please point me to an example? Can from and to be a string?
>>
>
>> On Wed, 26 Jul 2017 at 22:34 Shekar Tippur wrote:
>>
>> Hello,
>>
>> I am able to get the KStream-to-KTable join to work. I have some use cases
>> where the key is not always an exact match.
>> I was wondering if there is a way to look up
Hello,
I am able to get the KStream-to-KTable join to work. I have some use cases
where the key is not always an exact match.
I was wondering if there is a way to look up keys based on a regex.
For example,
I have these entries for a KTable:
test_host1,{ "source": "test_host", "UL1": "test1_l1" }
test_
The failure when giving it the non-null keyed record seems to be
> because you are using "SnowServerDeserialzer" (is it set as the default
> key deserializer?), which expects a SnowServerPOJOClass while the key "joe"
> is typed as String. You need to override the key deserializer when constructing
KTable cache = builder.table(Serdes.String(),
> rawSerde, "cache", "local-cache");
>
>
> Guozhang
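As an aside, a tiny sketch of the two places that key serde can come from in the 0.10.x API; the config key name below is the 0.10.x one ("key.serde") and the setup around it is an assumption, not taken from this thread.

/ CODE /
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;

class KeySerdeSketch {
    static Properties streamsConfig() {
        Properties props = new Properties();
        // Option 1: make String the default key serde for the whole topology, so a
        // POJO deserializer is never applied to plain String keys such as "joe".
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        // Option 2: override per operation by passing Serdes.String() explicitly,
        // as in: builder.table(Serdes.String(), rawSerde, "cache", "local-cache");
        return props;
    }
}
/ END CODE /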
>
>
> On Sun, Jun 25, 2017 at 11:30 PM, Shekar Tippur wrote:
>
> > Guozhang
> >
> > I am using 0.10.2.1 version
> >
> > - Shekar
> >
> &
This issue is similar to the other thread you asked about on the mailing list.
>
> Also, could you tell us which Kafka Streams version you are using?
>
>
> Guozhang
>
>
> On Sun, Jun 25, 2017 at 12:45 AM, Shekar Tippur wrote:
>
> > Hello,
> >
> > I am having trouble impl
.StreamTask.addRecords(StreamTask.java:158)
at
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:605)
at
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:361)
- Shekar
On Sun, Jun 25, 2017 at 10:36 AM, Guozhang Wang wrote:
> Hi Shekar,
Hello,
I am having trouble implementing a stream-to-table join.
I have two POJOs, one representing the stream and one representing the table data
structure. The raw topic contains the stream data and the cache topic contains the
table data. The join is not happening, since the print statement is not being called.
Appreciate any pointers.
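For anyone following along, here is a minimal, self-contained sketch of a stream-to-table join against the "raw" and "cache" topics in the 0.10.2-style API. The String serdes and the value-joining lambda are placeholders; with POJOs you would plug in your own serdes, and a mismatch between the key types or serdes on the two sides is a common reason the print/foreach is never called.

/ CODE /
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class StreamTableJoinSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "stream-table-join-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        KStreamBuilder builder = new KStreamBuilder();

        // "raw" carries the stream records and "cache" backs the table; both must be
        // keyed with the same key type (String here) for the join to match anything.
        KStream<String, String> raw = builder.stream(Serdes.String(), Serdes.String(), "raw");
        KTable<String, String> cache = builder.table(Serdes.String(), Serdes.String(), "cache", "local-cache");

        raw.join(cache, (rawValue, cacheValue) -> rawValue + " | " + cacheValue)
           .foreach((key, joined) -> System.out.println("joined: " + key + " -> " + joined));

        KafkaStreams streams = new KafkaStreams(builder, props);
        streams.start();

        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
/ END CODE /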
+1
Thanks
Is there any way I can get a small working example to start with?
- Shekar
On Wed, Jul 13, 2016 at 10:39 AM, Shekar Tippur wrote:
> Dean,
>
> I am having trouble getting this to work.
>
> import akka.actor.ActorSystem;
> import akka.kafka.scaladsl.Producer;
> import akka.
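The snippet above is cut off, so for reference here is a minimal sketch of producing to Kafka from Akka Streams using the Java DSL of the akka-stream-kafka (reactive-kafka) connector. The topic name "test", the broker address, and the materializer setup are assumptions, not taken from this thread.

/ CODE /
import akka.actor.ActorSystem;
import akka.kafka.ProducerSettings;
import akka.kafka.javadsl.Producer;
import akka.stream.ActorMaterializer;
import akka.stream.Materializer;
import akka.stream.javadsl.Source;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ReactiveProducerSketch {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("reactive-kafka-sketch");
        Materializer materializer = ActorMaterializer.create(system);

        ProducerSettings<String, String> settings =
            ProducerSettings.create(system, new StringSerializer(), new StringSerializer())
                            .withBootstrapServers("localhost:9092");

        // Push a handful of records through an Akka Streams source into the "test" topic.
        Source.range(1, 10)
              .map(i -> new ProducerRecord<String, String>("test", i.toString(), "xyz"))
              .runWith(Producer.plainSink(settings), materializer);
    }
}
/ END CODE /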
.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 4 more
On Sat, Jul 2, 2016 at 9:42 PM, Shekar Tippur wrote:
> Dean,
>
> Thanks a lot for the link. I am going through the documentation.
>
> - Shekar
>
> On Wed, Jun 29, 2016
<http://twitter.com/deanwampler>
> http://polyglotprogramming.com
>
> On Wed, Jun 29, 2016 at 11:03 AM, Shekar Tippur wrote:
>
> > Thanks for the suggestion Lohith. Will try that and provide a feedback.
> >
> > - Shekar
> >
> > On Tue, Jun 28, 2016
read from it. With Cassandra
> TTL, the row will be deleted after TTL is passed. No manual cleanup is
> required.
>
> Best regards / Mit freundlichen Grüßen / Sincères salutations
> M. Lohith Samaga
>
>
>
> -Original Message-
> From: Shekar Tippur [mailto:ctip...@gmai
make each stage of your pipeline
> > write to Cassandra (or another DB), and your API will read from it. With
> > Cassandra TTL, the row will be deleted after the TTL has passed. No manual
> > cleanup is required.
> >
> > Best regards / Mit freundlichen Grüßen
I am looking at building a reactive API on top of Kafka.
This API produces events to a Kafka topic. I want to add a unique session ID
to the payload.
The data gets transformed as it goes through different stages of a
pipeline. I want to specify a final topic where I want the API to know that
the pro
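The message is cut off here, but on the session-id part, a tiny sketch of tagging each produced event with a unique session ID so downstream stages can correlate it; the topic name "events" and the JSON shape are made up for illustration.

/ CODE /
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SessionIdProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // One UUID per API call; every stage of the pipeline carries it along,
            // so the final topic can be matched back to the originating request.
            String sessionId = UUID.randomUUID().toString();
            String payload = String.format(
                "{\"sessionId\":\"%s\",\"source\":\"api\",\"event\":\"created\"}", sessionId);
            producer.send(new ProducerRecord<>("events", sessionId, payload));
        }
    }
}
/ END CODE /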
.java:140)
Caused by: org.apache.kafka.common.errors.TimeoutException: Batch
containing 1 record(s) expired due to timeout while requesting
metadata from brokers for test1-0
On Sun, Jun 26, 2016 at 2:19 AM, Shekar Tippur wrote:
> Enrico,
>
> I didn't quite get it. Can you please elaborate?
>
"));
> > producer.send(new ProducerRecord("test",
> > Integer.toString(i),"xyz"));
> > producer.send(new ProducerRecord("test",
> > Integer.toString(i),"xyz"));
> > producer.send(new ProducerRecord("
producer.send(new ProducerRecord("test",
Integer.toString(i),"xyz"));
producer.send(new ProducerRecord("test",
Integer.toString(i),"xyz"));
producer.send(new ProducerRecord("test",
Integer.toString(i),"xyz"));
}
On Sat, Jun
On Jun 24, 2016, at 13:49, Shekar Tippur wrote:
>
> Interesting. So if we introduce a sleep after the first send, then it produces
> properly?
>
> Here is my log. Clearly there is a connection reset.
>
> [2016-06-24 13:42:48,620] ERROR Closing socket for /127.0
ntId] DEBUG
> o.a.kafka.common.metrics.Metrics - Added sensor with name topic.test.bytes
> 16:20:25.043 [kafka-producer-network-thread | testClientId] DEBUG
> o.a.kafka.common.metrics.Metrics - Added sensor with name
> topic.test.compression-rate
> 16:20:25.043 [kafka-producer-ne
I just see this in the kafka.log file
[2016-06-24 13:27:14,346] INFO Closing socket connection to /127.0.0.1.
(kafka.network.Processor)
On Fri, Jun 24, 2016 at 1:05 PM, Shekar Tippur wrote:
> Hello,
>
> I have a simple Kafka producer directly taken off of
>
>
> https://kafka.apac
Hello,
I have a simple Kafka producer directly taken off of
https://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/producer/KafkaProducer.html
I have changed the bootstrap.servers property.
props.put("bootstrap.servers", "localhost:9092");
I don't see any events added to the t
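Since send() is asynchronous, two things are worth checking when nothing shows up in the topic: pass a callback so broker-side errors (like the metadata timeout above) become visible, and close or flush the producer before the process exits. A minimal sketch, assuming the same "test" topic and localhost broker as in the thread:

/ CODE /
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerDebugSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 0; i < 10; i++) {
            // The callback surfaces errors that are otherwise swallowed,
            // because send() only returns a Future.
            producer.send(new ProducerRecord<>("test", Integer.toString(i), "xyz"),
                (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();
                    } else {
                        System.out.printf("wrote to %s-%d at offset %d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                    }
                });
        }
        // Without close() (or flush()) the JVM can exit before the batched
        // records are actually sent, which looks like an empty topic.
        producer.close();
    }
}
/ END CODE /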