Congratulations Bill!
On Wed, 13 Feb 2019 at 16:51, Satish Duggana
wrote:
> Congratulations Bill!
>
> On Thu, Feb 14, 2019 at 6:41 AM Marcelo Barbosa
> wrote:
> >
> > Wow! Congrats Bill!
> > Cheers,
> > Barbosa
> > On Wednesday, 13 February 2019 at 23:03:54 BRST, Guozhang
> > Wang wrote:
The return value from the `TransformSupplier` should always be a `new
YourTransformer(..)` as there will be one for each task and they are
potentially processed on multiple threads.
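For illustration, a minimal sketch of that pattern (topic name, types, and the transformation itself are assumptions, and the kafka-streams dependency is required):

```java
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;

public class TransformExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> stream = builder.stream("input-topic");
        // The supplier lambda returns a NEW Transformer on every call: Kafka
        // Streams calls it once per task, and tasks may run on multiple
        // threads, so returning a shared instance would not be safe.
        stream.transform(() -> new Transformer<String, String, KeyValue<String, String>>() {
            @Override
            public void init(ProcessorContext context) { }

            @Override
            public KeyValue<String, String> transform(String key, String value) {
                return KeyValue.pair(key, value.toUpperCase());
            }

            @Override
            public void close() { }
        });
    }
}
```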
On Mon, 24 Sep 2018 at 16:07 Stéphane. D. wrote:
> Hi,
>
> We just stumbled upon an issue with KStream.transform()
The count is stored in RocksDB which is persisted to disk. It is not
in-memory unless you specifically use an InMemoryStore.
On Wed, 1 Aug 2018 at 12:53 Kyle.Hu wrote:
> Hi, bosses:
> I have read the word count demo of Kafka Stream API, it is cool that
> the Kafka Stream keeps the status,
>
>
>
>
>
>
> Apache Kafka is in use at large and small companies worldwide, including
>
> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
>
> Target, The New York Times, Uber, Yelp, and Zalando, among others.
>
>
>
>
>
>
Hi,
When you create your window store do you have `retainDuplicates` set to
`true`? i.e., assuming you use `Stores.persistentWindowStore(...)` is the
last param `true`?
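For reference, a sketch of the call being asked about (store name, retention, and sizes are illustrative assumptions; signature as in the 1.1-era API):

```java
import java.util.concurrent.TimeUnit;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.WindowBytesStoreSupplier;

public class WindowStoreExample {
    public static void main(String[] args) {
        WindowBytesStoreSupplier supplier = Stores.persistentWindowStore(
                "dedup-store",                   // store name (assumed)
                TimeUnit.HOURS.toMillis(1),      // retention period, ms
                3,                               // number of segments
                TimeUnit.MINUTES.toMillis(5),    // window size, ms
                true);                           // retainDuplicates
    }
}
```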
Thanks,
Damian
On Mon, 2 Jul 2018 at 17:29 Christian Henry
wrote:
> We're using the latest Kafka (1.1.0). I'd like to note th
Hi Raymond,
If you want all messages delivered in order then you should create the
topic with 1 partition. If you want ordering guarantees for messages with
the same key, then you need to produce the messages with a key.
Using the console producer you can do that by adding
--property "parse.key=tr
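Completing that thought as a sketch, an invocation might look like this (broker address, topic name, and separator are assumptions):

```shell
# parse.key=true splits each input line into key and value at key.separator.
kafka-console-producer.sh --broker-list localhost:9092 --topic my-topic \
  --property "parse.key=true" \
  --property "key.separator=:"
# Then type lines such as:
#   user42:hello
```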
> 10s - 20s
> 20s - 30s
> 30s - 40s
> If this is correct, then is there another common way to handle a scenario
> like the one above?
>
> thanks in advance,
>
> Peter
>
>
>
>
>
>
>
> On Fri, May 18, 2018 at 6:27 PM, Damian Guy wrote:
>
>> Hi,
Hi,
In order to join the two streams they need to have the same key and the
same number of partitions in each topic. If they don't have the same key
you can force a repartition by using:
`stream.selectKey(KeyValueMapper)`
if the number of partitions is also different you could do:
`stream.select
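As a sketch of the re-keying step (the `Order` type, its field, and the topic names are hypothetical):

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class RepartitionExample {
    static class Order {
        String customerId;
        String getCustomerId() { return customerId; }
    }

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<byte[], Order> orders = builder.stream("orders");
        // Re-key so both join inputs share the same key; this marks the
        // stream for repartitioning.
        KStream<String, Order> rekeyed =
                orders.selectKey((key, order) -> order.getCustomerId());
        // If the partition counts also differ, route through an intermediate
        // topic pre-created with the same partition count as the other stream.
        KStream<String, Order> repartitioned = rekeyed.through("orders-by-customer");
    }
}
```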
Hi,
I think it **might** be related to this:
final Serializer httpSessionSerializer = new
JsonPOJOSerializer<>();
serdeProps.put("JsonPOJOClass", Http.class);
httpSessionSerializer.configure(serdeProps, false);
final Deserializer httpSessionDeserializer = new
JsonPOJODe
9 runs but the connect tool
> > throws a bunch of exceptions (https://www.codepile.net/pile/yVg8XJB8)
> > -: Connect quickstart on Windows fails (Java 8:
> > https://www.codepile.net/pile/xJGra6BP, Java 9:
> > https://www.codepile.net/pile/oREYeORK)
> >
> > Given
Java 9 are not breaking the functionality, my vote
> is a +1 (non-binding).
>
> Thanks.
> --Vahid
>
>
>
>
> From: Damian Guy
> To: d...@kafka.apache.org, users@kafka.apache.org,
> kafka-clie...@googlegroups.com
> Date: 03/15/2018 07:55 AM
> Subject:
Hello Kafka users, developers and client-developers,
This is the fourth candidate for release of Apache Kafka 1.1.0.
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75957546
A few highlights:
* Significant Controller improvements (much faster and session expiration
edge cases f
ail is about 1.0 release. You may
> want
> > to replace that with 1.1.0 release plan link[1].
> >
> > 1 - https://cwiki.apache.org/confluence/pages/viewpage.
> > action?pageId=75957546
> >
> > Thanks,
> > Satish.
> >
> > On Wed, Mar 14, 2018 a
Hello Kafka users, developers and client-developers,
This is the third candidate for release of Apache Kafka 1.1.0.
This is a minor version release of Apache Kafka. It includes 29 new KIPs.
Please see the release plan for more details:
https://cwiki.apache.org/confluence/pages/viewpage.action?page
> released yet, it might be better to have 3.4.10 included instead.
>
> Jeff
> Heroku
>
>
> On Tue, Mar 6, 2018 at 1:19 PM, Ted Yu wrote:
>
> > +1
> >
> > Checked signature
> > Ran test suite - apart from flaky testMetricsLeak, other tests passed.
>
dical, asynchronous calls to
> streams.store, which, if the problem occurs, always result in exceptions
> being thrown. I expected to retrieve the local data this way.
>
> On 7 March 2018 at 16:20, Damian Guy wrote:
>
> > If you have multiple streams instances then the st
If you have multiple streams instances then the store might only be
available on one of the instances. Using `KafkaStreams.store(..)` will only
locate stores that are currently accessible by that instance. If you need
to be able to locate stores on other instances, then you should probably
have a r
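A sketch of locating a store on another instance (store and key names, the port, and the RPC step are assumptions):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.apache.kafka.streams.state.StreamsMetadata;

public class RemoteStoreLookup {
    static Long lookup(KafkaStreams streams, String key) {
        // Find which instance hosts the store partition for this key.
        StreamsMetadata metadata = streams.metadataForKey(
                "my-store", key, Serdes.String().serializer());
        HostInfo thisHost = new HostInfo("localhost", 8080); // application.server value, assumed
        if (thisHost.equals(metadata.hostInfo())) {
            ReadOnlyKeyValueStore<String, Long> store =
                    streams.store("my-store", QueryableStoreTypes.keyValueStore());
            return store.get(key);
        }
        // Otherwise forward the query to metadata.host():metadata.port()
        // over your own RPC layer (not provided by Kafka Streams).
        return null;
    }
}
```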
tests: https://builds.apache.org/job/kafka-1.1-jdk7/68
System tests: https://jenkins.confluent.io/job/system-test-kafka/job/1.1/30/
/**
Thanks,
Damian Guy
>> deleteLogDirEventNotifications
>> path to be deleted correctly from Zookeeper. The patch should be committed
>> later today.
>>
>> Thanks,
>>
>> Jun
>>
>> On Thu, Mar 1, 2018 at 1:47 PM, Damian Guy wrote:
>>
>>> Thanks Jason.
Thanks Jason. Assuming the system tests pass i'll cut RC1 tomorrow.
Thanks,
Damian
On Thu, 1 Mar 2018 at 19:10 Jason Gustafson wrote:
> The fix has been merged to 1.1.
>
> Thanks,
> Jason
>
> On Wed, Feb 28, 2018 at 11:35 AM, Damian Guy wrote:
>
> > Hi Jason,
; org.apache.kafka.connect.runtime.standalone.StandaloneHerder.
> > putConnectorConfig(StandaloneHerder.java:164)
> > >>
> > >> at
> > >>
> > >> org.apache.kafka.connect.cli.ConnectStandalone.main(
> > ConnectStandalone.java:107)
Hello Kafka users, developers and client-developers,
This is the first candidate for release of Apache Kafka 1.1.0.
This is a minor version release of Apache Kafka. It includes 29 new KIPs.
Please see the release plan for more details:
https://cwiki.apache.org/confluence/pages/viewpage.action?page
+1
Ran tests, verified streams quickstart works
On Tue, 13 Feb 2018 at 17:52 Damian Guy wrote:
> Thanks Ewen - i had the staging repo set up as profile that i forgot to
> add to my maven command. All good.
>
> On Tue, 13 Feb 2018 at 17:41 Ewen Cheslack-Postava
> wro
't been published there yet.
>
> If that is configured, more compete maven output would be helpful to track
> down where it is failing to resolve the necessary archetype.
>
> -Ewen
>
> On Tue, Feb 13, 2018 at 3:03 AM, Damian Guy wrote:
>
> > Hi Ewen,
> >
Hi Ewen,
I'm trying to run the streams quickstart and I'm getting:
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-archetype-plugin:3.0.1:generate
(default-cli) on project standalone-pom: The desired archetype does not
exist (org.apache.kafka:streams-quickstart-java:1.0.1)
Something
There is an overload `leftJoin(KTable, ValueJoiner, Joined)`.
Joined is where you specify the Serde for the KTable and for the resulting
type. We don't need the Serde for the stream at this point as the value has
already been deserialized.
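A sketch of that overload (topics, types, and serdes are assumptions; `Joined.with` takes the key serde, the stream's value serde, and the table's value serde; 1.0-era imports):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.Consumed;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class JoinedExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, Long> stream =
                builder.stream("lhs-topic", Consumed.with(Serdes.String(), Serdes.Long()));
        KTable<String, String> table =
                builder.table("rhs-topic", Consumed.with(Serdes.String(), Serdes.String()));
        // The Joined argument carries the serdes the join may need for an
        // internal repartition; the stream's value is already deserialized.
        KStream<String, String> joined = stream.leftJoin(
                table,
                (streamValue, tableValue) -> streamValue + ":" + tableValue,
                Joined.with(Serdes.String(), Serdes.Long(), Serdes.String()));
    }
}
```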
HTH,
Damian
On Tue, 13 Feb 2018 at 05:39 Debasish Ghosh
w
Hi Brilly,
My initial guess is that it is the overhead of committing. Commit is
synchronous and you have the commit interval set to 50ms. Perhaps try
increasing it.
Thanks,
Damian
On Tue, 13 Feb 2018 at 07:49 TSANG, Brilly
wrote:
> Hi kafka users,
>
> I created a filtering stream with the Proc
This might be a good read for you:
https://www.confluent.io/blog/put-several-event-types-kafka-topic/
On Thu, 18 Jan 2018 at 20:57 Maria Pilar wrote:
> Hi everyone,
>
> I'm working on the configuration of the topics for the integration between
> one API and Data platform system. We have created
+1
On Thu, 18 Jan 2018 at 15:14 Bill Bejeck wrote:
> Thanks for the KIP.
>
> +1
>
> -Bill
>
> On Wed, Jan 17, 2018 at 9:09 PM, Matthias J. Sax
> wrote:
>
> > Hi,
> >
> > I would like to start the vote for KIP-247:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 247%3A+Add+public+
Congratulations Rajini!
On Thu, 18 Jan 2018 at 00:57 Hu Xi wrote:
> Congratulations, Rajini Sivaram. Very well deserved!
>
>
>
> From: Konstantine Karantasis
> Sent: 18 January 2018, 6:23
> To: d...@kafka.apache.org
> Cc: users@kafka.apache.org
> Subject: Re: [ANNOUNCE] N
Can't think of anyone more deserving! Congratulations Matthias!
On Sat, 13 Jan 2018 at 00:17, Ismael Juma wrote:
> Congratulations Matthias!
>
> On 12 Jan 2018 10:59 pm, "Guozhang Wang" wrote:
>
> > Hello everyone,
> >
> > The PMC of Apache Kafka is pleased to announce Matthias J. Sax as our
> > n
Did you stop the broker before stopping zookeeper?
On Wed, 10 Jan 2018 at 10:38 Ted Yu wrote:
> I think that is the default signal.
> From the script:
>
> SIGNAL=${SIGNAL:-TERM}
>
> FYI
>
> On Wed, Jan 10, 2018 at 2:35 AM, Sam Pegler <
> sam.peg...@infectiousmedia.com>
> wrote:
>
> > Have you tri
,1,2
>
> Or
>
> N2
> assigned partitions: 3,4,1
> standby partitions: 2,5,6
>
> N3
> assigned partitions: 5,6,2
> standby partitions: 1,3,4
>
> -Sameer.
>
> On Tue, Jan 9, 2018 at 2:27 PM, Damian Guy wrote:
>
> > On Tue, 9 Jan 2018 at 07:42 Samee
State Store restoration is done on the same thread as processing. It is
actually interleaved with processing, so we keep the poll time small so
that if there is no data immediately available we can continue to process
data from other running tasks.
On Tue, 9 Jan 2018 at 08:03 Sameer Kumar wrote:
On Tue, 9 Jan 2018 at 07:42 Sameer Kumar wrote:
> Hi,
>
> I would like to understand how does rebalance affect state stores
> migration. If I have a cluster of 3 nodes, and 1 goes down, the partitions
> for node3 gets assigned to node1 and node2, does the rocksdb on node1/node2
> also starts upda
topic
> > and rerun.
> >
> > Thank you already for the insights!
> > -wim
> >
> > On Fri, 15 Dec 2017 at 14:08 Damian Guy wrote:
> >
> >> Hi,
> >>
> >> It is likely due to the timestamps you are extracting and using as the
> >
Hi,
It is likely due to the timestamps you are extracting and using as the
record timestamp. Kafka uses the record timestamps for retention. I suspect
this is causing your segments to roll and be deleted.
Thanks,
Damian
On Fri, 15 Dec 2017 at 11:49 Wim Van Leuven
wrote:
> Hello all,
>
> We are
Hi Artur,
KafkaStreams 0.10.0.0 is quite old and a lot has changed and been fixed
since then. If possible, I'd recommend upgrading to at least 0.11.0.2 or 1.0.
For joins you need to ensure that the topics have the same number of
partitions (which they do) and that they are keyed the same.
Thanks,
Hi,
You don't need to set the serde until you do another operation that
requires serialization, i.e., if you followed the join with a `to()`,
`groupBy()` etc, you would pass in the serde to that operation.
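For example, a sketch with assumed names: the serdes are handed to whichever downstream operation actually serializes:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.Serialized;

public class DownstreamSerdeExample {
    static void writeOut(KStream<String, Long> joined) {
        // Serdes are passed where serialization happens, e.g. writing to a topic...
        joined.to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));
        // ...or grouping (which may trigger a repartition):
        joined.groupByKey(Serialized.with(Serdes.String(), Serdes.Long()));
    }
}
```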
Thanks,
Damian
On Thu, 16 Nov 2017 at 10:53 sy.pan wrote:
> Hi, all:
>
> Recently I have
afka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
>> at
>> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
>
>
> Only when I change the key of the first stream to Array[Byte], things
> work ok .. li
Hi,
That shouldn't be a problem, the inner most store is of type
`KeyValueStore`, however the outer store will be
`KeyValueStore`.
It should work fine.
Thanks,
Damian
On Wed, 15 Nov 2017 at 08:37 Debasish Ghosh
wrote:
> Hello -
>
> In my Kafka Streams 0.11 application I have the following tran
Hi,
This KIP didn't make it into 1.0, so it can't be done at the moment.
Thanks,
Damian
On Mon, 13 Nov 2017 at 14:00 Artur Mrozowski wrote:
> Hi,
> I wonder if anyone could shed some light on how to implement CoGroup in
> Kafka Streams in current version 1.0, as mentioned in this blog post
>
Hi,
The configurations apply to all streams consumed within the same streams
application. There is no way of overriding it per input stream.
Thanks,
Damian
On Mon, 13 Nov 2017 at 04:49 Boris Lublinsky
wrote:
> I am writing Kafka Streams implementation (1.0.0), for which I have 2
> input stream
Hi Ranjit, it sounds like you might want to use a global table for this.
You can use StreamsBuilder#globalTable(String, Materialized) to create the
global table. You could do something like:
KeyValueBytesStoreSupplier supplier =
Stores.inMemoryKeyValueStore("global-store");
Materialized>
materiali
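Filling that idea out as a sketch (topic, store name, and serdes are assumptions):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueBytesStoreSupplier;
import org.apache.kafka.streams.state.Stores;

public class GlobalTableExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // Back the global table with an in-memory store instead of RocksDB.
        KeyValueBytesStoreSupplier supplier =
                Stores.inMemoryKeyValueStore("global-store");
        GlobalKTable<String, String> global = builder.globalTable(
                "reference-topic",
                Materialized.<String, String>as(supplier)
                        .withKeySerde(Serdes.String())
                        .withValueSerde(Serdes.String()));
    }
}
```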
dIn, Netflix, Pinterest, Rabobank,
> Target, The New York Times, Uber, Yelp, and Zalando, among others.
>
>
> A big thank you for the following 108 contributors to this release!
>
> Abhishek Mendhekar, Xi Hu, Andras Beni, Andrey Dyachkov, Andy Chambers,
> Apurva Mehta, Armin Braun,
Count will always use a StateStore, but if you want you can use an InMemory
store if you don't want a persistent store. You can do this by using the
overloaded `count(StateStoreSupplier)` method. You would use
`Stores.create(name).inMemory()...` to create the inmemory store
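A sketch using the pre-1.0 builder described above (store and stream names assumed):

```java
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.processor.StateStoreSupplier;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class InMemoryCountExample {
    static KTable<String, Long> count(KStream<String, String> words) {
        // Build an in-memory (non-persistent) store to back the count.
        StateStoreSupplier<KeyValueStore> countStore = Stores.create("counts")
                .withStringKeys()
                .withLongValues()
                .inMemory()
                .build();
        return words.groupByKey().count(countStore);
    }
}
```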
On Wed, 1 Nov 2017 at 1
246e83da992b3e725
> 2.https://gist.github.com/Pk007790/a05226007ca90cdd36c362d09d19bda6.
>
> On Tue, Oct 24, 2017 at 3:29 PM, Damian Guy wrote:
>
> > It would depend on what your topology looks like, which you haven't shown
> > here. But there may be internal topics g
.
On Mon, 30 Oct 2017 at 15:48 Sameer Kumar wrote:
> Actually I am using Key Value store, I do use join as part of my DAG(until
> for the same has been set at 240 mins). The sink processor is key-value, is
> there any option to control it.
>
> -Sameer.
>
> On Mon, Oct 30, 2017 a
The topics in question are both changelogs for window stores. The retention
period for them is calculated as the Window retention period, which is the
value that is passed to `JoinWindows.until(...)` (default is 1 day) plus
the value of the config
StreamsConfig.WINDOW_STORE_CHANGE_LOG_ADDITIONAL_RE
It would depend on what your topology looks like, which you haven't shown
here. But there may be internal topics generated due to repartitioning,
which would cause the extra tasks.
If you provide the topology we would be able to tell you.
Thanks,
Damian
On Tue, 24 Oct 2017 at 10:14 pravin kumar
Hi Tony,
The issue is that the GlobalStore doesn't use the Processor when restoring
the state. It just reads the raw records from the underlying topic. You
could work around this by doing the processing and writing to another
topic. Then use the other topic as the source for your global-store.
It
Hi Ahmad,
>1. Given SessionTime can continue to expand the window that is
>considered part of the same session, i.e., it's based on data arriving
> for
>that key. What happens with retention time?
As the session expands the data for the session will continue to be
retained as it is
> > > > > >
> > > > > > > Thank you for responding so quickly. This is the topology. I've
> > > > > > simplified
> > > > > > > it a bit, but these are the steps it goes through, not sure if
> > that
> > > > is
> > &
Hi Chris,
You can only join on the key of the table, so i don't think this would work
as is. Also, the global table is updated in a different thread and there is
no guarantee that it would have been updated before the purchase.
Perhaps you could do it by making the key of the product table versio
Hi Johan,
Do you have any logs? The state store restoration changed significantly in
0.11.0.1. If you could get some logs at trace level, that would be useful.
Also if you could provide your topology (removing anything
proprietary/sensitive).
Thanks,
Damian
On Tue, 17 Oct 2017 at 05:55 Johan Gen
Hi,
No, context.forward() always needs to be called from the StreamThread. If
you call it from another thread the behaviour is undefined and in most
cases will be incorrect, likely resulting in an exception.
On Tue, 10 Oct 2017 at 09:04 Murad Mamedov wrote:
> Hi, here is the question:
>
> Transf
Hi,
No that isn't supported.
Thanks,
Damian
On Fri, 6 Oct 2017 at 04:18 Stas Chizhov wrote:
> Hi
>
> Is there a way to serve read read requests from standby replicas?
> StreamsMeatadata does not seem to provide standby end points as far as I
> can see.
>
> Thank you,
> Stas
>
If you are using the Confluent schema registry then they will be cached by
the SchemaRegistryClient.
Thanks,
Damian
On Tue, 3 Oct 2017 at 09:00 Ted Yu wrote:
> I did a quick search in the code base - there doesn't seem to be caching as
> you described.
>
> On Tue, Oct 3, 2017 at 6:36 AM, Kristop
You can set ProducerConfig.RETRIES_CONFIG in your StreamsConfig, i.e,
Properties props = new Properties();
props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
...
On Fri, 29 Sep 2017 at 13:17 Sameer Kumar wrote:
> I guess once stream app are enabled exactly-once, producer idempotence g
Hi Roshan,
KafkaStreams apps run as a client application. It does not run on the
broker. You develop an application and give it an `application.id` - you
deploy however many instances of that application you like and they all
share the same topology. I suggest you take a look at the docs here:
ht
ire log. Hope it helps.
>
> Ara.
>
>
> On Sep 25, 2017, at 7:59 AM, Damian Guy wrote:
>
> Hi, is that the complete log? It looks like there might be 2 tasks that are
> still restoring:
> 2017-09-22 14:08:09 DEBUG AssignedTasks:90 - stream-thread
> [argyle-streams-fp-Str
Hi, is that the complete log? It looks like there might be 2 tasks that are
still restoring:
2017-09-22 14:08:09 DEBUG AssignedTasks:90 - stream-thread
[argyle-streams-fp-StreamThread-6] transitioning stream task 1_18 to
restoring
2017-09-22 14:08:09 DEBUG AssignedTasks:90 - stream-thread
[argyle-s
; On Fri, Sep 15, 2017 at 2:31 PM, Damian Guy wrote:
>
> > Hi,
> >
> > Do you have the logs for the other instance?
> >
> > Thanks,
> > Damian
> >
> > On Fri, 15 Sep 2017 at 07:19 dev loper wrote:
> >
> > > Dear Kafka Users,
Hi,
Do you have the logs for the other instance?
Thanks,
Damian
On Fri, 15 Sep 2017 at 07:19 dev loper wrote:
> Dear Kafka Users,
>
> I am fairly new to Kafka Streams . I have deployed two instances of Kafka
> 0.11 brokers on AWS M3.Xlarge instances. I have created a topic with 36
> partitions
/www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/
kafka_2.12-0.11.0.1.tgz
<https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.0/kafka_2.12-0.11.0.0.tgz>
>*
A big thank you for the following 33 contributors to this release!
Apurva Mehta, Bill Bejeck, Colin P. Mccabe, Damian Guy, Derrick Or,
ava
│ └── WordCount.java
└── resources
└── log4j.properties
Doesn't render properly - at least for me.
On Mon, 11 Sep 2017 at 09:08 Damian Guy wrote:
> Hi Guozhang, from what i'm looking at the {{fullDotVersion}} is replaced
>
> Guozhang
>
>
>
> On Thu, Sep 7, 2017 at 2:20 AM, Magnus Edenhill
> wrote:
>
> > +1 (non-binding)
> >
> > Verified with librdkafka regression test suite
> >
> > 2017-09-06 11:52 GMT+02:00 Damian Guy :
> >
> > > Resending as i wa
The table shows what happens when you get null values for a key.
On Fri, 8 Sep 2017 at 12:31 Kamal Chandraprakash <
kamal.chandraprak...@gmail.com> wrote:
> Hi Kafka Users,
>
> KTable-KTable Join Semantics is explained in detailed [here][1]. But,
> it's not clear when the input recor
Resending as i wasn't part of the kafka-clients mailing list
On Tue, 5 Sep 2017 at 21:34 Damian Guy wrote:
> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 0.11.0.1.
>
> This is a bug fix release and i
Hello Kafka users, developers and client-developers,
This is the first candidate for release of Apache Kafka 0.11.0.1.
This is a bug fix release and it includes fixes and improvements from 49
JIRAs (including a few critical bugs).
Release notes for the 0.11.0.1 release:
http://home.apache.org/~d
Thanks Sameer, yes this looks like a bug. Can you file a JIRA?
On Mon, 4 Sep 2017 at 12:23 Sameer Kumar wrote:
> Hi,
>
> I am using InMemoryStore along with GlobalKTable. I came to realize that I
> was losing on data once I restart my stream application while it was
> consuming data from kafka t
Hi,
If you can, then I'd recommend upgrading to a newer version. As you said,
many bugs have been fixed since 0.10.0.1.
On Wed, 23 Aug 2017 at 05:08 Balaprassanna Ilangovan <
balaprassanna1...@gmail.com> wrote:
> Hi,
>
> I have the following three question regarding Apache Kafka streams.
>
> 1. I a
Hi,
If the userData value is null then that would usually mean that there
wasn't a record with the provided key in the global table. So you should
probably check if you have the expected data in the global table and also
check that your KeyMapper is returning the correct key.
Thanks,
Damian
On
Duy, if it is in your logic then you need to handle the exception yourself.
If you don't then it will bubble out and kill the thread.
On Fri, 18 Aug 2017 at 10:27 Duy Truong wrote:
> Hi Eno,
>
> Sorry for late reply, it's not a deserialization exception, it's a pattern
> matching exception in my
>
> On Wed, Aug 16, 2017 at 1:56 PM, Damian Guy wrote:
>
> > I believe it is related to a bug in the state directory cleanup. This has
> > been fixed on trunk and also on the 0.11 branch (will be part of 0.11.0.1
> > that will hopefully be released soon). The fix is in
Sameer,
It might be that put, delete, putIfAbsent etc operations can be
non-synchronized. However, for get and range operations that can be
performed by IQ, i.e., other threads, we need to guard against the store
being closed by the StreamThread, hence the synchronization.
Thanks,
Damian
On Wed, 1
Aug 15, 2017 at 9:33 PM, Damian Guy wrote:
>
>> Sameer, the log you attached doesn't contain the logs *before* the
>
>
>> exception happened.
>>
>> On Tue, 15 Aug 2017 at 06:13 Sameer Kumar wrote:
>>
>> > I have added a attachement containing
Sameer, the log you attached doesn't contain the logs *before* the
exception happened.
On Tue, 15 Aug 2017 at 06:13 Sameer Kumar wrote:
> I have added a attachement containing complete trace in my initial mail.
>
> On Mon, Aug 14, 2017 at 9:47 PM, Damian Guy wrote:
>
Do you have the logs leading up to the exception?
On Mon, 14 Aug 2017 at 06:52 Sameer Kumar wrote:
> Exception while doing the join, cant decipher more on this. Has anyone
> faced it. complete exception trace attached.
>
> 2017-08-14 11:15:55 ERROR ConsumerCoordinator:269 - User provided listene
The issue was fixed by this:
https://issues.apache.org/jira/browse/KAFKA-5562 it is on trunk, but will
likely be back ported to 0.11
On Wed, 9 Aug 2017 at 10:57 Damian Guy wrote:
> Hi,
>
> This is a bug in 0.11. You can work around it by setting
> StreamsConfig.STATE_DIR_CLEANUP_DEL
Hi,
This is a bug in 0.11. You can work around it by setting
StreamsConfig.STATE_DIR_CLEANUP_DELAY_MS_CONFIG to Long.MAX_VALUE
Also, if you have logs it would be easier to either attach them or put them
in a gist. It is a bit hard to read in an email.
Thanks,
Damian
On Wed, 9 Aug 2017 at 10:10
treams/state/internals/ChangeLoggingKeyValueBytesStore.java#L56
> the 'null' is not checked ...
>
> On Tue, Aug 8, 2017 at 12:52 PM, Damian Guy wrote:
>
> > Hi,
> > The null values are treated as deletes when they are written to the
> store.
> > You can see here:
> > https://gi
Hi,
The null values are treated as deletes when they are written to the store.
You can see here:
https://github.com/apache/kafka/blob/0.11.0/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBStore.java#L261
On Tue, 8 Aug 2017 at 11:22 Bart Vercammen wrote:
> Hi,
>
> I noticed
StreamThread.java:553)
>
> at
>
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:527)
>
> On Fri, Aug 4, 2017 at 4:16 PM, Shekar Tippur wrote:
>
> > Damian,
> >
> > I am getting a syntax error. I have responded on gist.
> >
thKeySerdes(…)
> > .withValueSerdes(…)
> > .withJoinType(“outer”)
> >
> > etc?
> >
> > I like the approach since it still remains declarative and it’d reduce
> the number of overloads by quite a bit.
> >
> > Eno
> >
> >> On Jun
Hi, Yes the issue is in 0.10.2 also.
On Tue, 1 Aug 2017 at 17:37 Eric Lalonde wrote:
>
> > On Aug 1, 2017, at 8:00 AM, Damian Guy wrote:
> >
> > It is a bug in 0.10.2 or lower. It has been fixed in 0.11 by
> > https://issues.apache.org/jira/browse/KAFKA-4494
>
It is a bug in 0.10.2 or lower. It has been fixed in 0.11 by
https://issues.apache.org/jira/browse/KAFKA-4494
On Tue, 1 Aug 2017 at 15:40 Marcus Clendenin wrote:
> Hi All,
>
>
>
> I have a kafka streams application that is doing a join between a KTable
> and a KStream and it seems that after it
Hi Garrett,
The global state store doesn't use consumer groups and doesn't commit
offsets. The offsets are checkpointed to local disk, so they won't show up
with the ConsumerGroupCommand.
That said it would be useful to see the lag, so maybe raise a JIRA for it?
Thanks,
Damian
On Tue, 1 Aug 201
nk
> I should good.
>
> Or am I missing something ?
>
> regards.
>
> On Tue, Aug 1, 2017 at 1:10 PM, Damian Guy wrote:
>
>> Hi,
>>
>> On Tue, 1 Aug 2017 at 08:34 Debasish Ghosh
>> wrote:
>>
>>> Hi -
>>>
>>> I have a Kafk
Hi,
On Tue, 1 Aug 2017 at 08:34 Debasish Ghosh wrote:
> Hi -
>
> I have a Kafka Streams application that needs to run on multiple instances.
> It fetches metadata from all local stores and has an http query layer for
> interactive queries. In some cases when I have new instances deployed,
> stor
Hi,
I left a comment on your gist.
Thanks,
Damian
On Fri, 28 Jul 2017 at 21:50 Shekar Tippur wrote:
> Damien,
>
> Here is a public gist:
> https://gist.github.com/ctippur/9f0900b1719793d0c67f5bb143d16ec8
>
> - Shekar
>
> On Fri, Jul 28, 2017 at 11:45 AM, Damian Guy
}
>
> KeyValueIterator kviterator =
> keyValueStore.range("test_nod","test_node");
> }
> }
> });
>
>
> On Fri, Jul 28, 2017 at 12:52 AM, Damian Guy wrote:
>
> > Hi,
> > The store won't be queryable
t;
> regards.
>
> On Fri, Jul 28, 2017 at 8:18 PM, Damian Guy wrote:
>
>> Do you have any logs that might help to work out what is going wrong?
>>
>> On Fri, 28 Jul 2017 at 14:16 Damian Guy wrote:
>>
>>> The config looks ok to me
>>>
>>> O
Do you have any logs that might help to work out what is going wrong?
On Fri, 28 Jul 2017 at 14:16 Damian Guy wrote:
> The config looks ok to me
>
> On Fri, 28 Jul 2017 at 13:24 Debasish Ghosh
> wrote:
>
>> I am setting APPLICATION_SERVER_CONFIG, which is possibly wha
treamsConfig.STATE_DIR_CONFIG, config.stateStoreDir)
> >
> > // Set the commit interval to 500ms so that any changes are flushed
> > frequently and the summary
> > // data are updated with low latency.
> > settings.put(StreamsConfig.COMMIT_INTERVAL_MS_C
Hi,
Do you have the application.server property set appropriately for both
hosts?
The second stack trace is this bug:
https://issues.apache.org/jira/browse/KAFKA-5556
On Fri, 28 Jul 2017 at 12:55 Debasish Ghosh
wrote:
> Hi -
>
> In my Kafka Streams application, I have a state store resulting f
It is due to a bug. You should set
StreamsConfig.STATE_DIR_CLEANUP_DELAY_MS_CONFIG to Long.MAX_VALUE - i.e.,
disabling it.
On Fri, 28 Jul 2017 at 10:38 Sameer Kumar wrote:
> Hi,
>
> I am facing this error, no clue why this occurred. No other exception in
> stacktrace was found.
>
> Only thing di
eStoreTypes.keyValueStore(), newstreams);
> > } catch (InterruptedException e) {
> > e.printStackTrace();
> > }
> > *KeyValueIterator kviterator
> > = keyValueStore.range("test_nod","test_node");*
> > }else {
> >
> > *
t ktable?
>
> Sent from my iPhone
>
> > On Jul 27, 2017, at 07:57, Damian Guy wrote:
> >
> > Yes they can be strings,
> >
> > so you could do something like:
> > store.range("test_host", "test_hosu");
> >
> > This wo
e:
> Can you please point me to an example? Can from and to be a string?
>
> Sent from my iPhone
>
> > On Jul 27, 2017, at 04:04, Damian Guy wrote:
> >
> > Hi,
> >
> > You can't use a regex, but you could use a range query.
> > i.e, keyValueStor
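A sketch of such a range scan (store name and types assumed): "test_hosu" is the first string after the "test_host" prefix, so the range covers every key beginning with "test_host".

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class RangeQueryExample {
    static void scan(KafkaStreams streams) {
        ReadOnlyKeyValueStore<String, Long> store =
                streams.store("my-store", QueryableStoreTypes.keyValueStore());
        // The iterator holds resources, so close it (try-with-resources).
        try (KeyValueIterator<String, Long> iter = store.range("test_host", "test_hosu")) {
            while (iter.hasNext()) {
                KeyValue<String, Long> entry = iter.next();
                // entry.key starts with "test_host"
            }
        }
    }
}
```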