Hi,
We currently run 0.7.x on our clusters and are now finally getting around
to upgrading to the latest Kafka. One thing that has been holding us back is
that we can no longer use a VIP to front the clusters. I understand we
could use a VIP for metadata lookups, but we have 100,000+ producers to at
> (ZK under that level of load etc) or are you just looking for experiences
> running 0.8.x with that many producers?
>
> B
>
>
> > On 25 Aug 2015, at 10:29, Damian Guy wrote:
> >
> > Hi,
> >
> > We currently run 0.7.x on our clusters and are n
Can you do:
producer.send(...)
...
producer.send(...)
producer.flush()
By the time the flush returns, all of your messages should have been sent.
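Something like this (a sketch only; the topic, keys and values are illustrative):

import java.util.Properties;
import org.apache.kafka.clients.producer.*;

public class SyncSendExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Each send() returns immediately with a Future.
            producer.send(new ProducerRecord<>("my-topic", "k1", "v1"));
            producer.send(new ProducerRecord<>("my-topic", "k2", "v2"));

            // flush() blocks until every buffered record has completed
            // (either acknowledged or failed).
            producer.flush();

            // Alternatively, make a single send synchronous by blocking on its Future:
            RecordMetadata m =
                producer.send(new ProducerRecord<>("my-topic", "k3", "v3")).get();
            System.out.println("offset=" + m.offset());
        }
    }
}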
On 8 September 2015 at 11:50, jinxing wrote:
> if i wanna send the message synchronously i can do as below:
> future=producer.send(producerRecord, callb
Hi,
I've been trying out the new consumer and have noticed that I get duplicate
messages when I stop the consumer and then restart (different processes,
same consumer group).
I consume all of the messages on the topic and commit the offsets for each
partition and stop the consumer. On the next ru
I turned off compression and still get duplicates, but only 1 from each
topic.
Should the initial fetch offset for a partition be the committed offset + 1?
Thanks,
Damian
On 15 September 2015 at 14:07, Damian Guy wrote:
> Hi,
>
> I've been trying out the new consumer and have noti
Hi,
Assuming I am using the latest Kafka (trunk), exclusively with the new
consumer, and I want to monitor consumer lag across all groups - how would
I go about discovering the consumer groups? Is there an API call for this?
Thanks,
Damian
You can try altering some config on your producers, see here:
https://kafka.apache.org/documentation.html#producerconfigs
To control how many messages are buffered and how often the buffer is flushed:
queue.buffering.max.ms
queue.buffering.max.messages
To control the behaviour when the buffer is full:
queue.enqueue.timeout.ms
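A sketch of how those settings might be applied with the old producer (property names from the 0.8 docs; the values are purely illustrative):

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.ProducerConfig;

public class AsyncProducerConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092"); // illustrative
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("producer.type", "async");
        // Flush the buffer at least every 500 ms ...
        props.put("queue.buffering.max.ms", "500");
        // ... or once 2000 messages have accumulated, whichever comes first.
        props.put("queue.buffering.max.messages", "2000");
        // When the buffer is full: 0 = drop new messages, -1 = block the caller.
        props.put("queue.enqueue.timeout.ms", "-1");
        Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
        producer.close();
    }
}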
Hi,
If you are using the Scala producer then yes, it will drop messages. It will
retry up to the configured number of retries and then throw a FailedToSendMessageException.
This is caught in the ProducerSendThread and logged, you'd see something
like:
"Error in handling batch of 10 events ..."
If you don't want to
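For reference, a hedged sketch of the retry-related settings on the old Scala producer (property names from the 0.8 docs; values illustrative):

Properties props = new Properties();
// Number of times a failed send is retried before the producer gives up:
// the sync producer then throws FailedToSendMessageException, while the
// async producer logs the error and drops the batch.
props.put("message.send.max.retries", "10");
// Back off between retries to allow leader election to complete.
props.put("retry.backoff.ms", "200");
// A sync producer surfaces send failures to the caller instead of dropping.
props.put("producer.type", "sync");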
I would think not
I'm bringing up a new 0.9 cluster and I'm getting the below exception (and
the same thing on all nodes) - the IP address is the IP of the host the
broker is running on. I think DNS is a bit stuffed on these machines and
maybe that is the cause, but... any ideas?
[2015-11-17
the broker got a
> request larger than the default allowed size (~10MB). How many
> topic/partitions do you have on this cluster? Do you have clients running
> on the broker host?
>
> Thanks,
>
> Jun
>
>
> On Tue, Nov 17, 2015 at 4:10 AM, Damian Guy wrote:
>
>>
Hi,
We have had some temporary nodes in our Kafka cluster and I now need to
move assigned partitions off those nodes onto the permanent members. I'm
familiar with the kafka-reassign-partitions script, but... how do I get it
to work with the __consumer_offsets partition? It currently seems to
On 16 December 2015 at 15:32, Ben Stopford wrote:
> Hi Damian
>
> The reassignment should treat the offsets topic as any other topic. I did
> a quick test and it seemed to work for me. Do you see anything suspicious
> in the controller log?
>
> B
> > On 16 Dec 2015, at 14:5
And in doing so I've answered my own question (I think!) - I don't
believe the topic has been created on that cluster yet...
On 18 December 2015 at 10:56, Damian Guy wrote:
> I was just trying to get it to generate the JSON for reassignment and the
> output was empty, i.e.,
Hi,
I believe it is a broker property.
It will create the topic with the name you provide.
The topic will not get deleted unless you manually delete it.
It won't get re-created on subsequent calls (unless you've deleted it).
HTH,
Damian
On 20 January 2016 at 13:14, Joe San wrote:
> I doubt that
Hi,
Pass the key into the callback you provide to Kafka. You then have it
available when the callback is invoked.
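A minimal sketch (assuming `producer` is an initialized `KafkaProducer<String, String>`; topic and values are illustrative):

final String key = "some-key";
final ProducerRecord<String, String> record =
    new ProducerRecord<>("my-topic", key, "some-value");

// The callback closes over `key`, so it is available when the broker
// acknowledges (or fails) the send.
producer.send(record, new Callback() {
    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            System.err.println("failed to send key " + key + ": " + exception);
        } else {
            System.out.println("sent key " + key + " at offset " + metadata.offset());
        }
    }
});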
Cheers,
Damian
On 11 February 2016 at 10:59, Franco Giacosa wrote:
> Hi,
>
> Is there a way to get the record key on the callback of the send() for a
> record? I would like to be ab
Hi,
It is a bug in the consumer that has been fixed by KAFKA-2978. You should
try building the consumer from the latest 0.9.0 branch (or the 0.9.0.1 RC).
I've had the same issue and confirmed it works fine on the latest 0.9.0.
Thanks,
Damian
On 14 February 2016 at 18:50, Anurag Laddha wrote:
>
Hi,
You need at least as many brokers as the replication factor.
A replication factor of 1 means no replication.
HTH,
Damian
On 16 February 2016 at 14:08, Sean Morris (semorris)
wrote:
> Should your number of brokers be at least one more than your replication
> factor of your topic(s)?
>
> So if I have a r
ne broker?
>
> Thanks,
> Sean
>
> On 2/16/16, 9:14 AM, "Damian Guy" wrote:
>
> >Hi,
> >
> >You need at least as many brokers as the replication factor.
> >A replication factor of 1 means no replication.
> >
> >HTH,
> >Damian
> >
Hi,
I had the same issue and managed to work around it by simulating a
heartbeat to Kafka. It works really well, i.e., we have had zero issues
since it was implemented.
I have something like this:
void process() {
records = consumer.poll(timeout)
dispatcher.dispatch(records)
while(!dispa
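A fuller sketch of that pattern (hedged: `consumer`, `timeout` and the `dispatcher` with its `isDone()` flag are assumed fields of the surrounding class; note that in newer clients `pause()`/`resume()` take a `Collection` rather than varargs):

void process() {
    // Hand the fetched records off to another thread for processing.
    final ConsumerRecords<String, String> records = consumer.poll(timeout);
    dispatcher.dispatch(records);

    // Stop fetching new data, but keep calling poll() so the consumer
    // continues to heartbeat and isn't kicked out of the group.
    consumer.pause(consumer.assignment());
    while (!dispatcher.isDone()) {
        consumer.poll(0); // returns no records while paused; heartbeats only
    }
    consumer.resume(consumer.assignment());

    // Now it is safe to commit the offsets of the dispatched records.
    consumer.commitSync();
}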
Hi Ted - if the data is keyed you can use a compacted topic and
essentially keep the data 'forever', i.e., you'll always have the latest
version of the data for a given key. However, you'd still want to back up
the data someplace else just in case.
On 16 February 2016 at 21:25, Ted Swerve wrote
Steven,
In practice, data shouldn't be migrating that often. If it is then you
probably have bigger problems. You should be able to use the metadata api
to find the instance the key should be on and then when you check that node
you can also check with the metadata api that the key should still be
On Fri, Jun 9, 2017 at 1:34 PM Guozhang Wang wrote:
> >
> > > Hello all,
> > >
> > >
> > > The PMC of Apache Kafka is pleased to announce that we have invited
> > Damian
> > > Guy as a committer to the project.
> > >
> > > Da
Hi,
You should provide the serdes in the `groupByKey()` operation. The `map`
will trigger a re-partition in the `groupByKey` as you have changed the
key.
In fact you could replace the `map` and `groupByKey` with:
m.groupBy(mapper, Serdes.String(),
Serdes.String()).count("HostAggregateCount")
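For context, a fuller sketch under the pre-1.0 Streams API (the topic name and the `extractHost` mapper are illustrative, not from the original thread):

final KStreamBuilder builder = new KStreamBuilder();
final KStream<String, String> m =
    builder.stream(Serdes.String(), Serdes.String(), "events"); // illustrative topic

// groupBy re-keys the stream, which forces a repartition, so the new
// key/value serdes must be supplied here.
final KTable<String, Long> counts =
    m.groupBy((key, value) -> extractHost(value), Serdes.String(), Serdes.String())
     .count("HostAggregateCount");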
Thanks,
Damian
by quite a bit.
>
> Eno
>
> > On Jun 21, 2017, at 3:37 PM, Damian Guy wrote:
> >
> > Hi,
> >
> > I'd like to get a discussion going around some of the API choices we've
> > made in the DLS. In particular those that relate to stateful operati
Hi,
Yes the key format used by a window store changelog is the same format as
is stored in RocksDB. You can see what the format is by looking here:
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/internals/WindowStoreUtils.java
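Roughly speaking, the key is the serialized record key followed by an 8-byte window-start timestamp and a 4-byte sequence number (a sketch of the layout, not a stable public format; all variables here are hypothetical):

import java.nio.ByteBuffer;

// Hypothetical illustration of how a window-store changelog key is composed.
byte[] serializedKey = keySerde.serializer().serialize(topic, key);
ByteBuffer buf = ByteBuffer.allocate(serializedKey.length + 8 + 4);
buf.put(serializedKey);      // the record key
buf.putLong(windowStartMs);  // window start timestamp (8 bytes)
buf.putInt(sequenceNumber);  // per-key sequence number (4 bytes)
byte[] changelogKey = buf.array();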
Thanks,
Damian
On Thu
of(10)).withJoinType(JoinType.LEFT).build());
}
I'm not going to say which way I'm leaning, yet!
Thanks,
Damian
On Thu, 29 Jun 2017 at 11:47 Damian Guy wrote:
>
>> However, I don't understand your argument about putting aggregate()
>> after the withXX() -- all
eam.groupBy().count()
As I said above, everything that happens before the final aggregate call
can be applied to any of them. So it makes sense to me to do those things
ahead of the final aggregate call.
> Last about builder pattern. I am convinced that we need some "terminal"
> operato
It looks like Task [4_2] is stuck restoring state, though it doesn't look
like there is much state to restore.
It might be helpful if you take some thread dumps to see where it is
blocked.
Thanks,
Damian
On Fri, 30 Jun 2017 at 16:04 Dmitriy Vsekhvalnov
wrote:
> Set org.apache.kafka.streams to
dvsekhvalnov/146ba41c8e78316941098997c9d2f18a#file-thread-dump
>
> On Fri, Jun 30, 2017 at 6:10 PM, Damian Guy wrote:
>
> > It looks like Task [4_2] is stuck restoring state, though it doesn't look
> > like there is much state to restore.
> > It might be helpful if you take some
Hi Ian,
We had another report of what looks like the same issue. Will look into it.
Thanks,
Damian
On Fri, 30 Jun 2017 at 16:38 Ian Duffy wrote:
> Hi All,
>
> I was wondering if any of those who know stream internals could shed any
> light on the following exception:
>
> org.apache.kafka.stre
Hi Ian,
Can you check if the file exists and it is indeed a file rather than a
directory?
Thanks,
Damian
On Fri, 30 Jun 2017 at 16:45 Damian Guy wrote:
> Hi Ian,
>
> We had another report of what looks like the same issue. Will look into it.
>
> Thanks,
> Damian
>
> O
ll at org.rocksdb.RocksDB.put.
>
> No, AWS machine.
>
> kafka-streams v0.10.2.1
>
> Maybe some option for RocksDB that can unlock it? Also I can try to run the app
> locally against the same env to see if it makes a difference (though it will be
> a different OS).
>
> On Fri, Jun 30, 2017 at 6:3
Hi Debasish,
You can just implement the QueryableStoreType interface. You can take a
look here:
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/QueryableStoreTypes.java#L77
for
an example.
Then you just pass your implementation to `kafkaStreams.store(...)`.
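A rough sketch (here `MyReadableStore` and `MyCompositeReadableStore` are hypothetical types for your store's read-only interface and a wrapper over all its instances):

import java.util.List;
import org.apache.kafka.streams.processor.StateStore;
import org.apache.kafka.streams.state.QueryableStoreType;
import org.apache.kafka.streams.state.internals.StateStoreProvider;

public class MyStoreType implements QueryableStoreType<MyReadableStore> {
    @Override
    public boolean accepts(final StateStore stateStore) {
        // Only match stores that expose our custom read-only interface.
        return stateStore instanceof MyReadableStore;
    }

    @Override
    public MyReadableStore create(final StateStoreProvider storeProvider,
                                  final String storeName) {
        // Wrap all instances of the store found across the app's tasks.
        final List<MyReadableStore> stores = storeProvider.stores(storeName, this);
        return new MyCompositeReadableStore(stores); // hypothetical composite wrapper
    }
}

Then something like `MyReadableStore store = kafkaStreams.store("my-store", new MyStoreType());`.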
Hi,
It is because you are calling `context.timestamp` during `commit`. At this
point there is no `RecordContext` associated with the `ProcessorContext`,
hence the null pointer. The `RecordContext` is only set when streams is
processing a record. You probably want to log the change when you write t
> override def flush(): Unit = {
> if (loggingEnabled) {
> changeLogger.logChange(changelogKey, bf
> }
> }
>
> which in turn calls logChange that gives the error.
>
> Am I missing something ?
>
> regards.
>
> On Mon, Jul 3, 2017 at 2:27 PM, Damian Guy wrote:
ractProcessorContext.timestamp(AbstractProcessorContext.java:150)
> at
> com.lightbend.fdp.sample.kstream.processor.BFStoreChangeLogger.logChange(BFStoreChangeLogger.scala:25)
> at
> com.lightbend.fdp.sample.kstream.processor.BFStore.flush(BFStore.scala:89)
> at
> org.apache.kafka.streams.processor.internals.ProcessorS
Hi Dmitriy,
It is possibly related to the broker setting `offsets.retention.minutes` -
this defaults to 24 hours. If an offset hasn't been updated within that
time it will be removed. So if your env was sitting idle for longer than
this period, then rebalanced, you will likely start consuming the
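If that is the cause, one option is to raise the retention on the broker (a hypothetical server.properties excerpt; the value is illustrative):

# Keep committed offsets for 7 days instead of the 24-hour default
offsets.retention.minutes=10080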
I would argue to go a bit slower and more careful on
> this one. At some point we need to get it right. Peeking over to the Hadoop
> guys with their huge userbase. Config files really work well for them.
>
> Best Jan
>
>
>
>
>
> On 30.06.2017 09:31, Damian Guy wrote:
>
Hi Debasish,
It looks like it is possibly a bug in the Kafka Consumer code.
In your streams app you probably want to add an UncaughtExceptionHandler,
i.e., via `KafkaStreams#setUncaughtExceptionHandler(...)`, and terminate the
process when you receive an uncaught exception. I guess Mesos should
auto
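A minimal sketch of wiring that up (hedged: `builder` and `props` are assumed to exist; the exit code and logging are illustrative):

final KafkaStreams streams = new KafkaStreams(builder, props);

// Log the failure and exit so the scheduler (Mesos here) restarts the task.
streams.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
    @Override
    public void uncaughtException(final Thread t, final Throwable e) {
        System.err.println("Uncaught exception in " + t.getName() + ": " + e);
        System.exit(1);
    }
});
streams.start();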
rocess .. Thanks for
> > your prompt response ..
> >
> > regards.
> >
> > On Tue, Jul 4, 2017 at 9:30 PM, Damian Guy wrote:
> >
> >> Hi Debasish,
> >>
> >> It looks like it is possibly a bug in the Kafka Consumer code.
> >> In yo
> > On 07.07.2017 17:23, Guozhang Wang wrote:
> >
> > I messed up the indentation on the github code repos; this would be
> > easier to read:
> >
> > https://codeshare.io/GLWW8K
> >
> >
> > Guozhan
+1
On Wed, 31 May 2017 at 13:36 Jim Jagielski wrote:
> +1
> > On May 27, 2017, at 9:27 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
> >
> > Sure, that sounds good.
> >
> > I suggested that to keep command line behavior consistent.
> > Plus, removal of ACL access is something that
Hi,
I have two questions:
> 1°/ Is the format written on this topic easily readable using the same
> Serde I use for the state store or does Streams change it in any way?
>
If it is a KeyValue Store then you can use your Serdes to read from the
changelog.
> 2°/ since the topic will be used by s
Hi Debasish,
It might be that it is blocked in `streams.close()`
You might want to try the overload that has a long and TimeUnit as
params, i.e., `streams.close(1, TimeUnit.MINUTES)`
Thanks,
Damian
On Wed, 26 Jul 2017 at 09:11 Debasish Ghosh
wrote:
> Hi -
>
> I have a Kafka streams applicat
Sameer,
For a KeyValue store the changelog topic is a compacted topic so there is
no retention period. You will always retain the latest value for a key.
Thanks,
Damian
On Wed, 26 Jul 2017 at 08:36 Sameer Kumar wrote:
> Hi,
>
> Retention period for state stores are clear(default, otherwise spe
; On Jul 6, 2017, at 7:50 AM, Ian Duffy wrote:
> >
> > Hi Damian,
> >
> > Sorry for the delayed reply have been out of office.
> >
> > I'm afraid I cannot check. We have alarms on our auto scaling groups for
> > stream instances to kill them should th
tion period, let's say 2 days. and if the value on day1 for key1 =
> 4 and data for key1 doesn't come for next 3 days. Would it still retain the
> same value(key1=4) on day4.
>
> -Sameer.
>
> On Wed, Jul 26, 2017 at 2:22 PM, Damian Guy wrote:
>
> > Sameer,
> >
e-than-one-topic-partition
> but unlike this use case I don't make any change in the partition of any
> topic in between the restarts. BTW my application uses stateful streaming
> and hence Kafka creates any internal topics. Not sure if it's related to
> this exception though. But t
umar
> wrote:
>
> > got it. Thanks.
> >
> > On Wed, Jul 26, 2017 at 3:24 PM, Damian Guy
> wrote:
> >
> >> The changelog is one created by kafka streams, then it is a compacted
> >> topic
> >> and the retention period is irrelevant. If
l not exist there. But I expect
> Kafka will create it from the corresponding backed up topic. Hence the
> exception looks a bit confusing to me.
>
> Thoughts ?
>
> regards.
>
> On Wed, Jul 26, 2017 at 3:43 PM, Damian Guy wrote:
>
> > The exception indicates that streams
Thanks,
Damian
> regards.
>
> On Wed, Jul 26, 2017 at 7:51 PM, Damian Guy wrote:
>
>> Hi,
>>
>> It looks to me that there is currently no leader for the partition, i.e.,
>> leader -1. Also there are no replicas? Something up with your brokers?
>>
>> Tha
Hi,
You can't use a regex, but you could use a range query,
i.e., keyValueStore.range(from, to)
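For example (a sketch; the store name is illustrative, and `streams` is a running `KafkaStreams` instance):

final ReadOnlyKeyValueStore<String, String> store =
    streams.store("my-store", QueryableStoreTypes.<String, String>keyValueStore());

// A prefix-like scan, achieved by picking range bounds around the prefix.
KeyValueIterator<String, String> iter = store.range("test_host", "test_hosu");
while (iter.hasNext()) {
    KeyValue<String, String> entry = iter.next();
    System.out.println(entry.key + " => " + entry.value);
}
iter.close(); // always close the iterator to release resources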
Thanks,
Damian
On Wed, 26 Jul 2017 at 22:34 Shekar Tippur wrote:
> Hello,
>
> I am able to get the kstream to ktable join work. I have some use cases
> where the key is not always an exact match.
> I
e:
> Can you please point me to an example? Can from and to be a string?
>
> Sent from my iPhone
>
> > On Jul 27, 2017, at 04:04, Damian Guy wrote:
> >
> > Hi,
> >
> > You can't use a regex, but you could use a range query.
> > i.e, keyValueStor
t ktable?
>
> Sent from my iPhone
>
> > On Jul 27, 2017, at 07:57, Damian Guy wrote:
> >
> > Yes they can be strings,
> >
> > so you could do something like:
> > store.range("test_host", "test_hosu");
> >
> > This wo
eStoreTypes.keyValueStore(), newstreams);
> > } catch (InterruptedException e) {
> > e.printStackTrace();
> > }
> > KeyValueIterator kviterator
> > = keyValueStore.range("test_nod","test_node");
> > } else {
> >
It is due to a bug. You should set
StreamsConfig.STATE_DIR_CLEANUP_DELAY_MS_CONFIG to Long.MAX_VALUE - i.e.,
disabling it.
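i.e., something like:

final Properties props = new Properties();
// Effectively disables the (buggy) background state-directory cleaner.
props.put(StreamsConfig.STATE_DIR_CLEANUP_DELAY_MS_CONFIG, Long.MAX_VALUE);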
On Fri, 28 Jul 2017 at 10:38 Sameer Kumar wrote:
> Hi,
>
> I am facing this error, no clue why this occurred. No other exception in
> stacktrace was found.
>
> Only thing di
Hi,
Do you have the application.server property set appropriately for both
hosts?
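For reference, something like this on each instance (the host:port is illustrative and must be that instance's own endpoint):

// Each instance advertises its own host:port for interactive queries.
props.put(StreamsConfig.APPLICATION_SERVER_CONFIG, "host1.example.com:7070");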
The second stack trace is this bug:
https://issues.apache.org/jira/browse/KAFKA-5556
On Fri, 28 Jul 2017 at 12:55 Debasish Ghosh
wrote:
> Hi -
>
> In my Kafka Streams application, I have a state store resulting f
treamsConfig.STATE_DIR_CONFIG, config.stateStoreDir)
> >
> > // Set the commit interval to 500ms so that any changes are flushed
> > frequently and the summary
> > // data are updated with low latency.
> > settings.put(StreamsConfig.COMMIT_INTERVAL_MS_C
Do you have any logs that might help to work out what is going wrong?
On Fri, 28 Jul 2017 at 14:16 Damian Guy wrote:
> The config looks ok to me
>
> On Fri, 28 Jul 2017 at 13:24 Debasish Ghosh
> wrote:
>
>> I am setting APPLICATION_SERVER_CONFIG, which is possibly wha
> regards.
>
> On Fri, Jul 28, 2017 at 8:18 PM, Damian Guy wrote:
>
>> Do you have any logs that might help to work out what is going wrong?
>>
>> On Fri, 28 Jul 2017 at 14:16 Damian Guy wrote:
>>
>>> The config looks ok to me
>>>
>>> O
}
>
> KeyValueIterator kviterator =
> keyValueStore.range("test_nod","test_node");
> }
> }
> });
>
>
> On Fri, Jul 28, 2017 at 12:52 AM, Damian Guy wrote:
>
> > Hi,
> > The store won't be queryable
Hi,
I left a comment on your gist.
Thanks,
Damian
On Fri, 28 Jul 2017 at 21:50 Shekar Tippur wrote:
> Damien,
>
> Here is a public gist:
> https://gist.github.com/ctippur/9f0900b1719793d0c67f5bb143d16ec8
>
> - Shekar
>
> On Fri, Jul 28, 2017 at 11:45 AM, Damian Guy
Hi,
On Tue, 1 Aug 2017 at 08:34 Debasish Ghosh wrote:
> Hi -
>
> I have a Kafka Streams application that needs to run on multiple instances.
> It fetches metadata from all local stores and has an http query layer for
> interactive queries. In some cases when I have new instances deployed,
> stor
> I think I should be good.
>
> Or am I missing something ?
>
> regards.
>
> On Tue, Aug 1, 2017 at 1:10 PM, Damian Guy wrote:
>
>> Hi,
>>
>> On Tue, 1 Aug 2017 at 08:34 Debasish Ghosh
>> wrote:
>>
>>> Hi -
>>>
>>> I have a Kafk
Hi Garrett,
The global state store doesn't use consumer groups and doesn't commit
offsets. The offsets are checkpointed to local disk, so they won't show up
with the ConsumerGroupCommand.
That said it would be useful to see the lag, so maybe raise a JIRA for it?
Thanks,
Damian
On Tue, 1 Aug 201
It is a bug in 0.10.2 or lower. It has been fixed in 0.11 by
https://issues.apache.org/jira/browse/KAFKA-4494
On Tue, 1 Aug 2017 at 15:40 Marcus Clendenin wrote:
> Hi All,
>
>
>
> I have a kafka streams application that is doing a join between a KTable
> and a KStream and it seems that after it
Hi, Yes the issue is in 0.10.2 also.
On Tue, 1 Aug 2017 at 17:37 Eric Lalonde wrote:
>
> > On Aug 1, 2017, at 8:00 AM, Damian Guy wrote:
> >
> > It is a bug in 0.10.2 or lower. It has been fixed in 0.11 by
> > https://issues.apache.org/jira/browse/KAFKA-4494
>
thKeySerdes(…)
> > .withValueSerdes(…)
> > .withJoinType(“outer”)
> >
> > etc?
> >
> > I like the approach since it still remains declarative and it’d reduce
> the number of overloads by quite a bit.
> >
> > Eno
> >
> >> On Jun
StreamThread.java:553)
>
> at
>
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:527)
>
> On Fri, Aug 4, 2017 at 4:16 PM, Shekar Tippur wrote:
>
> > Damian,
> >
> > I am getting a syntax error. I have responded on gist.
> >
Hi,
The null values are treated as deletes when they are written to the store.
You can see here:
https://github.com/apache/kafka/blob/0.11.0/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBStore.java#L261
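i.e., something like (a hypothetical illustration; `store` is any writable key-value store):

// A null value acts as a delete for that key, both in RocksDB and as a
// tombstone in the changelog topic.
store.put("some-key", null);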
On Tue, 8 Aug 2017 at 11:22 Bart Vercammen wrote:
> Hi,
>
> I noticed
treams/state/internals/ChangeLoggingKeyValueBytesStore.java#L56
> the 'null' is not checked ...
>
> On Tue, Aug 8, 2017 at 12:52 PM, Damian Guy wrote:
>
> > Hi,
> > The null values are treated as deletes when they are written to the
> store.
> > You can see here:
> > https://gi
Hi,
This is a bug in 0.11. You can work around it by setting
StreamsConfig.STATE_DIR_CLEANUP_DELAY_MS_CONFIG to Long.MAX_VALUE
Also, if you have logs it would be easier to either attach them or put them
in a gist. It is a bit hard to read in an email.
Thanks,
Damian
On Wed, 9 Aug 2017 at 10:10
The issue was fixed by this:
https://issues.apache.org/jira/browse/KAFKA-5562 it is on trunk, but will
likely be back ported to 0.11
On Wed, 9 Aug 2017 at 10:57 Damian Guy wrote:
> Hi,
>
> This is a bug in 0.11. You can work around it by setting
> StreamsConfig.STATE_DIR_CLEANUP_DEL
Do you have the logs leading up to the exception?
On Mon, 14 Aug 2017 at 06:52 Sameer Kumar wrote:
> Exception while doing the join, cant decipher more on this. Has anyone
> faced it. complete exception trace attached.
>
> 2017-08-14 11:15:55 ERROR ConsumerCoordinator:269 - User provided listene
Sameer, the log you attached doesn't contain the logs *before* the
exception happened.
On Tue, 15 Aug 2017 at 06:13 Sameer Kumar wrote:
> I have added an attachment containing the complete trace in my initial mail.
>
> On Mon, Aug 14, 2017 at 9:47 PM, Damian Guy wrote:
>
> &g
Aug 15, 2017 at 9:33 PM, Damian Guy wrote:
>
>> Sameer, the log you attached doesn't contain the logs *before* the
>
>
>> exception happened.
>>
>> On Tue, 15 Aug 2017 at 06:13 Sameer Kumar wrote:
>>
> > > I have added an attachment containing
Sameer,
It might be that put, delete, putIfAbsent etc. operations can be
non-synchronized. However, for get and range operations that can be
performed by IQ, i.e., other threads, we need to guard against the store
being closed by the StreamThread, hence the synchronization.
Thanks,
Damian
On Wed, 1
>
> On Wed, Aug 16, 2017 at 1:56 PM, Damian Guy wrote:
>
> > I believe it is related to a bug in the state directory cleanup. This has
> > been fixed on trunk and also on the 0.11 branch (will be part of 0.11.0.1
> > that will hopefully be released soon). The fix is in
Duy, if it is in your logic then you need to handle the exception yourself.
If you don't then it will bubble out and kill the thread.
On Fri, 18 Aug 2017 at 10:27 Duy Truong wrote:
> Hi Eno,
>
> Sorry for late reply, it's not a deserialization exception, it's a pattern
> matching exception in my
Hi,
If the userData value is null then that would usually mean that there
wasn't a record with the provided key in the global table. So you should
probably check if you have the expected data in the global table and also
check that your KeyMapper is returning the correct key.
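A sketch of the shape of such a join (all types here - `Order`, `User`, `EnrichedOrder` - are hypothetical, as are the stream and table names):

// orders: KStream<String, Order>, users: GlobalKTable<String, User>
KStream<String, EnrichedOrder> enriched = orders.leftJoin(
    users,
    (orderId, order) -> order.getUserId(),  // KeyValueMapper: derives the global-table key
    (order, userData) -> {
        // userData is null when no record with that key exists in the global table
        return new EnrichedOrder(order, userData);
    });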
Thanks,
Damian
On
Hi,
If you can then I'd recommend upgrading to a newer version. As you said,
many bugs have been fixed since 0.10.0.1.
On Wed, 23 Aug 2017 at 05:08 Balaprassanna Ilangovan <
balaprassanna1...@gmail.com> wrote:
> Hi,
>
> I have the following three question regarding Apache Kafka streams.
>
> 1. I a
Thanks Sameer, yes this looks like a bug. Can you file a JIRA?
On Mon, 4 Sep 2017 at 12:23 Sameer Kumar wrote:
> Hi,
>
> I am using InMemoryStore along with GlobalKTable. I came to realize that I
> was losing data once I restart my stream application while it was
> consuming data from kafka t
Hello Kafka users, developers and client-developers,
This is the first candidate for release of Apache Kafka 0.11.0.1.
This is a bug fix release and it includes fixes and improvements from 49
JIRAs (including a few critical bugs).
Release notes for the 0.11.0.1 release:
http://home.apache.org/~d
Resending as I wasn't part of the kafka-clients mailing list.
On Tue, 5 Sep 2017 at 21:34 Damian Guy wrote:
> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 0.11.0.1.
>
> This is a bug fix release and i
The table shows what happens when you get null values for a key.
On Fri, 8 Sep 2017 at 12:31 Kamal Chandraprakash <
kamal.chandraprak...@gmail.com> wrote:
> Hi Kafka Users,
>
> KTable-KTable Join Semantics is explained in detailed [here][1]. But,
> it's not clear when the input recor
>
> Guozhang
>
>
>
> On Thu, Sep 7, 2017 at 2:20 AM, Magnus Edenhill
> wrote:
>
> > +1 (non-binding)
> >
> > Verified with librdkafka regression test suite
> >
> > 2017-09-06 11:52 GMT+02:00 Damian Guy :
> >
> > > Resending as i wa
ava
│ └── WordCount.java
└── resources
└── log4j.properties
Doesn't render properly - at least for me.
On Mon, 11 Sep 2017 at 09:08 Damian Guy wrote:
> Hi Guozhang, from what i'm looking at the {{fullDotVersion}} is replaced
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.12-0.11.0.1.tgz
A big thank you for the following 33 contributors to this release!
Apurva Mehta, Bill Bejeck, Colin P. Mccabe, Damian Guy, Derrick Or,
Hi,
Do you have the logs for the other instance?
Thanks,
Damian
On Fri, 15 Sep 2017 at 07:19 dev loper wrote:
> Dear Kafka Users,
>
> I am fairly new to Kafka Streams. I have deployed two instances of Kafka
> 0.11 brokers on AWS M3.Xlarge instances. I have created a topic with 36
> partitions
; On Fri, Sep 15, 2017 at 2:31 PM, Damian Guy wrote:
>
> > Hi,
> >
> > Do you have the logs for the other instance?
> >
> > Thanks,
> > Damian
> >
> > On Fri, 15 Sep 2017 at 07:19 dev loper wrote:
> >
> > > Dear Kafka Users,
Hi, is that the complete log? It looks like there might be 2 tasks that are
still restoring:
2017-09-22 14:08:09 DEBUG AssignedTasks:90 - stream-thread
[argyle-streams-fp-StreamThread-6] transitioning stream task 1_18 to
restoring
2017-09-22 14:08:09 DEBUG AssignedTasks:90 - stream-thread
[argyle-s
ire log. Hope it helps.
>
> Ara.
>
>
> On Sep 25, 2017, at 7:59 AM, Damian Guy wrote:
>
> Hi, is that the complete log? It looks like there might be 2 tasks that are
> still restoring:
> 2017-09-22 14:08:09 DEBUG AssignedTasks:90 - stream-thread
> [argyle-streams-fp-Str
Hi Roshan,
KafkaStreams apps run as client applications; they do not run on the
broker. You develop an application and give it an `application.id` - you
deploy however many instances of that application you like and they all
share the same topology. I suggest you take a look at the docs here:
ht
You can set ProducerConfig.RETRIES_CONFIG in your StreamsConfig, i.e.,
Properties props = new Properties();
props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
...
On Fri, 29 Sep 2017 at 13:17 Sameer Kumar wrote:
> I guess once stream app are enabled exactly-once, producer idempotence g
If you are using the Confluent schema registry then they will be cached by
the SchemaRegistryClient.
Thanks,
Damian
On Tue, 3 Oct 2017 at 09:00 Ted Yu wrote:
> I did a quick search in the code base - there doesn't seem to be caching as
> you described.
>
> On Tue, Oct 3, 2017 at 6:36 AM, Kristop
Hi,
No that isn't supported.
Thanks,
Damian
On Fri, 6 Oct 2017 at 04:18 Stas Chizhov wrote:
> Hi
>
> Is there a way to serve read requests from standby replicas?
> StreamsMetadata does not seem to provide standby end points as far as I
> can see.
>
> Thank you,
> Stas
>
Hi,
No, context.forward() always needs to be called from the StreamThread. If
you call it from another thread the behaviour is undefined and in most
cases will be incorrect, likely resulting in an exception.
On Tue, 10 Oct 2017 at 09:04 Murad Mamedov wrote:
> Hi, here is the question:
>
> Transf
Hi Johan,
Do you have any logs? The state store restoration changed significantly in
0.11.0.1. If you could get some logs at trace level, that would be useful.
Also if you could provide your topology (removing anything
proprietary/sensitive).
Thanks,
Damian
On Tue, 17 Oct 2017 at 05:55 Johan Gen
Hi Chris,
You can only join on the key of the table, so I don't think this would work
as is. Also, the global table is updated in a different thread and there is
no guarantee that it would have been updated before the purchase.
Perhaps you could do it by making the key of the product table versio
> Thank you for responding so quickly. This is the topology. I've
> simplified it a bit, but these are the steps it goes through, not sure if
> that is