Hi,
I have created a simple window store to count occurrences of a given key.
My pipeline is:
TimeWindows windows = TimeWindows.of(n).advanceBy(n).until(30 * n);
final StateStoreSupplier supplier =
    Stores.create("key-table")
        .withKeys(Serdes.String())
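For context, a minimal sketch of how a store named "key-table" is typically wired into a windowed count on the 0.10.x-era Streams API (the input topic, serdes and value types below are placeholders, not taken from the message above):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.*;

KStreamBuilder builder = new KStreamBuilder();

// Assumed String keys/values and a placeholder input topic.
KStream<String, String> input =
    builder.stream(Serdes.String(), Serdes.String(), "input-topic");

// Count occurrences per key within each time window; the result is
// materialized in a windowed store named "key-table".
KTable<Windowed<String>, Long> counts = input
    .groupByKey(Serdes.String(), Serdes.String())
    .count(windows, "key-table");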
Hi Harish,
I believe many people/orgs use it on Windows. We rely on the community to
test/fix/answer any Windows questions, same as with Linux or MacOS. However,
based on what I've observed, perhaps there are more people answering
Linux-related questions.
Eno
> On 14 Jul 2017, at 13:24, haris
Hi,
I was wondering if the broker metrics calculations include replicas?
BrokerTopicMetrics.MessagesInPerSec
BrokerTopicMetrics.BytesInPerSec
BrokerTopicMetrics.RequestsPerSec
BrokerTopicMetrics.BytesOutPerSec
Are these calculated over all messages, including replication traffic?
Regards,
Kevin
Hello Team,
I am exploring Apache Kafka and find it one of the best MQs I have
encountered. I was exploring the option of using it on a Windows machine and
started some proof-of-concept work following the installation section for
Windows, and it worked perfectly. Later I realized that the Kafka documentation
In other words, I think the default should be the exact behavior we have
today plus the remaining group information from DescribeGroupResponse.
On Fri, Jul 14, 2017 at 11:36 AM, Onur Karaman wrote:
> I think if we had the opportunity to start from scratch, --describe would
> have been the follow
I think if we had the opportunity to start from scratch, --describe would
have been the following:
--describe --offsets: shows all offsets committed for the group as well as
lag
--describe --state (or maybe --members): shows the full
DescribeGroupResponse output (including things like generation id
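For reference, today's single invocation (against the new consumer) looks like this; the broker address and group name are placeholders:

bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group my-group

Under the proposal, --offsets would keep today's offset/lag view and --state (or --members) would expose the rest of the DescribeGroupResponse.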
Hey Vahid,
Thanks for the KIP. Looks like a nice improvement. One minor suggestion:
Since consumers can be subscribed to a large number of topics, I'm
wondering if it might be better to leave out the topic list from the
"describe members" option so that the output remains concise? Perhaps we
could
Thanks Eno for the clarification. I did some more digging and found that
there's a configurable time interval that controls how often compaction runs.
Also, compaction for a topic takes place on all segments except the active one
currently being written. This is all useful information. Thanks
again
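For anyone following along, the knobs usually involved here are the broker-side log.cleaner.backoff.ms (how often the cleaner checks for work) plus a few topic-level settings; since the active segment is never compacted, segment.ms also affects how soon data becomes eligible. A sketch with example values only (the topic name is a placeholder):

bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config cleanup.policy=compact,min.cleanable.dirty.ratio=0.5,segment.ms=3600000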
Hi all,
I'm new to this mailing list and this is my first post; hello, and thanks in
advance to everybody who spends their time reading this.
I'm writing a Java application with kafka_2.11-0.10.1.0; I just have to
copy a small amount of data from a database to Solr.
In my application I've imagined having two types
None of these questions are naive, so no worries. Answers inline:
> During restore why does Kafka replay the whole topic / partition to recreate
> the state in the local state store ? Isn't there any way to just have the
> latest message as the current state ? Because that's what it is .. right ?
So, a couple of things that will hopefully help:
- it's worth thinking about how to map the problem into KStreams, KTables and
GlobalKTables. For example, events A seem static and read-only to me, and
possibly the data footprint is small, so probably they should be represented in
the system as a
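A GlobalKTable is presumably the representation being hinted at for the small, read-only events-A data; a rough sketch against the 0.10.2-era API, with hypothetical topic/store names and value types:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStreamBuilder;

KStreamBuilder builder = new KStreamBuilder();

// Small, read-only reference data: each instance keeps a full local copy.
GlobalKTable<String, String> eventsA =
    builder.globalTable(Serdes.String(), Serdes.String(), "events-a-topic", "events-a-store");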
Thanks Eno ..
regarding the merging part, I was talking about merging topics using
streams only - so that is safe as you mentioned.
Regarding the restore part, I have another question. Maybe it's a bit
naive too ..
During restore why does Kafka replay the whole topic / partition to
recreate the
Hi Debasish,
Your intuition about the first part is correct. Kafka Streams automatically
assigns a partition of a topic to
a task in an instance. It will never be the case that the same partition is
assigned to two tasks.
About the merging or changing of partitions part, it would help if we kn
On 14/07/17 14:04, Manikumar wrote:
It looks like these logs are coming immediately after topic creation. Did you
see any data loss?
yes
It looks like these logs are coming immediately after topic creation. Did you
see any data loss?
Otherwise, these should be normal.
On Fri, Jul 14, 2017 at 5:02 PM, mosto...@gmail.com wrote:
> we are using a local ZFS
>
>
>
> On 14/07/17 13:31, Tom Crayford wrote:
>
>> Hi,
>>
>> Which folder are you st
we are using a local ZFS
On 14/07/17 13:31, Tom Crayford wrote:
Hi,
Which folder are you storing kafka's data in? By default that's /tmp, which
might be getting wiped by your OS.
Thanks
Tom Crayford
Heroku Kafka
On Fri, Jul 14, 2017 at 11:50 AM, mosto...@gmail.com wrote:
anyone?
On 13
Hi,
Which folder are you storing kafka's data in? By default that's /tmp, which
might be getting wiped by your OS.
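If that's what's happening, point the broker at a persistent directory via log.dirs in server.properties; the path below is only an example:

# server.properties
log.dirs=/var/lib/kafka/data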
Thanks
Tom Crayford
Heroku Kafka
On Fri, Jul 14, 2017 at 11:50 AM, mosto...@gmail.com wrote:
> anyone?
>
>
>
> On 13/07/17 17:09, mosto...@gmail.com wrote:
>
>>
>> Hi
>>
>> With
anyone?
On 13/07/17 17:09, mosto...@gmail.com wrote:
Hi
With Swiss precision, our Kafka test environment seems to truncate
topics at o'clock hours.
This might be confirmed by the following trace, which states
"Truncating log ... to offset 0".
We are still using Kafka 0.10.2.1, but I w
Hi there,
Resending, as the earlier message probably failed to grab your attention.
Regards,
Umesh
---------- Forwarded message ----------
From: UMESH CHAUDHARY
Date: Mon, 3 Jul 2017 at 11:04
Subject: [DISCUSS] KIP-174 - Deprecate and remove internal converter
configs in WorkerConfig
To: d...@kafka.apache.org