Hi All,
We are using AWS MSK with Kafka version 2.6.1. There is a compacted topic
with the below configurations. After reading the documentation my
understanding was that null values (tombstones) in the topic would be removed
after the delete retention time, but I can see months-old keys having null
values. Is there
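For reference, tombstone removal on a compacted topic depends on more than delete.retention.ms: the log cleaner only removes tombstones from segments that have already been cleaned, and the active segment is never compacted, so segment roll and the dirty ratio also matter. A topic-config sketch (the values are illustrative, not a recommendation):

```properties
cleanup.policy=compact
# tombstones become eligible for removal this long after the segment is cleaned
delete.retention.ms=86400000
# roll segments so the active segment does not hold tombstones indefinitely
segment.ms=604800000
# lower ratio = the cleaner runs more eagerly
min.cleanable.dirty.ratio=0.1
```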
Hi All,
Has anyone implemented rack awareness with AWS MSK? In the broker
configuration I see the following values (use1-az2, use1-az4 & use1-az6).
How are these values determined, and how can we inject them into Java
consumers? Thanks!
Regards,
Navneeth
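For context: values like use1-az2 are AWS availability-zone IDs, and MSK sets each broker's broker.rack to the AZ ID of the subnet it runs in. Consumers opt in to rack-aware (follower) fetching by setting client.rack to their own AZ ID. A sketch, assuming a broker version with KIP-392 support:

```properties
# broker side (MSK populates broker.rack automatically)
replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector

# consumer side — must match the AZ the client actually runs in
client.rack=use1-az2
```

On the client, the AZ ID can typically be looked up at startup (e.g. from EC2 instance metadata) and passed into the consumer properties.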
members),
> etc?
>
> Also I'd suggest upgrading to a newer version --- we just released 3.1.0
> --- since we've made many improvements and bug fixes around rebalances and
> assignment logic since 2.6.
>
>
> Guozhang
>
> On Wed, Feb 2, 2022 at 9:37 AM Navneeth Kri
Hi All,
We are facing an issue with our Kafka Streams application due to uneven
task allocation. There are 100 partitions in the input topic with 100
stream threads processing the data. Everything works well when each task
gets assigned 1 partition. But when more than one partition is
assigned
(task-to-node assignment listing, truncated at the start)
N1, 37->N9, 38->N3, 39->N4, 40->N5, 41->N2, 42->N11, 43->N8, 44->N6,
45->N10, 46->N8, 47->N9, 48->N1, 49->N2, 50->N3, 51->N4, 52->N3, 53->N5,
54->N6, 55->N8, 56->N8, 57->N10, 58->N11, 59->N12, 60->N1, 61->N2, 62->N3,
63->N4, 64->N5, 65->N7, 66->N7, 67->N8, 68->N9, 69->N10, 70->N11, 71->N12
On Wed, Jun 2, 2021 at 9:40 PM Navneeth Krishnan
wrote:
> We are us
We are using Kafka version 2.6.1 on the brokers and 2.6.2 for Streams.
Thanks
On Wed, Jun 2, 2021 at 7:18 PM Navneeth Krishnan
wrote:
> Hi All,
>
> We recently migrated from flink to kafka streams in production and we are
> facing a major issue. Any quick help would really be appreciate
Hi All,
We recently migrated from flink to kafka streams in production and we are
facing a major issue. Any quick help would really be appreciated.
There are 72 input data topic partitions and 72 control stream topic
partitions. There is a minimum of 12 nodes with 6 streams threads on each
instan
Hi Bruno/All,
I have a follow-up question regarding the same topic. As you had
mentioned, there will be no impact to key-value stores even when retention.ms
and a cleanup policy are provided. Does that mean the changelog topic will
not clear the data in the broker even after the retention period
heartbeats.enabled = true
target->source.emit.checkpoints.enabled = true
# customize as needed
sync.topic.acls.enabled = false
emit.heartbeats.interval.seconds = 10
emit.checkpoints.interval.seconds = 10
Thanks
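For comparison with the fragment above, a minimal MirrorMaker 2 properties sketch with explicit cluster aliases (the bootstrap addresses and topics pattern are placeholders, not the poster's actual values):

```properties
clusters = source, target
source.bootstrap.servers = source-broker:9092
target.bootstrap.servers = target-msk-broker:9092

source->target.enabled = true
source->target.topics = .*

replication.factor = 3
sync.topic.acls.enabled = false
emit.heartbeats.interval.seconds = 10
emit.checkpoints.interval.seconds = 10
```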
On Sun, Mar 7, 2021 at 10:17 PM Navneeth Krishnan
wrote:
> Thanks, I checked a con
consumer group is in the
> groups allowlist.
>
> Ryanne
>
> On Sun, Mar 7, 2021, 11:09 PM Navneeth Krishnan
> wrote:
>
> > Hi Ryanne,
> >
> > I commented out the alias config and set target to source as false. Now I
> > don't see the error anymore
work as
> you intend. Consider using a custom ReplicationPolicy if you are trying to
> do "identity" replication. There are several implementations floating
> around.
>
> Ryanne
>
> On Sun, Mar 7, 2021, 1:44 AM Navneeth Krishnan
> wrote:
>
> > Hi All,
Hi All,
I'm trying to use MirrorMaker 2 to replicate data to our new AWS MSK Kafka
cluster, and I have been running into many issues; I couldn't find
proper documentation. Need some help, and it's very urgent. Thanks
Also I don't see any of my topics created.
Note: There are no consumers on
combo key is in so range
> fetch is not as efficient since you'd need to fetch a much larger range and
> then filter a lot of records.
>
>
> Guozhang
>
> On Tue, Feb 23, 2021 at 4:04 PM Navneeth Krishnan <
> reachnavnee...@gmail.com>
> wrote:
>
> > Thank
Hi All,
Is there a widely used serde for Java objects which provides both backward
and forward compatibility? I have been using Kryo with a compatible field
serializer, but it seems to be very slow. Any suggestions would really help.
We also have protos in some cases, but moving everything to proto would
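One point worth sketching: encoding records as named fields is what buys compatibility in both directions — an old reader ignores field names it does not know (forward), and a new reader treats absent fields as defaults (backward). A toy pure-Java illustration of that idea (hypothetical class, not a drop-in Kafka Serde; the realistic options remain Avro/Protobuf with a schema registry, or Kryo's CompatibleFieldSerializer as mentioned):

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Toy named-field codec: unknown keys are simply skipped by older readers,
// missing keys simply come back absent for newer readers. Values must not
// contain '=' or newlines in this simplified encoding.
class FieldMapCodec {
    static byte[] encode(Map<String, String> fields) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : fields.entrySet()) {
            sb.append(e.getKey()).append('=').append(e.getValue()).append('\n');
        }
        return sb.toString().getBytes(StandardCharsets.UTF_8);
    }

    static Map<String, String> decode(byte[] bytes) {
        Map<String, String> out = new HashMap<>();
        for (String line : new String(bytes, StandardCharsets.UTF_8).split("\n")) {
            int i = line.indexOf('=');
            if (i > 0) {
                out.put(line.substring(0, i), line.substring(i + 1));
            }
        }
        return out;
    }
}
```

Schema-registry-backed Avro or Protobuf does the same thing with real schema evolution rules and compact binary encoding.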
(combo-key); // here the combo-key is a key> where timestamp is extracted from value
>
> kvStore.put(k, v)
> kvStore.put(combo-key); // it is in
>
> Later when you expire, you do not need to check on kvStore if the value's
> timestamp has changed or not.
>
>
>
>
ere.
> >
> >
> > user ids -->
> > globalktable < keyValueStore periodically queried.
> > \> session store > map (user_id -> null) --/
> >
> > Good luck,
> >
>
larke-Hutchinson <
liam.cla...@adscale.co.nz> wrote:
> Hey Navneeth,
>
> So to understand your problem better - do you only want to stream users
> active within 10 minutes to storage?
>
> Cheers,
>
> Liam
>
> On Tue, Feb 16, 2021 at 9:50 AM Navneeth Krishnan <
>
it to data storage?
>
> Cheers,
>
> Liam Clarke-Hutchinson
>
>
>
> On Mon, 15 Feb. 2021, 9:08 pm Navneeth Krishnan, >
> wrote:
>
> > Hi All,
> >
> > I have a question about how I can use window stores to achieve this use
> > case. Thanks
Hi All,
I have a question about how I can use window stores to achieve this use
case. Thanks for all the help.
A user record will be created when the user first logs in, and the record
needs to be cleaned up after 10 minutes of inactivity. Thus for each user
there will be a TTL, but the TTL value will
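Since this thread is about per-user TTLs: the usual Streams pattern is to keep a last-seen timestamp per key and evict from a punctuator. A minimal pure-Java sketch of just the evict logic (a plain map stands in for the state store; the class and method names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Sketch of the punctuate-and-evict pattern. In a real Processor, sweep()
// would run inside a punctuator registered via context.schedule(...), and
// the map would be a persistent KeyValueStore.
class TtlSweep {
    private final Map<String, Long> lastSeen = new HashMap<>();
    private final long ttlMs;

    TtlSweep(long ttlMs) {
        this.ttlMs = ttlMs;
    }

    // Call on every event for the user: refreshes the TTL.
    void touch(String userId, long nowMs) {
        lastSeen.put(userId, nowMs);
    }

    // Call periodically: removes users inactive for at least ttlMs and
    // returns their ids so tombstones can be forwarded downstream.
    List<String> sweep(long nowMs) {
        List<String> expired = new ArrayList<>();
        Iterator<Map.Entry<String, Long>> it = lastSeen.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (nowMs - e.getValue() >= ttlMs) {
                expired.add(e.getKey());
                it.remove();
            }
        }
        return expired;
    }
}
```

Backing this with a persistent store rather than a window store avoids the fixed-window semantics and lets each key carry its own expiry.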
Hi All,
I have a question about the InMemoryKeyValueStore. I was under the
assumption that in-memory stores would not serialize the objects, but when I
looked into the implementation I see InMemoryKeyValueStore uses a
NavigableMap of bytes, which indicates the user objects have to be
serialized and
using? Also, this doesn't
> > look like the full stacktrace, since we can't see the NPE
> > itself. Can you share the whole thing?
> >
> > Thanks,
> > -John
> >
> >
> > On Tue, 2020-12-15 at 00:30 -0800, Navneeth Krishnan wrote:
> > &g
Hi All,
I have a scheduled function that runs every 10 seconds, and in some cases I
see this NPE. Not sure how to debug this issue. Any pointers would really
help. Thanks
context.schedule(this.scheduleInterval, PunctuationType.STREAM_TIME,
this::flush);
2020-12-15 07:40:14.214
[userapp-c2db617d-
-StreamThread-2] INFO State
transition from RUNNING to REBALANCING
Thanks
On Mon, Dec 14, 2020 at 4:07 PM Navneeth Krishnan
wrote:
> Thanks Guozhang for the suggestion.
>
> We are using kafka 2.3.0 and the app.id is set to the same value.
>
> Bouncing off instances work for a small per
Thanks Guozhang for the response. Appreciate the help.
Regards,
Navneeth
On Mon, Dec 14, 2020 at 2:44 PM Guozhang Wang wrote:
> Hello Navneeth,
>
> Please find answers to your questions below.
>
> On Sun, Dec 6, 2020 at 10:33 PM Navneeth Krishnan <
> reachnavnee...@gmail.c
o the same value.
>
> Also which version of Kafka are you using?
>
>
> Guozhang
>
>
>
>
> On Mon, Dec 14, 2020 at 11:29 AM Navneeth Krishnan <
> reachnavnee...@gmail.com>
> wrote:
>
> > Hi All,
> >
> > How does kafka streams partition
Hi All,
How does Kafka Streams partition assignment work for sources? I have a
streams application reading from a topic which has 24 partitions. There are
6 application containers with 4 stream tasks in each container running, but
only 2 instances are assigned partitions, and even within the two
Hi All,
Any advice would really help.
Thanks,
Navneeth
On Sun, Dec 6, 2020 at 10:32 PM Navneeth Krishnan
wrote:
> Hi All,
>
> I have been working on moving an application to kafka streams and I have
> the following questions.
>
> 1. We were planning to use an EFS mount to
Hi All,
I have been working on moving an application to Kafka Streams and I have
the following questions.
1. We were planning to use an EFS mount to share RocksDB data for the KV
store and global state store, with which we were hoping to minimize the
state restore time when new instances are brought up
Hi All,
I have a global state store and the same data in a local cache. Whenever
there is an update to the state I'm updating the local cache as well so that
they are consistent. But when the state gets restored, the cache is empty.
Is there a way to attach a callback function, or some way to get the state
data during
Hi All,
I'm frequently getting the below error in one of the application consumers.
From the error, what I can infer is that the offset commit failed due to a
timeout after 30 seconds. One suggestion was to increase the timeout, but I
think that will just extend the time period. What should be the good way
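For what it's worth, the consumer-side knobs usually involved in commit-timeout errors are below; reducing the work done per poll often helps more than raising the timeout (values are illustrative, close to the defaults):

```properties
# bounds client-side blocking calls, including commitSync
default.api.timeout.ms=60000
# fewer records per poll -> commits come around sooner
max.poll.records=500
# processing time budget between polls before a rebalance is triggered
max.poll.interval.ms=300000
session.timeout.ms=30000
```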
ics_totaltimems and
> kafka_network_requestmetrics_requestqueuetimems might do you a favor.
>
> And your acks config would also affect the e2e latency
>
> On Wed, 16 Sep 2020 at 05:08, Navneeth Krishnan
> wrote:
>
> > Hi All,
> >
> > We are running kafka in production with 20 brokers
Hi All,
We are running Kafka in production with 20 brokers on version 2.3.1. I see
the below errors happening frequently, and here is the producer
configuration. Need some urgent help on what could be done to resolve this
issue.
batch.size: 65536
linger.ms: 100
request.timeout.ms: 6
org.apac
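Alongside the settings quoted above, the timeout chain is usually what matters for producer timeout errors: request.timeout.ms bounds a single in-flight request, while delivery.timeout.ms bounds the whole send including retries. A hedged sketch (the values are illustrative, not the poster's):

```properties
batch.size=65536
linger.ms=100
# per-request bound
request.timeout.ms=30000
# total bound across retries; must exceed request.timeout.ms + linger.ms
delivery.timeout.ms=120000
retries=2147483647
acks=all
```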
Hi All,
Any suggestions on how I can achieve this?
Thanks
On Fri, Apr 3, 2020 at 12:49 AM Navneeth Krishnan
wrote:
> Hi Boyang,
>
> Basically I don’t want to load all the states upfront. For local kv store,
> when the very first message arrives I basically do a http request to a
out a better way to do it.
Thanks
On Thu, Apr 2, 2020 at 1:38 PM Boyang Chen
wrote:
> Hey Navneeth,
>
> could you clarify a bit on what you mean by `lazy load`, specifically how
> you make it happen with local KV store?
>
> On Thu, Apr 2, 2020 at 12:09 PM Navneeth Krishn
Hi All,
Is there a recommended way of lazy loading the global state store? I'm using
the PAPI and I have the local KV state stores in lazy-load fashion so that I
don't end up loading unnecessary data. Similarly, I want the global
state store to be loaded only when the request has the id for which
gt; John
>
> On Mon, Mar 9, 2020, at 12:48, Navneeth Krishnan wrote:
> > Hi All,
> >
> > Any suggestions?
> >
> > Thanks
> >
> > On Sat, Mar 7, 2020 at 10:13 AM Navneeth Krishnan <
> reachnavnee...@gmail.com>
> > wrote:
> >
> &g
Hi All,
Any suggestions?
Thanks
On Sat, Mar 7, 2020 at 10:13 AM Navneeth Krishnan
wrote:
> Hi All,
>
> Is there a recommended way of passing state stores around across different
> classes? The problem is state store can be fetched only if you have access
> to the context and
Hi All,
Is there a recommended way of passing state stores around across different
classes? The problem is that a state store can be fetched only if you have
access to the context, and in most scenarios the lookup to the state store
happens somewhere inside another class. I can think of two options. Either
add st
Thanks
On Wed, Jan 8, 2020 at 10:33 AM Ismael Juma wrote:
> Has the behavior changed after an upgrade or has it been consistent since
> the start?
>
> Ismael
>
> On Thu, Jan 2, 2020 at 4:18 PM Navneeth Krishnan >
> wrote:
>
> > Hi All,
> >
> > We have
er side and
> "replica.fetch.min.bytes" at broker to test if cpu usage can be down ?
> 6. you can check some metrics from jmx to analysis, e.g. checking
> "kafka.network:type=RequestMetrics,
> name=RequestsPerSec,request={Produce|FetchConsumer|FetchFollower}", if
>
Hi All,
Any suggestions, we are running into this issue in production and any
help would be greatly appreciated.
Thanks
On Mon, Jan 6, 2020 at 9:26 PM Navneeth Krishnan
wrote:
> Hi,
>
> Thanks for the response. We were using version 0.11 previously and all our
> producers/consume
e_10_performance_impact
>
>
>
> On Mon, Jan 6, 2020 at 12:48 PM Navneeth Krishnan <
> reachnavnee...@gmail.com>
> wrote:
>
> > Hi All,
> >
> > Any idea on what can be done? Not sure if we are running into this below
> > bug.
> >
> > https://
Hi All,
Any idea on what can be done? Not sure if we are running into the below
bug.
https://issues.apache.org/jira/browse/KAFKA-7925
Thanks
On Thu, Jan 2, 2020 at 4:18 PM Navneeth Krishnan
wrote:
> Hi All,
>
> We have a kafka cluster with 12 nodes and we are pretty much seeing
Hi All,
We have a Kafka cluster with 12 nodes and we are seeing roughly 90% CPU
usage on all the nodes. Here is all the information. Need some help
figuring out what the problem is and how to overcome this issue.
*Cluster:*
Kafka version: 2.3.0
Number of brokers in cluster: 12
Node type: 4
our team, you can
> configure auto scaling rules in your cloud formation so that your task/job
> load dynamically controls cluster sizing.
>
> Sent from my iPhone
>
> > On Nov 8, 2019, at 1:40 AM, Navneeth Krishnan
> wrote:
> >
> > Hello All,
> >
> >
Hello All,
I have a streaming job running in production which processes over 2
billion events per day and does some heavy processing on each event. We
have been facing some challenges managing Flink in production, like
scaling in and out, restarting the job with a savepoint, etc. Flink provides
>
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamWindowAggregate.java#L105
>
> Hope that helps.
>
>
> -Matthias
>
> On 10/15/19 12:56 AM, Navneeth Krishnan wrote:
> > Hi All,
>
Hi All,
I'm trying to create a tumbling time window of two seconds using the PAPI. I
have a TimestampedWindowStore with both retention and window size as 2
seconds and retainDuplicates as false.
Stores.timestampedWindowStoreBuilder(Stores.persistentTimestampedWindowStore("window-store-1",
Durat
Hi All,
I'm waiting for inputs to proceed with my POC and convert existing
applications from Flink to Kafka Streams. It would be really helpful if I
could get some clarity on this.
Thanks
On Wed, Oct 2, 2019 at 7:05 PM Navneeth Krishnan
wrote:
> Hello All,
>
> Any inputs?
>
&g
Hello All,
Any inputs?
Thanks
On Tue, Oct 1, 2019 at 12:40 PM Navneeth Krishnan
wrote:
> Hi All,
>
> I found the below example on how branching can be achieved with kafka
> streams. I want to implement the same with processor API and the way I get
> it is to have the s
Hi All,
I found the below example of how branching can be achieved with Kafka
Streams. I want to implement the same with the Processor API, and the way I
understand it is to have the same input topic processed with multiple
process functions and output the data based on the filters. Is this the
right understanding
Hi All,
Any advice on how this could be achieved.
Thanks
On Mon, Sep 9, 2019 at 11:08 PM Navneeth Krishnan
wrote:
> Hi All,
>
> I have streaming ETL job which takes data from one kafka topic, enriches
> and writes it to another topic. I want to read one more topic which has the
&g
Hi All,
I have a streaming ETL job which takes data from one Kafka topic, enriches
it, and writes it to another topic. I want to read one more topic which has
the same key, evict the data in the state store (added by the processor
function), and send some messages to the same Kafka topic. Should I be using
Hi All,
Any suggestions?
Thanks
On Thu, Aug 1, 2019 at 8:58 PM Navneeth Krishnan
wrote:
> Hi Guozhang,
>
> Thanks for the clarification. What I want to achieve is use of localized
> data. We have much larger state which has to be used at a per instance
> context. So i
Hi All,
I'm new to Kafka Streams and I'm trying to model my use case, currently
written in Flink, with Kafka Streams. I have a couple of questions.
- How can I join two Kafka topics using the Processor API? I have a data
stream and a change stream which need to be joined based on a key.
- I read
gt;
> > You can get local assignment metadata via
> > `KafkaStreams#localThreadMetadata()` though.
> >
> > Hope this helps.
> >
> >
> > -Matthias
> >
> > On 7/29/19 11:29 PM, Navneeth Krishnan wrote:
> > > Hi All,
> > >
>
Hi All,
The main reason for knowing the partitions is to have localized routing
based on the partitions assigned to a set of stream tasks. This would really
help in my use case.
Thanks
On Mon, Jul 29, 2019 at 8:58 PM Navneeth Krishnan
wrote:
> Hi,
>
> I'm using the processor topol
Hi,
I'm using the processor topology for my use case and I would like to get
the partitions assigned to a particular stream instance. I looked at the
addSource function but I don't see a way to add a callback to get notified
when partition assignment or reassignment happens. Please advise.
Thank y
Hi All,
I'm working on a Kafka Streams application to basically aggregate data over
5-minute intervals.
The input topic is partitioned by id, and then I want to use a process
function to aggregate data using a state store and a punctuator to emit
results. But I'm not sure how I can perform groupBy when