Can anyone help me understand how to debug this issue? I tried setting the log
level to trace in the consumer's logback configuration, but at such times nothing
appears in the log, even at trace level. It is as if the entire code is frozen.
On Thu, Aug 16, 2018 at 6:32 PM Shantanu Deshmukh
wrote:
> I saw a few top
Hello,
We have Kafka 0.10.0.1 running on a 3-broker cluster. We have an
application which consumes from a topic having 10 partitions. 10 consumers
are spawned from this process; they belong to one consumer group.
What we have observed is that very frequently such messages appear in cons
How long did it take to process 50 `ConsumerRecord`s?
On Wed, Aug 22, 2018, 5:16 PM Shantanu Deshmukh
wrote:
> Hello,
>
> We have Kafka 0.10.0.1 running on a 3 broker cluster. We have an
> application which consumes from a topic having 10 partitions. 10 consumers
> are spawned from this process,
Hi Steve,
Application is just sending mails. Every record is just an email request
with very basic business logic. Generally it doesn't take more than 200ms
to process a single mail. Currently it is averaging out at 70-80 ms.
On Wed, Aug 22, 2018 at 3:06 PM Steve Tian wrote:
> How long did it ta
Did you observe any GC pausing?
On Wed, Aug 22, 2018, 6:38 PM Shantanu Deshmukh
wrote:
> Hi Steve,
>
> Application is just sending mails. Every record is just an email request
> with very basic business logic. Generally it doesn't take more than 200ms
> to process a single mail. Currently it is
How do I check for GC pausing?
On Wed, Aug 22, 2018 at 4:12 PM Steve Tian wrote:
> Did you observe any GC pausing?
>
> On Wed, Aug 22, 2018, 6:38 PM Shantanu Deshmukh
> wrote:
>
> > Hi Steve,
> >
> > Application is just sending mails. Every record is just an email request
> > with very basic bu
NVM. What's your client version? I'm asking because max.poll.interval.ms
was only introduced in 0.10.1.0, which is not the version you mentioned
in the email thread.
On Wed, Aug 22, 2018, 6:51 PM Shantanu Deshmukh
wrote:
> How do I check for GC pausing?
>
> On Wed, Aug 22, 2018 at 4:12 PM Steve
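For reference, a minimal sketch of where max.poll.interval.ms fits on a 0.10.1.0+
consumer. The property names are real consumer settings; the broker address, group
id and timeout values below are illustrative only, not taken from this thread.

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MailConsumerConfig {
    public static KafkaConsumer<String, String> create() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");        // placeholder address
        props.put("group.id", "mail-sender-group");            // placeholder group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // max.poll.interval.ms bounds the time between poll() calls;
        // session.timeout.ms only covers heartbeat liveness from the
        // background heartbeat thread.
        props.put("max.poll.interval.ms", "300000");
        props.put("session.timeout.ms", "30000");
        props.put("max.poll.records", "50");
        return new KafkaConsumer<>(props);
    }
}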
Ohh sorry, my bad. Kafka version is 0.10.1.0 indeed and so is the client.
On Wed, Aug 22, 2018 at 4:26 PM Steve Tian wrote:
> NVM. What's your client version? I'm asking because max.poll.interval.ms
> was only introduced in 0.10.1.0, which is not the version you mentioned
> in the email thread.
Have you measured the duration between two `poll` invocations and the size
of the returned `ConsumerRecords`?
On Wed, Aug 22, 2018, 7:00 PM Shantanu Deshmukh
wrote:
> Ohh sorry, my bad. Kafka version is 0.10.1.0 indeed and so is the client.
>
> On Wed, Aug 22, 2018 at 4:26 PM Steve Tian
> wrote:
>
>
I know the average time for processing one record; it is about 70-80ms. I
have set session.timeout.ms so high that the total processing time for one
poll invocation should be well within it.
On Wed, Aug 22, 2018 at 5:04 PM Steve Tian wrote:
> Have you measured the duration between two `poll` invocations and t
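A hedged sketch of the measurement Steve suggests: log the gap between two
successive poll() calls and the size of each returned batch, so it can be compared
against max.poll.interval.ms / session.timeout.ms. The method name and timeout are
illustrative; the rest of the poll loop is assumed to stay as the application has it.

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollGapLogger {
    public static void run(KafkaConsumer<String, String> consumer) {
        long lastPoll = System.currentTimeMillis();
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            long now = System.currentTimeMillis();
            // How long since the previous poll returned, and how many records came back.
            System.out.printf("gap between polls: %d ms, records returned: %d%n",
                    now - lastPoll, records.count());
            lastPoll = now;
            // ... process the records and commit as the application already does ...
        }
    }
}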
Thank you Brett, I identified these metrics as well; thank you for confirming.
I ended up using TotalProduceRequestsPerSec and TotalFetchRequestsPerSec.
Regards
On Tue, Aug 21, 2018 at 9:53 PM Brett Rann
wrote:
> These are in regex form for DataDog's JMX collector, but it should get you
> started:
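Brett's regexes are cut off in this preview. As a hedged illustration only, the two
beans mentioned above can also be read directly over remote JMX; the host, port and
the OneMinuteRate attribute shown here are assumptions about a typical setup, not
Brett's DataDog configuration.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RequestRateProbe {
    public static void main(String[] args) throws Exception {
        // Broker JMX endpoint (hypothetical host/port).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
            ObjectName produce = new ObjectName(
                    "kafka.server:type=BrokerTopicMetrics,name=TotalProduceRequestsPerSec");
            ObjectName fetch = new ObjectName(
                    "kafka.server:type=BrokerTopicMetrics,name=TotalFetchRequestsPerSec");
            System.out.println("produce rate: " + mbs.getAttribute(produce, "OneMinuteRate"));
            System.out.println("fetch rate:   " + mbs.getAttribute(fetch, "OneMinuteRate"));
        }
    }
}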
Thank you Guozhang. I finally solved the issue by shifting from openjdk to
https://hub.docker.com/r/anapsix/alpine-java/ in my container.
Looks like the Alpine openjdk image uses the musl libc implementation, but
RocksJava requires glibc.
On Mon, Jul 30, 2018 at 9:31 AM Guozhang Wang wrote:
> Hello Vishnu,
>
>
I am sending some Kafka messages over the Internet. The message sizes
are about 400K. The essential logic of my code is as follows:
Properties config = new Properties();
config.put("bootstrap.servers", "...");
config.put("client.id", "...");
config.put("key.serializer",
"org.apache.kafka.common.se
I would assume that you are referring to "commit markers". Each time you call
commitTransaction(), a special message called a commit marker is written
to the log to indicate a successful transaction (there are also "abort
markers" if a transaction gets aborted).
Those markers "eat up" one offset, but won't be returned to the consuming
application.
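A hedged sketch of the call sequence being described: each commitTransaction() below
causes a commit marker to be appended, occupying one offset in every partition the
transaction wrote to. The broker address, transactional id and topic are hypothetical.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TxnExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("transactional.id", "txn-example-1");   // enables transactions
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("payments", "k1", "v1"));
            producer.commitTransaction();   // a commit marker is written at this point
        }
    }
}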
Glad to hear that!
On Wed, Aug 22, 2018 at 8:13 AM, Vishnu Viswanath <
vishnu.viswanat...@gmail.com> wrote:
> Thank you Guozhang. I finally solved the issue by shifting from openjdk to
> https://hub.docker.com/r/anapsix/alpine-java/ in my container.
> Looks like alpine openjdk has musl libc imple
Hi Guys,
We are trying to secure the Kafka cluster in order to enforce topic-level
security based on Sentry roles. We are seeing a big performance impact after
SASL_SSL is enabled. I read multiple blog posts describing the performance
impact, but they also said that the impact would be negligible.
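For context, a hedged sketch of the client-side SASL_SSL settings involved; the
mechanism, credentials and truststore path are placeholders, not the poster's setup.
TLS also disables the broker's zero-copy send path, which is the usual source of the
throughput drop people report.

import java.util.Properties;

public class SecureClientConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9093");           // placeholder
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");                      // assumed mechanism
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"alice\" password=\"alice-secret\";");
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        return props;
    }
}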
Hi Kafka users,
*tl;dr questions:*
*1. Is it normal or expected for the coordinator load state to last for 6
hours? Is this load time affected by log retention settings, message
production rate, or other parameters?*
*2. Do non-pykafka clients handle COORDINATOR_LOAD_IN_PROGRESS by consuming
only
Hi Ashok,
Your implementation looks okay to me: I do not know how "handleTasks" is
implemented, just that if you are iterating over the store, you'd need to
close the iterator after using it.
One thing I suspect is that your memory usage combining the Streams cache
plus RocksDB's own buffering may
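A hedged sketch of the iterator handling Guozhang describes, using try-with-resources
so the store iterator is always closed and its RocksDB resources released; the store
name and key/value types are hypothetical.

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class StoreScan {
    public static void scan(KafkaStreams streams) {
        ReadOnlyKeyValueStore<String, Long> store =
                streams.store("my-store", QueryableStoreTypes.keyValueStore());
        // The iterator pins native resources until closed; try-with-resources
        // guarantees release even if processing throws.
        try (KeyValueIterator<String, Long> iter = store.all()) {
            while (iter.hasNext()) {
                System.out.println(iter.next());
            }
        }
    }
}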
Hello Nan,
Thanks for the detailed information you shared. When Kafka Streams is
running normally, no rebalances should be triggered unless some of the
instances (in your case, Docker containers) have soft failures.
I suspect the latency spike is due to the commit intervals: Streams will
try to c
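For reference, a hedged sketch of where that commit interval is configured in a
Streams application; the application id, bootstrap servers and interval value are
placeholders. The default is 30000 ms (100 ms when exactly-once is enabled).

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsCommitConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Controls how often Streams flushes state and commits consumed offsets.
        props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 10000);
        return props;
    }
}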
Hello,
I need to unit-test Java code that should be able to subscribe to a Kafka
topic and asynchronously receive messages as input... Can anybody recommend
a JUnit mocking framework for doing that?
Thank you,
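As a hedged pointer for the question above: the kafka-clients artifact ships
org.apache.kafka.clients.consumer.MockConsumer, which can stand in for a real
consumer in a JUnit test without a broker. A minimal sketch follows; the topic name,
types and offsets are made up for illustration.

import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.MockConsumer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;
import org.apache.kafka.common.TopicPartition;

public class ConsumerTestSketch {
    public static void main(String[] args) {
        MockConsumer<String, String> consumer =
                new MockConsumer<>(OffsetResetStrategy.EARLIEST);
        TopicPartition tp = new TopicPartition("events", 0);
        consumer.assign(Collections.singletonList(tp));
        consumer.updateBeginningOffsets(Collections.singletonMap(tp, 0L));
        // Feed a record into the mock; poll() then returns it to the code under test.
        consumer.addRecord(new ConsumerRecord<>("events", 0, 0L, "key", "value"));

        ConsumerRecords<String, String> records = consumer.poll(100);
        records.forEach(r -> System.out.println(r.value()));
    }
}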
Hi all
Is there a potential issue with creating a connector using incrementing mode
where the timestamp.column.name has underscores?
If I change the mode to bulk I get a successful status and a new topic created.
e.g.
"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
…
"mode":
I was suspecting that too, but I also noticed the spikes are not spaced
around 10s. To further prove it, I put the Kafka data directory in a memory
based directory; it still has such latency spikes. I am going to test it
on a single-broker, single-partition env. Will report back soon.
On Wed, Aug 22,
I set up a local single-node test. Producer and broker are sitting on the
same VM. The broker only has a single node (localhost) and a single partition.
The producer produces messages as fast as it can in a single thread, all
updates to a SINGLE key (String). The Kafka broker data directory is a memory
based dir
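A hedged sketch of the test loop being described: one thread producing updates to a
single key as fast as possible, with per-send latency recorded in the send callback.
The topic name, serializers, message count and spike threshold are assumptions, not
Nan's actual test code.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SingleKeyLatencyTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 1_000_000; i++) {
                long start = System.nanoTime();
                producer.send(new ProducerRecord<>("latency-test", "the-one-key", "v" + i),
                        (metadata, exception) -> {
                            long micros = (System.nanoTime() - start) / 1_000;
                            if (micros > 10_000) {        // flag sends slower than 10 ms
                                System.out.println("spike: " + micros + " us");
                            }
                        });
            }
            producer.flush();
        }
    }
}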
You are reaching 10gb * 1000 / 64 = 156 MB/s, which probably saturated your
hard drive bandwidth? So you can take a look at your iostats.
--
Sent from my iPhone
On Aug 22, 2018, at 8:20 PM, Nan Xu wrote:
I setup a local single node test. producer and broker are sitting at the
same VM. broker
The data directory is memory based, no hard drive involved:
mount -t tmpfs -o size=25G tmpfs /mnt/ramdisk
and use this as the data folder. iostat shows 0 writes too.
On Wed, Aug 22, 2018 at 11:06 PM Ken Chen wrote:
> You are reaching 10gb * 1000 / 64 = 156 MB / s which probably saturated
> your hard d
Given your application code:
final KStream localDeltaStream = builder.stream(
localDeltaTopic,
Consumed.with(
new Serdes.StringSerde(),
new NodeMutationSerde<>()
)
);
KStream localHi