JDK version change, a
> downgrade to 8 may solve the problem, since the log4j library would
> use a different stack inspector.
>
> Greg
>
> On Sun, May 21, 2023 at 11:30 PM Vic Xu wrote:
> >
> > Hi Greg,
> >
> > I found another possible solution that i
> Hope this helps,
> Greg Harris
>
> On Sun, May 21, 2023 at 6:31 AM Vic Xu wrote:
> >
> > Hello all, I have a Kafka cluster deployed with version 3.2.1, JDK 11, and
> > log4j 2.18.0. I built my own Kafka image. One of my Kafka brokers is
> > experiencing
Hello all, I have a Kafka cluster deployed with version 3.2.1, JDK 11, and
log4j 2.18.0. I built my own Kafka image. One of my Kafka brokers is
experiencing CPU issues, and based on the jstack information, it seems that
log4j is causing the problem due to its usage of StackWalker. How to solve this?
Congrats Randall
Hi nikhita
It looks like there is a problem with the connection address
"mycoolhost:mycoolport"; the address must resolve properly to an IP and port.
> On Apr 15, 2021, at 08:51, nikhita kataria wrote:
>
> java.io.IOException: Can't resolve address: mycoolhost:mycoolport
>at
our large files to an external data store then
> produce a small Apache Kafka message with a reference for where to find the
> file. This would allow your consumer applications to find the file as
> needed rather than storing all that data in the event log.
>
> -- bjm
>
> On Thu, M
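That reference-passing approach is the classic claim-check pattern. A minimal Java sketch (the topic name, file path, and uploadToObjectStore helper are invented for illustration, not from this thread): the large payload goes to external storage, and only a small pointer message goes through Kafka.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ClaimCheckProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                // Hypothetical helper: store the big file externally, get back a URI.
                String uri = uploadToObjectStore("/data/big-file.bin");
                // The Kafka record carries only the reference, not the payload.
                producer.send(new ProducerRecord<>("file-events", "big-file.bin", uri));
            }
        }

        // Stand-in for a real upload to S3/HDFS/etc.
        private static String uploadToObjectStore(String path) {
            return "s3://some-bucket" + path;
        }
    }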
Can anyone give some suggestions, or an explanation of why Kafka shows such
high latency for large payloads?
Thanks,
Nan
On Thu, Mar 14, 2019 at 3:53 PM Xu, Nan wrote:
> Hi,
>
> We are using Kafka to send messages, and less than 1% of the messages are
> very big, close to 30 MB. underst
Hi,
We are using Kafka to send messages, and less than 1% of the messages are
very big, close to 30 MB. We understand Kafka is not ideal for sending big
messages, but because the large-message rate is very low, we want to let
Kafka handle them anyway. We still want to get reasonable latency.
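For what it's worth, a sketch of the producer settings that usually need raising for ~30 MB records (the values are assumptions, not from this thread; the broker's message.max.bytes and the topic's max.message.bytes must allow the same size):

    import java.util.Properties;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    // Default max.request.size is 1 MB; allow a single ~30 MB record.
    props.put("max.request.size", Integer.toString(32 * 1024 * 1024));
    // Large requests take longer to ack.
    props.put("request.timeout.ms", "60000");
    // Compression often pays for itself on big payloads.
    props.put("compression.type", "lz4");
    props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");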
hi,
I'm trying the following program and want to see the metadata for
test_store, but nothing comes back: the val metaIter =
streams.allMetadata().iterator() has size 0. I can see data in the store,
though. But I need the metadata so that when I have multiple instances
running I can find the right store.
is th
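One guess at the cause, since it matches the symptom: allMetadata() returns an empty collection until the instance reaches RUNNING, and each instance must set application.server for the discovered metadata to be usable. A Java sketch:

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.state.StreamsMetadata;

    // 'streams' is your started KafkaStreams instance; the config should set
    // StreamsConfig.APPLICATION_SERVER_CONFIG, e.g. "host1:8080", per instance.
    static void printAllMetadata(KafkaStreams streams) throws InterruptedException {
        while (streams.state() != KafkaStreams.State.RUNNING) {
            Thread.sleep(100); // allMetadata() is empty before RUNNING
        }
        for (StreamsMetadata md : streams.allMetadata()) {
            System.out.printf("host=%s port=%d stores=%s%n",
                    md.host(), md.port(), md.stateStoreNames());
        }
    }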
just a general question about RocksDB in Kafka Streams: I see there
is a folder at /tmp/kafka-stream/ which is used by RocksDB in the
Kafka Streams app. So when a stream app gets restarted, can the store data
be loaded directly from this folder? Because I see there is very heavy
traffic on the n
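For reference, a sketch of the relevant setting (the path is an example): if the state directory survives the restart and the application.id is unchanged, Streams reuses the local RocksDB files and only replays the delta from the changelog; /tmp usually does not survive, which forces a full restore over the network.

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app"); // example id
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    // Default is /tmp/kafka-streams; use a persistent disk so a restarted
    // instance can reuse local state instead of restoring the changelog.
    props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams");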
ore
> false positives.
>
> Also, this only works for prefix queries, i.e., if you query with a known
> prefix of the key.
>
> Hope this helps.
>
> -Matthias
>
> On 2/12/19 8:25 AM, Nan Xu wrote:
> > Hi,
> >
> > Just wondering if there is a way to do a s
Hi,
Just wondering if there is a way to do a SQL-like "select key,
value.field1 from ktable where key like 'abc%'".
The purpose of this is to select some values from a KTable without a fully
defined key. store.all() and then filtering would be very inefficient if
the store is big.
Thanks,
Nan
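Matthias's prefix-query suggestion can be sketched as a range scan over an interactive-query store ("my-store" and the String value type are assumptions); with the default String serde, keys compare lexicographically, so the range from "abc" to "abc\uffff" covers the "abc" prefix:

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.state.KeyValueIterator;
    import org.apache.kafka.streams.state.QueryableStoreTypes;
    import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

    static void prefixScan(KafkaStreams streams) {
        ReadOnlyKeyValueStore<String, String> store =
                streams.store("my-store", QueryableStoreTypes.keyValueStore());
        // All keys starting with "abc": range from "abc" up to the highest
        // possible "abc..." key under UTF-8 byte ordering.
        try (KeyValueIterator<String, String> iter = store.range("abc", "abc\uffff")) {
            while (iter.hasNext()) {
                KeyValue<String, String> kv = iter.next();
                System.out.println(kv.key + " -> " + kv.value);
            }
        }
    }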
[2]
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-313%3A+Add+KStream.flatTransform+and+KStream.flatTransformValues
>
>
> Guozhang
>
> On Thu, Feb 7, 2019 at 12:59 PM Nan Xu wrote:
>
> > awesome, this solution is great, thanks a lot.
> >
> > Nan
allStreams(0).to("topic1"..);
> allStreams(1).to("topic2"..);
> allStreams(2).to("topic3"..);
>
> HTH,
> Bill
>
>
>
> On Thu, Feb 7, 2019 at 11:51 AM Nan Xu wrote:
>
> > hmm, but my DSL logic at the beginning involves some joins between di
Hi, sorry if I
> > didn't make this clear before.
> >
> > Thanks,
> > Bill
> >
> > On Thu, Feb 7, 2019 at 10:41 AM Nan Xu wrote:
> >
> >> thanks, just to make sure I understand this correctly.
> >>
> >> I have some proce
the parent node of all 3 sink nodes.
> Then in your Transformer, you can forward the key-value pairs by using one
> of two approaches.
>
> Sending to all child nodes with this call:
>
> context().forward(key, value, To.all()).
>
> Or by listing each child node individually like so
when I do the transform, for a single input record I need to output 3
different records, and those 3 records are of different classes. I want to
send each type of record to a separate topic; my understanding is I should
use
context.forward inside the transformer, like
Transformer{..
context.fo
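A sketch of what Bill describes (Input, TypeA/TypeB/TypeC, and the child names are invented placeholders): the Transformer forwards each record type to a named child sink node and returns null so nothing is emitted twice.

    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.kstream.Transformer;
    import org.apache.kafka.streams.processor.ProcessorContext;
    import org.apache.kafka.streams.processor.To;

    // Input, TypeA, TypeB, TypeC are placeholder classes for this sketch.
    public class FanOutTransformer
            implements Transformer<String, Input, KeyValue<String, Object>> {
        private ProcessorContext context;

        @Override
        public void init(final ProcessorContext context) {
            this.context = context;
        }

        @Override
        public KeyValue<String, Object> transform(final String key, final Input value) {
            // Each child name must match a sink node that was added to the
            // topology with this transformer as its parent.
            context.forward(key, new TypeA(value), To.child("sink-a"));
            context.forward(key, new TypeB(value), To.child("sink-b"));
            context.forward(key, new TypeC(value), To.child("sink-c"));
            return null; // everything was forwarded explicitly
        }

        @Override
        public void close() { }
    }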
hi,
I was writing a simple stream app; all it does is have a producer send a
sequence of paths and values, for example
path /0, value 1
path /0/1, value 2
path /0/1/2, value 3
and the Kafka Streams app takes that input and produces a KTable store.
There is a rule: if the parent path does not exist, then the child can not i
Hi,
Wondering, is there a way to manually trigger log compaction for a certain
topic?
Thanks,
Nan
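There is no direct "compact now" command; a common workaround, sketched here with the modern AdminClient (topic name and values are placeholders, and kafka-configs.sh can make the same change), is to temporarily lower the configs that gate the cleaner and restore them afterwards:

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class TriggerCompaction {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            try (Admin admin = Admin.create(props)) {
                ConfigResource topic =
                        new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
                admin.incrementalAlterConfigs(Map.of(topic, List.of(
                        // clean as soon as almost anything is dirty...
                        new AlterConfigOp(
                                new ConfigEntry("min.cleanable.dirty.ratio", "0.01"),
                                AlterConfigOp.OpType.SET),
                        // ...and roll segments quickly, since only rolled
                        // segments are eligible for compaction
                        new AlterConfigOp(new ConfigEntry("segment.ms", "60000"),
                                AlterConfigOp.OpType.SET)
                ))).all().get();
            }
        }
    }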
tly what is the root cause: the
> community can share with you some past experience and a few quick hints,
> but most likely the issue varies case by case and hence can only be fully
> understood by yourself.
>
>
> Guozhang
>
> On Sat, Aug 25, 2018 at 6:58 PM, Nan Xu wro
Maybe it's easier to use GitHub:
https://github.com/angelfox123/kperf
On Sat, Aug 25, 2018 at 8:43 PM Nan Xu wrote:
> So I upgraded to 2.0.0 and am still seeing the same result. Below is the
> program I am using. I am running everything on a single server (CentOS 7,
> 24 cores, 32
void spaceMessageWithInterval() {
    int i = 0;
    long baseTime = System.nanoTime();
    long doneTime = baseTime + duration;
    while (true) {
        task.run();
        pubTime.add(System.nanoTime());
        long targetTime = System.nanoTime() + interval;
        i
Hi Guozhang,
Here is the very simple Kafka producer/consumer/streams app, using the
latest version; it just creates 2 topics,
input and output.
All components are running on localhost.
Thanks,
Nan
-----Original Message-----
From: Nan Xu [mailto:nanxu1...@gmail.com]
Sent: Friday, August 24, 2018 3:37 PM
To: users@kafka.apache.org
Subject: Re: kafka stream latency
Looks really promising, but after the upgrade it still shows the same result.
I will post the program soon. Maybe you can see where the problem could be
ker machine? It's a
> > Linux kernel parameter.
> >
> > -Sudhir
> >
> > > On Aug 23, 2018, at 4:46 PM, Nan Xu wrote:
> > >
> > > I think I found where the problem is; how to solve it and why, I'm still
> > not sure.
> > >
> > >
>
>
>
> which is purely stateless; committing will not touch any state directory at
> all. Hence committing only involves committing offsets to Kafka.
>
>
> Guozhang
>
>
> On Wed, Aug 22, 2018 at 8:11 PM, Nan Xu wrote:
>
> > I
r hard drive bandwidth? So you can take a look at your iostat
>
> --
> Sent from my iPhone
>
> On Aug 22, 2018, at 8:20 PM, Nan Xu wrote:
>
> I set up a local single-node test. Producer and broker are sitting on the
> same VM. The broker has only a single node (localhost) and
I set up a local single-node test. Producer and broker are sitting on the
same VM. The broker has only a single node (localhost) and a single partition.
The producer produces messages as fast as it can in a single thread, all
updates to a SINGLE key (String). The Kafka broker data directory is a
memory-based dir
ds). Is that aligned
> with the frequency at which you observe latency spikes?
>
>
> Guozhang
>
>
> On Sun, Aug 19, 2018 at 10:41 PM, Nan Xu wrote:
>
> > did more tests and made the test case simpler.
> > The whole setup is now a single physical machine, running 3 Docker
more
expecting 100,000 messages/s and less than 10 ms latency from a single
powerful broker.
Nan
On Mon, Aug 20, 2018 at 12:45 AM Nan Xu wrote:
> I did several tests: one with 10 brokers (remote servers),
> one with 3 brokers (local Docker).
>
> Both exhibit the same behavior, I was thinking
I did several tests: one with 10 brokers (remote servers),
one with 3 brokers (local Docker).
Both exhibit the same behavior. I was thinking the same, but at least from
the Kafka log I don't see a rebalance happening, and I am sure my CPU is
only about half used, and all brokers are still running.
; only the latter has a rebalance process, while the Kafka brokers do not
> really have "rebalances" except balancing load by migrating partitions.
>
> Guozhang
>
>
>
> On Sun, Aug 19, 2018 at 7:47 PM, Nan Xu wrote:
>
> > right, so my kafka cluster is alr
RUNNING, since only after
> that will the streams client start to process the first record.
>
>
> Guozhang
>
>
> On Sat, Aug 18, 2018 at 8:52 PM, Nan Xu wrote:
>
> > thanks, which JMX properties indicate "processing latency spikes" /
> > "through
latency spikes originate on write and not on read: you might
> want to have a look at the Kafka Streams JMX metrics to see if processing
> latency spikes or throughput drops.
>
> Also watch for GC pauses in the JVM.
>
> Hope this helps.
>
>
> -Matthias
>
> On 8/1
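As a concrete starting point (metric names vary across Streams versions, so treat these as assumptions to verify against your client), the same numbers exposed over JMX can be read in-process from KafkaStreams#metrics(); look for process-latency-avg/max and process-rate:

    import java.util.Map;
    import org.apache.kafka.common.Metric;
    import org.apache.kafka.common.MetricName;
    import org.apache.kafka.streams.KafkaStreams;

    // Call once the instance is RUNNING. metricValue() is the newer accessor;
    // old clients (0.10.x) expose value() instead.
    static void dumpLatencyMetrics(KafkaStreams streams) {
        for (Map.Entry<MetricName, ? extends Metric> e : streams.metrics().entrySet()) {
            MetricName name = e.getKey();
            if (name.name().contains("latency") || name.name().contains("rate")) {
                System.out.printf("%s [%s] = %s%n",
                        name.name(), name.group(), e.getValue().metricValue());
            }
        }
    }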
btw, I am using version 0.10.2.0
On Fri, Aug 17, 2018 at 2:04 PM Nan Xu wrote:
> I am working on a Kafka Streams app and see huge latency variance,
> wondering what can cause this?
>
> The processing is very simple and doesn't have state; linger.ms is
> already changed to 5 ms.
I am working on a Kafka Streams app and see huge latency variance;
wondering what can cause this?
The processing is very simple and doesn't have state; linger.ms is already
changed to 5 ms. The message size is around 10K bytes, published at 2000
messages/s; the network is 10G. Using a regular consumer wat
s WAN?
>>
>> Hope we can get you some help.
>>
>> Best jan
>>
>>
>>
>>> On 06.12.2017 14:36, Xu, Zhaohui wrote:
>>> Any update on this issue?
>>>
>>> We also ran into
Any update on this issue?
We also ran into a similar situation recently. MirrorMaker is leveraged to
replicate messages between clusters in different DCs. But sometimes a portion
of the partitions have high consumer lag, and tcpdump also shows a similar
packet delivery pattern. The behavior is s
When I set compression.type=gzip, it works very well, but snappy does not.
At 2016-07-11 22:41:49, "xu" wrote:
Hi All:
I updated the broker version from 0.8.2 to 0.10.0 and set
"compression.type=snappy" in server.properties. The version of producers and
consumers is still
Oh, I am talking about another memory leak: the off-heap memory leak we
experienced, which is about Direct Buffer memory. The call stack is below.
ReplicaFetcherThread.warn - [ReplicaFetcherThread-4-1463989770], Error in
fetch kafka.server.ReplicaFetcherThread$FetchRequest@7f4c1657. Possible
cau
Hi All:
I updated the broker version from 0.8.2 to 0.10.0 and set
"compression.type=snappy" in server.properties. The version of producers and
consumers is still 0.8.2. I expect all the new data received by brokers to
be stored compressed in the log files. But the result is the opposite.
My que
Hi Chris,
If the topic does not exist, it will create a new topic with the name you
give.
Thanks,
Nicole
On Sat, Jun 18, 2016 at 1:55 AM, Chris Barlock wrote:
> If you have a consumer listening on a topic and that topic is deleted, is
> the consumer made aware -- perhaps by some exception -- o
1, 2016 at 8:44 PM, Igor Kravzov wrote:
> Hi,
>
> I am unable to see the images. But I use Kafka with HDP right now without
> any problem.
>
> On Tue, May 31, 2016 at 9:33 PM, Shaolu Xu
> wrote:
>
> > Hi All,
> >
> >
> > Anyone used HDP to run k
Hi All,
Has anyone used HDP to run Kafka? I used it and faced a problem. The
following is the error info:
[image: Inline image 2]
The following is my HDP configuration:
[image: Inline image 1]
Should I set some configuration in HDP?
Thanks in advance.
Thanks,
Nicole
h I end up with is to
significantly increase max_in_flight_requests_per_connection, e.g., to 1024.
On Thu, Apr 28, 2016 at 10:31 AM, Bo Xu wrote:
> PS: The message dropping occurred intermittently, not all at the end. For
> example, it is the 10th, 15th, 18th messages that are missing. It
PS: The message dropping occurred intermittently, not all at the end. For
example, it is the 10th, 15th, and 18th messages that are missing. If it
were all at the end, it would be understandable, because I'm not using
flush() to force transmitting.
Bo
On Thu, Apr 28, 2016 at 10:15 AM, Bo Xu
I set up a simple Kafka configuration, with one topic and one partition. I
have a Python producer continuously publishing messages to the Kafka server
and a Python consumer receiving messages from the server. Each message is
about 10K bytes, far smaller than socket.request.max.bytes=104857600. Wha
Hi folks,
Recently we ran into an odd issue where some partitions' latest offset
becomes 0. Here's a snapshot from Kafka Manager. As you can see,
partitions 2 and 3 became zero.
Partition | Latest Offset | Leader | Replicas | In Sync Replicas | Preferred Leader? | Under Replicated?
0 | 2
:org.apache.kafka.common.errors.NotLeaderForPartitionException: This
server is not the leader for that topic-partition.
(kafka.server.ReplicaFetcherThread)
And after restarting that controller, everything works again.
On Tue, Mar 22, 2016 at 6:14 PM, Qi Xu wrote:
> Hi folks, Rajiv, Jun,
> I'd like
Hi folks, Rajiv, Jun,
I'd like to bring up this thread again from Rajiv Kurian 3 months ago.
Basically we did the same thing as Rajiv. I upgraded two machines (out
of 10) from 0.8.2.1 to 0.9, so after the upgrade there were 2 machines
on 0.9 and 8 machines on 0.8.2.1. And initially it all w
Hi folks,
We just finished the upgrade from 0.8.2 to 0.9 with the instructions on the
Kafka web site (setting the protocol version to 0.8.2.x in the Kafka 0.9
server).
After the upgrade, we wanted to try the producer with the SSL endpoint, but
it never worked. Here's the error:
~/kafka_2.11-0.9.0.0$ ./bin/kafka-c
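For comparison, a minimal sketch of an SSL producer client (host, paths, and passwords are placeholders; the broker must also expose an SSL listener with a matching keystore, and the truststore must trust the broker's certificate):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SslProducerDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9093"); // the SSL listener
            props.put("security.protocol", "SSL");
            props.put("ssl.truststore.location", "/var/ssl/client.truststore.jks");
            props.put("ssl.truststore.password", "changeit");
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("test", "key", "hello over TLS"));
            }
        }
    }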
all
> > interfaces
> > #host.name=localhost
> >
>
> On Thu, Jan 28, 2016 at 10:58 PM, costa xu wrote:
>
> > My version is kafka_2.11-0.9.0.0. I find that Kafka listens on
> > multiple TCP ports on a Linux server.
> >
> > [gdata@gdataqosconnd2 kafka_2
My version is kafka_2.11-0.9.0.0. I find that Kafka listens on multiple TCP
ports on a Linux server.
[gdata@gdataqosconnd2 kafka_2.11-0.9.0.0]$ netstat -plnt|grep java
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp
ssues with 0.9.0 and Spark Streaming. However,
> definitely do your own testing to make sure.
>
> On Wed, Nov 25, 2015 at 11:25 AM, Qi Xu wrote:
>
> > Hi Gwen,
> > Yes, we're going to upgrade to the 0.9.0 version. Regarding the upgrade,
> > we definitely don't want
ybe wait a day and go with a released and tested version.
>
> On Mon, Nov 23, 2015 at 3:01 PM, Qi Xu wrote:
>
> > Forgot to mention that the Kafka version we're using is from August's
> > trunk branch, which has the SSL support.
> >
> > Thanks again,
re about to release 0.9.0
> > (with SSL!), maybe wait a day and go with a released and tested version.
> >
> > On Mon, Nov 23, 2015 at 3:01 PM, Qi Xu wrote:
> >
> > > Forgot to mention that the Kafka version we're using is from August's
> > > trunk
Forgot to mention that the Kafka version we're using is from August's trunk
branch, which has the SSL support.
Thanks again,
Qi
On Mon, Nov 23, 2015 at 2:29 PM, Qi Xu wrote:
> Looping in another guy from our team.
>
> On Mon, Nov 23, 2015 at 2:26 PM, Qi Xu wrote:
>
Looping in another guy from our team.
On Mon, Nov 23, 2015 at 2:26 PM, Qi Xu wrote:
> Hi folks,
> We have a 10-node cluster and several topics. Each topic has about
> 256 partitions with a replication factor of 3. Now we've run into an issue
> where in some topics, a few partitions (< 10)
Hi folks,
We have a 10-node cluster and several topics. Each topic has about 256
partitions with a replication factor of 3. Now we've run into an issue
where, in some topics, a few partitions' (< 10) leader is -1 and all of
them have only one in-sync replica.
From the Kafka manager, here's the snapshot:
[i
KafkaScheduler)
java.lang.OutOfMemoryError: Java heap space
[2015-09-12 01:01:34,023] ERROR Processor got uncaught exception.
(kafka.network.Processor)
java.lang.OutOfMemoryError: Java heap space
[2015-09-12 01:01:35,676] ERROR Uncaught exception in scheduled task
'kafka-recovery-point-checkpoint' (kafka.utils
Hi,
We're running the trunk version of Kafka (for its SSL feature), and recently
I have been trying to enable Kafka Manager with it.
After enabling that, I found that some machines' Kafka server is dead.
Looking at server.log, it has the following entries.
java.lang.OutOfMemoryError: Java heap space
[20
Hi all,
I'm using the Kafka.Net library to implement a Kafka producer.
One issue I found is that sometimes it reads a response from the Kafka
server which indicates a huge message size, 386073344. Apparently something
must be wrong.
But I'm not sure if it's a special flag that Kafka.net
Hi folks,
I tried to clone the latest version of the Kafka trunk and enable SSL.
server.properties doesn't seem to have any security-related settings, and
there seems to be no other config file relevant to SSL either.
So may I know, is this feature ready to use now in the trunk branch?
BTW, we're
Hi Everyone,
I have a question that I hope to get some clarification on.
In a Kafka cluster, does every broker have a complete view of the
metadata?
What's the best practice for a producer to send metadata requests? Is it
recommended to send them to all brokers or just one broker?
In our sce
Hi Everyone,
We're trying to deploy Kafka behind a network load balancer, and we have
created a port map for each Kafka broker under that load balancer; we only
have one public IP, and the Kafka clients are in another system and thus
cannot access the brokers via internal IPs directly.
So for e
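The usual knob for this kind of setup, sketched with invented values for the 0.8/0.9-era configs: clients reconnect to whatever address a broker advertises, so each broker's server.properties must advertise the public IP/name and its own mapped port.

    # broker 0 server.properties (hypothetical values)
    advertised.host.name=public.example.com
    advertised.port=19092

    # broker 1 server.properties
    advertised.host.name=public.example.com
    advertised.port=19093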
Hi all,
I'm now using https://github.com/airbnb/kafka-statsd-metrics2 to monitor
our Kafka cluster. But there are no metrics for consume rate and lag, which
are key performance metrics we care about.
So how do you guys monitor the consume rate and lag of each consumer group?
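One option is to compute lag yourself: committed group offsets minus log-end offsets. A sketch with the modern AdminClient (an API much newer than this thread; at the time, kafka.tools.ConsumerOffsetChecker played the same role; group and server are placeholders):

    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class LagChecker {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            try (Admin admin = Admin.create(props);
                 KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                // Committed offsets of the group being watched...
                Map<TopicPartition, OffsetAndMetadata> committed = admin
                        .listConsumerGroupOffsets("my-group")
                        .partitionsToOffsetAndMetadata().get();
                // ...subtracted from the log-end offsets gives per-partition lag.
                Map<TopicPartition, Long> end = consumer.endOffsets(committed.keySet());
                committed.forEach((tp, om) ->
                        System.out.printf("%s lag=%d%n", tp, end.get(tp) - om.offset()));
            }
        }
    }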
Looking forward to the 0.8.3 release.
BTW, an official management console would be better; none of the ones
mentioned on the official website rocks.
2015-06-10 2:38 GMT+08:00 Ewen Cheslack-Postava :
> The new consumer implementation, which should be included in 0.8.3, only
> needs a bootstrap
Right now, Kafka topics do not support changing the replication factor or
partition number after creation. The kafka-reassign-partitions.sh tool can
only reassign existing partitions.
2015-06-11 9:31 GMT+08:00 Gwen Shapira :
> What do the logs show?
>
> On Wed, Jun 10, 2015 at 5:07 PM, Dillian Murph
Hi,
We recently ran into a scenario where we initiate a FetchRequest with a
fixed fetchSize (64k), shown below, using SimpleConsumer. When the broker
contains an unusually large message, the broker returns an empty message
set *without any error code*. According to the
docum
2015 at 8:57 AM, Shady Xu wrote:
>
> > Storing and managing offsets on the broker will put high pressure on the
> > brokers, which will affect the performance of the cluster.
> >
> > You can use the high-level consumer APIs; then you can get the offsets
> either
> > f
Storing and managing offsets on the broker will put high pressure on the
brokers, which will affect the performance of the cluster.
You can use the high-level consumer APIs; then you can get the offsets
either from ZooKeeper or the __consumer_offsets topic. On the other hand,
if you use the simple co