Here is what I see
* The max tasks setting is a cap on a Connector across the cluster. If I have 8
VMs but 8 max tasks, my assumption that there would be 8 * 8 = 64 task
threads was
wrong. The logs showed that the partitions were consumed by 8 threads on
the 8 VMs ( 1 per VM ), which was highly un-optimal.
I misspoke
>> I now have 8 VMs (8 cpus each) with 48 max tasks and the tasks did spread across
the 8 VMs. I then upscaled to 12 VMs and the tasks *have not* migrated as I
would expect.
On Fri, Oct 18, 2019 at 8:00 PM Vishal Santoshi wrote:
> OK, You will have to explain :)
>
> I had 12 VMs with 8 cpus and 8 max tasks ...
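One way to see where the tasks actually landed after scaling is the Connect
REST status endpoint. Below is a minimal Java sketch, assuming java.net.http
and placeholder values for the worker URL and connector name:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConnectorStatusCheck {
    public static void main(String[] args) throws Exception {
        // Placeholders -- point these at your own Connect worker and connector.
        String connectUrl = "http://connect-worker:8083";
        String connector = "my-replicator";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(connectUrl + "/connectors/" + connector + "/status"))
                .GET()
                .build();

        // The JSON response lists every task with the worker_id it is running on,
        // so comparing the output before and after adding VMs shows whether any
        // tasks were rebalanced onto the new workers.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}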
OK, You will have to explain :)
I had 12 VMs with 8 cpus and 8 max tasks. I thought: let me give a CPU to
each task, which I presumed is a java thread ( even though I know the
thread would be mostly IO bound ). I saw the issue I pointed out above.
*I now have 8 VMs 8 cpus with 48 max tasks and it did spread to the 8 VMs.*
Hi,
Is it possible this happens because your producer can't establish a secure
connection to the Kafka brokers but repeatedly tries to? Do you see any SSL
errors in the Kafka broker logs or in the logs for your producer?
Harper
On Fri, Oct 18, 2019 at 8:23 AM DHARSHAN SHAS3 wrote:
> Hi,
>
>
>
What is tasks.max? Consider bumping it to something like 48 if you're running
on a dozen nodes.
Ryanne
On Fri, Oct 18, 2019, 1:43 PM Vishal Santoshi wrote:
> Hey Ryanne,
>
>
> I see a definite issue. I am doing an intense test and I bring
> up 12 VMs ( they are 12 pods with 8 cpus each ) ...
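For what it's worth, tasks.max is part of the connector's own configuration,
so bumping it is a config update rather than a worker-level setting. A rough
sketch of that call against the Connect REST API, with a placeholder worker
URL, connector name and connector class (PUT /connectors/{name}/config
replaces the whole config, so a real call must include every existing
setting):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BumpTasksMax {
    public static void main(String[] args) throws Exception {
        // Placeholders -- adjust for your own cluster and connector.
        String connectUrl = "http://connect-worker:8083";
        String connector = "my-replicator";
        String config = "{"
                + "\"connector.class\": \"com.example.MyReplicatorConnector\","
                + "\"topics.regex\": \".*\","
                + "\"tasks.max\": \"48\""
                + "}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(connectUrl + "/connectors/" + connector + "/config"))
                .header("Content-Type", "application/json")
                .method("PUT", HttpRequest.BodyPublishers.ofString(config))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}

Since tasks.max is a per-connector cap across the whole cluster, 48 tasks on
a dozen workers works out to roughly 4 tasks per worker.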
Has anyone experienced this, or does anyone have a known solution? I would
appreciate some insights here.
Sincerely,
Anindya Haldar
Oracle Responsys
> On Oct 17, 2019, at 5:38 PM, Anindya Haldar wrote:
>
> I am trying to test a 3 node Kafka cluster using the producer and consumer
> test perf scripts ...
Hey Ryanne,
I see a definite issue. I am doing an intense test and I bring
up 12 VMs ( they are 12 pods with 8 cpus each ), replicating about 1200
plus topics ( fairly heavy 100mbps ) ... They are acquired and are
staggered as they come up... I see a fraction of these nodes not assigned ...
Hi,
I request you all to help me understand why enabling SSL on Kafka nodes
results in an increased number of TCP TIME_WAIT connections on the Kafka brokers.
Recently, I enabled SSL on the
Kafka brokers and also enabled SSL on the producer (a Spring application), and what
I see is that SSL works fine as expected but ...
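For reference, the client-side SSL settings involved here are ordinary
producer properties. A minimal sketch with a plain Java producer, using
placeholder broker address, topic and store paths (a Spring Boot application
would typically feed the same keys in through its own configuration):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SslProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address; 9093 is assumed to be the SSL listener.
        props.put("bootstrap.servers", "broker1:9093");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Standard client-side SSL settings. A wrong truststore or a hostname
        // mismatch makes the client retry the handshake, which shows up as SSL
        // errors in the broker logs and churns short-lived TCP connections.
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        // Only needed if the brokers require client authentication.
        props.put("ssl.keystore.location", "/etc/kafka/client.keystore.jks");
        props.put("ssl.keystore.password", "changeit");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"));
            producer.flush();
        }
    }
}

If the handshake keeps failing and being retried, each short-lived connection
ends up in TIME_WAIT on whichever side closes first, which would line up with
the SSL errors Harper suggested checking for above.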
In addition to what Peter said, I would recommend that you stop and delete
all data logs (if your replication factor is set correctly). Upon restart,
they’ll be recreated. This is of course a last-resort thing to do if you
cannot determine the root cause.
The measure works well for me with my k8s ...
We found a few more critical issues and so have decided to do one more RC
for 2.3.1. Please review the release notes:
https://home.apache.org/~davidarthur/kafka-2.3.1-rc2/RELEASE_NOTES.html
*** Please download, test and vote by Tuesday, October 22, 9pm PDT
Kafka's KEYS file containing PGP keys ...
Will do
One more thing: the age/latency metrics seem to be analogous, as in they
seem to be calculated using similar routines. I would think a metric
tracking
the number of flush failures ( as a GAUGE ) given offset.flush.timeout.ms
would be highly beneficial.
Regards..
On Thu, Oct 17, 2019 ...
Oh sorry, a COUNTER... is more like it.
On Fri, Oct 18, 2019, 6:58 AM Vishal Santoshi wrote:
> Will do
> One more thing: the age/latency metrics seem to be analogous, as in they
> seem to be calculated using similar routines. I would think a metric
> tracking
> the number of flush failures ...
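To make the suggestion concrete: this is not an existing Connect metric, but a
sketch of how such a counter could be registered with the client-side metrics
API in a recent kafka-clients. The metric name, group and class names below
are made up:

import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.CumulativeCount;

public class FlushFailureCounterSketch {
    private final Metrics metrics = new Metrics();
    private final Sensor flushFailures;

    public FlushFailureCounterSketch() {
        // Hypothetical metric name/group. A cumulative count only ever goes up,
        // which makes it easier to alert on than a gauge that can be reset.
        flushFailures = metrics.sensor("offset-flush-failures");
        flushFailures.add(
                metrics.metricName("offset-flush-failure-total",
                        "connect-worker-metrics-sketch",
                        "Number of offset flushes that exceeded offset.flush.timeout.ms"),
                new CumulativeCount());
    }

    public void onFlushTimeout() {
        // Call wherever the flush timeout is detected; record() bumps the count by one.
        flushFailures.record();
    }
}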
Hi all,
I wanted to understand the impact of having large numbers of consumers on
producer latency and on the brokers. I have around 7K independent consumers. Each
consumer is consuming all partitions of a topic. I have manually assigned
partitions of a topic to a consumer, not using consumer groups. Each
consumer is ...
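For context, "manually assigned, not using consumer groups" usually means
assign() rather than subscribe(). A minimal sketch assuming that pattern,
with placeholder broker address and topic name:

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualAssignmentConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address. No group.id is set because assign() below
        // bypasses the group coordinator entirely -- no rebalances, no group state.
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = new ArrayList<>();
            for (PartitionInfo p : consumer.partitionsFor("my-topic")) {
                partitions.add(new TopicPartition(p.topic(), p.partition()));
            }
            consumer.assign(partitions);

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s-%d @ %d%n",
                            record.topic(), record.partition(), record.offset());
                }
            }
        }
    }
}

With this pattern the load the 7K consumers add is mostly fetch traffic and
connection overhead on the brokers rather than group-coordination work.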
Hi Bart,
Before changing anything, I would verify whether or not the affected broker is
trying to catch up. Have you looked at the broker’s log? Do you see any errors?
Check your metrics or the partition directories themselves to see if data is
flowing into the broker.
If you do want to reset ...
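One way to watch whether the replicas are actually catching back up is to
poll the topic description and compare the replica list against the ISR. A
minimal AdminClient sketch, with placeholder bootstrap address and topic name:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class IsrCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address.
        props.put("bootstrap.servers", "broker1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription description = admin
                    .describeTopics(Collections.singletonList("my-topic"))
                    .all().get().get("my-topic");
            for (TopicPartitionInfo partition : description.partitions()) {
                // A replica listed under replicas() but missing from isr() is still
                // catching up (or stuck); re-running this over time shows progress.
                System.out.printf("partition %d leader=%s replicas=%s isr=%s%n",
                        partition.partition(), partition.leader(),
                        partition.replicas(), partition.isr());
            }
        }
    }
}

The broker-side UnderReplicatedPartitions metric tells the same story in
aggregate.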
Hi all
We had a Kafka broker failure (too many open files, stupid), and now the
partitions on that broker will no longer become part of the ISR set. It's been
a few days (organizational issues), and we have significant amounts of data on
the ISR partitions.
In order to make the partitions on t...
Hi,
I am continuously getting *FETCH_SESSION_ID_NOT_FOUND*. I'm not sure why
it's happening. Can anyone please help me understand what the problem is and
what the impact on consumers and brokers will be?
*Kafka Server Log:*
INFO [2019-10-18 12:09:00,709] [ReplicaFetcherThread-1-8][]
org.apache.kafka.client ...