Hi,
I don't have the previous logs, so I restarted MM2, which produces the same
results. So here are the new logs:
MM2 starts and seems to be ready, but it is not mirroring messages:
[2020-03-20 10:50:11,985] INFO [Producer
clientId=connector-producer-MirrorCheckpointConnector-0] Cluster ID:
700ZEsu0ShuzPZ6lZE54_Q (o
Thanks Guozhang. That's really helpful!
Are you able to explain a bit more about how it would work for my use case? As
I understand it this 'repartition' method enables us to materialize a stream to
a new topic with a custom partitioning strategy.
But my problem is not how the topic is partitio
Hey,
I am using MM2 to mirror cluster A to cluster B with tasks.max = 4.
I started two instances of MM2 and noticed that all MirrorSourceConnectors
were running in one instance and the rest of the connectors in the other.
This results in very uneven resource utilization, and it also did not
really spre
Peter, in Connect the Connectors are only run on the leader node. Most of
the work is done in the Tasks, which should be divided across nodes. Make
sure you have tasks.max set to something higher than the default of 1.
Ryanne
On Fri, Mar 20, 2020, 8:53 AM Péter Sinóros-Szabó
wrote:
> Hey,
>
> I
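To illustrate Ryanne's point, a minimal MM2 config sketch with tasks.max raised above the default of 1 (the cluster aliases A/B, bootstrap servers, and topic filter here are placeholders, not taken from the thread):

```properties
# mm2.properties -- illustrative sketch only; A/B aliases and hosts are placeholders
clusters = A, B
A.bootstrap.servers = a-broker:9092
B.bootstrap.servers = b-broker:9092
A->B.enabled = true
A->B.topics = .*
# default is 1; raise it so replication work can be split into multiple
# tasks and spread across Connect workers
tasks.max = 4
```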
I use tasks.max = 4.
I see 4 tasks of MirrorSourceConnector on MM2 instance A.
I see 4 tasks of MirrorCheckpointConnector and 1 task of
MirrorHeartbeatConnector on MM2 instance B.
The number of tasks is well distributed, but the types of tasks are not.
According to the Connect documentation I expec
Hmm, that's weird. I'd expect the types of tasks to be evenly distributed as
well. Is it possible one of the internal topics is misconfigured s.t. the
Herders aren't functioning correctly?
Ryanne
On Fri, Mar 20, 2020 at 11:17 AM Péter Sinóros-Szabó
wrote:
> I use tasks.max = 4.
>
> I see 4 task
Hmm, maybe turn on debugging info and try to figure out what Connect is
doing during that time.
Ryanne
On Fri, Mar 20, 2020 at 6:15 AM Péter Sinóros-Szabó
wrote:
> Hi,
>
> I don't have the previous logs, so I restarted MM2, that produces the same
> results. So new logs:
>
> MM2 starts and seems
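Enabling the debug output Ryanne suggests is usually done in the Connect worker's log4j configuration; a sketch (the exact logger names are assumptions and may differ across Kafka versions):

```properties
# connect-log4j.properties -- sketch; logger names assumed, adjust per version
log4j.rootLogger=INFO, stdout
# verbose output from the Connect runtime, including rebalance activity
log4j.logger.org.apache.kafka.connect=DEBUG
log4j.logger.org.apache.kafka.connect.runtime.distributed=DEBUG
# MirrorMaker 2 connectors
log4j.logger.org.apache.kafka.connect.mirror=DEBUG
```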
Partition assignment, or more specifically "task placement" for Kafka
Streams, is a hard-coded algorithm (cf.
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/assignment/StickyTaskAssignor.java).
The algorithm actually tries to assign differe
Although it's not the main objective, one side effect of KIP-441 should be
improved balance of the final stable assignment. By warming up standbys
before switching them over to active tasks we can achieve stickiness without
sacrificing balance in the followup rebalance.
This work is targeted for t
I believe this is already fixed:
https://issues.apache.org/jira/browse/KAFKA-3572
On Thu, Mar 19, 2020 at 9:27 PM 张祥 wrote:
> Hi,
>
> I notice that there are JMX metrics for deleted topics when using Java code
> and jmxterm. Has anyone else run into this? If yes, what is the reason
> behind this a
Well, I don't know much about herders. If you can give some idea how to
check it, I will try.
Peter
On Fri, 20 Mar 2020 at 17:47, Ryanne Dolan wrote:
> Hmm, that's weird. I'd expect the type of tasks to be evenly distributed as
> well. Is it possible one of the internal topics are misconfigured
Peter, what happens when you add an additional node? Usually Connect will
detect it and rebalance tasks accordingly. I'm wondering if that mechanism
isn't working for you.
Ryanne
On Fri, Mar 20, 2020 at 2:40 PM Péter Sinóros-Szabó
wrote:
> Well, I don't know much about herders. If you can give
Hey guys.
I'm trying to maximize the amount of data I'm batching from Kafka. The
output is written to a file on the server. I'm setting extremely
high values in my consumer configuration and I'm still getting multiple
files written with very small file sizes.
As seen below, I wait a long ti
Hi Ryan,
Firstly, what version Kafka?
Secondly, check the broker's message.max.bytes and the topic's
max.message.bytes; I suspect they're set a lot lower (or not set at all) and
will override your fetch.min.bytes.
Cheers,
Liam Clarke
On Sat, 21 Mar. 2020, 11:09 am Ryan Schachte,
wrote:
> Hey guys
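For context, the consumer-side settings usually involved in getting larger batches per poll look roughly like this (the values are illustrative placeholders, not recommendations for this case):

```properties
# consumer.properties -- illustrative values only
# broker holds the fetch response until this much data is available...
fetch.min.bytes=1048576
# ...or until this wait time elapses, whichever comes first
fetch.max.wait.ms=5000
# upper bound on data returned per partition per fetch
max.partition.fetch.bytes=10485760
# upper bound on records returned by a single poll()
max.poll.records=5000
```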
Hi Liam,
We are running 2.3.1. I was hoping I wouldn't need to modify anything at
the broker level since I do not have control/access to the broker config,
just the consumer configuration. Am I out of luck in that case?
On Fri, Mar 20, 2020 at 3:27 PM Liam Clarke
wrote:
> Hi Ryan,
>
> Firstly,
I do see the default for message.max.bytes is set to 1 MB though. Would that
be for each record or for each poll?
On Fri, Mar 20, 2020 at 3:36 PM Ryan Schachte
wrote:
> Hi Liam,
> We are running 2.3.1. I was hoping I wouldn't need to modify anything at
> the broker level since I do not have control/a
Hi Ryan,
That'll be per poll.
Kind regards,
Liam Clarke
On Sat, 21 Mar. 2020, 11:41 am Ryan Schachte,
wrote:
> I do see the default for message.max.bytes is set to 1MB though. That would
> be for each record or each poll?
>
> On Fri, Mar 20, 2020 at 3:36 PM Ryan Schachte
> wrote:
>
> > Hi Li
Hi Ryan,
If your end goal is just larger files on the server, you don't really need
to mess with the batching configs. You could just write multiple polls'
worth of data to a single file.
On Fri, Mar 20, 2020 at 3:50 PM Liam Clarke
wrote:
> Hi Ryan,
>
> That'll be per poll.
>
> Kind regards,
>
Hi,
I am using kafka client 2.0.1 and once in a while I see the following in
the logs:
2020-03-20 09:42:57.960 INFO 160813 --- [pool-1-thread-1]
o.a.kafka.clients.FetchSessionHandler: [Consumer clientId=consumer-1,
groupId=version-grabber-ajna0-mgmt1-1-prd] Error sending fetch request
(sessi
Hey David,
I would like to raise https://issues.apache.org/jira/browse/KAFKA-9701 as a
2.5 blocker. The impact of this bug is that it could throw a fatal exception
and kill a stream thread at the Kafka Streams level. It could also create a
crashing scenario for plain Kafka Consumer users as well as the
Hi All,
I have a Kafka Consumer that polls the data and gets *paused* for 15-20
minutes for the post-processing of the polled records. However, during the
pause, the broker assumes that the consumer group is dead (check the
log below) and rebalances the consumer group.
*Application Log:*
k8s-work
Hi Boyang,
Is this a regression?
Ismael
On Fri, Mar 20, 2020, 5:43 PM Boyang Chen
wrote:
> Hey David,
>
> I would like to raise https://issues.apache.org/jira/browse/KAFKA-9701 as
> a
> 2.5 blocker. The impact of this bug is that it could throw fatal exception
> and kill a stream thread on Kaf
Ah, I think I figured out part of my issue. If I update the Helm chart
values that impact advertised.listeners and then run a 'helm upgrade', it
sometimes does not actually apply the settings immediately. This made
debugging really hard until I figured it out. I don't know
Helm/Kubernetes wel
Ravi,
It is not a bug. The broker assumes that your consumer faced a live-lock.
You need to tune the property max.poll.interval.ms to increase the expected
interval between poll() calls on the consumer side.
-- Lukasz
On Sat, 21 Mar 2020 at 02:19, Ravi Kanth wrote:
> Hi All,
>
> I have a Kafka Consumer that polls th
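The settings involved in Lukasz's advice look roughly like this (the values are illustrative; the right numbers depend on how long the post-processing actually takes):

```properties
# consumer.properties -- illustrative values only
# allow up to 30 minutes between poll() calls before the consumer is
# considered live-locked and the group rebalances (default is 5 minutes)
max.poll.interval.ms=1800000
# fewer records per poll also shortens the processing time per batch
max.poll.records=100
```

An alternative to raising the timeout is to keep calling poll() during the long processing while the partitions are paused via the consumer's pause()/resume() methods, which keeps the consumer alive without returning new records.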