Configuration:
3 brokers
1 zookeeper
1 connect worker (distributed mode)
8 connectors (sink and source)
Description of connect topics:
[image: Снимок экрана 2019-03-22 в 17.00.56.png]
Environment:
K8s, gcloud, confluent images
After a crash of the first broker there are several messages that the broker is not a…

…skipped by the consumer as if they don't exist.
>
> When inspecting the topic manually, use isolation.level=read_committed to
> get the same behavior.
>
> Ryanne
>
> On Fri, Mar 15, 2019, 6:08 AM Федор Чернилин
> wrote:
>
> > I also noticed another important…
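Ryanne's suggestion above can be tried with the stock console consumer, which supports an isolation-level flag (topic name and bootstrap server below are placeholders):

```shell
# Show only committed records; records from aborted or still-open
# transactions are hidden, matching a read_committed consumer.
bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic my-topic \
  --from-beginning \
  --isolation-level read_committed
```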
I also noticed another important thing now. The message used for the join is
uncommitted. I discovered this with the help of the consumer setting
isolation.level=read_committed. The message got into the topic via the same
streams app. Recall that the streams app has
processing.guarantee=exactly_once. How…
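The check described above (reading with isolation level read_committed) comes down to a single consumer setting; a minimal config sketch:

```properties
# With read_committed, records from aborted or still-open transactions
# are skipped as if they don't exist (read_uncommitted is the default).
isolation.level=read_committed
```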
Hello! I encountered the following problem. I have 3 brokers and 1
zookeeper. Topics have 10 partitions and replication factor 3. The streams
app runs with 10 threads, exactly_once, and a commit interval of 1000 ms.
When I run the streams app, the join of my 2 topics doesn't work for a
specific message, but for all other messages…
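The setup described above corresponds roughly to this Streams configuration (a sketch; the application id and bootstrap servers are placeholders, not from the original message):

```properties
# Streams config sketch matching the described setup
application.id=join-debug-app
bootstrap.servers=broker-0:9092
num.stream.threads=10
processing.guarantee=exactly_once
commit.interval.ms=1000
```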
Hello. I have a question about the following.
The documentation of Kafka Connect states:
"When a worker fails, tasks are rebalanced across the active workers. When
a task fails, no rebalance is triggered, as a task failure is considered an
exceptional case. As such, failed tasks are not automatically restarted by
the framework and should be restarted via the REST API."
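Given the quoted documentation, a failed task has to be restarted explicitly; the Connect REST API exposes status and restart endpoints for this (connector name, task id, and worker address below are placeholders):

```shell
# Inspect connector and task state (look for tasks in FAILED state)
curl -s http://localhost:8083/connectors/my-connector/status

# Restart one failed task; this does not trigger a rebalance
curl -s -X POST http://localhost:8083/connectors/my-connector/tasks/0/restart
```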
Hello! I have a question. We have a cluster with several Connect workers, and
we have many different connectors. We need to give each connector its own
producer settings: max.in.flight.requests.per.connection, partitioner.class,
acks. But I'm having difficulties. How can I do that? Thanks
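One approach, available since Kafka 2.3 via KIP-458 (so it may not apply to older clusters): set `connector.client.config.override.policy=All` in the worker config, then override producer settings per connector with the `producer.override.` prefix. The connector name and class below are placeholders:

```json
{
  "name": "my-source",
  "config": {
    "connector.class": "com.example.MySourceConnector",
    "tasks.max": "1",
    "producer.override.acks": "all",
    "producer.override.max.in.flight.requests.per.connection": "1",
    "producer.override.partitioner.class": "org.apache.kafka.clients.producer.internals.DefaultPartitioner"
  }
}
```

Producer overrides matter mainly for source connectors; for sink connectors the analogous `consumer.override.` prefix covers consumer settings. Before 2.3, `producer.*` settings in the worker config applied uniformly to all connectors on that worker.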