Hi, I have 100+ sink connectors running against 100+ topics, with roughly 3 partitions per topic. They run on K8s across 10 pods with 6 CPUs and 32 GB of memory. The connector in question is Snowflake's sink connector v2.2.0. Everything worked in the micro-batch SNOWPIPE ingestion mode, but once I switched over to SNOWPIPE_STREAMING, it no longer works.
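For context, each connector's config looks roughly like the sketch below. The property names are the standard Snowflake sink connector settings; the account, credential, topic, and task values here are placeholders rather than my exact config.

# Sketch of one connector's config -- property names are the standard
# Snowflake sink connector settings; values below are placeholders.
connector.class=com.snowflake.kafka.connector.SnowflakeSinkConnector
# tasks.max value is illustrative
tasks.max=3
# topic list abbreviated
topics=bigpicture.bulk_change,<other topics>
# was SNOWPIPE before the switch
snowflake.ingestion.method=SNOWPIPE_STREAMING
snowflake.url.name=<account>.snowflakecomputing.com:443
snowflake.user.name=<user>
snowflake.private.key=<redacted>
snowflake.database.name=<database>
snowflake.schema.name=<schema>
snowflake.role.name=<role>
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false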
Tasks are failing with the following exception:

State: FAILED
Worker ID: 10.136.83.73:8080
Trace: java.lang.IllegalStateException: No current assignment for partition bigpicture.bulk_change-0
    at org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:369)
    at org.apache.kafka.clients.consumer.internals.SubscriptionState.seekUnvalidated(SubscriptionState.java:386)
    at org.apache.kafka.clients.consumer.KafkaConsumer.seek(KafkaConsumer.java:1637)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.rewind(WorkerSinkTask.java:642)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:327)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:238)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:207)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:229)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:284)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)

My questions are:
1. Are these connectors overloaded? Can Kafka Connect handle this level of load?
2. Assuming it can (and I've seen it do so), could this failure be caused by an underlying consumer-group rebalance? If so, what would you recommend I do to mitigate it?

Thanks
-BW