Github user fmthoma commented on a diff in the pull request:

    https://github.com/apache/flink/pull/6021#discussion_r189432794

    --- Diff: flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java ---
    @@ -326,6 +342,24 @@ private void checkAndPropagateAsyncError() throws Exception {
     		}
     	}
     
    +	/**
    +	 * If the internal queue of the {@link KinesisProducer} gets too long,
    +	 * flush some of the records until we are below the limit again.
    +	 * We don't want to flush _all_ records at this point since that would
    +	 * break record aggregation.
    +	 */
    +	private void checkQueueLimit() {
    +		while (producer.getOutstandingRecordsCount() >= queueLimit) {
    +			producer.flush();
    +			try {
    +				Thread.sleep(500);
    +			} catch (InterruptedException e) {
    +				LOG.warn("Flushing was interrupted.");
    --- End diff --

I don't think so, `flushSync()` will just swallow the interrupt and block again until the queue is empty. `checkQueueLimit()` OTOH aborts immediately on the first interrupt. So there is a difference, although we could of course discuss which one makes more sense.
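To make the distinction concrete, here is a minimal, self-contained sketch of the two behaviors being compared. This is not the actual `FlinkKinesisProducer` code: the producer is stubbed out with an `AtomicInteger`, plain `System.out` stands in for the logger, and the "restore the interrupt flag and return" handling in the `checkQueueLimit`-style loop is an assumption based on the description above, since the quoted diff is truncated at the `catch` block.

```java
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Sketch contrasting the two interrupt-handling strategies discussed above.
 * Hypothetical stand-in for the KPL producer: each flush() drains part of the
 * simulated queue so the example is runnable on its own.
 */
public class FlushBehaviorSketch {

    static final AtomicInteger outstanding = new AtomicInteger(10_000);

    static void flush() {
        // Pretend each flush drains a chunk of the outstanding records.
        outstanding.updateAndGet(n -> Math.max(0, n - 1_000));
    }

    /** flushSync-style: swallow the interrupt and keep blocking until the queue is empty. */
    static void flushSync() {
        while (outstanding.get() > 0) {
            flush();
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                // Interrupt is only logged; the loop goes back to blocking.
                System.out.println("Flushing was interrupted, continuing until empty.");
            }
        }
    }

    /** checkQueueLimit-style: flush only until below the limit, abort on the first interrupt. */
    static void checkQueueLimit(int queueLimit) {
        while (outstanding.get() >= queueLimit) {
            flush();
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                System.out.println("Flushing was interrupted, aborting.");
                Thread.currentThread().interrupt(); // assumed: preserve interrupt status and give up
                return;
            }
        }
    }

    public static void main(String[] args) {
        checkQueueLimit(5_000); // stops once below the limit, so record aggregation keeps working
        flushSync();            // drains the queue completely, regardless of interrupts
    }
}
```

The difference sits entirely in the `catch` blocks: the flushSync-style loop treats the interrupt as noise and keeps blocking until the queue is fully drained, whereas the checkQueueLimit-style loop gives up as soon as it is interrupted.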
---