> no experience with this. You can pass any Kafka
> client parameters through the properties.* option[1] and see if the
> setting works for you.
>
> Best,
>
> Dawid
>
> [1]
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/kafka.html#
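For illustration, here is a minimal DDL sketch of forwarding Kafka consumer settings through the properties.* prefix as suggested above (the table name, topic, servers, and values are placeholders, not taken from the original mails):

CREATE TABLE kafka_source (
  user_id STRING,
  event_time TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic' = 'input_topic',
  'properties.bootstrap.servers' = 'localhost:9092',
  'properties.group.id' = 'example_group',
  -- any consumer option can be forwarded the same way, e.g. the batch-size
  -- setting mentioned later in the thread (value is illustrative only):
  'properties.max.partition.fetch.bytes' = '1048576',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'json'
);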
> with some slots having more operators than others.
>
> Regards,
> David
>
> On Thu, Mar 18, 2021 at 1:03 PM eef hhj wrote:
>
>> Hi team,
>>
>> Currently the SQL-generated operators all have the same parallelism by
>> default, and we faced an issue that in th
Hi team,
Currently the SQL-generated operators all have the same parallelism by
default, and we faced an issue: in the case of multiple joins, the
operators at later stages face heavier computation, so the overall
pipeline is back-pressured and checkpoints occasionally
fail (expire).
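For context, the shared parallelism described here typically comes from the job-wide default that all SQL-generated operators pick up; in sql-client.sh it can be changed like this (a sketch assuming the Flink 1.12 option name and SET syntax; the value is arbitrary, and raising it scales all operators uniformly rather than individual ones):

SET table.exec.resource.default-parallelism=16;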
Hi team,
We are in a situation where we want to reduce the read frequency of the
Kafka SQL connector. I did some investigation on the Kafka client
properties, but it seems there is no such option. Although I found the
batch-size config ('properties.max.partition.fetch.bytes') among the config
>> files to read at startup
>> and won't change during running. If you need this function, you
>> may need to customize a new connector.
>>
>> Best,
>> Xingbo
>>
>> eef hhj wrote on Sat, Nov 21, 2020 at 2:38 PM:
>>
>>> Hi,
>>>
>>>
Hi,
I'm facing a situation where I want the Flink app to dynamically detect
changes in the Filesystem batch data source. As I tried in the following
example in sql-client.sh, the SELECT can query all the records under the
folder.
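A sketch of the kind of filesystem batch source being described, assuming a placeholder table name, path, schema, and format (not the exact DDL from the original mail):

CREATE TABLE fs_source (
  id STRING,
  payload STRING
) WITH (
  'connector' = 'filesystem',
  'path' = 'file:///tmp/input_folder',
  'format' = 'csv'
);

SELECT * FROM fs_source;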
When I add a new file to the folder, the query does