so, I hope that I am misinterpreting something.
On Tue, Oct 12, 2021 at 4:47 PM Preston Price wrote:
> Thanks for your thoughts here Fabian, I've responded inline but I also
> want to clarify the reason I need the file paths on commit.
> The FileSink works as expected in Azure Data Lake with the ABFS connector,
> but I want to perform an additional step by telling Azure Data Explorer to
> ingest the committed files.
Some details about my runtime/environment:
Java 11
Flink version 1.14.0
Running locally in IntelliJ
The error message that I am getting is: Configuration property
{storage_account}.dfs.core.windows.net not found.
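This error is typically what the ABFS client throws when it cannot find the
storage account key in its configuration. A minimal sketch of the
flink-conf.yaml entry that should satisfy it, assuming account-key
authentication (<storage_account> and <account_key> are placeholders for
your own values):

    # account key for the ADLS Gen2 storage account used by abfs:// paths
    fs.azure.account.key.<storage_account>.dfs.core.windows.net: <account_key>

As far as I can tell, Flink's Azure filesystem plugin forwards fs.azure.*
keys from flink-conf.yaml into the underlying Hadoop configuration, so an
entry like this should reach the ABFS client.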
Reading through all the docs hasn't yielded much help.
In the Flink docs here
There is an open bug for this here:
https://issues.apache.org/jira/browse/FLINK-24497
For log4j2 these settings worked for me:
# mute obnoxious warnings due to this bug:
# https://issues.apache.org/jira/browse/FLINK-24497
logger.flink_annoying_mute.name = org.apache.flink.connector.kafka.source.metr
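A complete pair of log4j2 entries might look like the following. The fully
qualified class name and the ERROR level are my assumptions (the Kafka
source's metrics code is where these warnings originate), so verify them
against the logger name in your own logs:

    # mute obnoxious warnings due to this bug:
    # https://issues.apache.org/jira/browse/FLINK-24497
    # NOTE: the logger name below is an assumption; check your logs
    logger.flink_annoying_mute.name = org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
    logger.flink_annoying_mute.level = ERROR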
>> With the librdkafka client and the Go wrapper, the
>> topic-pattern subscribe is reactive. The Flink Kafka connector might behave
>> similarly.
>>
>> Best,
>> Denis
>>
>> On Fri, Oct 15, 2021 at 12:34 AM Preston Price wrote:
>>
>>> No, th
> I suppose you want to read from different topics every now and then? Does
> the topic-pattern option [1] in Table API Kafka connector meet your needs?
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-master/docs/connectors/table/kafka/#topic-pattern
>
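For reference, a minimal sketch of what that option looks like in a Kafka
connector DDL; the schema, topic regex, and connection properties here are
placeholders rather than anything from this thread:

    CREATE TABLE events (
      message STRING
    ) WITH (
      'connector' = 'kafka',
      -- subscribe to every topic whose name matches this regex
      'topic-pattern' = 'ingest-.*',
      'properties.bootstrap.servers' = 'broker:9092',
      'properties.group.id' = 'example-group',
      'scan.startup.mode' = 'latest-offset',
      'format' = 'json'
    );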
> Preston Price wrote on 20
The KafkaSource and KafkaSourceBuilder appear to prevent users from
providing their own KafkaSubscriber. Am I overlooking something?
In my case I have an external system that controls which topics we should
be ingesting, and it can change over time. I need to add and remove topics
as we refresh
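For what it's worth, while KafkaSubscriber appears to be internal in 1.14,
the builder does expose pattern-based subscription plus periodic partition
discovery, which covers the add-topics-over-time case (removing topics is
the harder part). A minimal sketch; the broker address, pattern, and
interval are placeholders:

    import java.util.regex.Pattern;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

    KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("broker:9092")
        // subscribe by regex instead of a fixed topic list
        .setTopicPattern(Pattern.compile("ingest-.*"))
        // re-evaluate the pattern every 60s so newly created topics are picked up
        .setProperty("partition.discovery.interval.ms", "60000")
        .setStartingOffsets(OffsetsInitializer.latest())
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();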
Thanks for your thoughts here Fabian, I've responded inline but I also want
to clarify the reason I need the file paths on commit.
The FileSink works as expected in Azure Data Lake with the ABFS connector,
but I want to perform an additional step by telling Azure Data Explorer to
ingest the committed files.
I am trying to implement a File Sink that persists files to Azure Data
Lake, and then on commit I want to ingest these files to Azure Data
Explorer. Persisting the files is pretty trivial using the ABFS connector.
However, it does not appear to be possible to get any details about
names/paths to the committed files.
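For concreteness, a minimal sketch of the sink as described (the abfs URI,
encoder, and stream variable are placeholders); this writes and commits the
files, but it does not expose the committed paths, which is exactly the gap
described above:

    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.connector.file.sink.FileSink;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.OnCheckpointRollingPolicy;

    FileSink<String> sink = FileSink
        .forRowFormat(
            new Path("abfs://container@account.dfs.core.windows.net/output"),
            new SimpleStringEncoder<String>("UTF-8"))
        // roll files on checkpoint so they are committed with the checkpoint
        .withRollingPolicy(OnCheckpointRollingPolicy.build())
        .build();

    stream.sinkTo(sink);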