Can a Flink job run as a REST server, where the Apache Flink job is
listening on a port (443)? When a user calls this URL with a payload,
the data goes directly to the Apache Flink windowing function.
Right now Flink can ingest data from Kafka or Kinesis, but we have a use
case where we would like to push data to Flink directly.
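One way to approximate this (a sketch, not a production design: each parallel source instance would open its own port, and buffered payloads are lost on failure since nothing here is checkpointed) is to embed a small HTTP endpoint inside a custom source and bridge requests into the stream through a queue. The bridging piece below uses only the JDK's built-in `com.sun.net.httpserver.HttpServer`; the `/ingest` path and the 202 response are arbitrary choices, and serving TLS on 443 would additionally require `HttpsServer` with a configured keystore:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Bridges HTTP POST payloads into a queue that a custom Flink source could drain. */
public class HttpIngestBridge {
    final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private HttpServer server;

    /** Starts the endpoint; pass port 0 to bind an ephemeral port. Returns the actual port. */
    public int start(int port) throws IOException {
        server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/ingest", exchange -> {     // "/ingest" is an arbitrary path choice
            try (InputStream in = exchange.getRequestBody()) {
                queue.add(new String(in.readAllBytes(), StandardCharsets.UTF_8));
            }
            exchange.sendResponseHeaders(202, -1);        // 202 Accepted, empty body
            exchange.close();
        });
        server.start();
        return server.getAddress().getPort();
    }

    public void stop() {
        server.stop(0);
    }
}
```

Inside a custom Flink `SourceFunction`, the `run()` loop would drain `queue` and emit each payload with `ctx.collect(...)` under the checkpoint lock, and `cancel()` would call `stop()`.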
We are creating files in S3, and we want to update the S3 object metadata
with some security-related information for governance purposes.
Right now Apache Flink completely abstracts away how and when the S3 object
gets created in the system.
Is there a way that we can pass the S3 object metadata and update it?
ojects/flink/flink-docs-stable/dev/connectors/streamfile_sink.html#part-file-configuration
> ------
> From: dhurandar S
> Date: 2020-05-13 05:13:04
> To: user
> Subject: changing the output files names in Streamfilesink from part-00
We want to change the names of the files generated as the output of our
StreamingFileSink.
Currently, files are generated with names like part-00*; is there a way we
can change the name?
In Hadoop, we could change this with custom RecordWriters and MultipleOutputs.
May I please get some help in this regard? This is cau
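For the part-file naming question above: since Flink 1.10, `StreamingFileSink` accepts an `OutputFileConfig` that sets a custom prefix and suffix for part files (the subtask index and a counter are still inserted between them). A minimal configuration sketch, with the bucket path and the prefix/suffix as placeholder values:

```java
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.functions.sink.filesystem.OutputFileConfig;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

// Part files become <prefix>-<subtaskIndex>-<counter><suffix>
// instead of the default part-<subtaskIndex>-<counter>.
OutputFileConfig fileConfig = OutputFileConfig.builder()
        .withPartPrefix("mydata")      // placeholder prefix
        .withPartSuffix(".json")       // placeholder suffix
        .build();

StreamingFileSink<String> sink = StreamingFileSink
        .forRowFormat(new Path("s3://my-bucket/output"),   // placeholder path
                      new SimpleStringEncoder<String>("UTF-8"))
        .withOutputFileConfig(fileConfig)
        .build();
```

This is a configuration fragment against the Flink API, not a complete job; fully custom naming beyond prefix/suffix is not exposed by this sink.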
Hi,
We have a use case where we have to demultiplex an incoming stream into
multiple output streams.
We read from one Kafka topic and, as output, generate multiple Kafka
topics. The logic for generating each new Kafka topic is different and not
known beforehand. Users of the system keep adding new ones.
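One common pattern for this is to keep the topic-selection logic in a small, pure routing function, and call it from a `KafkaSerializationSchema` whose `serialize` method returns a `ProducerRecord` with the computed topic, so `FlinkKafkaProducer` writes each record to a per-record destination. A sketch of the routing piece alone; `TopicRouter` and its rules are hypothetical, not a Flink API:

```java
import java.util.Locale;
import java.util.regex.Pattern;

/** Illustrative per-record topic routing; the class and its rules are hypothetical. */
public class TopicRouter {
    // Characters Kafka permits in topic names.
    private static final Pattern VALID = Pattern.compile("[a-zA-Z0-9._-]+");
    private final String prefix;

    public TopicRouter(String prefix) {
        this.prefix = prefix;
    }

    /** Derives the output topic from an event-type field; invalid or missing types go to a dead-letter topic. */
    public String topicFor(String eventType) {
        if (eventType == null || !VALID.matcher(eventType).matches()) {
            return prefix + "dead-letter";
        }
        return prefix + eventType.toLowerCase(Locale.ROOT);
    }
}
```

New destinations then only require new routing rules, not new sinks; note the producer can only write to topics that already exist unless the broker allows topic auto-creation.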