As per this link, the feature is still open:
https://issues.apache.org/jira/browse/FLINK-6757
I would like to know whether this feature integrating Apache Atlas with
Apache Flink has been released. If yes, could anyone share the references
for the integration?
Thanks and Regards,
Arjun S
Hi Shammon,
Thank you for your prompt reply. Also, I'm interested to know if there is an
available feature for integrating Apache Flink with Apache Ranger. If so,
could you kindly share the relevant documentation with me?
Thanks & Regards,
Arjun
Hi,
I'm interested to know if there is an available feature for integrating
Apache Flink with Apache Ranger. If so, could you kindly share the relevant
documentation with me?
Thanks and regards,
Arjun S
Hello team,
I'm currently in the process of configuring a Flink job. This job entails
reading files from a specified directory and then transmitting the data to
a Kafka sink. I've already successfully designed a Flink job that reads the
file contents in a streaming manner and effectively sends them
> … and STREAMING that reads or writes (par…
>
> <https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/connectors/datastream/filesystem/>
>
> On Thursday, 26 October, 2023 at 06:53:23 pm IST, arjun s <
> arjunjoice...@gmail.com> wrote:
>
> H…
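For reference, a minimal sketch of the kind of job described above: continuously
reading text files from a directory and forwarding each line to Kafka. The input
path, broker address, and topic are placeholder assumptions, not details from
this thread.

import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FileToKafkaJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpointing is what lets the source remember which files are done.
        env.enableCheckpointing(60_000);

        // STREAMING mode: keep monitoring the directory for new files.
        FileSource<String> source = FileSource
                .forRecordStreamFormat(new TextLineInputFormat(), new Path("/data/input"))
                .monitorContinuously(Duration.ofSeconds(10))
                .build();

        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")   // placeholder broker
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("file-events")          // placeholder topic
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "file-source")
           .sinkTo(sink);
        env.execute("file-to-kafka");
    }
}

The monitorContinuously() call is what keeps the source in STREAMING mode
instead of finishing after a single pass over the directory.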
gt;> to do this outside of the Flink job periodically (cron, whatever), because
>>>> on restart it won't reprocess the files that have been committed in the
>>>> checkpoints.
>>>>
>>>>
>>>> https://nightlies.apache.org/flink/flink
Hi team,
I'm also interested in finding out if there is Java code available to
determine the extent to which a Flink job has processed files within a
directory. Additionally, I'm curious about where the details of the
processed files are stored within Flink.
Thanks and regards,
Arjun
Hi team,
I'm interested in understanding if there is a method available for clearing
the State Backends in Flink. If so, could you please provide guidance on
how to accomplish this particular use case?
Thanks and regards,
Arjun S
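For what it's worth, one common way to get state cleared (an assumption about
the goal here, not a confirmed answer from this thread) is to let it expire via
state TTL rather than wiping the backend by hand; starting the job fresh,
without restoring from a checkpoint or savepoint, also begins with empty state.
A minimal TTL sketch:

import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

public class TtlExample {
    public static void main(String[] args) {
        // Entries expire one hour after they are created or updated.
        StateTtlConfig ttl = StateTtlConfig
                .newBuilder(Time.hours(1))
                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
                .build();

        ValueStateDescriptor<Long> desc =
                new ValueStateDescriptor<>("counter", Long.class);
        desc.enableTimeToLive(ttl); // attach TTL before the state is first used
    }
}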
Hi team,
I'm currently utilizing the Table API function within my Flink job, with
the objective of reading records from CSV files located in a source
directory. To obtain the file names, I'm creating a table and specifying
the schema using the Table API in Flink. Consequently, when the schema
match
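A minimal sketch of that kind of table definition over a CSV directory; the
table name, columns, path, and options below are placeholders rather than the
schema from this thread:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CsvDirectorySource {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // 'source.monitor-interval' keeps the source watching the directory
        // for new files instead of reading it once and finishing.
        tEnv.executeSql(
                "CREATE TABLE csv_source (" +
                "  customer_id STRING," +
                "  amount DOUBLE" +
                ") WITH (" +
                "  'connector' = 'filesystem'," +
                "  'path' = '/data/input'," +
                "  'format' = 'csv'," +
                "  'source.monitor-interval' = '10s'" +
                ")");

        tEnv.executeSql("SELECT * FROM csv_source").print();
    }
}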
> But it cannot be done for keyed state for users, because every operation
> on it is currently bound to a specific key.
> BTW, could you also share your business scenario? It could help us
> rethink the interface. Thanks!
>
> On Tue, Oct 31, 2023 at 12:02 AM arjun s wrote:
> … of the `path` option
>
>
> Best,
> Yu Chen
>
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/table/filesystem/
>
> --
> *From:* arjun s
> *Sent:* 2023-11-06 20:50
> *To:* user@flin
> … will recursively read all files
> under the directory of the `path` option
>
>
> Best,
> Yu Chen
>
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/table/filesystem/
>
> --
> …
>
> Please let me know if there are any other problems.
>
> Best,
> Yu Chen
>
> > On 2023-11-07 18:11, arjun s wrote:
> >
> > Hi Chen,
> > I attempted to configure the 'source.path.regex-pattern' property in the
> table settings as '^cu
… HashMap or Heap, are stored in RocksDB, and
what type of data is being stored in RocksDB.
Thanks in Advance,
Arjun S
>
>
> --
> Best!
> Xuyang
>
>
> At 2023-12-01 15:08:41, "arjun s" wrote:
>
> Hi team,
> I'm new to Flink's window and aggregate functions, and I've configured my
> state backend as RocksDB. Currently, I'm computing the c
Hello team,
I'm currently working on a Flink use case where I need to calculate the sum
of occurrences for each "customer_id" within a 10-minute duration and send
the results to Kafka, associating each "customer_id" with its corresponding
count (e.g., 101:5).
In this scenario, my data source is a
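A minimal sketch of that computation, with the source stubbed out and the Kafka
settings as placeholder assumptions:

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class CustomerCountJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000);

        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")      // placeholder broker
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("customer-counts")         // placeholder topic
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .build();

        env.fromElements("101", "101", "102")                // stand-in for the real source
           .map(id -> Tuple2.of(id, 1L))
           .returns(Types.TUPLE(Types.STRING, Types.LONG))
           .keyBy(t -> t.f0)                                 // key by customer_id
           .window(TumblingProcessingTimeWindows.of(Time.minutes(10)))
           .reduce((a, b) -> Tuple2.of(a.f0, a.f1 + b.f1))   // sum occurrences per key
           .map(t -> t.f0 + ":" + t.f1)                      // format as "101:5"
           .returns(Types.STRING)
           .sinkTo(sink);

        env.execute("customer-counts");
    }
}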
Hello team,
I'm relatively new to Flink's window functions, and I've configured a
tumbling window with a 10-minute duration. I'm wondering about the scenario
where the Flink job is restarted or the Flink application goes down. Is
there a mechanism to persist the aggregated values, allowing the pro
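Window contents live in the configured state backend and are included in
checkpoints, so in-flight aggregates are restored when the job recovers. A
minimal sketch of the relevant settings (the interval, backend choice, and
checkpoint path are assumptions; the RocksDB backend needs the
flink-statebackend-rocksdb dependency):

import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointedWindows {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Window state is snapshotted on each checkpoint and restored on recovery.
        env.setStateBackend(new EmbeddedRocksDBStateBackend());
        env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);
        env.getCheckpointConfig()
           .setCheckpointStorage("file:///tmp/flink-checkpoints"); // placeholder path
    }
}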
Hi team,
I'm a newcomer to Flink's window functions, specifically utilizing
TumblingProcessingTimeWindows with a configured window duration of 20
minutes. However, I've noticed an anomaly where the window output occurs
within 16 to 18 minutes. This has left me uncertain about whether I
overlooked a
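One possible explanation, offered as an assumption rather than a confirmed
diagnosis: processing-time tumbling windows are aligned to multiples of the
window size since the epoch, so a job started mid-window gets a shorter first
window. The arithmetic:

public class WindowAlignment {
    public static void main(String[] args) {
        long size = 20 * 60 * 1000L;                  // 20-minute window in ms
        long now = System.currentTimeMillis();
        // With a zero offset, Flink assigns start = timestamp - (timestamp % size),
        // so the first window can end well before 20 minutes after startup.
        long start = now - (now % size);
        long end = start + size;
        System.out.println("first window fires in "
                + (end - now) / 60000.0 + " minutes");
    }
}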
Hi team,
I am currently in the process of deploying Flink on Kubernetes using the
Flink Kubernetes Operator and have encountered a scenario where I need to
pass runtime arguments to my Flink application from a properties file.
Given the dynamic nature of Kubernetes environments and the need for
fl
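A minimal sketch of one way to wire this up, assuming the properties file is
mounted into the pod (for example from a ConfigMap) and its path is passed
through the deployment's job arguments; the flag name and file path are
hypothetical:

import org.apache.flink.api.java.utils.ParameterTool;

public class ArgsFromProperties {
    public static void main(String[] args) throws Exception {
        // e.g. args = {"--config", "/opt/flink/conf-ext/app.properties"}
        ParameterTool cli = ParameterTool.fromArgs(args);
        ParameterTool props =
                ParameterTool.fromPropertiesFile(cli.getRequired("config"));

        // Read settings with defaults, so the job runs even with a sparse file.
        String topic = props.get("kafka.topic", "file-events");
        System.out.println("topic=" + topic);
    }
}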
Hello team,
I'm currently deploying a Flink session cluster on Kubernetes using the
Flink Kubernetes operator. My Flink job, which utilizes the DataStream API
for its logic, requires several external dependencies, so I've used an
init container to copy all necessary jars to a /mnt/external-jars