Hi Aniket,
Currently the filesystem connector does not support the option
'pipeline.generic-types'='false', because the connector outputs
`PartitionCommitInfo` messages for the downstream partition committer
operator even when the sink table has no partitions. There is a
`List partitions`
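For reference, the option in question is a pipeline-level setting that disables Kryo fallback for generic types; a minimal sketch of how it is typically toggled from the SQL client (the setting itself is real, the surrounding usage is illustrative):

```sql
-- Disable Kryo-serialized generic types for subsequent jobs.
-- With the filesystem sink above, this currently fails because
-- PartitionCommitInfo is serialized as a generic type.
SET 'pipeline.generic-types' = 'false';
```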
Hi
I am trying to create a Flink SQL program using a Python UDF with
metrics. This is my sample Python file:
custom_udf_2.py
```
from pyflink.table.udf import ScalarFunction, udf
from pyflink.table import DataTypes

class MyUDF(ScalarFunction):

    def __init__(self):
        self.counter = None

    def open(self, function_context):
        # Register a metric counter when the function is opened.
        self.counter = function_context.get_metric_group().counter("my_counter")
```
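For anyone following along without a Flink runtime at hand, the lifecycle the snippet above relies on can be sketched in plain Python: `open()` receives a context from which a metric counter is obtained, and `eval()` updates it per record. The `Fake*` classes below are stand-ins for PyFlink's context/metric objects, not PyFlink APIs:

```python
# Plain-Python sketch of the ScalarFunction metrics lifecycle.
# FakeCounter / FakeMetricGroup / FakeFunctionContext are hypothetical
# stand-ins that mimic the shape of PyFlink's objects.

class FakeCounter:
    """Stand-in for a Flink metric counter."""
    def __init__(self):
        self.count = 0

    def inc(self, n=1):
        self.count += n

class FakeMetricGroup:
    def counter(self, name):
        return FakeCounter()

class FakeFunctionContext:
    def get_metric_group(self):
        return FakeMetricGroup()

class MyUDF:
    def __init__(self):
        # The counter cannot be created here: the metric group only
        # becomes available when the runtime calls open().
        self.counter = None

    def open(self, function_context):
        self.counter = function_context.get_metric_group().counter("my_counter")

    def eval(self, x):
        self.counter.inc()  # count one invocation per record
        return x + 1

my_udf = MyUDF()
my_udf.open(FakeFunctionContext())
print(my_udf.eval(41))        # 42
print(my_udf.counter.count)   # 1
```

The key point the sketch illustrates is that the counter must be registered in `open()`, not `__init__`, because the function context does not exist yet at construction time.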
Hi,
I am trying to write data to parquet files using SQL insert statements. Generic
types are disabled in the execution environment.
There are other queries running in the same job that are counting/aggregating
data. Generic types are disabled as a performance optimization for those
queries.
In
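For context, a parquet filesystem sink fed by an INSERT statement is typically declared along these lines; the table names, schema, and path below are hypothetical:

```sql
-- Sketch of a filesystem sink writing parquet files.
CREATE TABLE parquet_sink (
  id BIGINT,
  name STRING
) WITH (
  'connector' = 'filesystem',
  'path' = 's3://my-bucket/output',  -- hypothetical path
  'format' = 'parquet'
);

INSERT INTO parquet_sink SELECT id, name FROM source_table;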
Hello Community,
Do the FileSource APIs for Bulk and Record stream formats handle back
pressure in any way, for example by slowing down how fast data is sent
further into the pipeline or read from the source?
Or do they provide any callback/handle so that an action can be taken?
Could you please share any details?
Any thoughts on this?
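As far as I understand, Flink's new-style sources (FileSource included) are pull-based: the runtime only requests records from the split reader as fast as downstream operators drain them, so backpressure propagates automatically and no callback is needed. The same blocking/pull idea can be sketched in plain Python with a bounded queue (no Flink APIs involved):

```python
# Illustrative sketch: a bounded queue models backpressure between a
# fast reader and a consumer. When the buffer is full, put() blocks,
# which is the "slowing down" effect the question asks about.
import queue
import threading

buf = queue.Queue(maxsize=2)  # small buffer, like a bounded buffer pool
produced = []

def reader():
    for rec in range(5):
        buf.put(rec)          # blocks while the buffer is full
        produced.append(rec)
    buf.put(None)             # sentinel: end of input

t = threading.Thread(target=reader)
t.start()

consumed = []
while True:
    rec = buf.get()           # the consumer pulls at its own pace
    if rec is None:
        break
    consumed.append(rec)
t.join()

print(consumed)  # [0, 1, 2, 3, 4]
```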
On Fri, Apr 21, 2023 at 4:10 PM Sumanta Majumdar
wrote:
> Hi,
>
> Currently we have a streaming use case with a Flink application that
> runs on a session cluster and is responsible for reading data from a
> Kafka source, which is basically table transaction events
Hi,
I have a flink job running on EKS, reading and writing data records to S3
buckets.
I am trying to set up access credentials via AWS IAM.
I followed this:
https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
I have configured: com.amazonaws.auth.WebIdentityTokenC
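For context, with the S3 filesystem plugin the credentials provider is usually selected in flink-conf.yaml; a sketch of the relevant entry, with the provider class left as a placeholder since the exact class name is cut off above (IRSA itself injects AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE into the pod):

```yaml
# flink-conf.yaml (sketch; adjust to your deployment)
# The s3a filesystem reads its AWS credentials provider from this key:
fs.s3a.aws.credentials.provider: <fully-qualified credentials provider class>
```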