Re: Getting exception when writing to parquet file with generic types disabled

2023-05-18 Thread Shammon FY
Hi Aniket, currently the filesystem connector does not support the option 'pipeline.generic-types'='false', because the connector outputs `PartitionCommitInfo` messages to the downstream partition committer operator even when the sink table has no partitions. There is a `List partitions` …
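A minimal PyFlink sketch of the option in question (the environment setup and comment are illustrative, not from the thread):

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
# With generic types disabled, any record type that would fall back to Kryo
# (such as the connector-internal PartitionCommitInfo) triggers an exception
# instead of being serialized generically.
t_env.get_config().set("pipeline.generic-types", "false")
```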

python udf with flinksql

2023-05-18 Thread tom yang
Hi, I am trying to create a Flink SQL program using a Python UDF with metrics. This is my sample Python file custom_udf_2.py:

```python
from pyflink.table.udf import ScalarFunction, udf
from pyflink.table import DataTypes

class MyUDF(ScalarFunction):
    def __init__(self):
        self.counter = None
    d…
```
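The preview cuts off inside `__init__`; for reference, the documented PyFlink pattern registers the metric in `open()`, roughly as below (the counter name and eval body are illustrative):

```python
from pyflink.table import DataTypes
from pyflink.table.udf import ScalarFunction, udf

class MyUDF(ScalarFunction):
    def __init__(self):
        self.counter = None

    def open(self, function_context):
        # Metrics are registered in open(), where the runtime context exists.
        self.counter = function_context.get_metric_group().counter("my_counter")

    def eval(self, s):
        self.counter.inc()
        return s

my_udf = udf(MyUDF(), result_type=DataTypes.STRING())
```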

Getting exception when writing to parquet file with generic types disabled

2023-05-18 Thread Aniket Sule
Hi, I am trying to write data to Parquet files using SQL insert statements. Generic types are disabled in the execution environment. There are other queries running in the same job that count/aggregate data; generic types are disabled as a performance optimization for those queries. In …
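A sketch of the kind of SQL insert described, assuming the Table API; the table names and path are hypothetical:

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
t_env.execute_sql("""
    CREATE TABLE parquet_sink (
        id BIGINT,
        payload STRING
    ) WITH (
        'connector' = 'filesystem',
        'path' = 'file:///tmp/out',
        'format' = 'parquet'
    )
""")
t_env.execute_sql("INSERT INTO parquet_sink SELECT id, payload FROM events")
```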

Backpressure handling in FileSource APIs - Flink 1.16

2023-05-18 Thread Kamal Mittal
Hello Community, Do the FileSource APIs for Bulk and Record stream formats handle backpressure in any way, e.g., by slowing down how data is sent further into the pipeline or read from the source? Or do they provide any callback/handle so that action can be taken? Can you please share details, if any?
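For context, a minimal FileSource setup (the path is hypothetical). FLIP-27 sources such as FileSource are pull-based: the runtime requests records from the reader only when the downstream pipeline has capacity, so backpressure throttles reading implicitly rather than through an exposed callback:

```python
from pyflink.common import WatermarkStrategy
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.file_system import FileSource, StreamFormat

env = StreamExecutionEnvironment.get_execution_environment()
# Pull-based source: records are emitted only when requested downstream,
# so a slow sink naturally slows how fast files are read.
source = FileSource.for_record_stream_format(
    StreamFormat.text_line_format(), "/tmp/input"
).build()
ds = env.from_source(source, WatermarkStrategy.no_watermarks(), "file-source")
```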

Re: Issue with Incremental window aggregation using Aggregate function.

2023-05-18 Thread Sumanta Majumdar
Any thoughts on this? On Fri, Apr 21, 2023 at 4:10 PM Sumanta Majumdar wrote: > Hi, we currently have a streaming use case: a Flink application running on a session cluster that reads table transaction events from a Kafka source …
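For reference, the shape of an incremental window aggregation with a PyFlink AggregateFunction: the window state holds a single accumulator per key instead of buffering every element. The counting logic below is a placeholder, not the thread's transaction-event logic:

```python
from pyflink.datastream.functions import AggregateFunction

class CountAggregate(AggregateFunction):
    def create_accumulator(self):
        return 0

    def add(self, value, accumulator):
        # Called per record; only the accumulator is kept in window state.
        return accumulator + 1

    def get_result(self, accumulator):
        return accumulator

    def merge(self, acc_a, acc_b):
        return acc_a + acc_b

# Applied as: stream.key_by(...).window(...).aggregate(CountAggregate())
```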

IRSA with Flink S3a connector

2023-05-18 Thread Anuj Jain
Hi, I have a Flink job running on EKS, reading and writing data records to S3 buckets. I am trying to set up access credentials via AWS IAM. I followed this: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html I have configured: com.amazonaws.auth.WebIdentityTokenC…
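A sketch of the flink-conf.yaml wiring this describes, assuming the truncated class is the AWS SDK's WebIdentityTokenCredentialsProvider (which reads the AWS_WEB_IDENTITY_TOKEN_FILE and AWS_ROLE_ARN variables that IRSA injects into the pod):

```yaml
# Forwarded by Flink's s3a filesystem plugin to the Hadoop S3A client.
fs.s3a.aws.credentials.provider: com.amazonaws.auth.WebIdentityTokenCredentialsProvider
```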