Request to unsubscribe from this mailing list
Hello: I would like to unsubscribe from this mailing list.
[jira] [Created] (FLINK-36974) support overwrite flink config by command line
hiliuxg created FLINK-36974: --- Summary: support overwrite flink config by command line Key: FLINK-36974 URL: https://issues.apache.org/jira/browse/FLINK-36974 Project: Flink Issue Type: New Feature Components: Flink CDC Affects Versions: cdc-3.3.0 Reporter: hiliuxg Fix For: cdc-3.3.0 Support overriding the Flink configuration from the command line, for example: `bin/flink-cdc.sh 1732864461789.yaml --flink-conf execution.checkpointing.interval=10min --flink-conf rest.bind-port=42689 --flink-conf yarn.application.id=application_1714009558476_3563 --flink-conf execution.target=yarn-session --flink-conf rest.bind-address=10.5.140.140` This example submits a job to a YARN session cluster on a specific host with the given Flink configuration. -- This message was sent by Atlassian Jira (v8.20.10#820010)
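A minimal sketch of how such repeated `--flink-conf key=value` flags could be collected into an override map before merging into the Flink configuration. The class and method names are illustrative, not part of Flink CDC:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FlinkConfParser {
    // Collect every "--flink-conf key=value" pair from the argument list
    // into an override map that could later be merged into the job's
    // Flink Configuration. Non-flag arguments (e.g. the YAML file) are skipped.
    static Map<String, String> parseFlinkConf(String[] args) {
        Map<String, String> overrides = new LinkedHashMap<>();
        for (int i = 0; i < args.length - 1; i++) {
            if ("--flink-conf".equals(args[i])) {
                String pair = args[++i];
                int eq = pair.indexOf('=');
                if (eq <= 0) {
                    throw new IllegalArgumentException("Expected key=value, got: " + pair);
                }
                overrides.put(pair.substring(0, eq), pair.substring(eq + 1));
            }
        }
        return overrides;
    }

    public static void main(String[] args) {
        Map<String, String> conf = parseFlinkConf(new String[] {
            "1732864461789.yaml",
            "--flink-conf", "execution.checkpointing.interval=10min",
            "--flink-conf", "rest.bind-port=42689"
        });
        System.out.println(conf.get("rest.bind-port")); // prints 42689
    }
}
```

A later flag with the same key overwrites an earlier one, which matches the usual "last flag wins" CLI convention.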
[jira] [Created] (FLINK-36975) Invalid ConfigOption for "aws.credentials.provider.role.provider" in AWSConfigOptions
Mohsen Rezaei created FLINK-36975: - Summary: Invalid ConfigOption for "aws.credentials.provider.role.provider" in AWSConfigOptions Key: FLINK-36975 URL: https://issues.apache.org/jira/browse/FLINK-36975 Project: Flink Issue Type: Bug Components: Connectors / Kinesis Affects Versions: aws-connector-5.0.0 Reporter: Mohsen Rezaei A bug was introduced in the new version of the Kinesis connector: the previously available configuration key {{aws.credentials.provider.role.provider}} is incorrectly defined as {{aws.credentials.provider.webIdentityToken.file}} under [{{AWSConfigOptions.AWS_ROLE_CREDENTIALS_PROVIDER_OPTION}}|https://github.com/apache/flink-connector-aws/blob/b55dec16855785b5b0af0fb0ff57816e71ad3e31/flink-connector-aws-base/src/main/java/org/apache/flink/connector/aws/config/AWSConfigOptions.java#L131].
[jira] [Created] (FLINK-36972) Unify AWS sink implementation
Ahmed Hamdy created FLINK-36972: --- Summary: Unify AWS sink implementation Key: FLINK-36972 URL: https://issues.apache.org/jira/browse/FLINK-36972 Project: Flink Issue Type: Technical Debt Components: Connectors / AWS Reporter: Ahmed Hamdy Fix For: aws-connector-5.1.0 h1. Description The AWS sinks (Kinesis, Firehose, DynamoDB, SQS) are all extensions of the Async Sink API, and these implementations are almost identical with regard to submitting request entries, fatal error handling, and partial-failure handling. The only difference is the SDK client implementation. This causes simple changes to be replicated across all modules, as in [this PR](https://github.com/apache/flink-connector-aws/pull/186/files). Ideally we want a common AWS sink writer with an abstract client bean; each sink implementation would only provide a client bean with its specific SDK client, while the common logic is unified. h2. Side effects - Some minor API changes might be required, such as renaming {{DynamoDbSinkFailFastException}} and similar classes to {{AwsAsyncSinkFailFastException}}, and of course updating logging traces.
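A minimal sketch of the proposed shape, assuming a common writer that owns the shared submit/retry logic while concrete sinks only supply the SDK client call. All class names here are hypothetical, not actual flink-connector-aws classes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical common base: shared partial-failure handling lives here once,
// instead of being copy-pasted into every connector module.
abstract class AwsAsyncSinkWriter<RequestEntryT> {

    // The only per-connector piece: hand the batch to the concrete SDK client
    // and report any entries that failed and should be retried.
    protected abstract void submitRequestEntries(
            List<RequestEntryT> entries, Consumer<List<RequestEntryT>> retryFailed);

    // Shared logic: submit the batch and re-queue partial failures.
    final void write(List<RequestEntryT> batch, List<RequestEntryT> retryQueue) {
        submitRequestEntries(batch, retryQueue::addAll);
    }
}

// A concrete sink (e.g. SQS) would only implement the client call. This fake
// client treats entries prefixed "bad:" as partial failures for illustration.
class FakeSqsSinkWriter extends AwsAsyncSinkWriter<String> {
    @Override
    protected void submitRequestEntries(List<String> entries, Consumer<List<String>> retryFailed) {
        List<String> failed = new ArrayList<>();
        for (String e : entries) {
            if (e.startsWith("bad:")) {
                failed.add(e);
            }
        }
        retryFailed.accept(failed);
    }
}
```

With this split, a fix to the shared retry or fatal-error path lands once in the base class instead of in four connector modules.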
[jira] [Created] (FLINK-36971) Add Sqs Sink SQL connector
Ahmed Hamdy created FLINK-36971: --- Summary: Add Sqs Sink SQL connector Key: FLINK-36971 URL: https://issues.apache.org/jira/browse/FLINK-36971 Project: Flink Issue Type: Improvement Components: Connectors / AWS Affects Versions: aws-connector-5.1.0 Reporter: Ahmed Hamdy h1. Description - After adding the Table API connector for the SQS sink, we want to add a SQL connector. h2. Acceptance criteria - A SQL connector is added for AWS SQS.
[jira] [Created] (FLINK-36970) Merge result of data type BIGINT and DOUBLE should be DOUBLE instead of STRING
Xiao Huang created FLINK-36970: -- Summary: Merge result of data type BIGINT and DOUBLE should be DOUBLE instead of STRING Key: FLINK-36970 URL: https://issues.apache.org/jira/browse/FLINK-36970 Project: Flink Issue Type: Improvement Components: Flink CDC Reporter: Xiao Huang In SchemaMergingUtils#getLeastCommonType, the merge result of BIGINT and DOUBLE is currently STRING. However, when inferring types from JSON, a JSON number can be an integer or a floating-point value: integers are inferred as BIGINT and floating-point values as DOUBLE. As a result, a JSON number field ends up inferred as STRING, which is confusing. The merge result of numerical types should always be a numerical type.
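An illustrative sketch of the proposed behavior (not the actual SchemaMergingUtils code): two numeric types should widen to the broader numeric type, with STRING only as the fallback for incompatible types.

```java
import java.util.Map;

// Hypothetical simplified type lattice for illustration only.
enum DataType { TINYINT, INT, BIGINT, DOUBLE, STRING }

class TypeMerging {
    // Rank numeric types by width; non-numeric types are not ranked.
    private static final Map<DataType, Integer> NUMERIC_RANK = Map.of(
            DataType.TINYINT, 0, DataType.INT, 1, DataType.BIGINT, 2, DataType.DOUBLE, 3);

    static DataType leastCommonType(DataType a, DataType b) {
        if (a == b) {
            return a;
        }
        // Two numeric types widen to the wider one, e.g. BIGINT + DOUBLE -> DOUBLE.
        if (NUMERIC_RANK.containsKey(a) && NUMERIC_RANK.containsKey(b)) {
            return NUMERIC_RANK.get(a) >= NUMERIC_RANK.get(b) ? a : b;
        }
        // Anything else falls back to STRING.
        return DataType.STRING;
    }
}
```

Under this rule a JSON number field seen as both an integer and a float merges to DOUBLE rather than STRING.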
[jira] [Created] (FLINK-36969) Merge result of data type BIGINT and DOUBLE should be DOUBLE instead of STRING
Xiao Huang created FLINK-36969: -- Summary: Merge result of data type BIGINT and DOUBLE should be DOUBLE instead of STRING Key: FLINK-36969 URL: https://issues.apache.org/jira/browse/FLINK-36969 Project: Flink Issue Type: Improvement Components: Flink CDC Reporter: Xiao Huang In SchemaMergingUtils#getLeastCommonType, the merge result of BIGINT and DOUBLE is currently STRING. However, when inferring types from JSON, a JSON number can be an integer or a floating-point value: integers are inferred as BIGINT and floating-point values as DOUBLE. As a result, a JSON number field ends up inferred as STRING, which is confusing. The merge result of numerical types should always be a numerical type.
[jira] [Created] (FLINK-36973) udf dateformat support LocalZonedTimestampData
hiliuxg created FLINK-36973: --- Summary: udf dateformat support LocalZonedTimestampData Key: FLINK-36973 URL: https://issues.apache.org/jira/browse/FLINK-36973 Project: Flink Issue Type: New Feature Components: Flink CDC Affects Versions: cdc-3.3.0 Reporter: hiliuxg The DATE_FORMAT UDF should support the LocalZonedTimestampData data type.