[ 
https://issues.apache.org/jira/browse/FLINK-37343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Aranovsky updated FLINK-37343:
-----------------------------------
    Description: 
Hey, we have a use case where we need to sink records into multiple DDB tables 
without a job reset (topology change). We use the DynamicKafka source, and we've 
essentially forked the regular DDB connector and added support for providing a 
table name as part of the ElementConverter interface. Since DDB BatchWriteItem 
supports writes to multiple tables in a single request, I don't really see a 
downside to including it. There is some serde cost associated with providing the 
table name as a string on each element we hand to the sink; I'm not sure how 
significant this is.

This is both a question and a feature request: I can merge the fork upstream; 
is this something that makes sense to include in the default connector?

Is the serde cost negligible?

Can I add the table name field to the DynamoDbWriteRequest class? It would be 
a breaking change.
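To make the idea concrete, here is a minimal sketch of the grouping step. The type and method names below are illustrative, not the connector's actual API: it assumes a hypothetical write-request type carrying a tableName field, and shows how buffered elements could be regrouped into the Map-of-table-name-to-requests shape that BatchWriteItem already accepts, so a single batch can span tables.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical element type: today's DynamoDbWriteRequest plus a tableName
// field (names are illustrative, not the connector's real classes).
record TableAwareWriteRequest(String tableName, String itemJson) {}

class BatchGrouper {
    // BatchWriteItem takes a map of table name -> write requests, so one
    // request can target several tables; this regroups a buffered batch
    // into that shape while preserving per-table element order.
    static Map<String, List<TableAwareWriteRequest>> groupByTable(
            List<TableAwareWriteRequest> batch) {
        Map<String, List<TableAwareWriteRequest>> grouped = new LinkedHashMap<>();
        for (TableAwareWriteRequest req : batch) {
            grouped.computeIfAbsent(req.tableName(), t -> new ArrayList<>())
                   .add(req);
        }
        return grouped;
    }
}
```

The serde question in the issue is about the extra string travelling with each element; the grouping itself is a cheap in-memory pass at flush time.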


> Support for Dynamic Table Selection in DynamoDB Sink Connector
> --------------------------------------------------------------
>
>                 Key: FLINK-37343
>                 URL: https://issues.apache.org/jira/browse/FLINK-37343
>             Project: Flink
>          Issue Type: New Feature
>          Components: Connectors / DynamoDB
>            Reporter: Alex Aranovsky
>            Priority: Not a Priority
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)