Thanks Samrat and Danny for driving this FLIP.

>> an effective approach is to utilize the latest version of
>> flink-connector-jdbc as a Maven dependency
>
> When we have stable source/sink APIs and the connector versions are
> decoupled from Flink this makes sense. But right now this would mean that
> the JDBC connector will block the AWS connector for each new Flink version
> support release (1.18, 1.19, 1.20, 2.0 etc). That being said, I cannot
> think of a cleaner alternative, without pulling the core JDBC bits out into
> a dedicated project that is decoupled from and released independently of
> Flink. Splitting flink-connector-redshift into a dedicated repo would
> decouple AWS/JDBC, but obviously introduce a new connector that is blocked
> by both AWS and JDBC. 

Do we have to rely on the latest version of the JDBC Connector here? My 
understanding is that it should work as long as the Flink minor version matches 
the one the JDBC Connector supports. Could you collect the APIs that Redshift 
generally needs from the JDBC Connector?

Assuming the AWS Connector (Redshift) depends on the JDBC Connector and needs a 
higher version of it, I understand the correct approach is to push for a new 
JDBC Connector release; it looks like we have no other options.
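For reference, a minimal sketch of what that dependency could look like in the AWS connector's pom; the version number here is an illustrative assumption, not a concrete proposal:

```xml
<!-- Hypothetical: flink-connector-aws pinning a released flink-connector-jdbc.
     The "3.1.2-1.18" version is only an example of the
     <connector-version>-<flink-version> scheme, not a real recommendation. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-jdbc</artifactId>
    <version>3.1.2-1.18</version>
</dependency>
```

Because the Flink version is baked into the JDBC Connector's version suffix, every new Flink minor would require a matching JDBC release before the AWS connector could bump this pin, which is exactly the coupling Danny described.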

Splitting Redshift into a separate repository does not solve this coupling 
problem, and from a user's perspective, Redshift should live in the AWS 
Connector repo anyway.

Best,
Leonard
