Hi Ram,
On 24 Sep 2016, at 18:32, ram kumar wrote:
> To connect to Redshift (for the staging and production databases) I need to set up
> a JDBC connection in Flink Scala.
>
> Kafka (source) ---> Flink (JDBC) ---> AWS (S3 and Redshift) target.
>
>
> Could you please suggest?
Using plain JDBC on Redshift will be slow for any reasonable volume, but if you
need to do that, you can open a connection to it from a RichFunction's open()
method.
I wrote a blog article a while back on how the Spark-Redshift package works:
https://databricks.com/blog/2015/10/19/introducing-redshift-data-
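To make the open() suggestion concrete, here is a minimal sketch of a sink that
creates its JDBC connection per parallel task. It assumes flink-streaming-scala
and a Redshift/PostgreSQL JDBC driver on the classpath; the URL, credentials,
table and column names are all placeholders, not anything from this thread:

```scala
import java.sql.{Connection, DriverManager, PreparedStatement}

import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction

// Hypothetical sink writing (id, count) pairs into a staging table.
class RedshiftSink extends RichSinkFunction[(String, Int)] {
  @transient private var conn: Connection = _
  @transient private var stmt: PreparedStatement = _

  // open() runs once per parallel task instance on the worker, so the
  // connection is created there rather than serialized from the client.
  override def open(parameters: Configuration): Unit = {
    conn = DriverManager.getConnection(
      "jdbc:redshift://example-cluster:5439/dev", "user", "password")
    stmt = conn.prepareStatement(
      "INSERT INTO staging_events (id, cnt) VALUES (?, ?)")
  }

  override def invoke(value: (String, Int)): Unit = {
    stmt.setString(1, value._1)
    stmt.setInt(2, value._2)
    stmt.executeUpdate()
  }

  override def close(): Unit = {
    if (stmt != null) stmt.close()
    if (conn != null) conn.close()
  }
}
```

Note that single-row INSERTs like this are exactly the slow path on Redshift;
for volume, writing files to S3 and loading them with COPY is the usual route.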
Many thanks, Felix.
*Flink use case:*
Extract data from the source (*Kafka*) and load it into the targets (*AWS S3
and Redshift*).
We use SCD2 in Redshift, since data changes need to be captured in the
Redshift target.
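For the SCD2 part, one common pattern is to land the change records in a
staging table (e.g. via COPY from S3), then close out the current dimension
rows before inserting the new versions, in one transaction. A hedged sketch of
that over JDBC; all table and column names here are made up for illustration:

```scala
import java.sql.Connection

// Applies an SCD2 merge from a staging table into a dimension table.
def applyScd2(conn: Connection): Unit = {
  // Expire the currently active rows that have a newer version staged.
  val closeOld =
    """UPDATE dim_customer d
      |   SET is_current = false, end_date = s.change_date
      |  FROM staging_customer s
      | WHERE d.customer_id = s.customer_id
      |   AND d.is_current = true""".stripMargin

  // Insert the staged records as the new current versions.
  val insertNew =
    """INSERT INTO dim_customer
      |       (customer_id, name, start_date, end_date, is_current)
      |SELECT customer_id, name, change_date, NULL, true
      |  FROM staging_customer""".stripMargin

  val st = conn.createStatement()
  try {
    conn.setAutoCommit(false)
    st.executeUpdate(closeOld)
    st.executeUpdate(insertNew)
    conn.commit()
  } finally st.close()
}
```

Running both statements in one transaction keeps readers from ever seeing a
key with no current row.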
To connect to Redshift (for the staging and production databases) I need to set
up a JDBC connection in Flink Scala.
Hi Ram,
On 24 Sep 2016, at 16:08, ram kumar wrote:
> I am wondering whether it is possible to add a JDBC connection or URL as a source or
> target in Flink using Scala.
> Could someone kindly help me with this? If you have any sample code, please
> share it here.
What’s your intended use case? Gett
Hi Team,
I am wondering whether it is possible to add a JDBC connection or URL as a
source or target in Flink using Scala.
Could someone kindly help me with this? If you have any sample code, please
share it here.
*Thanks*
*Ram*