Hi Ruben,

I am currently not aware of such an effort, but I definitely agree that
it is an interesting pattern to investigate. As a starting point you could
have a look at the Spark connector implementations to see the Kudu APIs in
use. For that I would recommend the implementation based on Spark's
DataSource API that is described in the Kudu Spark integration docs [1], or
Ted Malaska's prototype [2], which is a bit less complex and thus might be
easier to read.
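
In case a rough sketch helps: on the Flink side, the write path could be
little more than a RichSinkFunction wrapping the Kudu Java client. The
snippet below is only an illustration, not an existing connector; the
class name, the table and column names ("host", "value") and the record
type are made up, and a real implementation would need proper flushing,
error handling and configuration.

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.kudu.client.Insert;
import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.KuduSession;
import org.apache.kudu.client.KuduTable;

// Hypothetical sketch: a Flink sink that writes (host, value) pairs to an
// existing Kudu table via the Kudu Java client. Names are illustrative only.
public class KuduSink extends RichSinkFunction<Tuple2<String, Double>> {

    private final String masterAddresses;
    private final String tableName;

    private transient KuduClient client;
    private transient KuduTable table;
    private transient KuduSession session;

    public KuduSink(String masterAddresses, String tableName) {
        this.masterAddresses = masterAddresses;
        this.tableName = tableName;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        // One Kudu client per parallel sink instance.
        client = new KuduClient.KuduClientBuilder(masterAddresses).build();
        table = client.openTable(tableName);
        session = client.newSession();
    }

    @Override
    public void invoke(Tuple2<String, Double> record) throws Exception {
        // Map the incoming record onto a Kudu insert and apply it.
        Insert insert = table.newInsert();
        insert.getRow().addString("host", record.f0);
        insert.getRow().addDouble("value", record.f1);
        session.apply(insert);
    }

    @Override
    public void close() throws Exception {
        // Flush pending operations and release resources.
        if (session != null) session.close();
        if (client != null) client.close();
    }
}

Something along those lines, plus a matching source for the read path,
would probably be the core of a first prototype.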

Let us know if you decide to give the implementation a try.

[1]
https://kudu.apache.org/docs/developing.html#_kudu_integration_with_spark
[2] https://github.com/tmalaska/SparkOnKudu

Best,

Marton

On Fri, Oct 28, 2016 at 8:33 AM, <ruben.casado.teje...@accenture.com> wrote:

> Hi all,
>
> Is there any PoC about reading/writing from/to Kudu? I think the flow
> kafka-flink-kudu is an interesting pattern. I would like to evaluate it so
> please let me know if there is any existing attempt to avoid starting from
> scratch. Advice is welcome :)
>
> Best
>
>
> ----------------------------------------
> Rubén Casado Tejedor, PhD
> accenture digital
> Big Data Manager
> Tel.: +34 629 009 429
> Email: ruben.casado.teje...@accenture.com
>
>
