Hi Michael and Ted,

Thank you for the reference. Is it true that once I implement a custom data
source, it can be used in all Spark-supported languages, i.e. Scala, Java,
Python, and R? :)
I want to take advantage of the interoperability that is already built into
Spark.
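For instance, I'm hoping that once the source is built and on the
classpath, the same format string loads it from every language binding (a
sketch; com.example.myds is a made-up package name):

    // Scala: load the custom source by its package name
    val df = sqlContext.read.format("com.example.myds").load()
    df.show()

and likewise sqlContext.read.format("com.example.myds").load() from
PySpark, or read.df(sqlContext, source = "com.example.myds") from SparkR.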

Thanks!

Jerry

On Tue, Sep 29, 2015 at 11:31 AM, Michael Armbrust <mich...@databricks.com>
wrote:

> That's a pretty advanced example that uses experimental APIs.  I'd suggest
> looking at https://github.com/databricks/spark-avro as a reference.
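> As a rough sketch of the shape of the 1.x Data Sources API (the class and
> package names below are made up for illustration, not the spark-avro code):
>
>     package com.example.myds
>
>     import org.apache.spark.rdd.RDD
>     import org.apache.spark.sql.{Row, SQLContext}
>     import org.apache.spark.sql.sources.{BaseRelation, RelationProvider, TableScan}
>     import org.apache.spark.sql.types.{IntegerType, StructField, StructType}
>
>     // Spark resolves read.format("com.example.myds") to the class
>     // com.example.myds.DefaultSource.
>     class DefaultSource extends RelationProvider {
>       override def createRelation(
>           sqlContext: SQLContext,
>           parameters: Map[String, String]): BaseRelation =
>         new DummyRelation(sqlContext)
>     }
>
>     // A toy relation: a single integer column holding the values 0..9.
>     class DummyRelation(val sqlContext: SQLContext)
>         extends BaseRelation with TableScan {
>
>       override def schema: StructType =
>         StructType(StructField("value", IntegerType, nullable = false) :: Nil)
>
>       override def buildScan(): RDD[Row] =
>         sqlContext.sparkContext.parallelize(0 until 10).map(Row(_))
>     }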
>
> On Mon, Sep 28, 2015 at 9:00 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
>> See this thread:
>>
>> http://search-hadoop.com/m/q3RTttmiYDqGc202
>>
>> And:
>>
>>
>> http://spark.apache.org/docs/latest/sql-programming-guide.html#data-sources
>>
>> On Sep 28, 2015, at 8:22 PM, Jerry Lam <chiling...@gmail.com> wrote:
>>
>> Hi spark users and developers,
>>
>> I'm trying to learn how to implement a custom data source for Spark SQL. Is
>> there any documentation I can use as a reference? I'm not sure exactly
>> what needs to be extended/implemented. A general workflow would be greatly
>> helpful!
>>
>> Best Regards,
>>
>> Jerry
>>
>>
>