Cool.

Try it out and let me know if you need some additional features.

I have a newer version that supports a few more input types.
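In the meantime, here is a minimal sketch of the helper approach discussed below: fetch the JSON body from the web service and hand it to Spark as a DataFrame. The helper name, endpoint URL, and method name are hypothetical, and the `SparkSession` is assumed to be the one Zeppelin's Spark interpreter provides; this is an illustration, not the library's API.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

object MyHelper {
  // Hypothetical endpoint; replace with the actual web service URL.
  private val endpoint = "https://example.com/api/customer-metrics"

  // Fetch the raw JSON body over HTTP and let Spark infer the schema.
  def getCustomerMetrics(spark: SparkSession): DataFrame = {
    val body = scala.io.Source.fromURL(endpoint).mkString
    import spark.implicits._
    // Spark 2.2+: read JSON directly from a Dataset[String]
    spark.read.json(Seq(body).toDS())
  }
}

// In a Zeppelin %spark paragraph (where `spark` is predefined):
// val df = MyHelper.getCustomerMetrics(spark)
// df.printSchema()
```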

Regards,
Sourav

On Wed, Jan 31, 2018 at 10:03 AM, Andrés Ivaldi <iaiva...@gmail.com> wrote:

> Thanks,
>
> That seems like what I need
>
>
>
>
> On Wed, Jan 31, 2018 at 2:48 PM, Sourav Mazumder <
> sourav.mazumde...@gmail.com> wrote:
>
>> Hi Andres,
>>
>> Check out this - https://github.com/sourav-mazumder/Data-Science-Extensions/tree/master/spark-datasource-rest.
>>
>> This component can help you do everything you are looking for.
>>
>> You can get the binary for immediate use from the release section of the
>> repository.
>>
>> Please let me know if you need any help in building the jar.
>>
>> Regards,
>> Sourav
>>
>> On Wed, Jan 31, 2018 at 9:28 AM, Andrés Ivaldi <iaiva...@gmail.com>
>> wrote:
>>
>>> Hello, I'm a newbie with Apache Zeppelin. What I need is very basic: just
>>> get CSV or JSON data from a WS, then transform the data with Spark and show
>>> some reports.
>>>
>>> What could be the best approach to do it?
>>>
>>> Create a JAR with helpers to access the WS in an easy way, then access it
>>> from Zeppelin and process the data with Spark?
>>>
>>> Directly pass the Spark context to the helper and process everything there?
>>>
>>> Or does Zeppelin already have something out of the box for this?
>>>
>>> I'm thinking of something like this with the helper:
>>>
>>> val jsonData = MyHelper.getCustomerMetrics()
>>> //All spark code to process jsonData
>>>
>>> or directly get the DataFrame as the result:
>>>
>>> val df = MyHelper.getCustomerMetrics(sc)
>>>
>>> Regards.
>>>
>>> --
>>> Ing. Ivaldi Andres
>>>
>>
>>
>
>
> --
> Ing. Ivaldi Andres
>
