Thanks for the pointers....

I will look into the Oryx 2 design and see whether we need a Spray/Akka HTTP
based backend...I feel we will, especially once we have a model database
covering a number of scenarios (say 100 scenarios, each building a different
ALS model)

I am not sure we really need a full-blown database, so the first
version will simply use Parquet files...
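For what it's worth, a minimal sketch of what that first version's serving side
might look like once the per-scenario factors are read out of Parquet -- plain
Scala with no Spark dependency, and all of the names here (AlsModel, score,
the scenario keys) are hypothetical, not from any existing API:

```scala
object AlsServingSketch {
  // Latent factors keyed by user/item id, as they would be after
  // loading the per-scenario factor tables from Parquet.
  type Factors = Map[Int, Array[Double]]

  // One ALS model per scenario (e.g. 100 scenarios -> 100 models).
  case class AlsModel(userFactors: Factors, itemFactors: Factors) {
    // Predicted rating is the dot product of the two factor vectors;
    // None if either id is unknown to this model.
    def score(user: Int, item: Int): Option[Double] =
      for {
        u <- userFactors.get(user)
        i <- itemFactors.get(item)
      } yield u.zip(i).map { case (a, b) => a * b }.sum
  }

  def main(args: Array[String]): Unit = {
    // Tiny hand-built factors standing in for data read from Parquet.
    val model = AlsModel(
      userFactors = Map(1 -> Array(1.0, 2.0)),
      itemFactors = Map(10 -> Array(0.5, 0.25))
    )
    // The "model database": scenario name -> loaded model.
    val models: Map[String, AlsModel] = Map("scenario-1" -> model)
    println(models("scenario-1").score(1, 10)) // Some(1.0)
  }
}
```

An HTTP layer (Spray/Akka HTTP) would then just map a request path like
scenario/user/item onto a lookup in that map plus a call to score.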

On Sun, Oct 19, 2014 at 1:14 PM, Jayant Shekhar <jay...@cloudera.com> wrote:

> Hi Deb,
>
> Do check out https://github.com/OryxProject/oryx.
>
> It does integrate with Spark. Sean has put quite a bit of neat detail
> on the page about the architecture. It has all the things you are thinking
> about :)
>
> Thanks,
> Jayant
>
>
> On Sat, Oct 18, 2014 at 8:49 AM, Debasish Das <debasish.da...@gmail.com>
> wrote:
>
>> Hi,
>>
>> Is someone working on a project integrating the Oryx model serving layer
>> with Spark? Models will be built using either streaming data or batch data
>> in HDFS and cross-validated with mllib APIs, but the model serving layer
>> will expose API endpoints like Oryx
>> and read the models, perhaps from HDFS/Impala/SparkSQL
>>
>> One of the requirements is that the API layer should be scalable and
>> elastic...as requests grow we should be able to add more nodes...using the
>> Play and Akka clustering modules...
>>
>> If there is an ongoing project on github, please point to it...
>>
>> Is there a plan to add a model serving and experimentation layer to
>> mllib ?
>>
>> Thanks.
>> Deb
>>
>>
>>
>