> sary to keep all the batch data in memory. Something
> like a pipeline should be OK.
>
> Is it difficult to implement on top of the current implementation?
>
> Thanks.
>
> ---
> Bin Wang
>
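The pipeline idea above can be illustrated outside Spark with ordinary lazy stages. A minimal hypothetical sketch in plain Python (not Spark's API): each stage pulls one record at a time, so the full batch is never held in memory.

```python
# Hypothetical sketch (plain Python, not Spark's API): stream records
# through lazy pipeline stages instead of holding the whole batch in memory.

def read_records(n=1000):
    # Stand-in for a data source; yields one record at a time.
    yield from range(n)

def transform(records):
    # Stage 1: lazily transform each record as it flows through.
    for r in records:
        yield r * 2

def keep_multiples_of_four(records):
    # Stage 2: lazily filter; nothing upstream is buffered.
    for r in records:
        if r % 4 == 0:
            yield r

def run_pipeline():
    # Only one record is in flight at a time; the full batch is never
    # materialized, so memory use stays constant in the input size.
    return sum(keep_multiples_of_four(transform(read_records())))
```

Spark's own operators behave similarly when stages are chained without a wide shuffle: records flow through the chain per partition rather than being collected as a whole batch.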
--
*Arush Kharbanda* || Technical Teamlead
ar...@sigmoidanalytics.com || www.sigmoidanalytics.com
Yes, it is.
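For reference, a hedged sketch of pinning ports in conf/spark-defaults.conf (property names are from the Spark configuration documentation; the port values here are placeholders, and the exact set of properties depends on your Spark version):

```
# conf/spark-defaults.conf -- pin the ports used by the driver and executors
spark.driver.port        7078
spark.blockManager.port  7079
spark.ui.port            4040
# How many consecutive ports (port, port+1, ...) Spark will try before
# giving up -- effectively a range of ports to choose from.
spark.port.maxRetries    16
```

Note that standalone master/worker ports are set via environment variables in conf/spark-env.sh (e.g. SPARK_MASTER_PORT, SPARK_WORKER_PORT, SPARK_WORKER_WEBUI_PORT) rather than in spark-defaults.conf.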
On Wed, Mar 18, 2015 at 1:35 PM, Niranda Perera wrote:
> Thanks Arush.
>
> this is governed by the conf/spark-defaults.conf config, isn't it?
>
> On Wed, Mar 18, 2015 at 1:30 PM, Arush Kharbanda <
> ar...@sigmoidanalytics.com> wrote:
>
>> You can fix the ports.
>
> Can we fix these ports or give a set of ports for the worker
> to choose from?
>
> cheers
>
> --
> Niranda
>
> >> error: assertion failed:
> >> com.google.protobuf.InvalidProtocolBufferException
> >> at scala.reflect.internal.Symbols$Symbol.info(Symbols.scala:1212)
> >>
> >> The answer in the mailing list to that thread was about using Maven,
> >> so that is not useful here.
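An InvalidProtocolBufferException surfacing through scala.reflect often traces back to mixed protobuf-java versions on the classpath. Since the earlier answer assumed Maven, the sbt analogue of a dependency override may be worth trying. This is a hypothetical sketch, not a confirmed fix: the protobuf version (2.5.0) is an assumption and should be matched to whatever your Hadoop/Spark combination actually expects.

```scala
// build.sbt -- hypothetical sketch: force a single protobuf-java version
// so the classpath does not mix incompatible protobuf releases.
// 2.5.0 is an assumed version; match it to your Hadoop/Spark build.
dependencyOverrides += "com.google.protobuf" % "protobuf-java" % "2.5.0"
```

To see which protobuf versions are actually being pulled in, `sbt dependencyTree` (built into sbt 1.4+, available earlier via the sbt-dependency-graph plugin) can show the resolved dependency graph.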