You need 1) to publish to your in-house Maven repository, so your
application can depend on your version, and 2) to use the Spark
distribution you compiled to launch your job (assuming you run on YARN,
which lets you run multiple versions of Spark on the same cluster).
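
For step 1, a minimal sketch of the publish side, assuming Maven 3; the
custom version string, repository id, and URL below are placeholders, not
anything from this thread:

  # stamp all modules with a custom version (versions-maven-plugin),
  # so your application can depend on your build unambiguously
  mvn versions:set -DnewVersion=1.4.0-custom

  # build and deploy to the in-house repository; -DaltDeploymentRepository
  # avoids having to edit <distributionManagement> in the POMs
  mvn -Phadoop-2.4 -Pyarn -Phive -Phive-thriftserver \
    -Dhadoop.version=2.4.0 -DskipTests \
    -DaltDeploymentRepository=inhouse::default::https://repo.example.com/releases \
    deploy

Your application would then depend on e.g.
org.apache.spark:spark-core_2.10:1.4.0-custom from that repository.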

On Sun, Jun 28, 2015 at 4:33 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) <deepuj...@gmail.com> wrote:

> How can I import this pre-built Spark into my application via Maven? I
> want to use the block join API.
>
> On Sun, Jun 28, 2015 at 1:31 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) <deepuj...@gmail.com>
> wrote:
>
>> I ran this without the Maven options:
>>
>> ./make-distribution.sh  --tgz -Phadoop-2.4 -Pyarn  -Phive
>> -Phive-thriftserver
>>
>> That produced spark-1.4.0-bin-2.4.0.tgz in the same working directory.
>>
>> I hope this was built against Hadoop 2.4.x, since I did specify -Phadoop-2.4.
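
[Aside: one way to check, assuming the default layout of the generated
tarball: the assembly jar name embeds the Hadoop version it was built
against.

  tar -tzf spark-1.4.0-bin-2.4.0.tgz | grep assembly
  # expect something like:
  # spark-1.4.0-bin-2.4.0/lib/spark-assembly-1.4.0-hadoop2.4.0.jar
]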
>>
>> On Sun, Jun 28, 2015 at 1:10 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) <deepuj...@gmail.com>
>> wrote:
>>
>>>  ./make-distribution.sh  --tgz --mvn "-Phadoop-2.4 -Pyarn
>>> -Dhadoop.version=2.4.0 -Phive -Phive-thriftserver -DskipTests clean package"
>>>
>>> or
>>>
>>>  ./make-distribution.sh  --tgz --mvn -Phadoop-2.4 -Pyarn
>>> -Dhadoop.version=2.4.0 -Phive -Phive-thriftserver -DskipTests clean package"
>>>
>>> Both fail with:
>>>
>>> + echo -e 'Specify the Maven command with the --mvn flag'
>>>
>>> Specify the Maven command with the --mvn flag
>>>
>>> + exit -1
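
[Aside: as of Spark 1.4, make-distribution.sh expects --mvn to be followed
by the path to a Maven executable, not by the build options; profiles and
-D flags go after the script's own options, and in that version the script
runs "clean package -DskipTests" itself. A sketch of the intended form,
with a placeholder mvn path:

  ./make-distribution.sh --tgz --mvn /usr/local/bin/mvn \
    -Phadoop-2.4 -Pyarn -Dhadoop.version=2.4.0 -Phive -Phive-thriftserver
]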
>>>
>>
>>
>>
>> --
>> Deepak
>>
>>
>
>
> --
> Deepak
>
>
