Is using SparkContext from a web container the right way to run Spark
jobs, or should we use spark-submit in a ProcessBuilder?
Are there any pros or cons of using SparkContext from a web container?
How does Zeppelin trigger Spark jobs from the web context?
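For reference: besides running spark-submit through a ProcessBuilder yourself,
Spark also ships a thin programmatic wrapper around spark-submit,
org.apache.spark.launcher.SparkLauncher. A minimal Scala sketch (the jar path,
main class and master below are placeholders, and SPARK_HOME has to be
resolvable on the host running the web container):

import org.apache.spark.launcher.SparkLauncher

// Launch a job from application code without shelling out to spark-submit by
// hand; launch() returns a java.lang.Process wrapping the spark-submit child.
val sparkJob = new SparkLauncher()
  .setAppResource("/path/to/my-spark-job.jar")   // placeholder application jar
  .setMainClass("com.example.MySparkJob")        // placeholder main class
  .setMaster("yarn-client")                      // or a standalone/local master
  .setConf("spark.executor.memory", "2g")
  .launch()
sparkJob.waitFor()                               // block until the job finishes

Launching this way keeps the job's SparkContext out of the web container's JVM,
which is one of the usual arguments against creating a SparkContext inside the
container itself.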
We've dropped Hadoop 1.x support in Spark 2.0.
There is also a proposal to drop Hadoop 2.2 and 2.3, i.e. the minimum
Hadoop version we support would be Hadoop 2.4. The main advantage is that
we'd then be able to focus our Jenkins resources (and the associated
maintenance of Jenkins) on creating builds fo
Hi All,
When we submit Spark jobs on YARN, during RM failover we see a lot of jobs
reporting the error messages below.
2016-01-11 09:41:06,682 INFO
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService:
Unregistering app attempt : appattempt_1450676950893_0280_01
2016-01-11 0
It looks like the problem is that the vectors of term counts in the corpus
are not always of the vocabulary size.
Do you mean some integers do not occur in the corpus?
For example, my dictionary is 0 - 9 (10 words in total).
The docs are:
0 2 4 6 8
1 3 5 7 9
Then it will be correct
If the docs are:
0 2
Hey All,
While I'm not aware of any critical issues with 1.6.0, there are several
corner cases that users are hitting with the Dataset API that are fixed in
branch-1.6. As such I'm considering a 1.6.1 release.
At the moment there are only two critical issues targeted for 1.6.1:
- SPARK-12624 -
I have a Random forest model for which I am trying to get the featureImportance
vector.
Map<Integer, Integer> categoricalFeaturesParam = new HashMap<>();
// convert the Java map into the scala.collection.immutable.Map the API expects
scala.collection.immutable.Map categoricalFeatures =
(scala.collection.immutable.Map)
scala.collection.immutable.Map$.MODULE$.apply(
JavaConversions.mapAsScalaMap(categoricalFeaturesParam).toSeq());
I was now able to reproduce the exception using the master branch and local
mode. It looks like the problem is that the vectors of term counts in the
corpus are not always of the vocabulary size. Once I padded these with zero
counts up to the vocab size, it ran without the exception.
Joseph, I also tried c
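To make the padding fix above concrete, here is a rough Scala sketch of building
an LDA corpus where every term-count vector has the full vocabulary length
(the 10-word 0 - 9 vocabulary follows the example earlier in the thread, and an
existing SparkContext sc is assumed):

import org.apache.spark.mllib.clustering.LDA
import org.apache.spark.mllib.linalg.Vectors

val vocabSize = 10                     // the words are the integers 0 - 9
val docs = Seq(Seq(0, 2, 4, 6, 8), Seq(1, 3, 5, 7, 9), Seq(0, 2))

// Every document becomes a vector of length vocabSize; terms that do not occur
// in a document are implicitly zero, so a short doc like "0 2" is zero-padded.
val corpus = sc.parallelize(docs.zipWithIndex.map { case (words, id) =>
  val counts = words.groupBy(identity).map { case (w, ws) => (w, ws.size.toDouble) }.toSeq
  (id.toLong, Vectors.sparse(vocabSize, counts))
})

val ldaModel = new LDA().setK(3).run(corpus)

Built this way, the document vectors always match the vocabulary size, which is
what avoided the exception described above.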
Hi,
What is the best IDE for Spark development? When I say development,
I mean I would like to make changes in Spark core and be able to contribute back.
I know this question has been asked many times, but I don't see a
convincing answer anywhere. I use IntelliJ, but getting the environmen
Steve,
Thank you for the answer.
How does Hortonworks deal with this problem internally?
You have Spark 1.3.1 in HDP 2.3. Is it compiled with Jackson 2.2.3?
Regards,
Maciek
2016-01-13 18:00 GMT+01:00 Steve Loughran :
>
>> On 13 Jan 2016, at 03:23, Maciej Bryński wrote:
>>
>> Thanks.
>> I successfu
> On 13 Jan 2016, at 03:23, Maciej Bryński wrote:
>
> Thanks.
> I successfully compiled Spark 1.6.0 with Jackson 2.2.3 from source.
>
> I'll try using it.
>
This is the eternal classpath version problem, with Jackson turning out to be
incredibly brittle. After one point update of the 1.x
Hi Richard,
Thanks for providing the background on your application.
> the user types or copy-pastes his R code,
> the system should then send this R code (using ROSE) to R
Unfortunately this type of ad hoc R analysis is not supported. ROSE supports
the execution of any R function or script wit
Thanks.
I successfully compiled Spark 1.6.0 with Jackson 2.2.3 from source.
I'll try using it.
2016-01-13 11:25 GMT+01:00 Ted Yu :
> I would suggest trying option #1 first.
>
> Thanks
>
>> On Jan 13, 2016, at 2:12 AM, Maciej Bryński wrote:
>>
>> Hi,
>> I'm trying to run Spark 1.6.0 on HDP 2.2
I would suggest trying option #1 first.
Thanks
> On Jan 13, 2016, at 2:12 AM, Maciej Bryński wrote:
>
> Hi,
> I'm trying to run Spark 1.6.0 on HDP 2.2
> Everything was fine until I tried to turn on dynamic allocation.
> According to the instructions I need to add the shuffle service to the YARN classpath.
Hi,
I'm trying to run Spark 1.6.0 on HDP 2.2.
Everything was fine until I tried to turn on dynamic allocation.
According to the instructions I need to add the shuffle service to the YARN classpath.
The problem is that HDP 2.2 has Jackson 2.2.3 and Spark is using 2.4.4,
so connecting them gives this error:
2016-01-11 1
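For context, the dynamic allocation setup described above boils down to roughly
this configuration (the property and class names are from the Spark-on-YARN
docs; where the shuffle-service jar lives on the NodeManagers is a placeholder):

spark-defaults.conf:
  spark.dynamicAllocation.enabled  true
  spark.shuffle.service.enabled    true

yarn-site.xml (on every NodeManager):
  yarn.nodemanager.aux-services = mapreduce_shuffle,spark_shuffle
  yarn.nodemanager.aux-services.spark_shuffle.class =
      org.apache.spark.network.yarn.YarnShuffleService

plus the spark-<version>-yarn-shuffle.jar on the NodeManager classpath, which is
the step that pulls Spark's Jackson 2.4.4 onto a classpath where HDP 2.2 already
ships Jackson 2.2.3.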
I will try Spark 1.6.0 to see if it is a bug in 1.5.2.
On Wed, Jan 13, 2016 at 3:58 PM, Li Li wrote:
> I have set up a standalone Spark cluster and used the same code. It
> still failed with the same exception.
> I also preprocessed the data to lines of integers and used the Scala
> code of the lda ex