Minor correction: the encoded URL in the staging repo link was wrong.
The correct repo is:
https://repository.apache.org/content/repositories/orgapachespark-1025/
On Wed, Aug 6, 2014 at 11:23 PM, Patrick Wendell wrote:
>
> Hi All,
>
> I've packaged and published a snapshot release of Spark 1.1 f
Hi All,
I've packaged and published a snapshot release of Spark 1.1 for testing.
This is being distributed to the community for QA and preview purposes. It
is not yet an official RC for voting. Going forward, we'll do preview
releases like this for testing ahead of official votes.
The tag of this
It's definitely just a typo. The ordered categories are A, C, B so the
other split can't be A | B, C. Just open a PR.
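To make the ordering concrete, here is a rough sketch (not the actual MLlib
code; the average-label values are invented for illustration) of the trick for
binary classification: sort the categories by their average label and only
consider the contiguous splits of that ordering.

// Illustrative only: sort categories by average label, then the candidate
// splits are the prefixes of that ordering.
val avgLabel = Map("A" -> 0.1, "C" -> 0.4, "B" -> 0.8)  // made-up statistics
val ordered = avgLabel.toSeq.sortBy(_._2).map(_._1)     // Seq("A", "C", "B")
val candidateSplits =
  (1 until ordered.length).map(i => (ordered.take(i), ordered.drop(i)))
// candidateSplits == Vector((Seq("A"), Seq("C", "B")), (Seq("A", "C"), Seq("B")))
// i.e. A | C, B and A, C | B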
On Thu, Aug 7, 2014 at 2:11 AM, Matt Forbes wrote:
> I found the section on ordering categorical features really interesting,
> but the A, B, C example seemed inconsistent. Am I i
(Don't use gen-idea, just open it directly as a Maven project in IntelliJ.)
On Thu, Aug 7, 2014 at 4:53 AM, Ron Gonzalez
wrote:
> So I downloaded community edition of IntelliJ, and ran sbt/sbt gen-idea.
> I then imported the pom.xml file.
> I'm still getting all sorts of errors from IntelliJ abou
Thanks, will give that a try.
Sent from my iPad
> On Aug 6, 2014, at 9:26 PM, DB Tsai wrote:
>
> After sbt gen-idea, you can open the IntelliJ project directly without going
> through pom.xml.
>
> If you want to compile inside IntelliJ, you have to remove one of the mesos
> jars. This is an ope
After sbt gen-idea, you can open the IntelliJ project directly without
going through pom.xml.
If you want to compile inside IntelliJ, you have to remove one of the mesos
jars. This is an open issue, and you can find the details in JIRA.
Sent from my Google Nexus 5
On Aug 6, 2014 8:54 PM, "Ron Gonzalez"
So I downloaded community edition of IntelliJ, and ran sbt/sbt gen-idea.
I then imported the pom.xml file.
I'm still getting all sorts of errors from IntelliJ about unresolved
dependencies.
Any suggestions?
Thanks,
Ron
On Wednesday, August 6, 2014 12:29 PM, Ron Gonzalez
wrote:
Hi,
I'm t
Ok, I'll give it a little more time, and if I can't get it going, I'll switch.
I am indeed a little disappointed in the Scala IDE plugin for Eclipse, so I
think switching to IntelliJ might be my best bet.
Thanks,
Ron
Sent from my iPad
> On Aug 6, 2014, at 1:43 PM, Sean Owen wrote:
>
> I think
See my comment on https://issues.apache.org/jira/browse/SPARK-2878 for the
full stacktrace, but it's in the BlockManager/BlockManagerWorker where it's
trying to fulfil a "getBlock" request for another node. The objects that
would be in the block haven't yet been serialised, and that then causes th
I don't think it was a conscious design decision not to include the
application classes in the connection manager serializer. We should fix
that. Where is it deserializing data in that thread?
4 might make sense in the long run, but it adds a lot of complexity to the
code base (whole separate code
Hi Spark devs,
I’ve posted an issue on JIRA (
https://issues.apache.org/jira/browse/SPARK-2878) which occurs when using
Kryo serialisation with a custom Kryo registrator to register custom
classes with Kryo. This is an insidious issue that non-deterministically
causes Kryo to have different ID nu
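For reference, a custom registrator of the kind described here looks roughly
like the following sketch (the class and registrator names are placeholders,
not the actual application classes):

import com.esotericsoftware.kryo.Kryo
import org.apache.spark.serializer.KryoRegistrator

class MyCustomClass(val id: Int)  // stand-in for an application class

class MyKryoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    // The order of registration determines the numeric IDs Kryo assigns,
    // which is why inconsistent registration across JVMs causes trouble.
    kryo.register(classOf[MyCustomClass])
  }
}

// Enabled via SparkConf, e.g.:
//   conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
//   conf.set("spark.kryo.registrator", "MyKryoRegistrator")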
I found the section on ordering categorical features really interesting,
but the A, B, C example seemed inconsistent. Am I interpreting this passage
wrong, or are there typos? Aren't the split candidates A | C, B and
A, C | B?
For example, for a binary classification problem with one categorical
Hi Dibyendu,
This is really awesome. I have yet to go through the code to understand
the details, but I want to do it really soon. In particular, I want to
understand the improvements over the existing Kafka receiver.
And it's fantastic to see such contributions from the community. :)
TD
On
Forgot to do that step.
Now compilation passes.
On Wed, Aug 6, 2014 at 1:36 PM, Zongheng Yang wrote:
> Hi Ted,
>
> By refreshing do you mean you have done 'mvn clean'?
>
> On Wed, Aug 6, 2014 at 1:17 PM, Ted Yu wrote:
> > I refreshed my workspace.
> > I got the following error with this comma
I think your best bet by far is to consume the Maven build as-is from
within Eclipse. I wouldn't try to export a project config from the
build, as plenty gets lost in translation.
Certainly this works well with IntelliJ, and by the by, if you have a
choice, I would strongly recommend Int
Hi Ted,
By refreshing do you mean you have done 'mvn clean'?
On Wed, Aug 6, 2014 at 1:17 PM, Ted Yu wrote:
> I refreshed my workspace.
> I got the following error with this command:
>
> mvn -Pyarn -Phive -Phadoop-2.4 -DskipTests install
>
> [ERROR] bad symbolic reference. A signature in package.
I refreshed my workspace.
I got the following error with this command:
mvn -Pyarn -Phive -Phadoop-2.4 -DskipTests install
[ERROR] bad symbolic reference. A signature in package.class refers to term
scalalogging
in package com.typesafe which is not available.
It may be completely missing from the
Hi,
I'm trying to get the Apache Spark trunk compiling in Eclipse, but I can't
seem to get it going. In particular, I've tried sbt/sbt eclipse, but it doesn't
seem to create the Eclipse pieces for yarn and other projects. Doing mvn
eclipse:eclipse on yarn seems to fail, as does sbt/sbt ec
I did not play with Hadoop settings...everything is compiled with
2.3.0-cdh5.0.2 for me...
I did try to bump the version number of HBase from 0.94 to 0.96 or 0.98, but
there was no profile for CDH in the pom...but that's unrelated to this!
On Wed, Aug 6, 2014 at 9:45 AM, DB Tsai wrote:
> One re
One related question: is the mllib jar independent of the hadoop version
(i.e., it doesn't use the hadoop api directly)? Can I use an mllib jar
compiled for one version of hadoop and use it with another version of hadoop?
Sent from my Google Nexus 5
On Aug 6, 2014 8:29 AM, "Debasish Das" wrote:
> Hi Xiangrui,
>
> Maintai
Ok...let me look into it a bit more, and most likely I will deploy Spark
v1.1 and then use the mllib 1.1-SNAPSHOT jar with it so that we follow your
guideline of not running a newer spark component on an older version of
spark core...
That should solve this issue unless it is related to Java versions...
One thing I'd like to clarify is that we do not support running a newer
version of a Spark component on top of an older version of Spark core.
I don't remember any code change in MLlib that requires Spark v1.1, but
I might have missed some PRs. There were changes to CoGroup, which may be
relevant:
https://gi
Hi Xiangrui,
Maintaining another file will be a pain later so I deployed spark 1.0.1
without mllib and then my application jar bundles mllib 1.1.0-SNAPSHOT
along with the code changes for quadratic optimization...
Later the plan is to patch the snapshot mllib with the deployed stable
mllib...
Th