Hi,
In certain cases (mostly due to time constraints), we need some model to run
without cross validation. In such a case, since the k-fold value for the cross
validator cannot be one, we have to maintain two different code paths to
cover both scenarios (with and without cross validation).
Would it
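A minimal sketch of the two-code-path problem, assuming Spark ML's
CrossValidator (whose numFolds param is validated to be >= 2); the
useCrossValidation flag and the training DataFrame are hypothetical:

    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
    import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

    val lr = new LogisticRegression()
    val grid = new ParamGridBuilder()
      .addGrid(lr.regParam, Array(0.01, 0.1))
      .build()

    // Path 1: with cross validation (numFolds must be >= 2; 1 fails validation)
    // Path 2: without cross validation, fit the estimator directly
    val model = if (useCrossValidation) {  // useCrossValidation: assumed flag
      new CrossValidator()
        .setEstimator(lr)
        .setEvaluator(new BinaryClassificationEvaluator())
        .setEstimatorParamMaps(grid)
        .setNumFolds(3)
        .fit(training)                     // training: assumed DataFrame
        .bestModel
    } else {
      lr.fit(training)                     // separate code path just to skip CV
    }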
I tried the same statement using Spark 1.6.1
There was no error with default memory setting.
Suggest logging a bug.
> On May 1, 2016, at 9:22 PM, Koert Kuipers wrote:
>
> Yeah I got that too, then I increased heap for tests to 8G to get the error
> I showed earlier.
>
>> On May 2, 2016 12:09 AM
https://issues.apache.org/jira/browse/SPARK-13745
is really a defect and a blocker, unless the decision is to drop support
for big-endian platforms. The PR has been reviewed and tested, and I
strongly believe this needs to be targeted for 2.0.
On Mon, May 2, 2016 at 12:00 AM Reynold Xin wrote:
Created issue:
https://issues.apache.org/jira/browse/SPARK-15062
On Mon, May 2, 2016 at 6:48 AM, Ted Yu wrote:
> I tried the same statement using Spark 1.6.1
> There was no error with default memory setting.
>
> Suggest logging a bug.
>
> On May 1, 2016, at 9:22 PM, Koert Kuipers wrote:
>
> Yeah I got that too, then I increased heap for tests to 8G to get the error
> I showed earlier.
Hi,
Since the 2.0.0 branch has been created and is now nearing feature freeze,
can SPARK-11962 get some love, please? If we can decide whether this should go
into 2.0.0 or 2.1.0, that would be great. Personally, I feel it can totally
go into 2.0.0, as the code is pretty much ready (except for the one bug
this is happening now.
On Fri, Apr 29, 2016 at 12:52 PM, shane knapp wrote:
> (copy-pasta of previous message)
>
> another project hosted on our jenkins (e-mission) needs anaconda scipy
> upgraded from 0.15.1 to 0.17.0. this will also upgrade a few other
> libs, which i've included at the end of
hey everyone!
looks like two of the workers didn't survive a reboot, so i will need
to head to the colo and console in to see what's going on.
sadly, one of the workers that didn't come back is -01, which runs the
doc builds.
anyways, i will post another update within the hour with the status of
Hi Nitin,
Sorry for waking up this ancient thread. That's a fantastic set of JVM
flags! We just hit the same problem, but we haven't even discovered all
those flags for limiting memory growth. I wanted to ask if you ever
discovered anything further?
I see you also set -XX:NewRatio=3. This is a ver
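For anyone landing on this thread with the same problem, here is a sketch of
the kind of flags being discussed, passed to forked test JVMs via sbt. The
specific flags and values are illustrative assumptions, not the exact set
from the original thread:

    // build.sbt (sbt 0.13-era syntax); flags and values are illustrative
    fork in Test := true  // javaOptions only apply to forked JVMs
    javaOptions in Test ++= Seq(
      "-Xmx8g",                         // cap the heap
      "-XX:NewRatio=3",                 // old gen sized at 3x the young gen
      "-XX:MaxMetaspaceSize=512m",      // cap class-metadata growth
      "-XX:MaxDirectMemorySize=512m",   // cap off-heap direct buffers
      "-XX:ReservedCodeCacheSize=256m"  // cap the JIT code cache
    )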
Hi,
Same goes for the PolynomialExpansion in org.apache.spark.ml.feature. It would
be nice to cross-validate with degree 1 polynomial expansion (that is, with no
expansion at all) vs. other degree polynomial expansions. Unfortunately, degree
is forced to be >= 2.
--
Julio
> On 2 May 2016, at
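A minimal sketch of the constraint being described, against the
org.apache.spark.ml.feature API at the time of this thread (the column names
are illustrative):

    import org.apache.spark.ml.feature.PolynomialExpansion

    val poly = new PolynomialExpansion()
      .setInputCol("features")     // illustrative column names
      .setOutputCol("polyFeatures")
      .setDegree(2)                // accepted: degree is validated as >= 2
    // .setDegree(1)               // rejected at param validation, so a
    //                             // degree-1 "no-op" can't join a CV grid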
There is a JIRA and PR around for supporting polynomial expansion with
degree 1. Offhand, I can't recall if it's been merged.
On Mon, 2 May 2016 at 17:45, Julio Antonio Soto de Vicente
wrote:
> Hi,
>
> Same goes for the PolynomialExpansion in org.apache.spark.ml.feature. It
> would be nice to cross
Definitely looks like a bug.
Ted - are you looking at this?
On Mon, May 2, 2016 at 7:15 AM, Koert Kuipers wrote:
> Created issue:
> https://issues.apache.org/jira/browse/SPARK-15062
>
> On Mon, May 2, 2016 at 6:48 AM, Ted Yu wrote:
>
>> I tried the same statement using Spark 1.6.1
>> There was no error with default memory setting.
I plan to.
I am not that familiar with all the parts involved though :-)
On Mon, May 2, 2016 at 9:42 AM, Reynold Xin wrote:
> Definitely looks like a bug.
>
> Ted - are you looking at this?
>
>
> On Mon, May 2, 2016 at 7:15 AM, Koert Kuipers wrote:
>
>> Created issue:
>> https://issues.apache.org/jira/browse/SPARK-15062
workers -01 and -04 are back up, as is -06 (i hit the wrong power
button by accident). :)
-01 and -04 got hung on shutdown, so i'll investigate them and see
what exactly happened. regardless, we should be building happily!
On Mon, May 2, 2016 at 8:44 AM, shane knapp wrote:
> hey everyone!
>
Thanks, Shane!
On Monday, May 2, 2016, shane knapp wrote:
> workers -01 and -04 are back up, as is -06 (i hit the wrong power
> button by accident). :)
>
> -01 and -04 got hung on shutdown, so i'll investigate them and see
> what exactly happened. regardless, we should be building happily!
sorry, I removed the others by mistake.
thanks a lot, Mario, for explaining. Appreciate it.
On Sun, May 1, 2016 at 11:51 PM, Mario Ds Briggs
wrote:
> Not sure if it was a mistake that you removed the others and the group on
> this response.
>
> >> the data duplication inefficiency (replication to