Hi,
This is the code I am running.
from pyspark.mllib.common import callMLlibFunc
from pyspark.mllib.linalg import Vectors, _convert_to_vector

mu = (Vectors.dense([0.8786, -0.7855]), Vectors.dense([-0.1863, 0.7799]))
membershipMatrix = callMLlibFunc("findPredict",
                                 rdd.map(_convert_to_vector), mu)
Regards,
Meethu
On Monday 12 January 2015 11:46 AM, Davies Liu wrote:
Could you post a piece of code here?
On Sun, Jan 11, 2015 at 9:28 PM, Meethu Mathew wrote:
Hi,
Thanks Davies.
I added a new class GaussianMixtureModel in clustering.py with a predict
method in it, and I was trying to pass a numpy array from this method. I
converted it to a DenseVector and it's solved now.
Similarly, I tried passing a List of more than one dimension to the
function _py2java,
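
For anyone following the thread, here is a minimal, hypothetical sketch (not
the actual patch) of the conversion described above, assuming the Spark
1.2-era helpers in pyspark.mllib.linalg and pyspark.mllib.common:

import numpy as np
from pyspark import SparkContext
from pyspark.mllib.common import _py2java
from pyspark.mllib.linalg import Vectors

sc = SparkContext(appName="gmm-predict-sketch")

# Values copied from the snippet earlier in the thread; one DenseVector per
# Gaussian component instead of a single 2-D numpy array / nested list.
means = np.array([[0.8786, -0.7855], [-0.1863, 0.7799]])
mu = [Vectors.dense(row) for row in means]

# DenseVector has a pickler registered on the JVM side, so a flat list of
# DenseVectors should round-trip through _py2java / callMLlibFunc.
jmu = _py2java(sc, mu)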
Ok, will do.
Thanks for providing some context on this topic.
Alex
On Sun, Jan 11, 2015 at 8:34 PM, Patrick Wendell wrote:
Priority scheduling isn't something we've supported in Spark and we've
opted to support FIFO and Fair scheduling and asked users to try and
fit these to the needs of their applications.
In practice, what I've seen of priority schedulers, such as the Linux CPU
scheduler, is that strict priority
Yes, if you are asking about developing a new priority queue job scheduling
feature and not just about how job scheduling currently works in Spark, then
that's a dev list issue. The current job scheduling priority is at the
granularity of pools containing jobs, not the jobs themselves; so if you
re
Cody,
While I might be able to improve the scheduling of my jobs by using a few
different pools with weights equal to, say, 1, 1e3 and 1e6 (effectively
getting a small handful of priority classes), this is still really not
quite what I am describing. This is why my original post was on the dev
list.
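
For reference, a hypothetical sketch of the weighted-pool workaround
mentioned above (the pool name, weight, app name and fairscheduler.xml path
are illustrative, not from this thread; it assumes an allocation file that
defines a pool named "high" with weight 1e6):

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("weighted-pools-sketch")
        .set("spark.scheduler.mode", "FAIR")
        .set("spark.scheduler.allocation.file", "/path/to/fairscheduler.xml"))
sc = SparkContext(conf=conf)

# Jobs submitted from this thread land in the "high" pool; with weight 1e6 it
# gets roughly a million times the share of a weight-1 pool, which behaves
# like a coarse priority class at pool granularity, not per-job priority.
sc.setLocalProperty("spark.scheduler.pool", "high")
result = sc.parallelize(range(1000)).sum()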