Hi,
This is just a thought from my experience setting up Spark to run on a
Linux cluster. I found it a bit unusual that some parameters could be
specified as command-line args to spark-submit, others as environment
variables, and some in a configuration file. What I ended up doing was
writing my own bash script.
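For example, the same executor-memory setting can currently be expressed in
all three places. A minimal sketch (property name and values are just
placeholders; only settings made explicitly on the SparkConf reliably win):

  import org.apache.spark.SparkConf

  // The same setting can arrive from three sources:
  //   spark-submit flag:      --conf spark.executor.memory=4g
  //   environment variable:   SPARK_EXECUTOR_MEMORY=4g   (older, deprecated form)
  //   spark-defaults.conf:    spark.executor.memory 4g
  // Values set explicitly on the SparkConf take precedence over all of them.
  val conf = new SparkConf()
    .setAppName("config-example")
    .set("spark.executor.memory", "4g")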
Hey,
The start-slaves.sh script is able to read the slaves file and start slave
nodes on multiple boxes.
However, in standalone mode, if I want to use multiple masters, I’ll have to
start a master on each individual box, and also need to provide the list of
masters’ hostname+port to each worker. ( st
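(For reference, with ZooKeeper-based recovery the standalone cluster already
accepts a comma-separated master URL; a minimal sketch, with placeholder
hostnames and ports:)

  import org.apache.spark.{SparkConf, SparkContext}

  // Standalone HA: list every master in the URL; the application (and,
  // likewise, each worker) registers with whichever master is the leader.
  val conf = new SparkConf()
    .setAppName("multi-master-example")
    .setMaster("spark://master1:7077,master2:7077")
  val sc = new SparkContext(conf)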
Sounds good to me.
On Tue, Mar 31, 2015 at 6:12 PM, sequoiadb
wrote:
> Hey,
>
> The start-slaves.sh script is able to read the slaves file and start slave
> nodes on multiple boxes.
> However, in standalone mode, if I want to use multiple masters, I’ll have to
> start a master on each individual box,
Hi,
We recently added an ADMM-based proximal algorithm in
breeze.optimize.proximal.NonlinearMinimizer, which uses a combination of
BFGS and proximal algorithms (soft thresholding for L1, for example) to
solve large-scale constrained optimization problems of the form f(x) + g(z).
Its usage is similar to curr
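As an illustration of the proximal step mentioned above (not of the
NonlinearMinimizer API itself; the helper name is made up), the L1
soft-thresholding operator looks like this in Breeze:

  import breeze.linalg.DenseVector

  // Soft thresholding: the proximal operator of lambda * ||x||_1,
  // i.e. prox(v)_i = sign(v_i) * max(|v_i| - lambda, 0).
  // An ADMM/proximal splitting scheme alternates this g(z) update
  // with a smooth step on f(x) (handled by BFGS).
  def softThreshold(v: DenseVector[Double], lambda: Double): DenseVector[Double] =
    v.map { x =>
      if (x > lambda) x - lambda
      else if (x < -lambda) x + lambda
      else 0.0
    }

  // softThreshold(DenseVector(3.0, -0.5, 1.2), 1.0) gives DenseVector(2.0, 0.0, 0.2)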
Hi,
Previously, in 1.2.1, the result row from a Spark SQL query was
an org.apache.spark.sql.api.java.Row.
In 1.3.0 I do not see a sql.api.java package, so does that mean that even the
SQL query result row is an implementation of org.apache.spark.sql.Row, such
as GenericRow etc.?
--
Niranda
Yup - we merged the Java and Scala APIs, so there is now a single set of APIs
supporting both languages.
See more at
http://spark.apache.org/docs/latest/sql-programming-guide.html#unification-of-the-java-and-scala-apis
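A small example of what that looks like in 1.3 (assuming an existing
SparkContext sc and a registered "people" table):

  import org.apache.spark.sql.SQLContext

  val sqlContext = new SQLContext(sc)
  // collect() now returns Array[org.apache.spark.sql.Row] in both languages
  val results = sqlContext.sql("SELECT name, age FROM people").collect()
  results.foreach { row =>
    val name = row.getString(0)   // fields are read positionally
    val age  = row.getInt(1)
    println(s"$name is $age years old")
  }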
On Tue, Mar 31, 2015 at 11:40 PM, Niranda Perera
wrote:
> Hi,
>
> previously