Hi there,
I am new to Spark, and I would like some help understanding whether Spark
can take advantage of the underlying hardware architecture for better
performance. If so, how does it do it?
For example, assume a cluster is built from machines with different
CPUs; will Spark check the individual CPU information
(Cross posted from u...@spark.apache.org)
Hello,
I am in the process of evaluating Spark (1.5.2) for a wide range of use
cases. In particular, I'm keen to understand the depth of its integration
with HCatalog (a.k.a. the Hive Metastore). I was very encouraged when
browsing the source contained within
Sorry, I never got a chance to circle back with the master logs for this. I
definitely can't share the job code, since it's used to build a pretty core
dataset for my company, but let me see if I can pull some logs together in
the next couple days.
On Tue, Jan 19, 2016 at 10:08 AM, Iulian Dragoș
It would be good to get to the bottom of this.
Adam, could you share the Spark app that you're using to test this?
iulian
On Mon, Nov 30, 2015 at 10:10 PM, Timothy Chen wrote:
> Hi Adam,
>
> Thanks for the graphs and the tests, definitely interested to dig a
> bit deeper to find out what's cou
Hi,
Just so you know, I am new to Spark, and also very new to ML (this is my
first contact with ML).
OK, I am trying to write a DSL where you can run some commands.
I wrote a command that trains Spark's LDA; it produces the topics I want,
and I saved the model using the save method provided by the LD
I have modified my code. I can now get the total vocabulary size, the
index array, and the frequency array from the JsonObject:
JsonArray idxArr = jo.get("idxArr").getAsJsonArray();
JsonArray freqArr = jo.get("freqArr").getAsJsonArray();
int total = jo.get("vocabSize").getAsInt();
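In case it helps to see the next step spelled out: a common reason for pulling out parallel index/frequency arrays plus a vocabulary size is to rebuild a dense term-frequency vector. Here is a minimal sketch of that reconstruction, assuming that is the goal. Plain int/double arrays stand in for Gson's JsonArray so the example is dependency-free; the field names (idxArr, freqArr, vocabSize) come from the message above, but the reconstruction itself is my assumption about the use case.

```java
import java.util.Arrays;

public class SparseToDense {
    // Expand parallel index/frequency arrays into a dense vector of
    // length vocabSize; positions not listed in idxArr stay 0.0.
    static double[] toDense(int vocabSize, int[] idxArr, double[] freqArr) {
        double[] dense = new double[vocabSize]; // all zeros by default
        for (int i = 0; i < idxArr.length; i++) {
            dense[idxArr[i]] = freqArr[i];      // place each frequency at its index
        }
        return dense;
    }

    public static void main(String[] args) {
        int total = 5;                   // hypothetical vocabSize
        int[] idx = {0, 2, 4};           // hypothetical idxArr contents
        double[] freq = {3.0, 1.0, 7.0}; // hypothetical freqArr contents
        System.out.println(Arrays.toString(toDense(total, idx, freq)));
        // prints [3.0, 0.0, 1.0, 0.0, 7.0]
    }
}
```

If the target is Spark, the resulting array could instead be wrapped in a sparse vector type directly, which avoids materializing the zeros for large vocabularies.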