>
>
> On Mon, Mar 14, 2016 at 2:31 PM, Jakob Odersky <ja...@odersky.com> wrote:
> Have you tried setting the configuration
> `spark.executor.extraLibraryPath` to point to a location where your
> .so's are available? (Not sure if non-local files, such as HDF
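For reference, a minimal sketch of what that suggestion looks like when set programmatically on the SparkConf rather than on the command line; the application name and the library path /opt/native-libs are made-up examples, not from the thread:

    import org.apache.spark.{SparkConf, SparkContext}

    // extra native-library search path used when launching each executor JVM;
    // assumes the directory exists on every worker node
    val conf = new SparkConf()
      .setAppName("native-libs-example")
      .set("spark.executor.extraLibraryPath", "/opt/native-libs")
    val sc = new SparkContext(conf)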
What build system are you using to compile your code?
If you use a dependency management system like maven or sbt, then you should be
able to instruct it to build a single jar that contains all the other
dependencies, including third-party jars and .so’s. I am a maven user myself,
and I use the […], but with its troublesome scalap dependency removed.
> On Mar 11, 2016, at 6:34 PM, Vasu Parameswaran wrote:
>
> Added these to the pom and still the same error :-(. I will look into sbt as
> well.
>
>
>
> On Fri, Mar 11, 2016 at 2:31 PM, Tristan Nixon wrote:
So I think in your case you’d do something more like:
val jsontrans = new JsonSerializationTransformer[StructType]
  .setInputCol("event")
  .setOutputCol("eventJSON")
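To make the intent concrete, a small hedged sketch of applying that transformer to a DataFrame; it assumes a DataFrame `df` that already has a struct column named "event" (nothing below comes from the original thread):

    // transform() appends the serialized column; existing columns are untouched
    val withJson = jsontrans.transform(df)
    withJson.select("event", "eventJSON").show(false)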
> On Mar 11, 2016, at 3:51 PM, Tristan Nixon wrote:
>
> val jsontrans = new
> JsonSerializationTransformer
I recommend you package all your dependencies (jars, .so’s, etc.) into a single
uber-jar and then submit that. It’s much more convenient than trying to manage
including everything in the --jars arg of spark-submit. If you build with maven
then the shade plugin will do this for you nicely:
https:
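Since sbt was also mentioned earlier in the thread, here is a rough sketch of the equivalent single-jar setup with the sbt-assembly plugin; the plugin version, Spark version, and merge rules below are illustrative assumptions, not taken from this thread:

    // project/plugins.sbt
    addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")

    // build.sbt -- mark Spark itself as "provided" so it is not bundled,
    // then let `sbt assembly` fold the remaining dependencies into one jar
    libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.0" % "provided"

    assemblyMergeStrategy in assembly := {
      case PathList("META-INF", xs @ _*) => MergeStrategy.discard
      case _                             => MergeStrategy.first
    }

The jar produced by `sbt assembly` is then the single artifact handed to spark-submit, much like a shaded maven build.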
You must be relying on IntelliJ to compile your scala, because you haven’t set
up any scala plugin to compile it from maven.
You should have something like this in your plugins:
<plugin>
  <groupId>net.alchim31.maven</groupId>
  <artifactId>scala-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>scala-compile-first</id>
      <phase>process-resources</phase>
      <goals>
        <goal>compile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
/**
 * A transformer that serializes a column into a JSON-formatted string.
 * Created by Tristan Nixon on 3/11/16.
 */
class JsonSerializationTransformer[T](override val uid: String)
  extends UnaryTransformer[T, String, JsonSerializationTransformer[T]]
{
  def this() = this(Identifiable.randomUID("JsonSerializationTransformer"))
  […] (nullable = true)
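The rest of the class is cut off in the archive. Purely for orientation, here is one way the two remaining UnaryTransformer members could be filled in; the choice of Jackson with its Scala module (already on Spark's classpath) and the member bodies are my assumptions, not Tristan's original code:

    import com.fasterxml.jackson.databind.ObjectMapper
    import com.fasterxml.jackson.module.scala.DefaultScalaModule
    import org.apache.spark.sql.types.{DataType, StringType}

      // not shipped with the serialized transformer; rebuilt lazily on each executor
      @transient private lazy val mapper =
        new ObjectMapper().registerModule(DefaultScalaModule)

      // per-value function applied by UnaryTransformer's generated UDF
      override protected def createTransformFunc: T => String =
        (value: T) => mapper.writeValueAsString(value)

      // the new column is a plain string
      override protected def outputDataType: DataType = StringType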
>
>
>
> I want to transform the Column event into String (formatted as JSON).
>
> I was trying to use udf but without success.
>
>
> On Fri, Mar 11, 2016 at 1:53 PM Tristan Nixon <st...@memeticlabs.org> wrote:
Have you looked at DataFrame.write.json( path )?
https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.DataFrameWriter
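As a small illustration of that suggestion (the output path is made up):

    // writes each row of the DataFrame out as one JSON document per line
    df.write.json("/tmp/events-as-json")
    // df.toJSON yields the same records as an RDD[String] if no file output is wanted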
> On Mar 11, 2016, at 7:15 AM, Caires Vinicius wrote:
>
> I have one DataFrame with nested StructField and I want to convert to JSON
> String. There is
Hear, hear. That’s why I’m here :)
> On Mar 10, 2016, at 7:32 PM, Chris Fregly wrote:
>
> Anyway, thanks for the good discussion, everyone! This is why we have these
> lists, right! :)
Very interested, Evan, thanks for the link. It has given me some food for
thought.
I’m also in the process of building a web application which leverages Spark on
the back-end for some heavy lifting. I would be curious about your thoughts on
my proposed architecture:
I was planning on running a s
r I'm running it as super user.
>
> I have java version 1.8.0_73 and SCALA version 2.11.7
>
> Sent from my iPhone
>
>> On 9 Mar 2016, at 21:58, Tristan Nixon wrote:
>>
>> That’s very strange. I just un-set my SPARK_HOME env param, downloaded a
>> fresh
Hmmm… that should be right.
> On Mar 10, 2016, at 11:26 AM, Ashic Mahtab wrote:
>
> src/main/resources/log4j.properties
>
> Subject: Re: log4j pains
> From: st...@memeticlabs.org
> Date: Thu, 10 Mar 2016 11:08:46 -0600
> CC: user@spark.apache.org
> To: as...@live.com
>
> Where in the jar is the log4j.properties file?
Where in the jar is the log4j.properties file?
> On Mar 10, 2016, at 9:40 AM, Ashic Mahtab wrote:
>
> 1. Fat jar with logging dependencies included. log4j.properties in fat jar.
> Spark doesn't pick up the properties file, so uses its defaults.
It really shouldn’t; if anything, running as superuser should ALLOW you to bind
to ports 0, 1, etc.
It seems very strange that it should even be trying to bind to these ports -
maybe a JVM issue?
I wonder if the old Apple JVM implementations could have used some different
native libraries for cor
That’s very strange. I just un-set my SPARK_HOME env param, downloaded a fresh
1.6.0 tarball,
unzipped it to local dir (~/Downloads), and it ran just fine - the driver port
is some randomly generated large number.
So SPARK_HOME is definitely not needed to run this.
Aida, you are not running thi
ts to a
> single machine(local host)
>
> Sent from my iPhone
>
>> On 9 Mar 2016, at 19:59, Tristan Nixon wrote:
>>
>> Also, do you have the SPARK_HOME environment variable set in your shell, and
>> if so what is it set to?
>>
>>> On Mar 9,
Also, do you have the SPARK_HOME environment variable set in your shell, and if
so what is it set to?
> On Mar 9, 2016, at 1:53 PM, Tristan Nixon wrote:
>
> There should be a /conf sub-directory wherever you installed spark, which
> contains several configuration files.
> I b
r message
>
> When I look at the spark-defaults.conf.template it shows a spark
> example(spark://master:7077) where the port is 7077
>
> When you say look to the conf scripts, how do you mean?
>
> Sent from my iPhone
>
>> On 9 Mar 2016, at 19:32, Tristan Nixon wrote:
Yeah, according to the standalone documentation
http://spark.apache.org/docs/latest/spark-standalone.html
the default port should be 7077, which means that something must be overriding
this on your installation - look to the conf scripts!
> On Mar 9, 2016, at 1:26 PM, Tristan Nixon wrote:
Looks like it’s trying to bind on port 0, then 1.
Often the low-numbered ports are restricted to system processes and
“established” servers (web, ssh, etc.) and
so user programs are prevented from binding on them. The default should be to
run on a high-numbered port like 8080 or such.
What do yo
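A hedged sketch of pinning the driver to an explicitly chosen, unprivileged high port instead of leaving it to chance; the port number and application name are arbitrary placeholders:

    import org.apache.spark.{SparkConf, SparkContext}

    // bind the driver's endpoint to a fixed high-numbered port
    val conf = new SparkConf()
      .setAppName("fixed-driver-port")
      .set("spark.driver.port", "51000")
    val sc = new SparkContext(conf)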
You can also package an alternative log4j config in your jar files
> On Mar 9, 2016, at 12:20 PM, Ashic Mahtab wrote:
>
> Found it.
>
> You can pass in the jvm parameter log4j.configuration. The following works:
>
> -Dlog4j.configuration=file:path/to/log4j.properties
>
> It doesn't work with
ew to spark and I am just messing around with it.
>
> On Mar 8, 2016 10:23 PM, "Tristan Nixon" <st...@memeticlabs.org> wrote:
> My understanding of the model is that you’re supposed to execute
> SparkFiles.get(…) on each worker node, not on the driver.
>
this is a bit strange, because you’re trying to create an RDD inside of a
foreach function (the jsonElements). This executes on the workers, and so will
actually produce a different instance in each JVM on each worker, not one
single RDD referenced by the driver, which is what I think you’re try
My understanding of the model is that you’re supposed to execute
SparkFiles.get(…) on each worker node, not on the driver.
Since you already know where the files are on the driver, if you want to load
these into an RDD with SparkContext.textFile, then this will distribute it out
to the workers, […] happening.
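To make the driver-side suggestion concrete, a small sketch; the path and variable name are illustrative, and `sc` is the usual SparkContext:

    // read the file once from the driver's known location; Spark splits it into
    // partitions and ships those out to the workers as tasks run
    val jsonElements = sc.textFile("/shared/path/events.json")
    println(s"loaded ${jsonElements.count()} lines across the cluster")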
>
> On Mon, Mar 7, 2016 at 5:55 PM, Tristan Nixon <st...@memeticlabs.org> wrote:
> I’m not sure I understand - if it was already distributed over the cluster in
> an RDD, why would you want to collect and then re-send it as a broadcast
> variable? Why
> Hi Tristan,
>
> This is not static, I actually collect it from an RDD to the driver.
>
> On Mon, Mar 7, 2016 at 5:42 PM, Tristan Nixon <st...@memeticlabs.org> wrote:
> Hi Arash,
>
> is this static data? Have you considered including it in your jars an
Hi Arash,
is this static data? Have you considered including it in your jars and
de-serializing it from jar on each worker node?
It’s not pretty, but it’s a workaround for serialization troubles.
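A hedged sketch of that workaround, assuming the data is bundled under src/main/resources so it lands on the classpath of every executor; the resource name, the existing RDD `rdd`, and the downstream use are made up for illustration:

    // runs inside each task, i.e. on the worker JVMs, so every executor
    // re-reads the bundled resource from its own copy of the application jar
    val enriched = rdd.mapPartitions { iter =>
      val in = getClass.getResourceAsStream("/lookup-table.json")
      val lookupJson = scala.io.Source.fromInputStream(in).mkString
      in.close()
      iter.map(record => (record, lookupJson.length)) // use lookupJson as needed
    }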
> On Mar 7, 2016, at 5:29 PM, Arash wrote:
>
> Hello all,
>
> I'm trying to broadcast a variable