The initial question was to build from source. Is there any reason to build when binaries are available at https://spark.apache.org/downloads.html?
On Tuesday, December 10, 2019, 03:05:44 AM UTC, Ping Liu wrote:
Super. Thanks Deepak!
On Mon, Dec 9, 2019 at 6:58 PM Deepak Vohra wrote:
...
On Monday, December 9, 2019, 11:27:53 p.m. UTC, Ping Liu wrote:
Thanks Deepak! Yes, I want to try it with Docker, but my AWS account's free period has run out. Is there a shared EC2 instance for Spark that we can use for free?
Ping
On Monday, December 9, 2019, Deepak Vohra wrote:
...
Thanks for your help!
Ping
On Fri, Dec 6, 2019 at 5:28 PM Deepak Vohra wrote:
As multiple Guava versions are found, exclude Guava from all the dependencies it could have been pulled in with, such as the org.apache.hadoop artifacts, and explicitly add a recent Guava version (see the sketch below).
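A minimal pom.xml sketch of such an exclusion; the hadoop-client artifact and the version numbers are illustrative assumptions, not taken from the thread:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.7.4</version>
  <exclusions>
    <!-- Drop the Guava that Hadoop pulls in transitively -->
    <exclusion>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- Explicitly add a single recent Guava version -->
<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>28.1-jre</version>
</dependency>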
Before building Spark, I went to my local Maven repo and removed Guava entirely. But after building, I found the same versions of Guava had been downloaded again:
D:\mavenrepo\com\google\guava\guava>ls
14.0.1 16.0.1 18.0 19.0
On Thu, Dec 5, 2019 at 5:12 PM Deepak Vohra wrote:
Just to clarify, excluding Hadoop provided guava in pom.xml is an alternative
to using an Uber jar, which is a more involved process.
On Thursday, December 5, 2019, 10:37:39 p.m. UTC, Ping Liu wrote:
Hi Sean,
Thanks for your response!
Sorry, I didn't mention that "build/mvn ..." doesn't ...
Thanks Deepak! I'll try it.
On Thu, Dec 5, 2019 at 4:13 PM Deepak Vohra wrote:
The Guava issue could be fixed in one of two ways:
- Use Hadoop v3.
- Create an Uber jar (see the sketch below); refer to
https://gite.lirmm.fr/yagoubi/spark/commit/c9f743957fa963bc1dbed7a44a346ffce1a45cf2
(Managing Java dependencies).
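A minimal sketch of building an Uber jar with the maven-shade-plugin, relocating Guava so it cannot clash with Hadoop's copy; the plugin version and the shaded package name are illustrative assumptions:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <!-- Move Guava classes to a private namespace inside the Uber jar -->
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>myapp.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>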
checkArgument(boolean expression, String errorMessageTemplate, Object... errorMessageArgs)
In newer Guava versions, the checkArgument() overloads all require a boolean as the first parameter.
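For illustration, a minimal Java sketch of calling this varargs overload; the class and method names here are hypothetical:

import com.google.common.base.Preconditions;

public class GuavaCheckExample {
    // Newer Guava: the first argument to checkArgument() must be a boolean condition.
    static void setBatchSize(int n) {
        Preconditions.checkArgument(n > 0, "batch size must be positive, got %s", n);
    }

    public static void main(String[] args) {
        setBatchSize(16);
    }
}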
For Docker, using EC2 is a good idea. Is there a document or guidance for it?
Thanks.
Ping
On Thu, Dec 5, 2019 at 3:30 PM Deepak Vohra wrote:
Such an exception could occur if a dependency (most likely Guava) version is not supported by the Spark version. What are the Spark and Guava versions?
Multiple Guava versions could be in the classpath, inherited from Hadoop. Use the Guava version supported by Spark and exclude the others. Also set spark.executor.userClassPathFirst=true and spark.driver.userClassPathFirst=true in the Spark properties, as sketched below.
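A minimal sketch of setting both properties on spark-submit; the application class and jar names are placeholders:

spark-submit \
  --class com.example.MyApp \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  my-app.jar

The same properties can also go in conf/spark-defaults.conf.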
On Thursday, December 5, 2019, 11:35:27 PM UTC, Ping Liu wrote:
My Windows doesn't have Microsoft Hyper-V, so I want to avoid using Docker for major work if possible.
Thanks!
Ping
On Thu, Dec 5, 2019 at 2:24 PM Deepak Vohra wrote:
Several alternatives are available:
- Use Maven to build Spark on Windows (see the build sketch below):
http://spark.apache.org/docs/latest/building-spark.html#apache-maven
- Use a Docker image for CDH on Windows (available on Docker Hub).
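A typical Maven invocation per the linked build docs, as a sketch; the Hadoop profile and memory settings are illustrative and assume a locally installed Maven on Windows:

set MAVEN_OPTS=-Xmx2g -XX:ReservedCodeCacheSize=512m
mvn -Pyarn -Phadoop-2.7 -DskipTests clean package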
On Thursday, December 5, 2019, 09:33:43 p.m. UTC, Sean Owen wrote:
Subject: java.util.List[edu.stanford.nlp.util.CoreMap]
To: "Deepak Vohra"
Cc: "user@spark.apache.org"
Date: Thursday, February 26, 2015, 10:43 AM
(Books on Spark are not produced by the Spark project, and this is not the right place to ask about them. This question was already answered ...)
The Ch. 6 listing from Advanced Analytics with Spark generates an error. The listing is:

def plainTextToLemmas(text: String, stopWords: Set[String],
    pipeline: StanfordCoreNLP): Seq[String] = {
  val doc = new Annotation(text)
  pipeline.annotate(doc)
  val lemmas = new ArrayBuffer[String]()
  val ...
The Spark cluster has no memory allocated.
Memory: 0.0 B Total, 0.0 B Used
From: Surendran Duraisamy <2013ht12...@wilp.bits-pilani.ac.in>
To: user@spark.apache.org
Sent: Sunday, February 22, 2015 6:00 AM
Subject: Running Example Spark Program
Hello All,
I am new to Apache Spark, ...
Or, use the SparkOnHBase lab:
http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
From: Ted Yu
To: Akhil Das
Cc: sandeep vura ; "user@spark.apache.org"
Sent: Monday, February 23, 2015 8:52 AM
Subject: Re: How to integrate HBASE on Spark
Installing HBase on Hadoop ...
For a beginner, Ubuntu Desktop is recommended as it includes a GUI and is easier to install. Also refer to ServerFaq - Community Help Wiki (Frequently Asked Questions about the Ubuntu Server Edition).