Hi Jean,
What does the master UI say? http://10.0.100.81:8080
Do you have enough resources available, or is there a running context
that is depleting all your resources?
Are your workers registered and alive? How much memory does each have?
How many cores each?
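If it's a resource mismatch, make sure the app doesn't request more than
any single worker can offer. A minimal sketch, assuming a standalone
master at the address above (the memory/core values are hypothetical):

import org.apache.spark.sql.SparkSession

// Cap the app's demands so they fit within a single worker's offer;
// otherwise the scheduler accepts no resources and you get this warning.
val spark = SparkSession.builder()
  .master("spark://10.0.100.81:7077")
  .config("spark.executor.memory", "2g")  // <= memory available per worker
  .config("spark.cores.max", "4")         // <= total cores in the cluster
  .getOrCreate()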
Best
On Mon, Sep 18, 2017 at 11:24 PM,
Hi,
I am trying to connect to a new cluster I just set up.
And I get...
[Timer-0:WARN] Logging$class: Initial job has not accepted any resources; check
your cluster UI to ensure that workers are registered and have sufficient
resources
I must have forgotten something really super obvious.
My
I'm pretty sure you can use a timestamp as a partitionColumn. It's a
TIMESTAMP type in MySQL, which is at base a numeric type, and Spark
requires a numeric type to be passed in.
This doesn't work, as the generated WHERE clauses in MySQL become raw
numerics, which won't query against the MySQL TIMESTAMP.
minTimeStam
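If your Spark version insists on a numeric partitionColumn (newer versions
also accept date/timestamp columns), one workaround is to expose a numeric
alias through a subquery. A minimal sketch, with hypothetical table,
column, and connection names:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()

// UNIX_TIMESTAMP(ts) exposes the MySQL TIMESTAMP as epoch seconds, so the
// WHERE clauses Spark generates compare numbers to numbers.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://dbhost:3306/mydb")
  .option("dbtable", "(SELECT t.*, UNIX_TIMESTAMP(ts) AS ts_num FROM events t) AS sub")
  .option("partitionColumn", "ts_num")
  .option("lowerBound", "1483228800")   // 2017-01-01 00:00:00 UTC
  .option("upperBound", "1514764800")   // 2018-01-01 00:00:00 UTC
  .option("numPartitions", "8")
  .option("user", "dbuser")
  .option("password", "dbpass")
  .load()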
You can create a superclass, "FunSuiteWithSparkContext", that creates the
Spark session, SparkContext, and SQLContext with all the desired
properties.
Then you extend it in all the relevant test suites, and that's pretty
much it.
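A minimal sketch of such a base class (class name and properties here are
illustrative):

import org.apache.spark.sql.SparkSession
import org.scalatest.{BeforeAndAfterAll, FunSuite}

// Shared fixture: one local SparkSession per suite, stopped after the tests.
abstract class FunSuiteWithSparkContext extends FunSuite with BeforeAndAfterAll {
  @transient var spark: SparkSession = _

  override def beforeAll(): Unit = {
    super.beforeAll()
    spark = SparkSession.builder()
      .master("local[2]")
      .appName("unit-tests")
      .config("spark.ui.enabled", "false")  // any desired properties go here
      .getOrCreate()
  }

  override def afterAll(): Unit = {
    try if (spark != null) spark.stop()
    finally super.afterAll()
  }
}

// A suite then just extends it; spark.sparkContext and spark.sqlContext
// give you the SparkContext and SQLContext.
class MyFeatureSuite extends FunSuiteWithSparkContext {
  test("count works") {
    assert(spark.range(10).count() == 10)
  }
}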
The other option is to pass the properties as VM parameters.
You specify the schema when loading a DataFrame by calling
spark.read.schema(...)...
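A minimal sketch (the field names and input path are hypothetical):

import org.apache.spark.sql.types._

// Assuming an active SparkSession `spark`.
val schema = StructType(Seq(
  StructField("id", LongType, nullable = false),
  StructField("name", StringType, nullable = true),
  StructField("ts", TimestampType, nullable = true)
))

// Passing an explicit schema also skips the inference pass over the data.
val df = spark.read
  .schema(schema)
  .json("path/to/input.json")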
On Tue, Sep 12, 2017 at 4:50 PM, Sunita Arvind
wrote:
> Hi Michael,
>
> I am wondering what I am doing wrong. I get an error like:
>
> Exception in thread "main" java.lang.IllegalArgumentException: Schema
> must be
Hi,
A lot of Spark's code base is based on the Builder pattern, so I was
wondering what benefits the Builder pattern brings to Spark.
Some things that come to mind: it is easy on garbage collection, and it
makes for user-friendly APIs.
Are there any other advantages with code running on dis
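For reference, the most visible instance of the pattern in Spark's public
API is SparkSession.builder(); a short sketch:

import org.apache.spark.sql.SparkSession

// Each call returns the builder itself, so options chain fluently and
// validation plus construction happen once, at getOrCreate().
val spark = SparkSession.builder()
  .appName("builder-demo")
  .master("local[*]")
  .config("spark.sql.shuffle.partitions", "4")
  .getOrCreate()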
Have you searched in JIRA, e.g.
https://issues.apache.org/jira/browse/SPARK-19185
On Mon, Sep 18, 2017 at 1:56 AM, HARSH TAKKAR wrote:
> Hi
>
> Changing the Spark version is my last resort; is there any other workaround
> for this problem?
>
>
> On Mon, Sep 18, 2017 at 11:43 AM pandees waran wrote:
Hi,
Here are the commands that are used.
> spark.default.parallelism=1000
> sparkR.session()
Java ref type org.apache.spark.sql.SparkSession id 1
> sql("use test")
SparkDataFrame[]
> mydata <- sql("select c1 ,p1 ,rt1 ,c2 ,p2 ,rt2 ,avt,avn from test_temp2
where vdr = 'TEST31X' ")
>
> nrow(myda