Hi users,
I'm a bit of a newbie to Spark infrastructure, and I have a small question.
I have a Maven project whose plugins are generated separately into a folder, and
the normal java command to run it is as follows:
`java -Dp4j.pluginsDir=./plugins -jar /path/to/jar`
Now when I run this program locally with spark-submit
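If the goal is to pass the same p4j.pluginsDir system property when launching
through spark-submit, one plausible starting point is to forward it to the
driver and executor JVMs; the main class name below is a placeholder:

  spark-submit \
    --driver-java-options "-Dp4j.pluginsDir=./plugins" \
    --conf "spark.executor.extraJavaOptions=-Dp4j.pluginsDir=./plugins" \
    --class com.example.Main \
    /path/to/jar

In local mode everything runs in the driver JVM, so --driver-java-options
alone should be enough; the executor setting matters once the job moves to a
cluster.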
Thank you Sean. That's very helpful.
- Marshall
On 4/17/20, 9:12 AM, "Sean Owen" wrote:
The second release candidate will come soon. I would guess it all
completes by the end of May, myself, but no guarantees.
On Fri, Apr 17, 2020 at 6:30 AM Marshall Markham wrote:
There might be 3 options:
1. Just as you expect: only ONE application and ONE RDD, with region-specific
containers and executors allocated and distributed automatically. The
ResourceProfile work (https://issues.apache.org/jira/browse/SPARK-27495) may
meet this requirement by treating Region as a type of resource; see the
sketch below.
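As a minimal sketch, the stage-level scheduling API that grew out of
SPARK-27495 (available since Spark 3.1) looks roughly like this. The resource
amounts are made up for illustration, and modeling "Region" as a custom
resource would additionally need a resource discovery setup that is omitted
here:

  import java.util.Arrays;
  import org.apache.spark.SparkConf;
  import org.apache.spark.api.java.JavaRDD;
  import org.apache.spark.api.java.JavaSparkContext;
  import org.apache.spark.resource.ExecutorResourceRequests;
  import org.apache.spark.resource.ResourceProfile;
  import org.apache.spark.resource.ResourceProfileBuilder;
  import org.apache.spark.resource.TaskResourceRequests;

  // Describe what each executor and each task should get for stages
  // that use this profile.
  ExecutorResourceRequests execReqs =
      new ExecutorResourceRequests().cores(4).memory("8g");
  TaskResourceRequests taskReqs = new TaskResourceRequests().cpus(1);

  ResourceProfile profile = new ResourceProfileBuilder()
      .require(execReqs)
      .require(taskReqs)
      .build();

  // Attach the profile to an RDD; the scheduler then requests matching
  // executors (this needs dynamic allocation on YARN or Kubernetes).
  JavaSparkContext jsc = new JavaSparkContext(
      new SparkConf().setAppName("rp-demo")); // master set by spark-submit
  JavaRDD<Integer> rdd = jsc.parallelize(Arrays.asList(1, 2, 3));
  rdd.rdd().withResources(profile);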
Looks like you'd like to submit Spark jobs from outside the Spark cluster;
Apache Livy [https://livy.incubator.apache.org/] is worth a try. It provides
a REST service for Spark in a Hadoop cluster.
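For example, a batch job can be submitted with a single POST to Livy's
/batches endpoint (shown here with Java 11's built-in HTTP client; the host,
jar path, and class name are placeholders, and 8998 is Livy's default port):

  import java.net.URI;
  import java.net.http.HttpClient;
  import java.net.http.HttpRequest;
  import java.net.http.HttpResponse;

  public class LivySubmit {
      public static void main(String[] args) throws Exception {
          // "file" must point at a jar the cluster can reach (e.g. on HDFS).
          String payload =
              "{\"file\": \"/path/to/app.jar\", \"className\": \"com.example.Main\"}";
          HttpRequest request = HttpRequest.newBuilder()
              .uri(URI.create("http://livy-host:8998/batches"))
              .header("Content-Type", "application/json")
              .POST(HttpRequest.BodyPublishers.ofString(payload))
              .build();
          HttpResponse<String> response = HttpClient.newHttpClient()
              .send(request, HttpResponse.BodyHandlers.ofString());
          // Livy replies with JSON describing the new batch session.
          System.out.println(response.body());
      }
  }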
Cheers,
-z
From: mailford...@gmail.com
Sent: Thursday, April 16,
Hi Spark users,
Did anyone resolve this issue?
Encoder<AvroGeneratedClass> encoder =
Encoders.bean(AvroGeneratedClass.class);
Dataset<AvroGeneratedClass> ds =
sparkSession.read().parquet(filename).as(encoder);
I'm also facing the same problem: "Cannot have circular references in bean
class, but got the circular reference of class cla
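The usual explanation is that Avro-generated classes expose a getSchema()
getter whose org.apache.avro.Schema type refers back to itself, which trips
Encoders.bean's cycle check. One commonly suggested workaround, sketched here
with a hypothetical flat bean mirroring the record's data fields, is to map
onto a plain POJO instead of the generated class:

  import java.io.Serializable;
  import org.apache.spark.sql.Dataset;
  import org.apache.spark.sql.Encoder;
  import org.apache.spark.sql.Encoders;

  // Hypothetical flat bean: only data fields, and no getter returns a
  // self-referential type, so Encoders.bean can derive a schema for it.
  public class EventPojo implements Serializable {
      private String id;
      private long timestamp;
      public String getId() { return id; }
      public void setId(String id) { this.id = id; }
      public long getTimestamp() { return timestamp; }
      public void setTimestamp(long timestamp) { this.timestamp = timestamp; }
  }

  Encoder<EventPojo> encoder = Encoders.bean(EventPojo.class);
  Dataset<EventPojo> ds = sparkSession.read().parquet(filename).as(encoder);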