What version of Java?
On Feb 1, 2018 11:30 AM, "Mihai Iacob" wrote:
> I am setting up a Spark 2.2.1 cluster; however, when I bring up the master
> and workers (both on Spark 2.2.1) I get this error. I tried Spark 2.2.0 and
> get the same error. It works fine on Spark 2.0.2. Have you seen this
>
> --conf
> "spark.driver.extraJavaOptions=-Djava.security.auth.login.config=./key.conf"
> --conf
> "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./key.conf"
>
>
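For reference, a minimal sketch of the builder-style equivalent of those flags, assuming ./key.conf is a JAAS file shipped to every node (e.g. via --files key.conf); in practice the two extraJavaOptions settings normally have to be passed on spark-submit exactly as quoted above, because the driver JVM is already running by the time application code executes:

import org.apache.spark.sql.SparkSession

// Hypothetical programmatic equivalent of the --conf flags quoted above.
val spark = SparkSession.builder()
  .appName("kerberos-kafka-poc")  // placeholder app name
  .config("spark.driver.extraJavaOptions",
    "-Djava.security.auth.login.config=./key.conf")
  .config("spark.executor.extraJavaOptions",
    "-Djava.security.auth.login.config=./key.conf")
  .getOrCreate()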
> On Fri, Mar 31, 2017 at 1:58 AM, Bill Schwanitz wrote:
>
I'm working on a PoC Spark job to pull data from a Kafka topic with
Kerberos-enabled (required) brokers.
The code seems to connect to Kafka and enter a polling mode. When I toss
something onto the topic I get an exception which I just can't seem to
figure out. Any ideas?
I have a full gist up a
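Without the gist, here is a rough sketch of the kind of consumer being described, using the spark-streaming-kafka-0-10 direct stream; the topic, brokers and group id are placeholders, and the Kerberos credentials are assumed to come from the JAAS file passed via -Djava.security.auth.login.config=./key.conf:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val sparkConf = new SparkConf().setAppName("kerberos-kafka-poc")  // placeholder
val ssc = new StreamingContext(sparkConf, Seconds(5))

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "broker1.example.com:9092",   // placeholder brokers
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "poc-consumer",                        // placeholder group id
  "auto.offset.reset" -> "latest",
  // Kerberos-secured brokers: SASL, credentials taken from the JAAS file
  // supplied through -Djava.security.auth.login.config=./key.conf
  "security.protocol" -> "SASL_PLAINTEXT",
  "sasl.kerberos.service.name" -> "kafka"
)

// The direct stream polls the brokers each batch interval for new records.
val stream = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent, Subscribe[String, String](Seq("my-topic"), kafkaParams))

stream.foreachRDD { rdd =>
  rdd.map(_.value).take(10).foreach(println)
}

ssc.start()
ssc.awaitTermination()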
I have had similar issues with some of my Spark jobs, especially when doing
things like repartitioning.
spark.yarn.driver.memoryOverhead: driverMemory * 0.10, with a minimum of
384. The amount of off-heap memory (in megabytes) to be allocated per
driver in cluster mode. This is memory that accounts for things like VM
overheads, interned strings, and other native overheads.
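To make that concrete, a quick back-of-the-envelope in Scala (the 4 GB driver size is purely illustrative):

// Default overhead = max(10% of driver memory, 384 MB)
val driverMemoryMb = 4096                                             // e.g. --driver-memory 4g
val defaultOverheadMb = math.max((driverMemoryMb * 0.10).toInt, 384)  // 409 MB
// If off-heap use during repartitioning goes past this, YARN will typically
// kill the container; the usual fix is to raise it explicitly, e.g.
// --conf spark.yarn.driver.memoryOverhead=1024 on spark-submit.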
> //Test
> transformed.show(10)
>
> I hope that helps!
> Subhash
>
>
> On Wed, Mar 1, 2017 at 12:04 PM, Marco Mistroni
> wrote:
>
>> Hi, I think you need a UDF if you want to transform a column.
>> Hth
>>
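A minimal sketch of the UDF approach being suggested; the toy data frame, column names and the trim/lowercase transformation are placeholders, not from the original post:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, udf}

val spark = SparkSession.builder().appName("udf-example").getOrCreate()
import spark.implicits._

// Toy stand-in for the parquet-sourced data being discussed.
val df = Seq(("  Foo ", 1), ("BAR", 2)).toDF("value", "id")

// A UDF wraps an ordinary Scala function so it can be applied to a column.
val normalize = udf((s: String) => if (s == null) null else s.trim.toLowerCase)

val transformed = df.withColumn("value_clean", normalize(col("value")))

// Test
transformed.show(10)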
>> On 1 Mar 2017 4:22 pm, "Bill Schwanitz" wrote:
Hi all,
I'm fairly new to Spark and Scala so bear with me.
I'm working with a dataset containing a set of columns / fields. The data is
stored in HDFS as parquet and is sourced from a Postgres box, so fields and
values are reasonably well formed. We are in the process of trying out a
switch from pe