I have a custom hive-site.xml for Spark in Spark's conf directory.
These are the minimal properties that you need for Spark, I believe.
hive.metastore.kerberos.principal = copy from your hive-site.xml, e.g.
"hive/_h...@foo.com"
hive.metastore.uris = copy from your hive-site.xml, e.g.
thrift://...
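Something like this, for example (the principal and URI below are placeholders, use the values from your own cluster's hive-site.xml):

  <configuration>
    <!-- placeholders: copy the real values from your cluster's hive-site.xml -->
    <property>
      <name>hive.metastore.kerberos.principal</name>
      <value>hive/_HOST@EXAMPLE.COM</value>
    </property>
    <property>
      <name>hive.metastore.uris</name>
      <value>thrift://metastore.example.com:9083</value>
    </property>
  </configuration>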
I’m trying to create a DF for an external Hive table that is in HBase.
I get a NoSuchMethodError:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.initSerdeParams(Lorg/apache/hadoop/conf/Configuration;Ljava/util/Properties;Ljava/lang/String;)Lorg/apache/hadoop/hive/serde2/lazy/LazySimpleSerD
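For context, a minimal sketch of the kind of access that triggers it (the table name below is made up; the actual code may differ):

  // hypothetical external Hive table backed by HBase via the HBase storage handler
  val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
  val df = hiveContext.sql("select * from hbase_backed_table")
  df.show()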
Another thing to check is to make sure each one of your executor nodes has the
JCE jars installed.
// true only if the JCE unlimited-strength policy files are installed
try { javax.crypto.Cipher.getMaxAllowedKeyLength("AES") > 128 }
catch { case e: java.security.NoSuchAlgorithmException => false }
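If you want to verify it on the executors rather than just on the driver, a quick sketch along these lines should do it:

  // run the check inside tasks so it executes on the executor JVMs,
  // using far more tasks than executors so every node is likely hit
  val allHaveJce = sc.parallelize(1 to 1000, 100).map { _ =>
    try { javax.crypto.Cipher.getMaxAllowedKeyLength("AES") > 128 }
    catch { case _: java.security.NoSuchAlgorithmException => false }
  }.reduce(_ && _)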
Setting "-Dsun.security.krb5.debug=true” and “-Dsun.security.jgss.d
I have been running 1.5.1 with Hive in secure mode on HDP 2.2.4 without any
problems.
Doug
> On Oct 21, 2015, at 12:05 AM, Ajay Chander wrote:
>
> Hi Everyone,
>
> Does anyone have any idea if spark-1.5.1 is available as a service on Hortonworks?
> I have spark-1.3.1 installed on the cluster and
The error is because the shell is trying to resolve hdp.version and can’t.
To fix this, you need to put a file called java-opts in your conf directory
that has something like this:
-Dhdp.version=2.x.x.x
where 2.x.x.x is the version of HDP that you are using.
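For example (the version string below is only a placeholder, substitute your cluster's actual HDP build):

  echo "-Dhdp.version=2.2.4.2-2" > $SPARK_HOME/conf/java-opts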
Cheers,
Doug
> On Sep 24, 2015,
Hi Daniel,
Take a look at .coalesce()
I’ve seen good results by coalescing to num executors * 10, but I’m still
trying to figure out the optimal number of partitions per executor.
To get the number of executors: sc.getConf.getInt("spark.executor.instances", -1)
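Putting those together, a rough sketch (df here is whatever DataFrame you are about to write out, and the *10 factor is just what has worked for me):

  // assumes spark.executor.instances is set, i.e. dynamic allocation is not in use
  val numExecutors = sc.getConf.getInt("spark.executor.instances", -1)
  val coalesced = df.coalesce(numExecutors * 10)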
Cheers,
Doug
> On Jul 20, 2015,
If you run Hadoop in secure mode and want to talk to Hive 0.14, it won’t work;
see SPARK-5111.
I have a patched version of 1.3.1 that I’ve been using.
I haven’t had the time to get 1.4.0 working.
Cheers,
Doug
> On Jun 19, 2015, at 8:39 AM, ayan guha wrote:
>
> I think you can get spark 1.4
04 PM, Yin Huai wrote:
> Hi Doug,
>
> sqlContext.table does not officially support database name. It only supports
> table name as the parameter. We will add a method to support database name in
> future.
>
> Thanks,
>
> Yin
>
> On Thu, Jun 4, 2015 at 8:10 AM,
>
>
> On Wed, Jun 3, 2015 at 10:45 AM, Doug Balog wrote:
> Hi,
>
> sqlContext.table("db.tbl") isn’t working for me, I get a NoSuchTableException.
>
> But I can access the table via
>
> sqlContext.sql("select * from db.tbl")
>
> So I know it has the tabl
Hi,
sqlContext.table("db.tbl") isn’t working for me, I get a NoSuchTableException.
But I can access the table via
sqlContext.sql("select * from db.tbl")
So I know it has the table info from the metastore.
Anyone else see this?
I’ll keep digging.
I compiled via make-distribution.sh -Pyarn
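A possible workaround sketch until .table() handles qualified names (assuming a HiveContext):

  // the SQL path resolves db-qualified names even though .table() does not
  val df = sqlContext.sql("select * from db.tbl")
  // or switch the current database first, then use the bare table name
  sqlContext.sql("use db")
  val df2 = sqlContext.table("tbl")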
I bet you are running on YARN in cluster mode.
If you are running on YARN in client mode,
.set("spark.yarn.maxAppAttempts", "1") works as you expect,
because YARN doesn’t start your app on the cluster until you call
SparkContext().
But if you are running on YARN in cluster mode, the driver program itself is
launched by YARN inside the application, so setting it from your code is too late.
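Roughly, the difference looks like this (a sketch, assuming YARN):

  import org.apache.spark.{SparkConf, SparkContext}

  // client mode: this takes effect, because the YARN application
  // is not submitted until the SparkContext is created
  val conf = new SparkConf().set("spark.yarn.maxAppAttempts", "1")
  val sc = new SparkContext(conf)

  // cluster mode: pass it to spark-submit instead, so it is set before submission, e.g.
  //   spark-submit --conf spark.yarn.maxAppAttempts=1 ...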
The "best" solution to spark-shell’s problem is creating a file
$SPARK_HOME/conf/java-opts
with "-Dhdp.version=2.2.0.0-2014"
Cheers,
Doug
> On Mar 28, 2015, at 1:25 PM, Michael Stone wrote:
>
> I've also been having trouble running 1.3.0 on HDP. The
> spark.yarn.am.extraJavaOptions -Dhdp.ve
Anybody have an opinion?
Doug
> On Mar 19, 2015, at 5:51 PM, Doug Balog wrote:
>
> I’m seeing the same problem.
> I’ve set logging to DEBUG, and I think some hints are in the “Yarn AM launch
> context” that is printed out
> before Yarn runs java.
>
> My next step is
I’m seeing the same problem.
I’ve set logging to DEBUG, and I think some hints are in the “Yarn AM launch
context” that is printed out
before Yarn runs java.
My next step is to talk to the admins and get them to set
yarn.nodemanager.delete.debug-delay-sec
in the config, as recommended in
htt
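For reference, that setting looks something like this in yarn-site.xml (3600 seconds is an arbitrary choice):

  <property>
    <name>yarn.nodemanager.delete.debug-delay-sec</name>
    <value>3600</value>
  </property>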