Hi,
I have Spark ThriftServer up and running on HTTP. Where can I find the
steps to set up Spark ThriftServer on HTTPS?
Regards
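I don't know of a single step-by-step page for this, but since the Spark
Thrift server reuses the HiveServer2 configuration, the usual HiveServer2
HTTP/SSL properties should apply. A sketch only; the port, keystore path
and password below are placeholders, not values from this thread:

```shell
# Start the Thrift server in HTTP mode with SSL enabled
# (keystore path/password are placeholders for your own deployment)
./sbin/start-thriftserver.sh \
  --hiveconf hive.server2.transport.mode=http \
  --hiveconf hive.server2.thrift.http.port=10001 \
  --hiveconf hive.server2.use.SSL=true \
  --hiveconf hive.server2.keystore.path=/path/to/keystore.jks \
  --hiveconf hive.server2.keystore.password=changeit
```

The same properties can equally be placed in a hive-site.xml on Spark's
classpath instead of being passed on the command line.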
I fixed the issue (no permission on the keytab file), please ignore.
On Sun, Aug 13, 2017 at 9:42 AM, Ascot Moss wrote:
Hi,
Spark: 2.1.0
Hive: 2.1.1
When starting the Thrift server, I got the following error:
How can I fix it?
Regards
(error log)
17/08/13 09:28:17 DEBUG client.IsolatedClientLoader: shared class:
java.lang.NoSuchFieldError
17/08/13 09:28:17 DEBUG client.IsolatedClientLoader: shared class:
org.apac
Spark creates one connection for each query. The behavior you observed is
due to how "nc -lk" works. If you use `netstat` to check the TCP
connections, you will see two connections when two queries are started.
However, "nc" forwards its input to only one of those connections.
On Fri, Aug 11, 2017 a
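The behavior described above can be reproduced without Spark at all. A
minimal, self-contained sketch in plain Python sockets (nothing here is
Spark API): a listener accepts two connections, standing in for two
queries, but writes its input to only the first one, so the second reader
receives nothing:

```python
import socket
import threading

# A listener that, like "nc -lk", accepts every incoming connection but
# forwards its input to only the first one.
HOST = "127.0.0.1"
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, 0))          # port 0: let the OS pick a free port
server.listen(2)
port = server.getsockname()[1]

received = {}

def query(name):
    # Each Spark query opens its own TCP connection to the source.
    with socket.create_connection((HOST, port)) as s:
        s.settimeout(1.0)
        try:
            received[name] = s.recv(1024).decode()
        except socket.timeout:
            received[name] = ""  # never got any data

t1 = threading.Thread(target=query, args=("query1",))
t1.start()
conn1, _ = server.accept()       # first query's connection
t2 = threading.Thread(target=query, args=("query2",))
t2.start()
conn2, _ = server.accept()       # second connection is accepted too...
conn1.sendall(b"hello")          # ...but the input goes to only one of them
conn1.close()

t1.join(); t2.join()
conn2.close(); server.close()
print(received)                  # query1 saw "hello"; query2 saw nothing
```

`netstat` run while this toy server is up would likewise show two
established connections, just as with two streaming queries against one
`nc -lk` instance.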
I have not tried the Oracle 12c In-Memory option. However, if the objects are
created as views, I guess the data would need to be stored in regular tables
first?
The comparison of these technologies will not be just about whether the
data are stored in columnar format in memory, but also about how to
Splendid, Dylan, thanks.
In a typical star schema you have FACT and DIMENSION tables through which
one uses analytical functions to slice and dice, so to speak.
Does Incorta use similar concepts but in memory? If that is the case, can one
perform similar operations in memory? For example, Oracle 12c in
Yes, it is implemented and has already gone live at several big companies in
the Bay Area.
Spark Python is used as the language for the typical data transformation
jobs when necessary. It is entirely optional.
The data are stored in an Incorta proprietary format when they are presented
in memo
Hi,
I have a requirement where I have to read a dynamic nested JSON for the
schema and need to check the data quality based on that schema.
That is, I get the details from a JSON, e.g. column 1 should be a string of
a given length, and so on. This is a dynamic and nested JSON, so
traditionally I would have to loop over the JSON
--
Regards,
Sudhir K
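One way to approach this, sketched in plain Python. The schema layout,
field names, and rule keys below are assumptions for illustration, not
from the thread: the expected type and length for each field are described
in a nested JSON document, and a recursive walk checks a record against
it, so the nesting depth never has to be hard-coded:

```python
import json

# Hypothetical rule format: a leaf rule is a dict with a "type" key and an
# optional "max_length"; any other dict is a nested group of fields.
SCHEMA = json.loads("""
{
  "name":    {"type": "string", "max_length": 10},
  "age":     {"type": "int"},
  "address": {
    "city": {"type": "string", "max_length": 20},
    "zip":  {"type": "string", "max_length": 5}
  }
}
""")

TYPES = {"string": str, "int": int}

def is_rule(node):
    # Distinguish a leaf rule from a nested group of fields.
    return isinstance(node, dict) and "type" in node

def check_record(schema, record, path=""):
    """Return a list of violations like 'address.zip: longer than 5'."""
    errors = []
    for field, node in schema.items():
        here = f"{path}{field}"
        value = record.get(field) if isinstance(record, dict) else None
        if is_rule(node):
            if not isinstance(value, TYPES[node["type"]]):
                errors.append(f"{here}: expected {node['type']}")
            elif "max_length" in node and len(value) > node["max_length"]:
                errors.append(f"{here}: longer than {node['max_length']}")
        else:
            # Recurse into the nested part of the schema.
            errors.extend(check_record(node, value or {}, here + "."))
    return errors

good = {"name": "Ann", "age": 40,
        "address": {"city": "Hong Kong", "zip": "00000"}}
bad  = {"name": "A very long name indeed", "age": "forty",
        "address": {"zip": "999999"}}

print(check_record(SCHEMA, good))  # []
print(check_record(SCHEMA, bad))
```

In a Spark job the same walk could be applied per row, or used to build up
a set of column expressions, but that wiring is deployment-specific and
omitted here.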
You don't have to go through Hive. It's just Spark SQL. The application is
just a forked Hive Thrift server.
On Fri, Aug 11, 2017 at 8:53 PM kant kodali wrote:
> @Ryan it looks like if I enable thrift server I need to go through hive. I
> was talking more about having JDBC connector for Spark SQ
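For completeness: the Thrift server speaks the standard HiveServer2 JDBC
protocol, so the `beeline` client bundled with Spark can connect to it
directly; the host and port below are placeholders for your own deployment:

```shell
# Connect to the Spark Thrift server with the bundled beeline client;
# queries run as Spark SQL, no separate Hive service is involved.
./bin/beeline -u "jdbc:hive2://localhost:10000" -e "SHOW TABLES;"
```

Any generic JDBC tool pointed at the same `jdbc:hive2://...` URL with the
Hive JDBC driver should work the same way.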
I tried to start the Spark Thrift server but got the following error:
javax.security.sasl.SaslException: GSS initiate failed [Caused by
GSSException: No valid credentials provided (Mechanism level: Failed to
find any Kerberos tgt)]
java.io.IOException: javax.security.sasl.SaslException: GSS initiate fail
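Given the fix mentioned earlier in this digest (missing read permission on
the keytab file), a short checklist for this kind of GSS error; the keytab
path and principal below are placeholders, not values from this thread:

```shell
# 1. The user starting the Thrift server must be able to read the keytab.
ls -l /etc/security/keytabs/spark.keytab

# 2. Obtain a TGT from the keytab to confirm it is usable.
kinit -kt /etc/security/keytabs/spark.keytab spark/host.example.com@REALM

# 3. Confirm a valid ticket now exists in the credential cache.
klist
```

If `kinit` fails here, the Thrift server will fail the same way, so this
isolates keytab/permission problems from Spark configuration problems.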