Your Driver seems to be OK.
hive_driver: com.cloudera.hive.jdbc41.HS2Driver
However, this is the SQL error you are getting:
Caused by: com.cloudera.hiveserver2.support.exceptions.GeneralException:
[Cloudera][HiveJDBCDriver](500051) ERROR processing query/statement. Error
Code: 4, SQL state: TS
Hi,
My environment is set up OK with the packages PySpark needs, including
PyYAML version 5.4.1.
In YARN or local mode, a simple skeleton test I have set up picks up yaml.
However, with the Docker image, or when the image is used inside Kubernetes, it fails.
This is the code used to test:
import sys
import yaml  # this import is what fails inside the Docker image
print(yaml.__version__)
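One way to narrow this down is to probe for the module inside the image before submitting any Spark job. A minimal sketch using the standard library (the helper name is mine, not from the thread):

```python
import importlib.util

def module_available(name: str) -> bool:
    # True if `name` can be imported by this interpreter -- useful for
    # checking that a Docker image actually bundles PyYAML before a
    # Spark job running in Kubernetes tries to import it.
    return importlib.util.find_spec(name) is not None

print(module_available("sys"))  # stdlib module, always True
```

Running this with `python -c` inside the container (or as a tiny init step of the driver script) tells you immediately whether PyYAML made it into the image or whether the failure is elsewhere, e.g. in PYTHONPATH.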
I have been trying to create a table in Hive from Spark itself.
In local mode it works. What I am trying here is to create a managed
table in Hive (on another Spark cluster, basically CDH) from Spark
standalone, using JDBC mode.
When I try that, below is the error I am facing.
On Thu, 15 J
As Mich mentioned, there is no need to use the JDBC API; the
DataFrameWriter's saveAsTable method is the way to go. The JDBC driver
is for a JDBC client (a Java client, for instance) to access Hive tables
in Spark via the Thrift server interface.
-- ND
On 7/19/21 2:42 AM, Badrinath Patchikolla wrote: