Hi,
I am on Spark 1.6. I get an error when I try to run a Hive query in Spark
that joins an ORC table with a non-ORC table in Hive.
The error is below; any help would be appreciated.
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
TungstenExchange hashpartition
Hi Sjoerd,
We've added the kafka.group.id config in Spark 3.0...

kafka.group.id (string, default: none; streaming and batch queries)
    The Kafka group id to use in the Kafka consumer while reading from
    Kafka. Use this with caution. By default, each query generates a
    unique group id for reading data. This ensures that each Kafk…
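For reference, wiring that option into a reader might look roughly like the sketch below. The option keys (kafka.group.id, kafka.bootstrap.servers, subscribe) are real Spark settings, but the helper function, broker address, topic, and group name are purely illustrative.

```python
# Sketch only: builds the option map you would pass to
# spark.readStream.format("kafka").options(**opts) on Spark 3.0+.
def kafka_reader_options(bootstrap_servers, topic, group_id=None):
    """Assemble options for a Spark Kafka source."""
    opts = {
        "kafka.bootstrap.servers": bootstrap_servers,
        "subscribe": topic,
    }
    if group_id is not None:
        # Available from Spark 3.0: pin the consumer group id instead of
        # letting each query generate a unique one.
        opts["kafka.group.id"] = group_id
    return opts

opts = kafka_reader_options("broker:9092", "events", group_id="analytics-team")
```

As the docs warn, use this with caution: sharing one group id across queries can cause offset-tracking interference.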
This is exactly the issue I am fighting against. In many organizations,
auto-generated group ids are against policy, so another solution is necessary.
From: Spico Florin
Sent: Tuesday, March 24, 2020 11:23:29 AM
To: Sethupathi T
Cc: Sjoerd van Leent ; user@spark.apache
Hello!
Maybe you can find more information on the same issue reported here:
https://jaceklaskowski.gitbooks.io/spark-structured-streaming/spark-sql-streaming-KafkaSourceProvider.html
validateGeneralOptions makes sure that group.id has not been specified and
reports an IllegalArgumentException otherwise.