Hi Kun,
You can use the following Spark property when launching the app
instead of enabling it manually in the code:
spark.sql.catalogImplementation=hive
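For example, a launch command might look like this (a sketch; the master URL and script name are placeholders, not from the thread):

```shell
# Pass the catalog implementation at submit time instead of in code.
spark-submit \
  --master yarn \
  --conf spark.sql.catalogImplementation=hive \
  app.py
```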
Kind Regards
Harsh
On Tue, May 26, 2020 at 9:55 PM Kun Huang (COSMOS)
wrote:
Hi Spark experts,
I am seeking an approach to enable Hive support manually on an existing
Spark session.
Currently, HiveContext seems the best fit for my scenario. However, that class
has already been marked as deprecated, and the recommended alternative is
SparkSession.builder.enableHiveSupport.
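For reference, the recommended pattern looks roughly like this (a minimal PySpark sketch; the app name is a placeholder, and note that Hive support must be requested before the session is first created):

```python
from pyspark.sql import SparkSession

# enableHiveSupport() must be set when the session is built; it cannot
# be toggled on a session that is already running, which is why the
# builder-based approach is recommended over the deprecated HiveContext.
spark = (
    SparkSession.builder
    .appName("hive-enabled-app")  # placeholder app name
    .enableHiveSupport()
    .getOrCreate()
)

# With Hive support enabled, the catalog implementation is "hive".
print(spark.conf.get("spark.sql.catalogImplementation"))
```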
Thanks. I missed that part of the documentation. Appreciate your help. Regards.
On Mon, May 25, 2020 at 10:42 PM Jungtaek Lim
wrote:
> Hi,
>
> You need to add the prefix "kafka." for the configurations which should be
> propagated to Kafka; the others are used by the Spark data source
> itself. (Kafka
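The prefix rule above can be sketched as follows (assuming PySpark with the Kafka source; the broker address, topic, and security setting are placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-prefix-demo").getOrCreate()

# Options prefixed with "kafka." are passed through to the underlying
# Kafka consumer; unprefixed options configure the Spark source itself.
df = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # -> Kafka client
    .option("kafka.security.protocol", "SASL_SSL")        # -> Kafka client
    .option("subscribe", "my-topic")                      # -> Spark source
    .option("startingOffsets", "earliest")                # -> Spark source
    .load()
)
```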
Hmm... how would they reach Grafana if they are not being computed in
your code? I am talking about the application-specific accumulators. The
other standard counters, such as 'event.progress.inputRowsPerSecond', are
getting populated correctly!
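To illustrate the point about application-specific accumulators (a hedged sketch of my reading of the thread, not the original poster's code): a custom accumulator only reports a value if the application code actually adds to it, whereas standard counters like inputRowsPerSecond are maintained by Spark itself.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("accumulator-demo").getOrCreate()
sc = spark.sparkContext

# A custom accumulator stays at zero unless your own code updates it.
error_count = sc.accumulator(0)

def track(record):
    # Placeholder condition; substitute your application's own logic.
    if record is None:
        error_count.add(1)

# foreach is an action, so the accumulator updates actually run.
sc.parallelize([1, None, 2, None]).foreach(track)
print(error_count.value)  # 2
```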
On Mon, May 25, 2020 at 8:39 PM Srinivas V wrote: