Re: How to enable hive support on an existing Spark session?

2020-05-26 Thread HARSH TAKKAR
Hi Kun,

You can use the following Spark property while launching the app, instead of manually enabling it in the code:

spark.sql.catalogImplementation=hive

Kind Regards
Harsh

On Tue, May 26, 2020 at 9:55 PM Kun Huang (COSMOS) wrote:
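A minimal sketch of the launch-time approach Harsh describes, assuming a spark-submit deployment (the class name, jar name, and master are placeholders):

```
# Setting the catalog implementation at launch time means the session
# created by SparkSession.builder.getOrCreate() is Hive-enabled from the
# start, without calling enableHiveSupport() in code.
spark-submit \
  --master yarn \
  --conf spark.sql.catalogImplementation=hive \
  --class com.example.MyApp \
  my-app.jar
```

This only helps at session creation; it does not retrofit Hive support onto a session that already exists, which is the limitation the original question runs into.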

How to enable hive support on an existing Spark session?

2020-05-26 Thread Kun Huang (COSMOS)
Hi Spark experts,

I am seeking an approach to enable Hive support manually on an existing Spark session. Currently, HiveContext seems the best fit for my scenario. However, this class has already been marked as deprecated, and it is recommended to use SparkSession.builder.enableHiveSupport
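The recommended replacement mentioned in the message can be sketched as follows (a sketch assuming a Spark 2.x+ application and an available Hive metastore; the app name is a placeholder):

```scala
import org.apache.spark.sql.SparkSession

// Build (or retrieve) a session with Hive support enabled up front.
// Note: enableHiveSupport() only takes effect when the session is first
// created -- it cannot upgrade an already-created session, which is why
// the question about enabling it "on an existing session" is tricky.
val spark = SparkSession.builder()
  .appName("hive-enabled-app")   // placeholder app name
  .enableHiveSupport()           // replaces the deprecated HiveContext
  .getOrCreate()

spark.sql("SHOW TABLES").show()
```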

Re: RecordTooLargeException in Spark *Structured* Streaming

2020-05-26 Thread Something Something
Thanks. I missed that part of the documentation. Appreciate your help. Regards.

On Mon, May 25, 2020 at 10:42 PM Jungtaek Lim wrote:
> Hi,
>
> You need to add the prefix "kafka." for the configurations which should be
> propagated to Kafka. Others will be used in the Spark data source itself.
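A sketch of the prefix rule Jungtaek describes, applied to the write side where RecordTooLargeException originates (assumes an existing SparkSession `spark`, a streaming DataFrame `df` with the Kafka-expected columns, and placeholder broker, topic, and checkpoint values):

```scala
// Options prefixed with "kafka." have the prefix stripped and are passed
// through to the underlying Kafka client; unprefixed options such as
// "topic" are consumed by the Spark Kafka sink itself.
val query = df.writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:9092")   // -> bootstrap.servers
  .option("kafka.max.request.size", "20971520")      // producer config that
                                                     // governs the record-size
                                                     // limit behind the exception
  .option("topic", "output")                         // Spark sink option
  .option("checkpointLocation", "/tmp/ckpt")         // placeholder path
  .start()
```

Raising max.request.size on the producer side is only effective if the broker's corresponding limits (e.g. message.max.bytes) also allow the larger records.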

Re: Using Spark Accumulators with Structured Streaming

2020-05-26 Thread Something Something
Hmm... how would they get to Grafana if they are not being computed in your code? I am talking about the application-specific accumulators. The other standard counters, such as 'event.progress.inputRowsPerSecond', are getting populated correctly!

On Mon, May 25, 2020 at 8:39 PM Srinivas V wrote:
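The distinction being made can be sketched as follows: a named, application-specific accumulator is only populated if the job's own logic calls add() inside a task, unlike the standard progress metrics that Spark emits automatically (a sketch assuming a Spark runtime and a reachable socket source; host, port, and accumulator name are placeholders):

```scala
import org.apache.spark.sql.{Dataset, SparkSession}

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._

// Named accumulator; shows up in the Spark UI under stage metrics.
val eventCount = spark.sparkContext.longAccumulator("eventCount")

val lines: Dataset[String] = spark.readStream
  .format("socket")
  .option("host", "localhost")   // placeholder source
  .option("port", 9999)
  .load()
  .as[String]

// add() must run inside the task on the executors; without a call like
// this, the accumulator stays at zero and nothing reaches a dashboard.
val tagged = lines.map { line => eventCount.add(1); line }
```

For exporting such values per micro-batch, reading the accumulator from a StreamingQueryListener callback on the driver is a common pattern, since accumulator updates are only merged back to the driver as tasks complete.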