Our bootstrap is a bit messy, so in our case, no. In the general case, yes.

On 9 May 2017 at 16:56, Pushkar Gujar wrote:
>
> df = spark.sqlContext.read.csv('out/df_in.csv')
>
Shouldn't this be just:

df = spark.read.csv('out/df_in.csv')

SparkSession itself is the entry point to DataFrame and SQL functionality.
Thank you,
*Pushkar Gujar*
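Since Spark 2.0 the `SparkSession` is indeed the unified entry point, and `spark.read` returns a `DataFrameReader` directly. A minimal sketch of the suggested form (the path `out/df_in.csv` comes from the thread; the `header` and `inferSchema` options are illustrative assumptions, and running this requires a local Spark installation):

```python
from pyspark.sql import SparkSession

# Build (or reuse) the session -- the single entry point for DataFrame
# and SQL functionality since Spark 2.0; no separate SQLContext needed.
spark = SparkSession.builder.appName("csv-read-sketch").getOrCreate()

# Read the CSV straight off the session's DataFrameReader.
# header/inferSchema are illustrative options, not taken from the thread.
df = spark.read.csv("out/df_in.csv", header=True, inferSchema=True)

df.printSchema()
```

The older `SQLContext`-based path is kept for backward compatibility; both routes resolve to a `DataFrameReader` over the same session.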
On Tue, May 9, 2017 at 6:09 PM, Mark Hamstra wrote:
Sounds like it is related to https://github.com/apache/spark/pull/17916
We will allow picking up the internal one if that one gets merged.
On 10 May 2017 7:09 am, "Mark Hamstra" wrote:
Looks to me like it is a conflict between a Databricks library and Spark
2.1. That's an issue for Databricks to resolve or provide guidance.
On Tue, May 9, 2017 at 2:36 PM, lucas.g...@gmail.com wrote:
I'm a bit confused by that answer; I'm assuming it's Spark deciding which lib to use.
On 9 May 2017 at 14:30, Mark Hamstra wrote:
This looks more like a matter for Databricks support than spark-user.
On Tue, May 9, 2017 at 2:02 PM, lucas.g...@gmail.com wrote:
> df = spark.sqlContext.read.csv('out/df_in.csv')
>
> 17/05/09 15:51:29 WARN ObjectStore: Version information not found in
> metastore. hive.metastore.schema.v
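The truncated `ObjectStore` warning above is Hive's metastore version check firing. A hedged sketch of one common way to address it, assuming the truncated property is the standard Hive setting `hive.metastore.schema.verification` (that completion, and the builder configuration below, are my assumptions, not something prescribed in this thread):

```python
from pyspark.sql import SparkSession

# Sketch: supply Hive metastore settings when building the session.
# "hive.metastore.schema.verification" is a standard Hive property;
# disabling it is a common workaround for the version-check warning,
# assumed here rather than taken from the thread.
spark = (
    SparkSession.builder
    .appName("metastore-sketch")
    .config("hive.metastore.schema.verification", "false")
    .enableHiveSupport()
    .getOrCreate()
)
```

This requires a Spark build with Hive support; whether it applies on Databricks depends on which library wins the conflict discussed above.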