Basically, the implicit conversions that need it are RDD => Dataset and Seq
=> Dataset.
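
To make this concrete, here is a toy sketch (the names are illustrative,
not Spark's actual source) of why such a conversion has to live on an
instance: constructing a Dataset needs a live session, so the implicit
must close over one.

  import scala.language.implicitConversions

  class Dataset[T](session: Session, data: Seq[T])

  class Session { self =>
    object implicits {
      // the conversion closes over `self`; as a static import there
      // would be no session to hand to the Dataset constructor
      implicit def seqToDataset[T](s: Seq[T]): Dataset[T] =
        new Dataset(self, s)
    }
  }

  object Demo extends App {
    val spark = new Session
    import spark.implicits._            // members of this particular instance
    val ds: Dataset[Int] = Seq(1, 2, 3) // conversion fires, bound to `spark`
  }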

On Fri, Oct 14, 2016 at 5:47 PM, Koert Kuipers <ko...@tresata.com> wrote:

> for example, when you do Seq(1,2,3).toDF("a"), it needs to get the
> SparkSession from somewhere. By importing the implicits from
> spark.implicits._ they have access to a SparkSession for operations like
> this.
>
> On Fri, Oct 14, 2016 at 4:42 PM, Jakub Dubovsky <
> spark.dubovsky.ja...@gmail.com> wrote:
>
>> Hey community,
>>
>> I would like to *educate* myself about why all *sql implicits* (most
>> notably the conversions to the Dataset API) are imported from an
>> *instance* of SparkSession rather than via static imports.
>>
>> With this design one runs into problems like this
>> <http://stackoverflow.com/questions/32453886/spark-sql-dataframe-import-sqlcontext-implicits>.
>> It requires the presence of the SparkSession instance (the only one we
>> have) in many parts of the code, which makes code structuring harder.
>>
>> I assume there is a *reason* why this design was *chosen*. Can somebody
>> please point me to a resource or explain why this is?
>> What is the advantage of this approach?
>> Or why is it not possible to implement it with static imports?
>>
>> Thanks a lot!
>>
>> Jakub
>>
>>
>
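
For completeness, the standard pattern looks like this (the master and
appName settings are just placeholders); note that the import has to come
from a stable identifier (a val), which is exactly the structuring pain
described in the linked question above.

  import org.apache.spark.sql.SparkSession

  object ImplicitsDemo {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .master("local[*]")
        .appName("implicits-demo")
        .getOrCreate()
      import spark.implicits._        // ok: `spark` is a val, a stable identifier
      val df = Seq(1, 2, 3).toDF("a") // toDF comes from the imported implicits
      df.show()

      // def session: SparkSession = spark
      // import session.implicits._  // does not compile: "stable identifier required"

      spark.stop()
    }
  }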
