Hi Michael,

I only see Spark 2.0.2, which is what I am using currently. Any idea when
2.1 will be released?

Thanks,
kant

On Mon, Nov 21, 2016 at 5:12 PM, Michael Armbrust <mich...@databricks.com>
wrote:

> In Spark 2.1 we've added a from_json
> <https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/functions.scala#L2902>
> function that I think will do what you want.
>
> On Fri, Nov 18, 2016 at 2:29 AM, kant kodali <kanth...@gmail.com> wrote:
>
>> This seems to work:
>>
>> import org.apache.spark.sql._
>> val rdd = df2.rdd.map { case Row(j: String) => j }
>> spark.read.json(rdd).show()
>>
>> However, I wonder if there is any inefficiency here, since I have to
>> apply this to billions of rows.
>>
>>
>
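For reference, a minimal sketch of the `from_json` approach mentioned above, assuming the Spark 2.1 API (the schema fields and the input column name `value` are illustrative, not from the thread):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types._

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._

// Illustrative schema; replace with the actual structure of your JSON strings.
val schema = new StructType()
  .add("id", LongType)
  .add("name", StringType)

// df2 is assumed to have a single string column ("value") holding JSON,
// as in the rdd-based workaround quoted above.
val parsed = df2.select(from_json($"value", schema).as("data"))

// Flatten the parsed struct into top-level columns.
parsed.select("data.*").show()
```

Unlike `spark.read.json(rdd)`, which triggers an extra pass over the data to infer the schema, `from_json` takes the schema up front and parses each row in place, which should matter at billions of rows.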
