[ https://issues.apache.org/jira/browse/SPARK-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15437130#comment-15437130 ]

Takeshi Yamamuro commented on SPARK-16998:
------------------------------------------

I checked the performance:
{code}
$ ./bin/spark-shell --master=local[1]

import org.apache.spark.sql.types._
import org.apache.spark.sql.Row

def timer[R](block: => R): R = {
  val t0 = System.nanoTime()
  val result = block
  val t1 = System.nanoTime()
  println("Elapsed time: " + ((t1 - t0) / 1000000000.0) + "s")
  result
}

val numArray = X // set to 1024, 2048, 4096, 8192, or 16384 in the runs below
val sqlCtx = new org.apache.spark.sql.SQLContext(sc)
val schema = StructType(
  StructField("c0", IntegerType) ::
  StructField("c1", ArrayType(IntegerType)) :: Nil)
val rdd = sc.parallelize(0 :: Nil, 1).flatMap { _ =>
  (0 until 1000).map(j => Row(j, (0 until numArray).toArray))
}
val df = sqlCtx.createDataFrame(rdd, schema).cache

// Materialize the cached input first so only the explode is timed
df.queryExecution.executedPlan(0).execute().foreach(_ => ())
timer {
  df.select($"c0", explode($"c1"))
    .queryExecution.executedPlan(2).execute().foreach(_ => ())
}
{code}

Performance results are as follows:
{code}
numArray: Int = 1024, Elapsed time: 0.485094303s
numArray: Int = 2048, Elapsed time: 1.78344344s
numArray: Int = 4096, Elapsed time: 7.037558308s
numArray: Int = 8192, Elapsed time: 26.498065697s
numArray: Int = 16384, Elapsed time: 117.13229056s
{code}
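As a quick sanity check on these numbers (a standalone sketch, independent of Spark; `GrowthRatios` is just an illustrative name), the ratio between consecutive timings stays close to 4x for each doubling of `numArray`:

```scala
// Growth check on the timings above: each doubling of numArray multiplies
// the elapsed time by roughly 3.7x-4.4x, which suggests quadratic rather
// than exponential growth in the array length.
object GrowthRatios {
  // Elapsed times (seconds) for numArray = 1024, 2048, 4096, 8192, 16384
  val times = Seq(0.485094303, 1.78344344, 7.037558308, 26.498065697, 117.13229056)

  // Ratio of each timing to the previous one
  def ratios: Seq[Double] =
    times.sliding(2).map { case Seq(a, b) => b / a }.toSeq

  def main(args: Array[String]): Unit = {
    ratios.foreach(r => println(f"x$r%.2f")) // roughly 3.7, 3.9, 3.8, 4.4
    assert(ratios.forall(r => r > 3.0 && r < 5.0))
  }
}
```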

The elapsed time grew quadratically with `numArray`: doubling the array size roughly quadruples the time, since both the number of exploded output rows and the size of the row copied for each of them scale with `numArray`.
It seems the root cause of this bottleneck is the many `JoinedRow` object copies made in `GenerateExec` when join=true.
However, there is no simple way to fix this.
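The copy-per-output-row pattern can be sketched in isolation (a simplified model with made-up names, `ParentRow` and `copyRow`, not Spark's actual `GenerateExec` code): each of the n exploded rows copies a parent row whose array field itself holds n elements, so the copied data grows as n^2.

```scala
// Simplified model (illustrative only) of the copy cost in explode-with-join.
object ExplodeCopyModel {
  // Stands in for a row with a scalar column c0 and an array column c1.
  final class ParentRow(val c0: Int, val c1: Array[Int]) {
    // Mimics a JoinedRow copy: materializes the whole row, array included.
    def copyRow(): ParentRow = new ParentRow(c0, c1.clone())
  }

  var copiedCells = 0L // total array cells copied so far

  // explode with join = true: one parent-row copy per emitted element.
  def explodeWithJoin(row: ParentRow): Seq[(ParentRow, Int)] =
    row.c1.toSeq.map { e =>
      copiedCells += row.c1.length
      (row.copyRow(), e)
    }

  def main(args: Array[String]): Unit = {
    copiedCells = 0L
    val n = 1000
    explodeWithJoin(new ParentRow(0, (0 until n).toArray))
    // n output rows, each copying an n-element array => n * n cells copied.
    assert(copiedCells == n.toLong * n)
    println(copiedCells) // prints 1000000
  }
}
```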

> select($"column1", explode($"column2")) is extremely slow
> ---------------------------------------------------------
>
>                 Key: SPARK-16998
>                 URL: https://issues.apache.org/jira/browse/SPARK-16998
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: TobiasP
>
> Using a Dataset containing 10,000 rows, each containing null and an array of
> 5,000 Ints, I observe the following performance (in local mode):
> {noformat}
> scala> time(ds.select(explode($"value")).sample(false, 0.0000001, 1).collect)
> 1.219052 seconds
> res9: Array[org.apache.spark.sql.Row] = Array([3761], [3766], [3196])
> scala> time(ds.select($"dummy", explode($"value")).sample(false, 0.0000001,
> 1).collect)
> 20.219447 seconds
> res5: Array[org.apache.spark.sql.Row] = Array([null,3761], [null,3766],
> [null,3196])
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
