Hi Michael,

I got the output you asked for. Note that I manually edited the table name
and the field names to hide some sensitive information. You can see that the
unresolved attribute 'm.id survives into the physical plan, where the Filter
above the CartesianProduct still references it; that matches the
UnresolvedAttribute exception thrown at runtime.
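
For reference, the plans below are the output of the command you suggested,
run against the failing query (a sketch with the edited names, since the
real identifiers differ):

println(hql("select s.id from m join s on (s.id=m.id)").queryExecution)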

== Logical Plan ==
Project ['s.id]
 Join Inner, Some((id#106 = 'm.id))
  Project [id#96 AS id#62]
   MetastoreRelation test, m, None
  MetastoreRelation test, s, Some(s)

== Optimized Logical Plan ==
Project ['s.id]
 Join Inner, Some((id#106 = 'm.id))
  Project []
   MetastoreRelation test, m, None
  Project [id#106]
   MetastoreRelation test, s, Some(s)

== Physical Plan ==
Project ['s.id]
 Filter (id#106:0 = 'm.id)
  CartesianProduct
   HiveTableScan [], (MetastoreRelation test, m, None), None
   HiveTableScan [id#106], (MetastoreRelation test, s, Some(s)), None

Best Regards,

Jerry



On Thu, Jul 10, 2014 at 7:16 PM, Michael Armbrust <mich...@databricks.com>
wrote:

> Hi Jerry,
>
> Thanks for reporting this.  It would be helpful if you could provide the
> output of the following command:
>
> println(hql("select s.id from m join s on (s.id=m.id)").queryExecution)
>
> Michael
>
>
> On Thu, Jul 10, 2014 at 8:15 AM, Jerry Lam <chiling...@gmail.com> wrote:
>
>> Hi Spark developers,
>>
>> I have the following HQL queries for which Spark throws exceptions of
>> this kind:
>> 14/07/10 15:07:55 INFO TaskSetManager: Loss was due to org.apache.spark.TaskKilledException [duplicate 17]
>> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0:736 failed 4 times, most recent failure: Exception failure in TID 167 on host etl2-node05: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: No function to evaluate expression. type: UnresolvedAttribute, tree: 'm.id
>>         org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute.eval(unresolved.scala:59)
>>         org.apache.spark.sql.catalyst.expressions.Equals.eval(predicates.scala:151)
>>         org.apache.spark.sql.execution.Filter$$anonfun$2$$anonfun$apply$1.apply(basicOperators.scala:52)
>>         org.apache.spark.sql.execution.Filter$$anonfun$2$$anonfun$apply$1.apply(basicOperators.scala:52)
>>         scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:390)
>>         scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>>         scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>>         scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>>         scala.collection.Iterator$class.foreach(Iterator.scala:727)
>>         scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>>         scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>>         scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>>         scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>>         scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>>         scala.collection.AbstractIterator.to(Iterator.scala:1157)
>>         scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>>         scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>>         scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>>         scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>>         org.apache.spark.rdd.RDD$$anonfun$15.apply(RDD.scala:717)
>>         org.apache.spark.rdd.RDD$$anonfun$15.apply(RDD.scala:717)
>>         org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1080)
>>         org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1080)
>>         org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
>>         org.apache.spark.scheduler.Task.run(Task.scala:51)
>>         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
>>         java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>         java.lang.Thread.run(Thread.java:662)
>>
>> The HQL looks like this (I trimmed it down to the essentials to
>> demonstrate the potential bug; the actual join is more complex and
>> irrelevant here):
>>
>> val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
>> import hiveContext._
>> hql("USE test")
>> hql("select id from m").registerAsTable("m")
>> hql("select s.id from m join s on (s.id=m.id
>> )").collect().foreach(println)
>>
>> Apparently, Spark is unable to resolve m.id in "(s.id=m.id)". If I
>> change it to:
>> hql("select m_id from m").registerAsTable("m")
>> hql("select s.id from m join s on (s.id
>> =m_id)").collect().foreach(println)
>>
>> It will work. Am I doing something wrong, or is this a bug in Spark SQL?
>> (An alias-based variant of the workaround is sketched below.)
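>>
>> For completeness, I believe the same workaround can be written with an
>> explicit alias instead of a pre-renamed column (an untested sketch; the
>> alias name m_id is only illustrative):
>>
>> hql("select id as m_id from m").registerAsTable("m")
>> hql("select s.id from m join s on (s.id=m_id)").collect().foreach(println)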
>>
>> Best Regards,
>>
>> Jerry
>>
>>
>
