Hi Moon,

Thanks for the clarification.

In that case, my understanding is that spark.home is effectively being
deprecated. If that is true, my suggestion would be to take it out of the
interpreter UI, as otherwise it creates a lot of confusion.

Regarding reproducing the split function problem with SPARK_HOME not
specified, here is some background. I originally tried it in the rzeppelin
branch (the R interpreter for Zeppelin), which I'm helping Amos Elberg to
test and debug, and the functionality was not working there. However, after
I saw your mail, I tried the same thing in the main Zeppelin branch and it
worked fine there. So it was my mistake. I'll follow up with you and Amos
in a separate thread with the details of what we tested for the rzeppelin
interpreter.

Regards,
Sourav



On Wed, Nov 11, 2015 at 7:01 PM, moon soo Lee <m...@apache.org> wrote:

> Hi Sourav,
>
> From 0.5.5-incubating (currently in vote), it is recommended to export
> SPARK_HOME so that Zeppelin uses the spark-submit command internally.
> In this case, spark.home is not effective.
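>
> For example, a minimal sketch of that setup (the path below is only an
> illustration; point it at your actual Spark installation):
>
>   # in conf/zeppelin-env.sh (example path, adjust for your environment)
>   export SPARK_HOME=/path/to/spark
>
> With SPARK_HOME exported, Zeppelin uses spark-submit internally and the
> spark.home interpreter property has no effect.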
>
> But I cannot reproduce the same error with the split function in Spark
> SQL, even without SPARK_HOME.
> Could you tell me how to reproduce the problem?
>
> Thanks,
> moon
>
> On Thu, Nov 12, 2015 at 12:46 AM Sourav Mazumder <
> sourav.mazumde...@gmail.com> wrote:
>
>> Hi,
>>
>> When I try to execute the split function in Spark SQL,
>> I get the following error in some situations.
>>
>> java.util.NoSuchElementException: key not found: split
>>   at scala.collection.MapLike$class.default(MapLike.scala:228)
>>   at scala.collection.AbstractMap.default(Map.scala:58)
>>   at scala.collection.mutable.HashMap.apply(HashMap.scala:64)
>>   at org.apache.spark.sql.catalyst.analysis.StringKeyHashMap.apply(FunctionRegistry.scala:92)
>>   at org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry.lookupFunction(FunctionRegistry.scala:57)
>>   at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$13$$anonfun$applyOrElse$5.applyOrElse(Analyzer.scala:465)
>>   at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$13$$anonfun$applyOrElse$5.applyOrElse(Analyzer.scala:463)
>>   at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:222)
>>   at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:222)
>>   at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:51)
>>   at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:221)
>>   at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:242)
>>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>>   at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>>   at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>>   at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>>   at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>>   at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>>   at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>>   at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>>   at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>>   at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>>   at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:272)
>>   at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:227)
>>   at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionDown$1(QueryPlan.scala:75)
>>   at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1$$anonfun$apply$1.apply(QueryPlan.scala:90)
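>>
>> For reference, here is a minimal paragraph that exercises split and
>> triggers the function lookup above (myTable and its value column are
>> hypothetical stand-ins for any registered table):
>>
>>   %sql
>>   -- splits each row's value column on commas
>>   select split(value, ',') from myTable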
>>
>> Here are the situations in which it works and does not work:
>>
>> 1. Case 1: SPARK_HOME in zeppelin-env.sh is not specified, spark.home in
>> the interpreter UI is not specified - this does not work.
>> 2. Case 2: SPARK_HOME in zeppelin-env.sh is not specified, spark.home in
>> the interpreter UI is specified - this does not work.
>> 3. Case 3: SPARK_HOME in zeppelin-env.sh is specified, spark.home in
>> the interpreter UI is also specified - this works.
>> 4. Case 4: SPARK_HOME in zeppelin-env.sh is specified, spark.home in
>> the interpreter UI is not specified - this works.
>>
>> Any idea what is going on?
>>
>> I'm also wondering what the order of precedence is between SPARK_HOME in
>> zeppelin-env.sh and spark.home in the interpreter UI.
>>
>> Regards,
>> Sourav
>>
>
