Could you check the interpreter log?
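By default the interpreter writes its output to per-interpreter log files under the Zeppelin logs directory. A minimal sketch of where to look (the `ZEPPELIN_HOME` path and the log file name pattern are assumptions; adjust them to your install):

```shell
# Sketch, assuming a default install layout; paths may differ on your setup.
ZEPPELIN_HOME="${ZEPPELIN_HOME:-/opt/zeppelin}"
LOG_DIR="$ZEPPELIN_HOME/logs"

if [ -d "$LOG_DIR" ]; then
  # Show recent lines mentioning sparkr in the Spark interpreter log(s),
  # e.g. around the "sparkr is not responding" timestamp.
  grep -i "sparkr" "$LOG_DIR"/zeppelin-interpreter-spark-*.log 2>/dev/null | tail -n 20
else
  echo "log directory not found: $LOG_DIR"
fi
```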

Meethu Mathew <meethu.mat...@flytxt.com> wrote on Wednesday, January 3, 2018 at 3:05 PM:

> Hi,
>
> I have run into a strange issue running R notebooks in Zeppelin (0.7.2).
> The Spark interpreter is in per-note scoped mode, and the Spark version is 1.6.2.
>
> Please find the steps below to reproduce the issue:
> 1. Create a notebook (Note1) and run any R code in a paragraph. I ran the
> following code.
>
>> %r
>>
>> rdf <- data.frame(c(1,2,3,4))
>>
>> colnames(rdf) <- c("myCol")
>>
>> sdf <- createDataFrame(sqlContext, rdf)
>>
>> withColumn(sdf, "newCol", sdf$myCol * 2.0)
>>
>>
> 2. Create another notebook (Note2) and run any R code in a paragraph. I
> ran the same code as above.
>
> Up to this point, everything works fine.
>
> 3. Create a third notebook (Note3) and run any R code in a paragraph. I ran
> the same code. This notebook fails with the error:
>
>> org.apache.zeppelin.interpreter.InterpreterException: sparkr is not
>> responding
>
>
> What I understood from my analysis is that the process created for the
> sparkr interpreter is not killed properly, which makes every third
> notebook throw an error on execution. Once the sparkr interpreter is
> restarted, the process is killed and another two notebooks can be run
> successfully; that is, every third notebook run using the sparkr
> interpreter hits this error. We suspect this is a limitation in Zeppelin.
>
> Please help us solve this issue.
>
> Regards,
>
>
> Meethu Mathew
>
>
