00:00:00 /usr/lib/R/bin/exec/R --no-save --no-restore -f /tmp/zeppelin_sparkr-4152305170353311178.R --args 1642312173 58063 /home/meethu/spark-1.6.1-bin-hadoop2.6/R/lib 10601
meethu   6745  6470  0 12:10 pts/1    00:00:00 /usr/lib/R/bin/exec/R --no-save --no-restore -f /tmp/zeppelin_sparkr-50466016273
third model is run using the sparkr interpreter,
the error is thrown. We suspect this is a limitation of Zeppelin.
Please help to solve this issue.
Regards,
Meethu Mathew
rocess
>or(features.split(delimiter)[text_colum])).count()
>
>
*Note: In version 0.7.0 the code was running fine without
using use_unicode and unicode(regex.sub(' ', w), 'utf8').*
*Please help to fix this issue.*
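For context, a minimal sketch of the kind of explicit decoding referred to in the note above, assuming Python 2, the `sc` that Zeppelin's pyspark interpreter injects, and a made-up input path and cleaning pattern:

# -*- coding: utf-8 -*-
# Sketch only: read the file as raw bytes and decode each cleaned line
# explicitly, so non-ASCII characters do not hit the default ASCII codec.
import re

regex = re.compile(r'[^0-9a-zA-Z ]')                # assumed cleaning pattern

def clean(w):
    # w is a byte string because use_unicode=False below
    return unicode(regex.sub(' ', w), 'utf8')

lines = sc.textFile('/tmp/sample.csv', use_unicode=False)
print lines.map(clean).count()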
Regards,
Meethu Mathew
On Fri, Apr 21, 2017 at
Try putting the CSV at the same path on all the nodes, or on a mount-point
path that is accessible from all the nodes.
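For illustration, a minimal sketch of that setup with the Zeppelin-provided SparkContext `sc`; the mount-point path below is a made-up example:

# The same path must exist (with the same file) on the driver and every worker.
rdd = sc.textFile('file:///mnt/shared/data/input.csv')
print rdd.count()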
Regards,
Meethu Mathew
On Wed, May 10, 2017 at 3:36 PM, Sofiane Cherchalli
wrote:
> Yes, I already tested with spark-shell and pyspark, with the same result.
>
> Ca
)
when the executors are spawned on the slaves where R is not installed.
*Do we need to install R and the associated packages on all the nodes?*
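A quick way to see which workers can actually launch R is a small PySpark probe; this is only a sketch, assuming the Zeppelin-provided `sc`, with an arbitrary partition count used to spread tasks across the nodes:

import socket
import subprocess

def has_r(_):
    # runs on whichever executor picks up this partition
    try:
        subprocess.check_output(['R', '--version'])
        return [(socket.gethostname(), True)]
    except OSError:
        return [(socket.gethostname(), False)]

print sc.parallelize(range(100), 100).mapPartitions(has_r).distinct().collect()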
Regards,
Meethu Mathew
Hi Moon,
Yes, it's fixed in 0.7.1. Thank you.
Regards,
Meethu Mathew
On Wed, Apr 26, 2017 at 10:42 PM, moon soo Lee wrote:
> Some bugs related to interpreter process management have been fixed in the
> 0.7.1 release [1]. Could you try 0.7.1 or the master branch and see if the same
> problem
creates another
SparkContext, and then the previous SparkContext becomes a dead process that
continues to exist.
Is this a bug in Zeppelin, or is there another proper way to unbind the
Zeppelin framework?
Zeppelin version is 0.7.0.
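For what it is worth, a hedged guess at a workaround (not a confirmed fix): stop the injected `sc` from a paragraph before restarting or rebinding the interpreter, so the old context is not left behind as a dead process:

# `sc` is the SparkContext Zeppelin injected; stopping it releases the old
# driver before a new context gets created.
sc.stop()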
Regards,
Meethu Mathew
s=False, usecols=[label_column,text_
column],names=['label','msg']).dropna()
- new_training['processed_msg'] = textPreProcessor(new_training['msg'])
This Python code is working and I am getting results. In version 0.7.0, I was
getting output without using
n position 4:
> ordinal not in range(128)
All this code was working in version 0.7.0. There is no change in the
dataset or the code. Has the encoding handling changed in the new
version of Zeppelin?
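One thing worth trying (a sketch only, not a confirmed cause): make the pandas read explicit about encoding under Python 2, so text columns come back as unicode. The file name and column positions below are assumptions:

import pandas as pd

label_column, text_column = 0, 1                      # assumed positions
new_training = pd.read_csv('training.csv',
                           encoding='utf-8',          # decode up front, not via the ASCII default
                           usecols=[label_column, text_column],
                           names=['label', 'msg']).dropna()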
Regards,
Meethu Mathew
8, I
tried
hc = HiveContext.getOrCreate(sc)
but it is still returning
.
My pyspark shell and Jupyter notebook return
without doing anything.
How can I get
in the Zeppelin notebook?
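For comparison, a minimal sketch of building the context directly from the `sc` that Zeppelin injects (Spark 1.6 API); the printed type is just a sanity check:

from pyspark.sql import HiveContext

hc = HiveContext(sc)    # wraps the existing SparkContext
print type(hc)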
Regards,
Meethu Mathew
?
Regards,
Meethu Mathew
"code": "SUCCESS",
"msg": [ {
"type": "TEXT",
"data": "hello world" }
] }}
I think it's an issue in the documentation.
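For reference, a hedged sketch of fetching and walking that response shape with Python's requests; the endpoint path, host, and note/paragraph ids are placeholders, so check them against the REST API page for your Zeppelin version:

import requests

resp = requests.post('http://localhost:8080/api/notebook/run/NOTE_ID/PARAGRAPH_ID')
body = resp.json().get('body', {})
print body.get('code')                      # e.g. SUCCESS
for m in body.get('msg', []):
    print m.get('type'), m.get('data')      # e.g. TEXT hello world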
Regards,
Meethu Mathew
.
Please improve the suggestion functionality.
Regards,
Meethu Mathew
mmons-csv-1.4.jar --files
/home/me/models/Churn/package/build/dist/fly_libs-1.1-py2.7.egg"
Any progress on this ticket, ZEPPELIN-2136
<https://issues.apache.org/jira/browse/ZEPPELIN-2136>?
Regards,
Meethu Mathew
Hi,
The output of the following code prints unexpected dots in the result if there
is a comment in the code. Is this a bug in Zeppelin?
*Code :*
%python
v = [1,2,3]
#comment 1
#comment
print v
*output*
... ... [1, 2, 3]
Regards,
Meethu Mathew
Hi,
I have noticed the same problem.
Regards,
Meethu Mathew
On Mon, Mar 13, 2017 at 9:56 AM, Xiaohui Liu wrote:
> Hi,
>
> We used 0.7.1-snapshot with our Mesos cluster, almost all our needed
> features (ldap login, notebook acl control, livy/pyspark/rspark/scala,
> etc.) w
}/webapps/webapp and it worked.
But the files or folders added to this folder, which is
the ZEPPELIN_WAR_TEMPDIR, are deleted after a restart.
How can I add images in the markdown interpreter without using other
webservers?
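One webserver-free route, offered only as a hedged alternative since it uses a %python paragraph rather than the markdown interpreter, is to emit the image as inline HTML with a base64 data URI; the file path below is a placeholder:

import base64

with open('/home/meethu/plots/result.png', 'rb') as f:
    encoded = base64.b64encode(f.read())
# Zeppelin renders paragraph output that starts with %html as HTML
print '%html <img src="data:image/png;base64,{}">'.format(encoded)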
Regards,
Meethu Mathew