Re: 0.7.0 zeppelin.interpreters change: can't make pyspark the default Spark interpreter

2016-11-30 Thread Ruslan Dautkhanov
Got it. Thanks Jeff.

I downloaded
https://github.com/apache/zeppelin/blob/master/spark/src/main/resources/interpreter-setting.json
and saved it to $ZEPPELIN_HOME/interpreter/spark/.
Then I moved "defaultInterpreter": true
from the JSON section with
"className": "org.apache.zeppelin.spark.SparkInterpreter",
to the section with
"className": "org.apache.zeppelin.spark.PySparkInterpreter",

pySpark is still not the default.



-- 
Ruslan Dautkhanov

On Tue, Nov 29, 2016 at 10:36 PM, Jeff Zhang  wrote:

> No, you don't need to create that directory, it should be in
> $ZEPPELIN_HOME/interpreter/spark
>
>
>
>
> Ruslan Dautkhanov wrote on Wed, Nov 30, 2016 at 12:12 PM:
>
>> Thank you Jeff.
>>
>> Do I have to create the interpreter/spark directory in $ZEPPELIN_HOME/conf
>> or in the $ZEPPELIN_HOME directory?
>> So zeppelin.interpreters in zeppelin-site.xml is deprecated in 0.7?
>>
>> Thanks!
>>
>>
>>
>> --
>> Ruslan Dautkhanov
>>
>> On Tue, Nov 29, 2016 at 6:54 PM, Jeff Zhang  wrote:
>>
>> The default interpreter is now defined in interpreter-setting.json
>>
>> You can update the following file to make pyspark the default
>> interpreter and then copy it to the folder interpreter/spark:
>>
>> https://github.com/apache/zeppelin/blob/master/spark/src/main/resources/interpreter-setting.json
>>
>>
>>
>> Ruslan Dautkhanov wrote on Wed, Nov 30, 2016 at 8:49 AM:
>>
>> After the 0.6.2 -> 0.7 upgrade, pySpark isn't the default Spark interpreter,
>> even though we have org.apache.zeppelin.spark.*PySparkInterpreter*
>> listed first in zeppelin.interpreters.
>>
>> zeppelin.interpreters in zeppelin-site.xml:
>>
>> <property>
>>   <name>zeppelin.interpreters</name>
>>   <value>org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkInterpreter,...</value>
>> </property>
>>
>>
>>
>> Any ideas how to fix this?
>>
>>
>> Thanks,
>> Ruslan
>>
>>
>>


shiro.ini [urls] authorization: lock Zeppelin to one user

2016-11-30 Thread Ruslan Dautkhanov
Until we have good multitenancy support in Zeppelin, we'd have to run
individual Zeppelin instances for each user.

We tried using the following shiro.ini configuration:

> [urls]
> /api/version = anon
> /** = user["rdautkhanov@CORP.DOMAIN"]


We also tried:

> /** = authc, user["rdautkhanov@CORP.DOMAIN"]


Neither works, in the sense that other users, after successful LDAP
authentication, can still create their own notebooks in another user's
Zeppelin instance.

The [users] and [roles] sections in shiro.ini are empty.

The [main] section configures the LDAP authentication backend, which works
as expected.

rdautkhanov@CORP.DOMAIN is the actual user name used in LDAP
authentication.

How can the [urls] section be made to let only one specific user in?
Again, neither

> /** = user["rdautkhanov@CORP.DOMAIN"]

nor

> /** = authc, user["rdautkhanov@CORP.DOMAIN"]

work as we expect.

LDAP authentication works as expected; we're struggling with authorization -
locking Zeppelin down in [urls] to one user (or a few users).
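
One variant we have been considering instead (a sketch only, untested; it
assumes Shiro's built-in roles filter plus a group-to-role mapping on the
realm, with a hypothetical AD group name) is to gate on a role rather than
a username:

[main]
# hypothetical: map one AD group to a Zeppelin-side role
activeDirectoryRealm.groupRolesMap = "CN=zeppelin-owners,OU=groups,DC=CORP,DC=DOMAIN":"zeppelin_owner"

[urls]
/api/version = anon
/** = authc, roles[zeppelin_owner]

But we would prefer to avoid creating a dedicated AD group per Zeppelin
instance.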


Thank you,
Ruslan


Save the date: ApacheCon Miami, May 15-19, 2017

2016-11-30 Thread Rich Bowen
Dear Apache enthusiast,

ApacheCon and Apache Big Data will be held at the Intercontinental in
Miami, Florida, May 16-18, 2017. Submit your talks, and register, at
http://apachecon.com/. Talks aimed at the Big Data section of the event
should go to
http://events.linuxfoundation.org/events/apache-big-data-north-america/program/cfp
while other talks should go to
http://events.linuxfoundation.org/events/apachecon-north-america/program/cfp


ApacheCon is the best place to meet the people who develop the software
that you use and rely on. It’s also a great opportunity to deepen your
involvement in the project, and perhaps make the leap to contributing.
And we find that user case studies, showcasing how you use Apache
projects to solve real world problems, are very popular at this event.
So, do consider whether you have a use case that might make a good
presentation.

ApacheCon will have many different ways that you can participate:

Technical Content: We’ll have three days of technical sessions covering
many of the projects at the ASF. We’ll be publishing a schedule of talks
on March 9th, so that you can plan what you’ll be attending.

BarCamp: The Apache BarCamp is a standard feature of ApacheCon - an
un-conference style event, where the schedule is determined on-site by
the attendees, and anything is fair game.

Lightning Talks: Even if you don’t give a full-length talk, you can give
a Lightning Talk: a five-minute presentation on any topic related to the
ASF, open to any attendee. If there’s something you’re passionate about,
consider giving one.

Sponsor: It costs money to put on a conference, and this is a great
opportunity for companies involved in Apache projects, or who benefit
from Apache code - your employers - to get their name and products in
front of the community. Sponsors can start at any monetary level, and
can sponsor everything from the conference badge lanyard through larger
items such as video recordings and evening events. For more information
on sponsoring ApacheCon, see http://apachecon.com/sponsor/

So, get your tickets today at http://apachecon.com/ and submit your
talks. ApacheCon Miami is going to be our best ApacheCon yet, and you,
and your project, can’t afford to miss it.

-- 
Rich Bowen - rbo...@apache.org
VP, Conferences
http://apachecon.com
@apachecon



Re: Zeppelin or Jupiter

2016-11-30 Thread Mich Talebzadeh
Guys,

How are Active Directory/LDAP and Kerberos integrated with Zeppelin?

thanks

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 30 November 2016 at 11:26, Mich Talebzadeh wrote:

>
>
> Forwarded conversation
> Subject: Zeppelin or Jupiter
> 
>
> From: Mich Talebzadeh 
> Date: 28 November 2016 at 13:06
> To: users@zeppelin.apache.org
>
>
> Hi,
>
> I use Zeppelin in different forms and shapes and it is very promising. Some
> colleagues are mentioning that Jupyter can do all that Zeppelin handles.
>
> I have not used Jupyter myself. I have used Tableau, but that is pretty
> much limited to SQL.
>
> Has anyone used Jupyter who can share their experience of it vis-à-vis
> Zeppelin?
>
> Thanks
>
> Dr Mich Talebzadeh
>
>
> --
> From: Goodman, Alexander (398K) 
> Date: 28 November 2016 at 20:23
> To: "users@zeppelin.apache.org" 
>
>
> Hi Mich,
>
> You might want to take a look at this:
> https://www.linkedin.com/pulse/comprehensive-comparison-jupyter-vs-zeppelin-hoc-q-phan-mba-
>
> I use both Zeppelin and Jupyter myself, and I would say by and large the
> conclusions of that article are still mostly correct. Jupyter is definitely
> superior in terms of stability, language (kernel) support, ease of
> installation and maintenance (thanks to conda) and performance. If you just
> want something that works well straight out of the box, then Jupyter should
> be your go-to notebook solution. I would say this is especially true if your
> workflow is largely in python since many of the Jupyter developers also
> have close ties with the general python data analytics / scientific
> computing community, which results in better integration with some
> important packages (like matplotlib and bokeh, for example). This makes
> sense given that the project was originally a part of ipython after all.
>
> However I definitely think Zeppelin still has an important place. The vast
> majority of Zeppelin users also use spark (also an apache project), and for
> that use case it should always be better than Jupyter given that its
> backend code is written in Java (a JVM language). There are also several
> advanced features that Zeppelin has that are somewhat unique, including a
> simple API for sharing variables across interpreters
> (https://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/interpreter/spark.html#object-exchange).
> There's also the angular display system API
> (https://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/displaysystem/back-end-angular.html).
> Granted, these two features are currently only
> fully supported by the spark interpreter group but work is currently
> underway to make the API extensible to other interpreters. Lastly, I think
> the most powerful feature of Zeppelin is the overall concept of the
> interpreter (in contrast to Jupyter's kernels) and the ability to use them
> together in a single notebook. This is my main reason for using Zeppelin
> since I regularly work with both spark/scala and python together.
>
> So tl;dr, if you are using spark and/or have workflows which use multiple
> languages (namely scala/R/python/SQL), you should stick with Zeppelin.
> Otherwise, I would suggest Jupyter.
> --
> Alex Goodman
> Data Scientist I
> Science Data Modeling and Computing (398K)
> Jet Propulsion Laboratory
>

sparkContext to get Spark Driver's URL

2016-11-30 Thread Ruslan Dautkhanov
Is there an easy way to get the Spark driver's UI URL (e.g., from the sparkContext)?
I always have to go to CM -> YARN applications -> choose my Spark job ->
click Application Master, etc. to get to the Spark driver UI.

Is there any way we could derive the driver's URL programmatically from the
SparkContext variable?


PS: Longer haul, it would be super awesome to get a link straight in the
Zeppelin notebook (when the SparkContext is instantiated).


Thank you,
Ruslan


Re: Zeppelin or Jupiter

2016-11-30 Thread Ruslan Dautkhanov
Mich,

This page has examples for both Active Directory and LDAP:

https://zeppelin.apache.org/docs/0.6.2/security/shiroauthentication.html

activeDirectoryRealm = org.apache.zeppelin.server.ActiveDirectoryGroupRealm
activeDirectoryRealm.systemUsername = userNameA
activeDirectoryRealm.systemPassword = passwordA
activeDirectoryRealm.searchBase = CN=Users,DC=SOME_GROUP,DC=COMPANY,DC=COM
activeDirectoryRealm.url = ldap://ldap.test.com:389
activeDirectoryRealm.groupRolesMap = "CN=aGroupName,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"group1"
activeDirectoryRealm.authorizationCachingEnabled = false

ldapRealm = org.apache.zeppelin.server.LdapGroupRealm
# search base for ldap groups (only relevant for LdapGroupRealm):
ldapRealm.contextFactory.environment[ldap.searchBase] = dc=COMPANY,dc=COM
ldapRealm.contextFactory.url = ldap://ldap.test.com:389
ldapRealm.userDnTemplate = uid={0},ou=Users,dc=COMPANY,dc=COM
ldapRealm.contextFactory.authenticationMechanism = SIMPLE


Kerberos can be handled, for example, through
export SPARK_SUBMIT_OPTIONS="--principal xxx --keytab yyy"
in zeppelin-env.sh - that's how we do it.
Or as explained here:
https://zeppelin.apache.org/docs/latest/interpreter/spark.html#setting-up-zeppelin-with-kerberos


Hope this helps.


-- 
Ruslan Dautkhanov

On Wed, Nov 30, 2016 at 3:51 PM, Mich Talebzadeh wrote:

> Guys,
>
> How are Active Directory/LDAP and Kerberos integrated with Zeppelin?
>
> thanks
>
> Dr Mich Talebzadeh
>
> On 30 November 2016 at 11:26, Mich Talebzadeh wrote:
>
>>
>> Forwarded conversation
>> Subject: Zeppelin or Jupiter
>> 
>>
>> From: Mich Talebzadeh 
>> Date: 28 November 2016 at 13:06
>> To: users@zeppelin.apache.org
>>
>>
>> Hi,
>>
>> I use Zeppelin in different forms and shapes and it is very promising. Some
>> colleagues are mentioning that Jupyter can do all that Zeppelin handles.
>>
>> I have not used Jupyter myself. I have used Tableau, but that is pretty
>> much limited to SQL.
>>
>> Has anyone used Jupyter who can share their experience of it vis-à-vis
>> Zeppelin?
>>
>> Thanks
>>
>> Dr Mich Talebzadeh
>>
>> --
>> From: Goodman, Alexander (398K) 
>> Date: 28 November 2016 at 20:23
>> To: "users@zeppelin.apache.org" 
>>
>>
>> Hi Mich,
>>
>> You might want to take a look at this:
>> https://www.linkedin.com/pulse/comprehensive-comparison-jupyter-vs-zeppelin-hoc-q-phan-mba-
>>
>> I use both Zeppelin and Jupyter myself, and I would say by and large the
>> conclusions of that article are still mostly correct. Jupyter is definitely
>> superior in terms of stability, language (kernel) support, ease of
>> installation and maintenance (thanks to conda) and performance. If you just
>> want something that works well straight out of the box, then Jupyter should
>> be your go-to notebook solution. I would say this is especially true if your
>> workflow is largely in python since many of the Jupyter developers also
>> have close ties with the general python data analytics / scientific
>> computing community, which results in better integration with some
>> important packages (like matplotlib and bokeh, for example). This makes
>> sense given that the project was originally a part of ipython after all.

Re: sparkContext to get Spark Driver's URL

2016-11-30 Thread Jeff Zhang
You can get the UI URL via

sc.uiWebUrl

And the community is working on displaying it in the paragraph:
https://github.com/apache/zeppelin/pull/1663
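
For example, in a %spark paragraph (a minimal sketch; on Spark 2.x,
sc.uiWebUrl returns an Option[String], and sc is the SparkContext that
Zeppelin provides):

// Print the driver's web UI link; fall back gracefully if the UI is disabled.
println(sc.uiWebUrl.getOrElse("Spark UI not available"))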


Ruslan Dautkhanov wrote on Thu, Dec 1, 2016 at 8:58 AM:

> Is there an easy way to get the Spark driver's UI URL (e.g., from the sparkContext)?
> I always have to go to CM -> YARN applications -> choose my Spark job ->
> click Application Master, etc. to get to the Spark driver UI.
>
> Is there any way we could derive the driver's URL programmatically from the
> SparkContext variable?
>
>
> PS: Longer haul, it would be super awesome to get a link straight in the
> Zeppelin notebook (when the SparkContext is instantiated).
>
>
> Thank you,
> Ruslan
>
>


Re: 0.7.0 zeppelin.interpreters change: can't make pyspark the default Spark interpreter

2016-11-30 Thread Jeff Zhang
Hi Ruslan,

I missed another thing: you also need to delete the file conf/interpreter.json,
which stores the original settings. Otherwise the original settings are always
loaded.
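
For example (a sketch - back the file up rather than deleting it outright,
then restart Zeppelin so the settings are regenerated from
interpreter-setting.json):

mv $ZEPPELIN_HOME/conf/interpreter.json $ZEPPELIN_HOME/conf/interpreter.json.bak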


Ruslan Dautkhanov wrote on Thu, Dec 1, 2016 at 1:03 AM:

> Got it. Thanks Jeff.
>
> I downloaded
>
> https://github.com/apache/zeppelin/blob/master/spark/src/main/resources/interpreter-setting.json
> and saved it to $ZEPPELIN_HOME/interpreter/spark/.
> Then I moved "defaultInterpreter": true
> from the JSON section with
> "className": "org.apache.zeppelin.spark.SparkInterpreter",
> to the section with
> "className": "org.apache.zeppelin.spark.PySparkInterpreter",
>
> pySpark is still not the default.
>
>
>
> --
> Ruslan Dautkhanov
>
> On Tue, Nov 29, 2016 at 10:36 PM, Jeff Zhang  wrote:
>
> No, you don't need to create that directory, it should be in
> $ZEPPELIN_HOME/interpreter/spark
>
>
>
>
> Ruslan Dautkhanov wrote on Wed, Nov 30, 2016 at 12:12 PM:
>
> Thank you Jeff.
>
> Do I have to create the interpreter/spark directory in $ZEPPELIN_HOME/conf
> or in the $ZEPPELIN_HOME directory?
> So zeppelin.interpreters in zeppelin-site.xml is deprecated in 0.7?
>
> Thanks!
>
>
>
> --
> Ruslan Dautkhanov
>
> On Tue, Nov 29, 2016 at 6:54 PM, Jeff Zhang  wrote:
>
> The default interpreter is now defined in interpreter-setting.json
>
> You can update the following file to make pyspark the default
> interpreter and then copy it to the folder interpreter/spark:
>
>
> https://github.com/apache/zeppelin/blob/master/spark/src/main/resources/interpreter-setting.json
>
>
>
> Ruslan Dautkhanov wrote on Wed, Nov 30, 2016 at 8:49 AM:
>
> After the 0.6.2 -> 0.7 upgrade, pySpark isn't the default Spark interpreter,
> even though we have org.apache.zeppelin.spark.*PySparkInterpreter*
> listed first in zeppelin.interpreters.
>
> zeppelin.interpreters in zeppelin-site.xml:
>
> <property>
>   <name>zeppelin.interpreters</name>
>   <value>org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkInterpreter,...</value>
> </property>
>
>
>
> Any ideas how to fix this?
>
>
> Thanks,
> Ruslan
>
>
>
>


Re: sparkContext to get Spark Driver's URL

2016-11-30 Thread Ruslan Dautkhanov
Thanks Jeff.

ZEPPELIN-1692 will be very helpful.



-- 
Ruslan Dautkhanov

On Wed, Nov 30, 2016 at 6:56 PM, Jeff Zhang  wrote:

> You can get the UI URL via
>
> sc.uiWebUrl
>
> And the community is working on displaying it in the paragraph:
> https://github.com/apache/zeppelin/pull/1663
>
>
> Ruslan Dautkhanov wrote on Thu, Dec 1, 2016 at 8:58 AM:
>
>> Is there an easy way to get the Spark driver's UI URL (e.g., from the sparkContext)?
>> I always have to go to CM -> YARN applications -> choose my Spark job ->
>> click Application Master, etc. to get to the Spark driver UI.
>>
>> Is there any way we could derive the driver's URL programmatically from the
>> SparkContext variable?
>>
>>
>> PS: Longer haul, it would be super awesome to get a link straight in the
>> Zeppelin notebook (when the SparkContext is instantiated).
>>
>>
>> Thank you,
>> Ruslan
>>
>>


Re: 0.7.0 zeppelin.interpreters change: can't make pyspark the default Spark interpreter

2016-11-30 Thread Ruslan Dautkhanov
Jeff,

Yep, that was it.

Thank you!



-- 
Ruslan Dautkhanov

On Wed, Nov 30, 2016 at 7:34 PM, Jeff Zhang  wrote:

> Hi Ruslan,
>
> I missed another thing: you also need to delete the file conf/interpreter.json,
> which stores the original settings. Otherwise the original settings are always
> loaded.
>
>
> Ruslan Dautkhanov wrote on Thu, Dec 1, 2016 at 1:03 AM:
>
>> Got it. Thanks Jeff.
>>
>> I downloaded
>> https://github.com/apache/zeppelin/blob/master/spark/src/main/resources/interpreter-setting.json
>> and saved it to $ZEPPELIN_HOME/interpreter/spark/.
>> Then I moved "defaultInterpreter": true
>> from the JSON section with
>> "className": "org.apache.zeppelin.spark.SparkInterpreter",
>> to the section with
>> "className": "org.apache.zeppelin.spark.PySparkInterpreter",
>>
>> pySpark is still not the default.
>>
>>
>>
>> --
>> Ruslan Dautkhanov
>>
>> On Tue, Nov 29, 2016 at 10:36 PM, Jeff Zhang  wrote:
>>
>> No, you don't need to create that directory, it should be in
>> $ZEPPELIN_HOME/interpreter/spark
>>
>>
>>
>>
>> Ruslan Dautkhanov wrote on Wed, Nov 30, 2016 at 12:12 PM:
>>
>> Thank you Jeff.
>>
>> Do I have to create the interpreter/spark directory in $ZEPPELIN_HOME/conf
>> or in the $ZEPPELIN_HOME directory?
>> So zeppelin.interpreters in zeppelin-site.xml is deprecated in 0.7?
>>
>> Thanks!
>>
>>
>>
>> --
>> Ruslan Dautkhanov
>>
>> On Tue, Nov 29, 2016 at 6:54 PM, Jeff Zhang  wrote:
>>
>> The default interpreter is now defined in interpreter-setting.json
>>
>> You can update the following file to make pyspark the default
>> interpreter and then copy it to the folder interpreter/spark:
>>
>> https://github.com/apache/zeppelin/blob/master/spark/src/main/resources/interpreter-setting.json
>>
>>
>>
>> Ruslan Dautkhanov wrote on Wed, Nov 30, 2016 at 8:49 AM:
>>
>> After the 0.6.2 -> 0.7 upgrade, pySpark isn't the default Spark interpreter,
>> even though we have org.apache.zeppelin.spark.*PySparkInterpreter*
>> listed first in zeppelin.interpreters.
>>
>> zeppelin.interpreters in zeppelin-site.xml:
>>
>> <property>
>>   <name>zeppelin.interpreters</name>
>>   <value>org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkInterpreter,...</value>
>> </property>
>>
>>
>>
>> Any ideas how to fix this?
>>
>>
>> Thanks,
>> Ruslan
>>
>>
>>
>>


RE: Unable to connect with Spark Interpreter

2016-11-30 Thread Jan Botorek
I finally decided to move the solution to an Ubuntu machine, where everything
works fine.

I really don’t know the fundamental reason why Windows and Zeppelin do not work
together. It is certain that there is a problem in the communication between
the Spark interpreter and the Zeppelin engine. Unfortunately, I cannot be more
specific. ☹

Thank you all for the effort.

Regards,
Jan

From: Felix Cheung [mailto:felixcheun...@hotmail.com]
Sent: Tuesday, November 29, 2016 8:58 PM
To: users@zeppelin.apache.org; users@zeppelin.apache.org
Subject: Re: Unable to connect with Spark Interpreter

Hmm, possibly with the classpath. These might be Windows-specific issues. We
probably need to debug to fix these.


From: Jan Botorek <jan.boto...@infor.com>
Sent: Tuesday, November 29, 2016 4:01:43 AM
To: users@zeppelin.apache.org
Subject: RE: Unable to connect with Spark Interpreter

Your last advice helped me to progress a little bit:

-  I started the Spark interpreter manually:
   c:\zepp\bin\interpreter.cmd -d c:\zepp\interpreter\spark\ -p 61176 -l c:\zepp/local-repo/2C2ZNEH5W
   (I needed to add a '\' to the -d attribute and make the path shorter, so I
   moved Zeppelin to c:\zepp)

-  Then, in the Zeppelin web environment, I set the Spark interpreter to
   "connect to existing process" (localhost, port 61176)

-  After that, when I execute any command, this exception appears in the
   interpreter cmd window:

Exception in thread "pool-1-thread-2" java.lang.NoClassDefFoundError: scala/Option
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:264)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.createInterpreter(RemoteInterpreterServer.java:148)
        at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Processor$createInterpreter.getResult(RemoteInterpreterService.java:1409)
        at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Processor$createInterpreter.getResult(RemoteInterpreterService.java:1394)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: scala.Option
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 11 more
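
(If I read the trace right, scala.Option lives in scala-library.jar, so the
Scala runtime seems to be missing from the interpreter's classpath. A quick
check - a sketch only, assuming a standard Spark 2.x layout under
%SPARK_HOME% - would be:

rem should list scala-library-*.jar if the Scala runtime ships with this Spark
dir "%SPARK_HOME%\jars\scala-library*.jar"

though I am not sure where interpreter.cmd builds its classpath from on
Windows.)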

Is this of any help, please?

Regards,
Jan



From: Jan Botorek [mailto:jan.boto...@infor.com]
Sent: Tuesday, November 29, 2016 12:13 PM
To: users@zeppelin.apache.org
Subject: RE: Unable to connect with Spark Interpreter

I am sorry, but the local-repo directory is not present in the Zeppelin
folder. I use the newest binary version from
https://zeppelin.apache.org/download.html.

Unfortunately, the local-repo folder doesn’t exist either in the 0.6 version
downloaded and built from GitHub.


From: Jeff Zhang [mailto:zjf...@gmail.com]
Sent: Tuesday, November 29, 2016 10:45 AM
To: users@zeppelin.apache.org
Subject: Re: Unable to connect with Spark Interpreter

I still don't see much useful info. Could you try running the following
interpreter command directly?

c:\_libs\zeppelin-0.6.2-bin-all\bin\interpreter.cmd -d c:\_libs\zeppelin-0.6.2-bin-all\interpreter\spark -p 53099 -l c:\_libs\zeppelin-0.6.2-bin-all/local-repo/2C2ZNEH5W


Jan Botorek <jan.boto...@infor.com> wrote on Tue, Nov 29, 2016 at 5:26 PM:
I attach the log file after turning debugging on.

From: Jeff Zhang [mailto:zjf...@gmail.com]
Sent: Tuesday, November 29, 2016 10:04 AM

To: users@zeppelin.apache.org
Subject: Re: Unable to connect with Spark Interpreter

Then I guess the Spark process failed to start, so there are no logs for the
Spark interpreter.

Can you use the following log4j.properties? This log4j properties file prints
more error info for further diagnosis.

log4j.rootLogger = INFO, dailyfile

log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n

log4j.appender.dailyfile.DatePattern=.yyyy-MM-dd
log4j.appender.dailyfile.Threshold = DEBUG
log4j.appender.dailyfile = org.apache.log4j.DailyRollingFileAppender
log4j.appender.dail