Github user elbamos commented on the pull request:
https://github.com/apache/incubator-zeppelin/pull/208#issuecomment-156865367
@leemoonsoo
I will be rebasing over the next few days.
In the meantime I'm going to go through the issues you raised in sequence...
#### 1. The R-Scala Interface
I've explained several times that the proposal to use SparkR
bi-directionally doesn't work. I don't feel that I have more to add about
that.
I will try to reduce the size of the code that originated in rscala.
If this is not going to be acceptable to the PPMC, please tell me now.
#### 2. KnitR
What you're proposing is that users enter the same boilerplate, which they
would have to figure out for themselves, every time they want to use knitr.
Knitr and the repl are fundamentally two different ways for users to
interact with R. They have very different behaviors in terms of error
reporting and handling visualizations.
If you don't want to trust me about this, then I suggest we ask some other
R users what makes the most sense.
If this is not going to be acceptable to the PPMC, please tell me now.
#### 3. KnitR GPL License
*By the way, KnitR is GPL license. I don't think Zeppelin can have a
feature that depends on GPL licensed code.*
KnitR is an **optional** external dependency. This is not a licensing
problem.
It is also not a licensing problem to interact with GPL code that isn't
supplied with Zeppelin.
For example, **R itself** is GPL code. So if Zeppelin cannot interact with
external GPL code, then there cannot be an R interpreter in Zeppelin at all.
Considering that Spark interacts with R, I think this issue is closed.
#### 4. License and Copyright
*License and Copyright problems are one of the highest priority item in
Zeppelin project*
Huh? What we were talking about is who gets identified in code as the
author. That is obviously not a license/copyright issue; it's an issue of
credit.
You said that it is discouraged to identify anybody as the author in Apache
projects.
However, the current code does identify authors, with you identified as the
principal author.
So, at this point, I'm not sure what you're referring to.
#### 5. Location of the Package
It's fine with me if it goes under /spark. The reason it's in /r is that it
simplifies testing and development. Someone will have to merge the two build
scripts; I'm using scalatest for testing.
#### 6. Travis Builds
Actually, the error in the Travis logs begins with this:
> 15/09/23 03:12:22 INFO HiveMetaStore: No user is added in admin role, since config is empty
> 15/09/23 03:12:24 WARN SparkInterpreter: Can't create HiveContext. Fallback to SQLContext
> java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
That's not coming from rzeppelin. It's coming from the SparkInterpreter
when rzeppelin asks it to initiate a Spark backend.
This is what I mean about issues in the spark-zeppelin interface.