Including ALL interpreters is not feasible, not because of download size (that is easily increased) but because we wouldn't want to couple the release cycles, as Jeff pointed out. IMHO a few of the most popular ones should be included.
Yes, it is just one extra step, but if a computer can do it wh
Hi,
+1 for releasing the netinst package only.
Regarding making the binary package with only some interpreters, like spark, markdown,
and jdbc: we have discussed having a minimal package in [1].
And I still think it's very difficult to decide which interpreters need to
be included and which do not. For example, I prefer to hav
Another thing I'd like to discuss is whether we should move most of the interpreters
out of the zeppelin project to somewhere else, just like spark does with
spark-packages. Two benefits:
1. Keep the zeppelin project much smaller
2. Each interpreter's improvements won't be blocked by the release of
zeppelin. Interpre
+1 for Jeff's idea! I also use the three interpreters mainly :)
On Wed, Jan 18, 2017 at 12:52 PM, Jeff Zhang wrote:
How about also including the markdown and jdbc interpreters, if this won't make
the binary distribution much bigger? I guess spark, markdown, and jdbc
are the top 3 interpreters in zeppelin.
On Wed, Jan 18, 2017 at 11:33 AM, Ahyoung Ryu wrote:
Thanks Mina always!
+1 for releasing only netinst package.
On Wed, Jan 18, 2017 at 12:29 PM, Prabhjyot Singh wrote:
+1
I don't think it's a problem now, but if it keeps increasing then in
subsequent releases we can ship Zeppelin with a few interpreters, and mark
the others as plugins that can be downloaded later, with instructions on how
to configure them.
On Jan 18, 2017 8:54 AM, "Jun Kim" wrote:
+1
I think it won't be a problem if we announce it clearly.
Maybe we can do that next to the download button here (
http://zeppelin.apache.org/download.html).
A message could be: "NOTE: only the spark interpreter is included since 0.7.0. If you
want other interpreters, please see the interpreter installation guide."
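If the netinst route wins, the download-page note could point straight at the install helper. A hedged sketch, assuming the `bin/install-interpreter.sh` script lands in 0.7.0 as discussed (the script name, flags, and interpreter names here should be verified against the actual release docs):

```sh
# Run from the Zeppelin installation directory of a netinst package.
# Downloads the named interpreters; the names "md,jdbc" are illustrative.
./bin/install-interpreter.sh --name md,jdbc
```

After installation, restart Zeppelin so the new interpreters are picked up.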
+1, we should also mention it in the release notes and in the 0.7 docs.
On Wed, Jan 18, 2017 at 11:12 AM, Mina Lee wrote:
Hi all,
Zeppelin is about to start the 0.7.0 release process, so I would like to discuss
binary package distribution.
Every time we distribute a new binary package, the size of the
zeppelin-0.x.x-bin-all.tgz package gets bigger:
- zeppelin-0.6.0-bin-all.tgz: 506M
- zeppelin-0.6.1-bin-all.tgz
Hi,
this definitely looks like a regression/bug.
Ruslan, would you mind creating a JIRA issue?
Paul, thanks for sharing notebook size reduction pro-tip!
--
Alex
On Wed, Jan 18, 2017, 10:04 Paul Brenner wrote:
> Just a tip that when I ran into this problem I found that using the “clear
> outpu
What issue do you see? Can you paste the log and tell us how to reproduce it?
On Wed, Jan 18, 2017 at 3:03 AM, Sherif Akoush wrote:
> Hi,
>
> spark 2.1 uses commons.lang3 ver 3.5 while zeppelin master still used
> ver 3.4. This mismatch I guess causes executors to fail. Is there a
> requirement for zeppelin to
Just a tip: when I ran into this problem, I found that using the "clear
output" button and then exporting my notebook made it easy to get below the
size limit. Not very helpful if you need ALL the output, but maybe you can
selectively clear output from some paragraphs?
http://www.placeiq.com
From the screenshot: "JSON file size cannot exceed MB".
Notice there is no number between "exceed" and "MB".
Not sure if we're missing a setting or an environment variable to define
the limit?
It now prevents us from importing any notebooks.
--
Ruslan Dautkhanov
On Tue, Jan 17, 2017 at 11:54 A
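The missing number in the message suggests the configured limit is unset on this instance. If your build exposes it, the relevant knob may be the websocket max message size in conf/zeppelin-env.sh; the variable name and value below are an assumption from memory of the zeppelin-env.sh template, so verify against your snapshot before relying on it:

```sh
# conf/zeppelin-env.sh (fragment; assumption: this variable exists in your build).
# Raises the websocket message size cap so larger notebook JSON can be imported.
export ZEPPELIN_WEBSOCKET_MAX_TEXT_MESSAGE_SIZE=10240000
```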
Hi,
spark 2.1 uses commons.lang3 ver 3.5 while zeppelin master still uses
ver 3.4. This mismatch, I guess, causes the executors to fail. Is there a
requirement for zeppelin to use ver 3.4?
Regards,
Sherif
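One quick way to check for this kind of mismatch is to scan both install trees for commons-lang3 jars and compare the versions. A minimal sketch in plain Python (stdlib only; the jar naming convention and directory layout are assumptions, so point it at your actual ZEPPELIN_HOME and SPARK_HOME):

```python
# Scan an install tree for commons-lang3 jars and report their versions.
# A version difference between the Zeppelin and Spark trees is the kind of
# conflict described above.
import glob
import os
import re

def find_lang3_jars(home_dir):
    """Return {jar_filename: version} for commons-lang3 jars under home_dir."""
    pattern = os.path.join(home_dir, "**", "commons-lang3-*.jar")
    versions = {}
    for path in glob.glob(pattern, recursive=True):
        name = os.path.basename(path)
        match = re.match(r"commons-lang3-(\d+(?:\.\d+)*)\.jar", name)
        if match:
            versions[name] = match.group(1)
    return versions

if __name__ == "__main__":
    for env in ("ZEPPELIN_HOME", "SPARK_HOME"):
        home = os.environ.get(env)
        if home:
            print(env, find_lang3_jars(home))
```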
'File size limit Exceeded' when importing notes - even for small files
This happens even for tiny files - a few Kb.
Is this a known issue?
Running Zeppelin 0.7.0 from a few weeks old snapshot.
See attached screenshot.
--
Ruslan Dautkhanov
There was an old JIRA for keyboard shortcuts, but there did not appear to
be an associated document:
https://issues.apache.org/jira/browse/ZEPPELIN-391
Is there a comprehensive cheat-sheet for the shortcuts? Especially one to
compare with the excellent Jupyter keyboard shortcuts, e.g. dd to delete a
cell.
Hi Deenar,
It is possible to use the Zeppelin context via the Pyspark interpreter.
Example (based on Zeppelin 0.6.0):
paragraph1
---
%spark
// do some stuff and store the result (a DataFrame) into the Zeppelin
// context, in this case as a SQL DataFrame
...
z.put("scala_df", scala_df: org.apache.spark.sql.
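The second half of the handoff is a %pyspark paragraph that calls z.get("scala_df"). Outside Zeppelin, the put/get pattern can be sketched with a plain dict standing in for the shared context (FakeZeppelinContext below is a hypothetical stand-in, not the real z object):

```python
# Sketch of the ZeppelinContext put/get handoff. FakeZeppelinContext is a
# stand-in for the context Zeppelin shares across interpreters; in a real
# notebook you would call z.put(...) in %spark and z.get(...) in %pyspark.

class FakeZeppelinContext:
    """A named key/value store, like the context shared between paragraphs."""

    def __init__(self):
        self._store = {}

    def put(self, name, value):
        self._store[name] = value

    def get(self, name):
        return self._store[name]

z = FakeZeppelinContext()

# paragraph 1 (%spark in a real notebook): register the result under a name
z.put("scala_df", {"rows": [1, 2, 3]})

# paragraph 2 (%pyspark in a real notebook): fetch it back by the same name
df = z.get("scala_df")
print(df["rows"])  # prints [1, 2, 3]
```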
Hi
Is it possible to access the Zeppelin context via the Pyspark interpreter? Not
all the methods available via the Spark Scala interpreter seem to be
available in the Pyspark one (unless I am doing something wrong). I would
like to do something like this from the Pyspark interpreter:
z.show(df, 100)