There's one bug in 0.8; in 0.9 we removed the regular expression support,
so you need to set zeppelin.notebook.cron.folders to /, so that cron is
enabled for all notes.
<property>
  <name>zeppelin.notebook.cron.folders</name>
  <value>/</value>
  <description>Notebook cron folders</description>
</property>
christophk wrote on Mon, Apr 20, 2020 at 1:14 PM:
I enabled the cron scheduler in zeppelin-site.xml with:
<property>
  <name>zeppelin.notebook.cron.enable</name>
  <value>true</value>
  <description>Notebook enable cron scheduler feature</description>
</property>
<property>
  <name>zeppelin.notebook.cron.folders</name>
  <value>*</value>
  <description>Notebook cron folders</description>
</property>
In zeppelin 0.8.x a clock button was available in the notebooks. In the
current zeppelin that button seems to be missing.
Looking at your original message, your first error was
java.lang.IllegalArgumentException: requirement failed:
which is not a serialisation error. As a rule of thumb, always look at the
first error, irrespective of the programming language.
On Mon, 20 Apr 2020, 03:13 Ravi Pullareddy wrote:
Hi Juan
I see what your problem is. You have declared class C with field x of type
Int. When you map to class fields you have to specify the type. Try the
code below; it works.
case class C(x: Int)
val xs = benv.fromCollection(1 to 10).map(y => C(y.toInt))
xs.count
What do you mean by "doesn't work like in previous versions"? Could you
provide more details?
christophk wrote on Mon, Apr 20, 2020 at 2:33 AM:
Hi all,
I use zeppelin 0.9 as well.
I tried to activate cron by enabling it in zeppelin-site.xml, but it does
not work like in previous versions.
Any ideas?
Thanks
Chris
--
Sent from:
http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/
Hi Jeff,
I'll try using POJO classes instead.
Thanks,
Juan
On Sun, Apr 19, 2020 at 3:51 PM, Jeff Zhang wrote:
Hi Juan,
This is an issue in Flink; I have created a ticket in the Flink community:
https://issues.apache.org/jira/browse/FLINK-16969
The workaround is to use a POJO class instead of a case class.
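For reference, a minimal sketch of what that POJO workaround could look like. The class and field names here are made up, and the collection is plain Scala standing in for the Flink DataSet; in Zeppelin you would build it with benv instead:

```scala
// Hypothetical sketch of the POJO workaround: Flink treats a class as a
// POJO when it has a public no-argument constructor and accessible fields,
// which avoids the case-class serialization path that fails here.
class CPojo(var x: Int) {
  def this() = this(0) // no-arg constructor required by Flink's POJO rules
  override def toString = s"CPojo($x)"
}

// Plain-Scala stand-in for: benv.fromCollection(1 to 10).map(y => new CPojo(y))
val cs = (1 to 10).map(y => new CPojo(y))
println(cs.map(_.x).sum) // prints 55
```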
Juan Rodríguez Hortalá wrote on Sun, Apr 19, 2020 at 7:58 PM:
Thanks also to Som and Zahid for your answers. But as I show in my previous
answer, this issue happens without using the CSV source, and it doesn't
show up in the Flink Scala shell, so it looks like an issue with the
Zeppelin interpreter for Flink.
On Sun, Apr 19, 2020 at 1:56 PM Juan Rodríguez Hortalá wrote:
Hi Ravi,
I posted another message with the minimal reproduction, I repeat it here:
```scala
case class C(x: Int)
val xs = benv.fromCollection(1 to 10)
val cs = xs.map{C(_)}
cs.count
```
defined class C
xs: org.apache.flink.api.scala.DataSet[Int] = org.apache.flink.api.scala.DataSet@39c713c6
cs
You may also want to look at this
https://flink.apache.org/news/2020/04/15/flink-serialization-tuning-vol-1.html
On Sun, 19 Apr 2020, 09:05 Juan Rodríguez Hortalá, <juan.rodriguez.hort...@gmail.com> wrote:
You may wish to have a look at the Apache Avro project. It shows you how to
generate source code for schema classes, for creating objects to be used in
reading csv files etc., for the purposes of serialisation and
deserialisation.
https://avro.apache.org/docs/current/gettingstartedjava.html
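For context, Avro code generation starts from a schema file. A minimal sketch of such a schema follows; the namespace, record name, and fields are made up for illustration, in the style of the linked getting-started guide:

```json
{
  "namespace": "example.avro",
  "type": "record",
  "name": "CsvRecord",
  "fields": [
    {"name": "x", "type": "int"},
    {"name": "label", "type": "string"}
  ]
}
```

From a schema like this, Avro's code generator produces a class with a no-argument constructor and typed accessors, which is exactly the kind of object that serialisation frameworks handle well.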
Hi Juan
I have written various applications for continuous processing of csv files.
Please post your entire code and how you are mapping; that makes it easy to
highlight the issue.
Thanks
Ravi Pullareddy
*From:* Juan Rodríguez Hortalá
*Sent:* Sunday, 19 April 2020 6:25 PM
*To:* users@zepp
Just for the record, the Spark version of that works fine:
```
%spark
case class C2(x: Int)
val xs = sc.parallelize(1 to 10)
val csSpark = xs.map{C2(_)}
csSpark.collect
res3: Array[C2] = Array(C2(1), C2(2), C2(3), C2(4), C2(5), C2(6), C2(7),
C2(8), C2(9), C2(10))
```
Thanks,
Juan
Minimal reproduction:
- First option
```scala
case class C(x: Int)
val xs = benv.fromCollection(1 to 10)
val cs = xs.map{C(_)}
cs.count
```
defined class C
xs: org.apache.flink.api.scala.DataSet[Int] = org.apache.flink.api.scala.DataSet@39c713c6
cs: org.apache.flink.api.scala.DataSet[C] = or
Hi,
I'm using the Flink interpreter and the benv environment. I'm reading some
csv files using benv.readCsvFile and it works ok. I have also defined a
case class C for the csv records. The problem happens when I apply a
map operation on the DataSet of tuples returned by benv.readCsvFile, to
convert them into instances of C.
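To make the failing step concrete, here is a plain-Scala sketch of the shape of that mapping. The field names and sample data are made up; in Zeppelin the tuples would come from benv.readCsvFile and the map would run on a Flink DataSet rather than a local collection:

```scala
// The case class the CSV records are mapped into (fields assumed for illustration)
case class C(a: Int, b: String)

// Stand-in for the DataSet of tuples returned by benv.readCsvFile
val rows = Seq((1, "foo"), (2, "bar"))

// Same shape as ds.map { t => C(t._1, t._2) } -- the step that fails in the
// Zeppelin Flink interpreter when the target is a case class
val cs = rows.map { case (a, b) => C(a, b) }
println(cs) // prints List(C(1,foo), C(2,bar))
```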