options:
>
> 1. Make sure every type is a proper Scala type (all case classes, no
> POJOs).
>
> 2. Use the @TypeInfo annotation for specifying a factory. This has
> highest precedence in all APIs.
>
> 3. Register a Kryo serializer in the execution config. This might be
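For the Java side of option 1, the decisive question is whether a class satisfies Flink's POJO rules, since that determines whether the efficient PojoSerializer is used instead of the Kryo fallback. A minimal sketch (the `SensorReading` class and its fields are hypothetical, not from this thread): the class must be public, have a public no-argument constructor, and expose every field either publicly or through conventional getters/setters.

```java
// Hypothetical event type written to satisfy Flink's POJO rules so that it
// is handled by the PojoSerializer instead of falling back to Kryo.
public class SensorReading {
    public String sensorId;   // public field: analyzable by Flink
    public long timestamp;    // public field: analyzable by Flink
    private double value;     // private field with getter/setter: also OK

    public SensorReading() {} // public no-arg constructor is required

    public SensorReading(String sensorId, long timestamp, double value) {
        this.sensorId = sensorId;
        this.timestamp = timestamp;
        this.value = value;
    }

    public double getValue() { return value; }
    public void setValue(double value) { this.value = value; }
}
```

In the Scala API, case classes are analyzed directly by Flink's type extraction, which is why the advice above prefers them.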
wrote on Thursday the 12th at 12:45 AM:
> Hi,
>
> Would you please share the related code? I think it might be due to an
> insufficient type information hint.
>
> Best,
> Yun Tang
>
> From: 杨光
> Date: Wednesday, December 11, 2019 at 7:20 PM
Hi, I'm working on writing a Flink stream job with the Scala API. How can I
find out which classes are serialized by Flink's type serializers and which
fall back to the generic Kryo serializer?
And if a class falls back to the Kryo serializer, how can I extend
the TypeInfo classes of Flink or some
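One way to surface every Kryo fallback is `ExecutionConfig#disableGenericTypes()`: with generic types disabled, job submission fails with an exception naming each type that Flink cannot analyze, instead of silently using Kryo. Flink's `TypeExtractor` also logs at INFO level when a class "must be processed as GenericType", i.e. by Kryo. A sketch, assuming the DataStream Java API (job logic omitted):

```java
import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KryoFallbackCheck {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        ExecutionConfig config = env.getConfig();

        // Fail fast: any operator whose type would fall back to the generic
        // Kryo serializer now raises an exception at job submission.
        config.disableGenericTypes();

        // For a third-party type you cannot change, a dedicated Kryo
        // serializer can be registered instead, e.g.:
        // config.registerTypeWithKryoSerializer(SomeType.class, SomeKryoSerializer.class);

        // ... define sources/transformations here, then:
        // env.execute("kryo-fallback-check");
    }
}
```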
.apache.org/jira/browse/FLINK-11727).
>
> Maybe it is worth it to write your own format and perform the JSON
> parsing logic how you would like it.
>
> Regards,
> Timo
>
> On 04.03.19 at 08:38, 杨光 wrote:
> > Hi,
> > I am trying the Flink SQL API to read JSON
杨光
To: Timo, user

Hi Timo,
I have gotten the nested value by changing the Schema definition like this:
Schema schemaDesc1 = new Schema()
    .field("str2", Types.STRING)
    .field("tablestr", Types.STRING).from("table")
    .field("
Hi,
I am trying the Flink SQL API to read JSON-formatted data from a Kafka topic.
My JSON schema is nested, like this:
{
  "type": "object",
  "properties": {
    "table": {
      "type": "string"
    },
    "str2": {
      "type": "string"
    },
    "obj1": {
      "type": "object",
      "prope
at 19:51, Stefan Richter wrote:
>
> Hi,
>
> maybe Aljoscha or Eron (both in CC) can help you with this problem, I
> think they might know best about the Kerberos security.
>
> Best,
> Stefan
>
> On 20.09.2018 at 11:20, 杨光 wrote:
>
> Hi,
> I am
Hi,
I am using the "per-job YARN session" mode to deploy a Flink job on YARN,
and my Flink version is 1.4.1.
https://ci.apache.org/projects/flink/flink-docs-release-1.6/ops/security-kerberos.html
In my use case, the YARN cluster where the Flink job runs does not have
the Kerberos mode enabled in core-sit
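For reference, Flink's keytab-based Kerberos setup lives in flink-conf.yaml. A sketch under the key names of that documentation page (the keytab path and principal are placeholders; `contexts` names the JAAS login contexts the credentials are handed to, e.g. `Client` for ZooKeeper and `KafkaClient` for Kafka):

```yaml
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /path/to/flink.keytab
security.kerberos.login.principal: flink-user@EXAMPLE.COM
security.kerberos.login.contexts: Client,KafkaClient
```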
Hi,
I am using the "per-job YARN session" mode to deploy a Flink job on YARN.
The document
https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/jobmanager_high_availability.html#yarn-cluster-high-availability
says that "we don’t run multiple JobManager (ApplicationMaster)
instances" but
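The single-JobManager design on that page is compensated by letting YARN restart the ApplicationMaster and recovering state through ZooKeeper. Roughly, in flink-conf.yaml (a sketch; hostnames, ports, and paths are placeholders):

```yaml
high-availability: zookeeper
high-availability.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
high-availability.storageDir: hdfs:///flink/ha/
# Allow YARN to restart the ApplicationMaster (JobManager) on failure:
yarn.application-attempts: 10
```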
Stephan Ewen wrote:
> Could you open an issue to add the old config keys as backwards supported
> "deprecated keys"? That should help making the transition smoother.
>
> On Fri, Dec 15, 2017 at 9:29 AM, Fabian Hueske wrote:
>>
>> Thanks for reporting back!
>>
Hi,
I am using the Flink single-job mode on YARN to read data from a Kafka
cluster configured for Kerberos. When I upgraded Flink to
1.4.0, the YARN application could not run normally and logged an error
like this:
Exception in thread "main" java.lang.RuntimeException:
org.apache.flink.confi
ager.env.JAVA_HOME=/opt/jdk1.8.0_121 -yD
containerized.master.env.JAVA_HOME=/opt/jdk1.8.0_121 " and it works.
Thanks a lot.
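The full submission command implied here might look like the following (a sketch; the jar name and JDK path are placeholders, and the key names are those of Flink 1.4's dynamic properties):

```shell
# Pass JAVA_HOME to both the JobManager (ApplicationMaster) and the
# TaskManager containers via -yD dynamic properties.
flink run -m yarn-cluster \
  -yD containerized.master.env.JAVA_HOME=/opt/jdk1.8.0_121 \
  -yD containerized.taskmanager.env.JAVA_HOME=/opt/jdk1.8.0_121 \
  ./my-flink-job.jar
```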
2017-12-14 20:52 GMT+08:00 Nico Kruber :
> Hi,
> are you running Flink in an JRE >= 8? We dropped Java 7 support for
> Flink 1.4.
>
>
> Nico
>
>
Hi,
I am using the Flink single-job mode on YARN. After I upgraded the Flink
version from 1.3.2 to 1.4.0, the parameter
"yarn.taskmanager.env.JAVA_HOME" no longer works as before.
I can only find an error log on YARN like this:
Exception in thread "main" java.lang.UnsupportedClassVersionError:
org/apache/flin
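As far as I know, Flink 1.4 renamed the YARN environment-variable keys: the old `yarn.taskmanager.env.*` prefix was replaced by `containerized.taskmanager.env.*` (and the ApplicationMaster side by `containerized.master.env.*`). A flink-conf.yaml sketch of the migration (JDK path as in the thread):

```yaml
# Pre-1.4 key, no longer honored in 1.4.0:
#   yarn.taskmanager.env.JAVA_HOME: /opt/jdk1.8.0_121
# Flink 1.4 replacements:
containerized.master.env.JAVA_HOME: /opt/jdk1.8.0_121
containerized.taskmanager.env.JAVA_HOME: /opt/jdk1.8.0_121
```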