>
> <https://github.com/apache/flink/blob/e2579e39602ab7d3e906a185353dd413aca58317/flink-yarn/src/main/java/org/apache/flink/yarn/cli/FlinkYarnSessionCli.java#L603-L606>
>
> On Wed, Mar 6, 2019 at 3:58 AM 孙森 <senny...@163.com> wrote:
>
m flag.
>
> Best,
> Gary
>
> On Tue, Mar 5, 2019 at 8:08 AM 孙森 <senny...@163.com> wrote:
> Hi Gary:
>
> No ZooKeeper is the reason that the job submission will fail.
> <Screenshot 2019-03-05 3.07.21 PM.png>
>
>
> Best
>
1711.mbox/%3c2e1eb190-26a0-b288-39a4-683b463f4...@apache.org%3E>
>
> I think it answers the same question and links to tickets for Timezone
> support.
>
> Piotrek
>
>> On 4 Mar 2019, at 08:55, 孙森 <senny...@163.com>
>> wrote:
>>
>> Hi all:
>
> <https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/state/checkpoints.html>
>
> On Fri, Mar 1, 2019 at 7:27 AM 孙森 <senny...@163.com> wrote:
> Hi Gary:
> I checked the znode; the address of the leader was there.
>
Hi all:
I am using Flink SQL with event time, but the field which acts as the
rowtime is not correct in the output. There is an eight-hour time difference.
Any suggestions?
My input is (ums_ts_ acts as the rowtime):
{"schema":{"namespace":"en2.*.*.*","fields":[{"name":"ums_id_","type":"
ntime/rest/RestClient.java#L185>
> [2]
> https://stackoverflow.com/questions/4922943/test-from-shell-script-if-remote-tcp-port-is-open
>
>
> On Wed, Feb 27, 2019 at 8:09 AM 孙森
Hi all:
I run Flink (1.5.1 with Hadoop 2.7) on YARN, and submit the job with
“/usr/local/flink/bin/flink run -m jmhost:port my.jar”, but the submission
fails.
The HA configuration is :
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability
Hi all:
I specify the exact offsets the consumer should start from for each
partition, but the Kafka consumer cannot periodically commit the offsets to
ZooKeeper.
I have disabled checkpointing, and the offsets are only committed when the job is stopped. This is my code:
val properties = new Properties()
properties.setP
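For reference, a minimal sketch of this kind of setup with the Kafka 0.8 connector (the one that commits offsets to ZooKeeper); topic names, partitions, offsets and addresses below are placeholders. With checkpointing disabled, periodic committing depends on the Kafka auto-commit properties:

import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08
import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition

val properties = new Properties()
properties.setProperty("bootstrap.servers", "broker:9092")
properties.setProperty("zookeeper.connect", "zk:2181")
properties.setProperty("group.id", "my-group")
// without checkpointing the consumer falls back to the Kafka client's
// auto-commit mechanism; for the 0.8 client these are the relevant keys
properties.setProperty("auto.commit.enable", "true")
properties.setProperty("auto.commit.interval.ms", "5000")

// start from exact offsets per partition
val specificOffsets = new java.util.HashMap[KafkaTopicPartition, java.lang.Long]()
specificOffsets.put(new KafkaTopicPartition("my-topic", 0), 23L)
specificOffsets.put(new KafkaTopicPartition("my-topic", 1), 31L)

val consumer = new FlinkKafkaConsumer08[String]("my-topic", new SimpleStringSchema(), properties)
consumer.setStartFromSpecificOffsets(specificOffsets)

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.addSource(consumer).print()
env.execute("kafka-offset-test")

If I remember the docs correctly, offsets configured this way are only the start position; committing back to ZooKeeper is purely bookkeeping and follows the auto-commit settings above.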
Hi Fabian,
I am using the Flink CEP library with event time, but there is no output (the
Java code performed as expected, but the Scala code did not). My code is here:
object EventTimeTest extends App {
val env: StreamExecutionEnvironment =
StreamExecutionEnvironment.createLocalEnvironment()
env.s
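Two things that often explain missing output with event-time CEP are watermarks that never advance and, specifically in Scala, doing the job setup in an object that extends App (its delayed initialization has been reported to misbehave with Flink). A small self-contained variant with an invented event type and pattern, using an explicit main method, that does print matches locally:

import org.apache.flink.cep.scala.CEP
import org.apache.flink.cep.scala.pattern.Pattern
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

case class Event(id: String, ts: Long)

object EventTimeTest {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.createLocalEnvironment()
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

    // ascending timestamps also emit the watermarks CEP needs before it fires results
    val input: DataStream[Event] = env
      .fromElements(Event("a", 1000L), Event("b", 2000L))
      .assignAscendingTimestamps(_.ts)

    val pattern = Pattern.begin[Event]("start").where(_.id == "a")
      .next("end").where(_.id == "b")
      .within(Time.seconds(10))

    CEP.pattern(input, pattern)
      .select(m => m("start").head.id + " -> " + m("end").head.id)
      .print()

    env.execute("cep-event-time-test")
  }
}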
QL_DATE
case DATETIME => Types.SQL_TIMESTAMP
case DECIMAL => Types.DECIMAL
}
}
}
val inputStream: DataStream[Row] = env.addSource(myConsumer)
val tableEnv = TableEnvironment.getTableEnvironment(env)
Thanks~
sen
> On 6 Jun 2018, at 7:22 PM, 孙森 wrote:
>
> Hi ,
>
>
Hi,
I've tried to specify such a schema when I read from Kafka and convert the
input stream to a table, but I got the exception:
Exception in thread "main" org.apache.flink.table.api.TableException: An input
of GenericTypeInfo cannot be converted to Table. Please specify the type of the
input with a RowTypeInfo.
[apache-flink] An input of GenericTypeInfo cannot be converted to Table.
Please specify the type of the input with a RowTypeInfo
https://stackoverflow.com/q/50718451/6059691?sem=2
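The TableException in this thread usually means the DataStream[Row] reaches the Table API typed as GenericTypeInfo[Row] instead of a RowTypeInfo. A sketch of one common fix, passing an explicit RowTypeInfo when the source is added; the field names and types here are placeholders, and myConsumer stands for the Kafka consumer from the snippet above:

import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.api.java.typeutils.RowTypeInfo
import org.apache.flink.streaming.api.functions.source.SourceFunction
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.{TableEnvironment, Types}
import org.apache.flink.types.Row

// placeholder schema for the Row fields
val rowType: TypeInformation[Row] = new RowTypeInfo(
  Array[TypeInformation[_]](Types.STRING, Types.LONG, Types.SQL_TIMESTAMP),
  Array("ums_id_", "ums_op_", "ums_ts_"))

val env = StreamExecutionEnvironment.getExecutionEnvironment
val myConsumer: SourceFunction[Row] = ???   // the Kafka consumer built earlier in the job

// supplying the RowTypeInfo explicitly keeps the stream from being typed as GenericTypeInfo[Row]
val inputStream: DataStream[Row] = env.addSource(myConsumer)(rowType)
val tableEnv = TableEnvironment.getTableEnvironment(env)
val table = tableEnv.fromDataStream(inputStream)

Another option is to have the consumer's DeserializationSchema return that RowTypeInfo from its getProducedType method, so the consumer itself reports the right type.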