Did you upgrade both the client and the cluster to 1.6.0? The server
returned a completely empty response, which shouldn't be possible if it
is running 1.6.0.
On 05.09.2018 07:27, 潘 功森 wrote:
Hi Vino,
Below are the dependencies I used; please have a look.
I found it also included flink-connector-kafka-0.10.
Hi Chesnay,
I am sure the client and the cluster were both upgraded to 1.6.0, because
when I submit a job with “./flink run XXX.jar” it works fine. You can see the UI below.
But when I use createRemoteEnvironment locally, it fails. That
confuses me a lot.
[attached screenshot image002.png: job shown running in the Flink web UI]
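For reference, a minimal sketch of submitting to a remote 1.6.0 cluster with createRemoteEnvironment; the host name, port, and jar path below are placeholders, not the poster's actual setup. The important points are that the user-code jar is passed to the environment and that the client-side Flink dependencies match the cluster version.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RemoteSubmitExample {
    public static void main(String[] args) throws Exception {
        // The local flink-streaming-java/flink-clients dependencies must be the
        // same 1.6.0 version as the cluster, and the jar containing the job
        // classes must be shipped explicitly so the cluster can load them.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
                "jobmanager-host",         // placeholder JobManager address
                8081,                      // placeholder REST port (1.6 default)
                "/path/to/your-job.jar");  // jar with the job classes

        DataStream<String> stream = env.fromElements("a", "b", "c");
        stream.print();

        env.execute("remote-submit-example");
    }
}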
Chesnay Schepler created FLINK-10282:
Summary: Provide separate thread-pool for REST endpoint
Key: FLINK-10282
URL: https://issues.apache.org/jira/browse/FLINK-10282
Project: Flink
Issue
Chesnay Schepler created FLINK-10283:
Summary: FileCache logs unnecessary warnings
Key: FLINK-10283
URL: https://issues.apache.org/jira/browse/FLINK-10283
Project: Flink
Issue Type: Bug
Jiayi Liao created FLINK-10284:
Summary: TumblingEventTimeWindows's offset should be allowed to be less
than zero.
Key: FLINK-10284
URL: https://issues.apache.org/jira/browse/FLINK-10284
Project: Flink
Chesnay Schepler created FLINK-10285:
Summary: Bump shade-plugin to 3.1.1
Key: FLINK-10285
URL: https://issues.apache.org/jira/browse/FLINK-10285
Project: Flink
Issue Type: Sub-task
Sayat Satybaldiyev created FLINK-10286:
Summary: Flink Persist Invalid Job Graph in Zookeeper
Key: FLINK-10286
URL: https://issues.apache.org/jira/browse/FLINK-10286
Project: Flink
Issue
Sayat Satybaldiyev created FLINK-10287:
Summary: Flink HA Persist Cancelled Job in Zookeeper
Key: FLINK-10287
URL: https://issues.apache.org/jira/browse/FLINK-10287
Project: Flink
Issue
Hi all,
I’m currently calculating a moving average with DataStreams via:
.keyBy(new XXXKeySelector())
.window(GlobalWindows.create())
.trigger(CountTrigger.of(1))
.aggregate(new MovingAverageAggregator(10))
MovingAverageAggregator uses a MovingAver
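The MovingAverageAggregator class itself is not shown above; a minimal sketch of what such an AggregateFunction could look like, assuming Double inputs and that the constructor argument 10 is the number of most recent elements to average:

import java.util.ArrayDeque;
import org.apache.flink.api.common.functions.AggregateFunction;

public class MovingAverageAggregator
        implements AggregateFunction<Double, ArrayDeque<Double>, Double> {

    private final int windowSize;

    public MovingAverageAggregator(int windowSize) {
        this.windowSize = windowSize;
    }

    @Override
    public ArrayDeque<Double> createAccumulator() {
        return new ArrayDeque<>();
    }

    @Override
    public ArrayDeque<Double> add(Double value, ArrayDeque<Double> acc) {
        acc.addLast(value);
        if (acc.size() > windowSize) {
            acc.removeFirst();  // keep only the most recent windowSize values
        }
        return acc;
    }

    @Override
    public Double getResult(ArrayDeque<Double> acc) {
        double sum = 0.0;
        for (double v : acc) {
            sum += v;
        }
        return acc.isEmpty() ? 0.0 : sum / acc.size();
    }

    @Override
    public ArrayDeque<Double> merge(ArrayDeque<Double> a, ArrayDeque<Double> b) {
        a.addAll(b);
        while (a.size() > windowSize) {
            a.removeFirst();
        }
        return a;
    }
}

With GlobalWindows plus CountTrigger.of(1), getResult is called after every element, so each incoming record emits the average of the last windowSize values for its key.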
Currently there doesn’t seem to be a way to do this. Am I correct in that? I
guess one could register multiple implementations, each with their own scheme,
but that seems somewhat hacky.
It would be nice if the registration of the filesystem was done when the
DataSet (or DataStream) was defined.
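For reference, the per-scheme workaround mentioned above would look roughly like the sketch below: one FileSystemFactory per scheme, registered through Flink's ServiceLoader mechanism (a META-INF/services entry for org.apache.flink.core.fs.FileSystemFactory). The factory class and the "myfs-a" scheme are hypothetical, and the sketch simply delegates to the local file system.

import java.io.IOException;
import java.net.URI;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.FileSystemFactory;
import org.apache.flink.core.fs.local.LocalFileSystem;

// Hypothetical factory: one such class (and one scheme) per implementation.
public class MyCustomFileSystemFactory implements FileSystemFactory {

    @Override
    public String getScheme() {
        // Paths like "myfs-a://bucket/path" select this implementation, so the
        // choice is made where the DataSet/DataStream path is written down,
        // not per job.
        return "myfs-a";
    }

    @Override
    public void configure(Configuration config) {
        // Only the global Flink configuration is available here; per-DataSet
        // or per-DataStream configuration is exactly what the question asks about.
    }

    @Override
    public FileSystem create(URI fsUri) throws IOException {
        // For the sketch, simply delegate to the local file system.
        return LocalFileSystem.getSharedInstance();
    }
}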