Maybe it is a known issue [1] that has already been resolved in 1.12.2 (to be
released soon).
BTW, I think it is unrelated to the Aliyun OSS info logs.
[1]. https://issues.apache.org/jira/browse/FLINK-20992
Best,
Yang
Lei Wang wrote on Monday, February 8, 2021 at 2:22 PM:
> Flink standalone HA. Flink version 1.12.1
Flink standalone HA. Flink version 1.12.1
2021-02-08 13:57:50,550 ERROR
org.apache.flink.runtime.util.FatalExitExceptionHandler [] - FATAL:
Thread 'cluster-io-thread-30' produced an uncaught exception. Stopping the
process...
java.util.concurrent.RejectedExecutionException: Task
java.util.c
Yes, but I use stop rather than cancel, which also takes the savepoint and stops the job together.
Yun Gao wrote on Monday, February 8, 2021 at 11:59 AM:
> Hi yidan,
>
> One more thing to confirm: are you creating the savepoint and stopping the
> job all together with
>
> bin/flink cancel -s [:targetDirectory] :jobId
>
> command?
>
> Best,
>
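For reference (not part of the original mails), the two CLI variants discussed above would look roughly like this on 1.12; the savepoint directory and job id below are placeholders:

```
# take a savepoint and stop the job
./bin/flink stop -p /tmp/savepoints <jobId>

# take a savepoint while cancelling the job (older variant)
./bin/flink cancel -s /tmp/savepoints <jobId>
```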
Hi Jan,
From my view, in Flink a Window should be seen as a "high-level" operation for
some kind of aggregation, and if it cannot satisfy the requirements, we could
at least turn to the "low-level" API by using KeyedProcessFunction [1].
In this case, we could use a ValueState
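To make the suggestion above concrete, here is a minimal sketch of such a "low-level" function, assuming a hypothetical keyed stream of (key, value) pairs and a running sum kept in a ValueState; the class and state names are made up for illustration:

```scala
import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.KeyedProcessFunction
import org.apache.flink.util.Collector

// Low-level alternative to a window aggregation: a per-key running sum in ValueState.
class RunningSumFunction extends KeyedProcessFunction[String, (String, Long), (String, Long)] {

  @transient private var sum: ValueState[java.lang.Long] = _

  override def open(parameters: Configuration): Unit = {
    sum = getRuntimeContext.getState(
      new ValueStateDescriptor[java.lang.Long]("sum", classOf[java.lang.Long]))
  }

  override def processElement(
      in: (String, Long),
      ctx: KeyedProcessFunction[String, (String, Long), (String, Long)]#Context,
      out: Collector[(String, Long)]): Unit = {
    // value() returns null the first time a key is seen.
    val current = Option(sum.value()).map(_.longValue()).getOrElse(0L)
    val updated = current + in._2
    sum.update(updated)
    out.collect((in._1, updated))
  }
}
```

It would be applied as stream.keyBy(_._1).process(new RunningSumFunction), and timers registered through ctx.timerService() could emit or clear the state periodically, which is roughly what a window would otherwise do.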
Hi yidan,
One more thing to confirm: are you creating the savepoint and stopping the job
all together with
bin/flink cancel -s [:targetDirectory] :jobId
command?
Best,
Yun
-- Original Mail --
Sender: 赵一旦
Send Date: Sun Feb 7 16:13:57 2021
Recipients: Till Rohrmann
Hi Dan,
SQL adds the uuid by default for the case where users want to execute
multiple bounded SQL statements and append to the same directory (Hive table);
a uuid is attached to avoid overwriting the previous output.
DataStream could be viewed as providing the low-level API, and
thus it does not add the uuid by default.
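Just as a sketch of what adding that prefix back on the DataStream side could look like (not from the original thread): it assumes a row-format StreamingFileSink of plain strings, and the output path is a placeholder.

```scala
import java.util.UUID

import org.apache.flink.api.common.serialization.SimpleStringEncoder
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.sink.filesystem.{OutputFileConfig, StreamingFileSink}

// Re-create the "part-{uuid}-<subtask>-<counter>" naming that the SQL sink uses.
val fileConfig = OutputFileConfig.builder()
  .withPartPrefix("part-" + UUID.randomUUID().toString)
  .withPartSuffix("")
  .build()

val sink: StreamingFileSink[String] = StreamingFileSink
  .forRowFormat(new Path("file:///tmp/output"), new SimpleStringEncoder[String]("UTF-8"))
  .withOutputFileConfig(fileConfig)
  .build()
```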
Hi Marco,
Sorry, the current state backend is a global configuration and cannot
be configured differently for different operators.
One possible alternative for this requirement might be to set RocksDB
as the default state backend, and for those operators that want to put state
in memory,
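The mail is cut off here, but a minimal sketch of the first half of that suggestion (set RocksDB as the job-wide default) could look like the following; the checkpoint URI is a placeholder, and the same effect can also be reached via state.backend: rocksdb in flink-conf.yaml.

```scala
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Use RocksDB as the default state backend for the whole job,
// with incremental checkpoints enabled.
env.setStateBackend(new RocksDBStateBackend("file:///tmp/checkpoints", true))
```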
Hello Team,
We have two Kafka connectors, "upsert-kafka" and "kafka".
I am facing an issue with "upsert-kafka" while reading Avro messages serialized
using "io.confluent.kafka.serializers.KafkaAvroDeserializer".
Please note the "kafka" connector works while reading Avro messages
serialized usi
Hi experts,
I want to cache a temporary table to reuse it.
Flink version 1.10.1.
The table is consumed from Kafka, with a structure like:
create table a (
field1 string,
field2 string,
field3 string,
field4 string
)
The sample code looks like:
val settings = EnvironmentSettings.newInstance().inStreamingMode()
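The code is cut off here; as a sketch of one way to reuse a derived table in 1.10 (assuming the Blink planner, the table a above already registered from Kafka, and made-up view and query names):

```scala
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.EnvironmentSettings
import org.apache.flink.table.api.scala.StreamTableEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
val settings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
val tEnv = StreamTableEnvironment.create(env, settings)

// Register the derived table under a name so later queries can refer to it
// instead of repeating the definition.
val filtered = tEnv.sqlQuery("SELECT field1, field2 FROM a WHERE field3 = 'x'")
tEnv.createTemporaryView("a_filtered", filtered)

val result = tEnv.sqlQuery("SELECT field1, COUNT(*) AS cnt FROM a_filtered GROUP BY field1")
```

Note that a temporary view only reuses the logical definition; it does not materialize or cache the Kafka data.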
Hi,
MemoryStateBackend and FsStateBackend both hold keyed state in a
HeapKeyedStateBackend [1], and the main structure for storing data is StateTable
[2], which holds objects in POJO format. That is to say, the object would not be
serialized when calling update().
On the other hand, the RocksDB statebackend
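A small sketch of what the answer above means in practice, using a made-up mutable object held in a ValueState inside a keyed function:

```scala
import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.KeyedProcessFunction
import org.apache.flink.util.Collector

// Hypothetical mutable object kept in state, purely for illustration.
class Counter(var n: Long) extends Serializable

class CountEvents extends KeyedProcessFunction[String, String, Long] {

  @transient private var counter: ValueState[Counter] = _

  override def open(parameters: Configuration): Unit = {
    counter = getRuntimeContext.getState(
      new ValueStateDescriptor[Counter]("counter", classOf[Counter]))
  }

  override def processElement(
      in: String,
      ctx: KeyedProcessFunction[String, String, Long]#Context,
      out: Collector[Long]): Unit = {
    val c = Option(counter.value()).getOrElse(new Counter(0L))
    c.n += 1
    // MemoryStateBackend/FsStateBackend keep the object on the heap, so value()/update()
    // do not serialize it; the RocksDB backend serializes on update() and deserializes on value().
    counter.update(c)
    out.collect(c.n)
  }
}
```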
Hi.
*Context*
I'm migrating my Flink SQL job to DataStream. When switching to
StreamingFileSink, I noticed that the part files no longer have a uuid in
them: "part-0-0" vs "part-{uuid string}-0-0". This is easy to add with
OutputFileConfig.
*Question*
Is there a reason why the base OutputFile
On Thu, Feb 04, 2021 at 04:26:42PM +0800, ChangZhuo Chen (陳昌倬) wrote:
> Hi,
>
> We have a problem connecting to the queryable state client proxy as described
> in [0]. Any help is appreciated.
>
> * Port 6125 is open in the taskmanager pod.
>
> ```
> root@-654b94754d-2vknh:/tmp# ss -tlp
> Stat
Using FsStateBackend.
I was under the impression that ValueState.value() will serialize an object
that is stored in the local state backend, copy the serialized object and
deserialize it. Likewise, update() would do the same steps, copying the object
back to the local state backend. And as a cons
It may also have something to do with my job's first tasks. The second
task has two inputs: one is the Kafka source stream (A), and the other is a
self-defined MySQL source used as a broadcast stream (B).
In A, I have a 'WatermarkReAssigner', a self-defined operator which adds an
offset to its input watermark and
The first problem is critical, since the savepoint does not work.
For the second problem, I changed the solution and removed the 'Map'-based
implementation before the data is passed to the second task, and in this
case the savepoint works. The only problem is that I have to stop the
job and remembe