Hi Marco,
the network buffer pool is destroyed when the task manager is shut down.
Could you check if you have an error before that in your log?
It seems like the timer is triggered at a point where it shouldn't. I'll
check if there is a known issue that has been fixed in later versions. Do
you ha
Hi,
I'm confused about communication between slots in the same TaskManager.
Assume only one job, running on a per-job cluster with parallelism = 6, and
each TaskManager with 3 slots.
There are 6 slots:
slot 1-1, slot 1-2, slot 1-3,
slot 2-1, slot 2-2, slot 2-3
Assume the job has a 'KeyBy' operator, thus,
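For illustration, a minimal sketch of the setup I mean (the event type and
key are made up; the 3 slots per TaskManager come from
taskmanager.numberOfTaskSlots in flink-conf.yaml):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KeyByParallelismDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // 6 parallel subtasks; with 3 slots per TaskManager they fill 2 TMs.
        env.setParallelism(6);

        env.fromElements("a", "b", "c", "a", "b")
                // Records with the same key always go to the same subtask
                // (slot), possibly on another TaskManager.
                .keyBy(value -> value)
                .map(value -> value.toUpperCase())
                .print();

        env.execute("keyby-parallelism-demo");
    }
}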
I have a Flink job that collects and aggregates time-series data from many
devices into one object (let's call it X) that is assembled by a window.
X contains time-series data, so it holds many String and Instant objects, a
HashMap, and objects of another type (let's call it Y).
When I collect 4 X instances
Hi. What causes a buffer pool exception? How can I mitigate it? It is
causing us plenty of problems right now.
2021-01-26 04:14:33,041 INFO
org.apache.flink.streaming.api.functions.sink.filesystem.Buckets [] -
Subtask 1 received completion notification for checkpoint with id=4.
2021-01-26 04:14:
Hi, Nick
I do not think you can update `myState` in `processBroadcastElement`,
because you need a key to update keyed state, and there is no key in
`processBroadcastElement`.
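To illustrate the point, a rough sketch (types simplified and names made up,
not your actual code):

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class RuleFunction
        extends KeyedBroadcastProcessFunction<String, String, String, String> {

    private final MapStateDescriptor<String, String> rulesDesc =
            new MapStateDescriptor<>("rules", Types.STRING, Types.STRING);

    // Keyed state: only usable where a current key is in scope.
    private transient ValueState<String> myState;

    @Override
    public void open(Configuration parameters) {
        myState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("myState", Types.STRING));
    }

    @Override
    public void processElement(String value, ReadOnlyContext ctx,
                               Collector<String> out) throws Exception {
        // OK: this element carries a key, so the keyed state can be updated.
        myState.update(value);
    }

    @Override
    public void processBroadcastElement(String rule, Context ctx,
                                        Collector<String> out) throws Exception {
        // Not OK: myState.update(rule) would fail here because there is no
        // current key. Only the broadcast state can be written:
        ctx.getBroadcastState(rulesDesc).put(rule, rule);
    }
}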
Best,
Guowei
On Tue, Jan 26, 2021 at 6:31 AM Nick Bendtner wrote:
> Hi Guowei,
> I am not usi
Thanks, Matthias!
I tried your suggestion and it does work.
After installing kafka-avro-serializer with its pom, I got some more errors about
io.confluent:kafka-schema-registry-parent:pom:5.5.2 and
io.confluent:rest-utils-parent:pom:5.5.2 and so on. After manually installing
all these dependencies wi
Hi Guowei,
I am not using a keyed broadcast function, I use [1]. My question is: can
non-broadcast state, for instance value state/map state, be updated
whenever I get a broadcast event in *processBroadcastElement*? This way the
state updates are consistent since each instance of the task gets th
Hello,
in our setup we have:
- Flink 1.11.2
- job submission via the REST API (first we upload the jar, then we submit
multiple jobs with it)
- additional jars embedded in the lib directory of the main jar (this is the
crucial part)
When we submit jobs this way, Flink creates new temp jar files via
Packaged
Hi,
What is the difference between table.exec.source.idle-timeout and
setIdleStateRetentionTime?
table.exec.source.idle-timeout:
https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/config.html#table-exec-source-idle-timeout
setIdleStateRetentionTime:
https://ci.apache.or
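For context, this is how I understand each one is set; a minimal sketch
(values arbitrary):

import org.apache.flink.api.common.time.Time;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ConfigDemo {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // How long a source may be silent before it is marked idle, so that
        // watermarks can still advance.
        tEnv.getConfig().getConfiguration()
                .setString("table.exec.source.idle-timeout", "30 s");

        // How long state for inactive keys is retained by stateful SQL
        // operators before it is cleaned up (min, max).
        tEnv.getConfig().setIdleStateRetentionTime(
                Time.hours(12), Time.hours(24));
    }
}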
Dear Community,
More precise feedback on the points raised would be highly appreciated!
@Niels: thanks for this first statement
Thanks a lot !
On Sun, Jan 24, 2021 at 7:37 PM Niels Basjes wrote:
> Hi,
>
> I haven't tried it myself yet but there is a Flink connector for HBase and
> I remembe
Hi Robert and Yun Tang,
Robert: Sounds good.
Yun: Thanks for the info about the custom serializer. I ended up
hard-coding the fields, leaving out the ones we did not want to use in the
keyBy.
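By that I mean something like this rough sketch (field names made up):

import org.apache.flink.api.java.functions.KeySelector;

// stream is a DataStream<Measurement>; Measurement and its getters are
// placeholder names. We key on an explicit subset of fields instead of
// relying on the POJO's full field set:
stream.keyBy(new KeySelector<Measurement, String>() {
    @Override
    public String getKey(Measurement m) {
        return m.getDeviceId() + "|" + m.getSensorType();
    }
});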
Thanks,
/David
From: Yun Tang
Date: Friday, 22 January 2021 at 04:52
To: Robert Metzger , David Haglund
Cc: use
Hi,
Flink 1.12.1
Scala 2.12.12
While attempting to fix a serialization bug I previously wrote about, I
temporarily disabled projection pushdown for my custom source
implementation. I then ran the application, only to encounter a
ClassCastException which, after some debugging, turned out to be caused by
Thanks for reaching out. Semi-asynchronous does *not* refer to incremental
checkpoints, and savepoints are always triggered as full snapshots (not
incremental).
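(For completeness: incremental snapshots are opted into when creating the
RocksDB backend; a minimal sketch with a made-up checkpoint path:)

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // The boolean flag enables incremental checkpoints; savepoints are
        // still always full snapshots.
        env.setStateBackend(
                new RocksDBStateBackend("hdfs:///flink/checkpoints", true));
    }
}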
Earlier versions of the RocksDB state backend supported two snapshotting
modes, fully asynchronous and semi-asynchronous snapshots. Semi-asynchronou
Hi, Falak
>>> Now if I try to query this bucket (state) using queryable state, then
I don't get the results.
AFAIK, Flink does not have a way to let users query the state of the
`WindowOperator`. That would require exposing the window operator's
internal implementation, which might be difficult to maintai
In the current Flink version, the OVERWRITE keyword has to be added to
every INSERT INTO statement. It is not part of the connector anymore. Maybe we
can introduce an option in the future to define the default connector
behavior (feel free to open an issue for this if you think this is
required).
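Concretely, something like this (table names made up):

// tableEnv is an existing TableEnvironment; OVERWRITE is part of each
// individual statement rather than of the sink definition:
tableEnv.executeSql(
        "INSERT OVERWRITE csv_sink SELECT id, name FROM source_table");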
How
Great! Thanks for the detailed answer Timo! I think I'll wait for the
migration to finish before updating my code.
However, does using a catalog also solve the problem of CSV overwrite? I
can't find a way to use INSERT OVERWRITE with a CSV sink using
executeSql.
Best,
Flavio
On Mon, J
Hi Flavio,
FLIP-129 will update the connect() API with a programmatic way of
defining tables. In the API we currently only support the DDL via
executeSql.
I would recommend implementing the Catalog interface. This interface has
a lot of methods, but you only need to implement a couple of met
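Once it is implemented, registering it is just a couple of calls (MyCatalog
being your hypothetical implementation):

import org.apache.flink.table.catalog.Catalog;

// MyCatalog is a placeholder for your own class implementing
// org.apache.flink.table.catalog.Catalog.
Catalog catalog = new MyCatalog("my_catalog", "default_database");
tableEnv.registerCatalog("my_catalog", catalog);
tableEnv.useCatalog("my_catalog");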
Any advice on how to fix those problems?
Best,
Flavio
On Thu, Jan 21, 2021 at 4:03 PM Flavio Pompermaier
wrote:
> Hello everybody,
> I was trying to get rid of the deprecation warnings about
> using BatchTableEnvironment.registerTableSink() but I don't know how to
> proceed.
>
> My current code
Hi Smile,
you missed installing the pom provided by mvnrepository.org [1]. Maven will
install a basic pom if none is provided [2]. This basic pom file will not
include any dependencies. You should be able to fix your problem by running
your command above but adding the -DpomFile property with the p
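The full command would then look roughly like this (file names assumed from
the versions mentioned in this thread):

mvn install:install-file \
    -Dfile=kafka-avro-serializer-5.5.2.jar \
    -DpomFile=kafka-avro-serializer-5.5.2.pom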