I have created an issue [1] and a pull request to fix this. Hope we can
catch up with this release.
Best,
Jark
[1]: https://issues.apache.org/jira/browse/FLINK-18461
On Wed, 1 Jul 2020 at 18:16, Jingsong Li wrote:
> CC: @Jark Wu and @Timo Walther
>
> Best,
> Jingsong
>
> On Wed, Jul 1, 2020
Hi All,
I also have a scenario which needs a dynamic source and a dynamic sink to route streaming data to different Kafka topics. Is there any better way to do this at runtime?
Hi Danny,
Thanks for the response.
In short, without restarting we cannot add new sinks or sources.
For better understanding I will explain my problem more clearly.
My scenario is this: I have two topics; one is a configuration topic and the
second one carries event activities.
* In the configuration topic I wi
Maybe you can share the log and gc-log of the problematic TaskManager? See
if we can find any clue.
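If GC logging is not already enabled, it can be switched on via the JVM options for TaskManagers in flink-conf.yaml. A minimal sketch (the log path is a placeholder, and the flags shown are for JDK 8; on JDK 9+ use -Xlog:gc* instead):

```yaml
# Append GC logging flags to the TaskManager JVM options.
env.java.opts.taskmanager: "-XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/path/to/taskmanager-gc.log"
```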
Thank you~
Xintong Song
On Wed, Jul 1, 2020 at 8:11 PM Ori Popowski wrote:
> I've found out that sometimes one of my TaskManagers experiences a GC
> pause of 40-50 seconds and I have no idea w
Hi Prasanna,
There are Flink use cases in the US healthcare space; unfortunately, I do
not have any public references that I can provide.
Some important Flink features that are relevant when working in a field
that requires compliance:
- SSL:
https://ci.apache.org/projects/fli
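As a sketch, encryption for internal connectivity and the REST endpoint is enabled in flink-conf.yaml roughly like this (keystore/truststore paths and passwords are placeholders):

```yaml
# TLS for internal connectivity (RPC, data transport, blob service).
security.ssl.internal.enabled: true
security.ssl.internal.keystore: /path/to/flink.keystore
security.ssl.internal.keystore-password: example-password
security.ssl.internal.key-password: example-password
security.ssl.internal.truststore: /path/to/flink.truststore
security.ssl.internal.truststore-password: example-password
# TLS for the REST endpoint / web UI.
security.ssl.rest.enabled: true
```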
I've found out that sometimes one of my TaskManagers experiences a GC pause
of 40-50 seconds and I have no idea why.
I profiled one of the machines using JProfiler and everything looks fine:
no memory leaks, and memory usage is low.
However, I cannot anticipate which of the machines will get the 40-50
second
Sorry, a job graph is fixed once we compile it before submitting to the
cluster; it is not dynamic in the way you want.
You could write some wrapper operators which respond to your own RPCs to run
the appended operators you want,
but then you would have to maintain the consistency semantics yourself.
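For the Kafka-routing part specifically, choosing the target topic per record does not require changing the job graph: with the universal Kafka connector, a KafkaSerializationSchema decides the destination topic for each element. A minimal sketch, assuming a hypothetical Event type that carries its own target topic (topics must already exist or broker auto-creation must be enabled):

```java
import java.nio.charset.StandardCharsets;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

// Hypothetical event carrying its own destination topic.
class Event {
    String targetTopic;
    String payload;
}

// Routes each record to the topic named inside the record itself,
// so new topics can be targeted without restarting the job.
class RoutingSchema implements KafkaSerializationSchema<Event> {
    @Override
    public ProducerRecord<byte[], byte[]> serialize(Event element, Long timestamp) {
        return new ProducerRecord<>(
                element.targetTopic,
                element.payload.getBytes(StandardCharsets.UTF_8));
    }
}
```

A FlinkKafkaProducer constructed with such a schema then writes to whichever topic each record names.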
Best,
D
CC: @Jark Wu and @Timo Walther
Best,
Jingsong
On Wed, Jul 1, 2020 at 5:55 PM wangl...@geekplus.com.cn <
wangl...@geekplus.com.cn> wrote:
> CREATE TABLE t_pick_order (
> order_no VARCHAR,
> status INT
> ) WITH (
> 'connector' = 'kafka',
> 'topic' = 'example',
> '
CREATE TABLE t_pick_order (
  order_no VARCHAR,
  status INT
) WITH (
  'connector' = 'kafka',
  'topic' = 'example',
  'scan.startup.mode' = 'latest-offset',
  'properties.bootstrap.servers' = '172.19.78.32:9092',
  'format' = 'canal-json'
)
CREATE TABLE order_stat
If you are using the new 1.11 changelog format, I think it will retract the
old value from the old partition correctly.
If not (I assume you are using an append-only changelog), I think it won't
retract the old value.
On Wed, 1 Jul 2020 at 14:39, lec ssmi wrote:
> The old value is already counted in a partition, and when the a
Hi Lorenzo,
what you could try to do is to derive your own InputFormat (extending
FileInputFormat) where you set the field `unsplittable` to true. That way,
an InputSplit is the whole file and you can handle the set of new rules as
a single record.
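A minimal sketch of such a subclass, under the assumption that each file fits in memory (the class name and byte[] record type are illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.flink.api.common.io.FileInputFormat;
import org.apache.flink.core.fs.FileInputSplit;

// Emits each file as exactly one byte[] record.
public class WholeFileInputFormat extends FileInputFormat<byte[]> {
    private boolean end;

    public WholeFileInputFormat() {
        // One InputSplit per file: the file is never split across readers.
        this.unsplittable = true;
    }

    @Override
    public void open(FileInputSplit split) throws IOException {
        super.open(split);
        end = false;
    }

    @Override
    public boolean reachedEnd() {
        return end;
    }

    @Override
    public byte[] nextRecord(byte[] reuse) throws IOException {
        // Drain the stream so the whole file becomes a single record.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = stream.read(buf)) >= 0) {
            out.write(buf, 0, n);
        }
        end = true;
        return out.toByteArray();
    }
}
```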
Cheers,
Till
On Mon, Jun 29, 2020 at 3:52 PM Lo
Hi Georg,
I'm pulling in Aljoscha who might know more about the problem you are
describing.
Cheers,
Till
On Mon, Jun 29, 2020 at 10:21 PM Georg Heiler
wrote:
> Older versions of Flink were incompatible with the Scala-specific record
> classes generated by AvroHugger.
>
> https://issues.apach