Hi Benchao,
Thanks for the input.
The code is self-explanatory.
Best,
Dongwon
On Thu, Dec 10, 2020 at 12:20 PM Benchao Li wrote:
> Hi Dongwon,
>
> I think you understand it correctly.
> You can find this logic here[1]
>
> [1]
> https://github.com/apache/flink/blob/master/flink-streaming-jav
One of the Exception instances finally reported a stack trace. I'm not sure
why it's so infrequent.
java.lang.NullPointerException: null
    at org.apache.flink.table.data.GenericRowData.getLong(GenericRowData.java:154) ~[flink-table-blink_2.12-1.11.1.jar:1.11.1]
    at org.apache.flink.
Maybe FLINK-16626 [1] is related. It was fixed in 1.10.1 and 1.11.
[1]. https://issues.apache.org/jira/browse/FLINK-16626
Best,
Yang
On Thu, Dec 10, 2020 at 11:06 AM, Yun Tang wrote:
> Hi Suchithra,
>
> Have you checked the job manager log to see whether the savepoint was
> triggered and why the savepoint
Could you use 4 scalar functions instead of the UDTF and map function? For
example:
select *, hasOrange(fruits), hasBanana(fruits), hasApple(fruits),
hasWatermelon(fruits)
from T;
I think this can preserve the primary key.
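A minimal sketch of one such function (class and registration names
hypothetical), assuming fruits is a STRING column; adjust eval to your
actual element type:

import org.apache.flink.table.functions.ScalarFunction;

public class HasOrange extends ScalarFunction {
    // true when the fruits value mentions "orange"; swap in your real check
    public Boolean eval(String fruits) {
        return fruits != null && fruits.contains("orange");
    }
}

// assuming a StreamTableEnvironment named tableEnv (Flink 1.11+):
tableEnv.createTemporarySystemFunction("hasOrange", HasOrange.class);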
Best,
Jark
On Thu, 3 Dec 2020 at 15:28, Rex Fenley wrote:
> It appears tha
Hi Bastien,
I think you could refer to WritableSavepoint#write [1] to get all existing
state and flat-map it to remove the state you do not want (see also
StatePathExtractor [2]).
[1]
https://github.com/apache/flink/blob/168124f99c75e873adc81437c700f85f703e2248/flink-libraries/flink-state-p
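To make the write path concrete, a minimal sketch (paths and the uid are
hypothetical; this drops a whole operator's state, while removing a single
state inside an operator needs the read-and-flat-map round trip mentioned
above):

import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.ExistingSavepoint;
import org.apache.flink.state.api.Savepoint;

// with flink-state-processor-api on the classpath
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
ExistingSavepoint existing =
    Savepoint.load(env, "s3://bucket/savepoints/savepoint-abc", new MemoryStateBackend());

// drop everything owned by one operator uid and write a new savepoint
existing
    .removeOperator("my-operator-uid")
    .write("s3://bucket/savepoints/savepoint-abc-trimmed");

env.execute("trim-savepoint");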
Hi Dongwon,
I think you understand it correctly.
You can find this logic here[1]
[1]
https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/streamstatus/StatusWatermarkValve.java#L108
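Heavily simplified, the valve does something like this (an illustration only,
not the actual StatusWatermarkValve code):

class MinWatermarkIllustration {
    // latest watermark seen per input channel, e.g. B_A1 and B_A2
    private final long[] channelWatermarks = {Long.MIN_VALUE, Long.MIN_VALUE};
    private long lastEmitted = Long.MIN_VALUE;

    void onWatermark(int channel, long watermark) {
        // per-channel watermarks only move forward
        channelWatermarks[channel] = Math.max(channelWatermarks[channel], watermark);
        // the operator's event-time clock is the minimum across all channels
        long newMin = Math.min(channelWatermarks[0], channelWatermarks[1]);
        if (newMin > lastEmitted) {
            lastEmitted = newMin;
            System.out.println("event-time clock advances to " + newMin);
        }
    }
}

For your scenario (B_A1 at 12, B_A2 at 10), the clock advances to
min(12, 10) = 10.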
On Thu, Dec 10, 2020 at 12:21 AM, Dongwon Kim wrote:
> Hi,
>
> Let
Hi Suchithra,
Have you checked the job manager log to see whether the savepoint was
triggered and why it failed to complete.
Best
Yun Tang
From: V N, Suchithra (Nokia - IN/Bangalore)
Sent: Wednesday, December 9, 2020 23:45
To: user@flink.apache.org
S
Hi, abelm ~
Which Flink version did you use? We did some refactoring of the connector
options in Flink 1.11. The METADATA syntax is only supported since
version 1.12.
In 1.11, to ignore parse errors, you need to use the option
'json.ignore-parse-errors' [1]
[1]
https://ci.apache.org/projects/
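In 1.11, with the new factory options, a minimal sketch looks like this
(table, topic, and server names are hypothetical):

// assuming a StreamTableEnvironment named tableEnv
tableEnv.executeSql(
    "CREATE TABLE my_source (" +
    "  id BIGINT," +
    "  payload STRING" +
    ") WITH (" +
    "  'connector' = 'kafka'," +
    "  'topic' = 'my-topic'," +
    "  'properties.bootstrap.servers' = 'localhost:9092'," +
    "  'format' = 'json'," +
    // skip rows that cannot be parsed instead of failing the job
    "  'json.ignore-parse-errors' = 'true'" +
    ")");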
No, the group agg, stream-stream join, and rank are all stateful
operators which need a state backend to bookkeep the acc values.
But a stateful operator A is only required to emit the retractions when it
has a downstream operator B that is also stateful, because B needs
the retractions to co
In the Flink dashboard, my job is failing with a NullPointerException but
the Exception is not showing a stack trace. I do not see any
NullPointerExceptions in any of the flink-jobmanager and flink-taskmanager
logs.
Is this a normal issue?
[image: Screen Shot 2020-12-09 at 4.29.30 PM.png]
Sure thanks Flavio, will check it out
On Wed, Dec 9, 2020, 16:20 Flavio Pompermaier wrote:
> I issued a PR some time ago at https://github.com/apache/flink/pull/12038 but
> Flink committers were busy refactoring that part.. I don't know if it is
> still required to have that part in the jdbc
I'm trying to figure out a way to make the Flink jobmanager (in HA) connect to
ZooKeeper over SSL/TLS. It doesn't seem like there are native properties
for this yet, the way Kafka has. Is this true, or is there
some way I can go about doing this?
Hi Anuj,
SIGTERM with SIGNAL 15 means that it was killed by an external process.
Look into the Yarn logs for the specific error.
Usually, Yarn kills a container with exit code 143 when it goes over its memory
boundaries. This is something the community constantly improves, but it may
still happen
So from what I'm understanding, the aggregate itself is not a "stateful
operator" but one may follow it? How does the aggregate accumulator keep
old values then? It can't all just live in memory. Actually, looking at the
savepoints, it looks like there's state associated with our aggregate
operator.
Hi,
Is there a way to reduce the cardinality of (pre-aggregate) the metrics that
are emitted to the Prometheus push gateway?
Our metrics infra is struggling to digest per-task stats. Is there any way we
can configure it to emit per-stage aggregates?
Our current config:
metrics.scope.tm flink.taskmanager
metrics.scope.operator
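For reference, the scope formats are the knob that controls these
identifiers; a sketch (the default operator scope also contains <tm_id> and
<subtask_index>; dropping variables shortens the identifier but makes series
from different subtasks collide rather than aggregate, since Flink does not
pre-aggregate values itself):

metrics.scope.operator <host>.taskmanager.<job_name>.<operator_name>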
I have a Flink stream job that reads data from Kafka and writes it to S3.
This job keeps failing after running for 2-3 days.
I am not able to find anything in the logs about why it's failing. Can
somebody help me find out the cause of the failure?
I can only see this in logs :
org.apache.flink.streamin
Hello! I have a Scala 2.12 project which registers some tables (that get
their data from Kafka in JSON form) to the StreamTableEnvironment via the
executeSql command before calling execute on the StreamExecutionEnvironment.
Everything behaves as expected until I either try to set
'format.ignore-p
Hi Flink Community,
We use Flink SQL to calculate some metrics. In our SQL, we use window
aggregation and we want to trigger results earlier with different trigger
strategies.
So we get the window operators from the transformations and set the triggers
via reflection.
This worked in Flink 1.7.
Hi,
Let's consider two operators: A (parallelism=2) and B (parallelism=1).
B has two input partitions, B_A1 and B_A2, which are connected to A1 and A2
respectively.
At some point,
- B_A1's watermark : 12
- B_A2's watermark : 10
- B's event-time clock : 10 = min(12, 10)
- B has registered a timer
Thank you for the help.
--
Thanks,
Hongjian Peng
At 2020-11-30 16:16:48, "Dawid Wysakowicz" wrote:
Hi,
I managed to backport the change to the 1.11 branch. It should be part of the
1.11.3 release.
Best,
Dawid
On 25/11/2020 16:23, Hongjian Peng wrote:
Thanks for Danny and
Hello,
I am running a streaming Flink job and I was using the cancel command with a
savepoint to cancel the job. Since Flink 1.10, the stop command should be used
instead of the cancel command.
But I sometimes get the error below. Please let me know what the issue
might be.
{"host":"cancel1-flinkcli
But when I monitor my database, it only shows one query. It does not show a
query every 20 seconds. It seems that since the cache is 10 times larger than
the number of records, the records are kept even though they have expired.
> On Dec 9, 2020, at 4:05 AM, Danny Chan wrote:
>
> Yes, you understand it correctly.
>
> M
Hi everyone,
The Flink 1.12 AMA is happening today at 10 am Pacific Time/ 1 pm Eastern
Time/ 7 pm Central European Time. Tune in directly at
https://youtu.be/u8jPgXoNDXA for a discussion on the upcoming release and
new features with Aljoscha Krettek, Stephan Ewen, Arvid Heise, Robert
Metzger, and
Your approach looks rather good to me.
In the version that queries for the JobStatus, you must remember that
there are states such as INITIALIZING, which just tells you that
the job was submitted.
In 1.12 we introduced the TableResult#await method, which is a shortcut
over what you did in th
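A minimal 1.12 sketch of that shortcut (sink and source names hypothetical):

import org.apache.flink.table.api.TableResult;

// assuming a TableEnvironment named tableEnv
TableResult result =
    tableEnv.executeSql("INSERT INTO my_sink SELECT id, amount FROM my_source");
// blocks until the inserting job reaches a terminal state
result.await();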
Hi Marta,
Do you mean you want to emit results every 5 minutes based on the wall
time (processing time)? If so, you should use the
ContinuousProcessingTimeTrigger instead of the ContinuousEventTimeTrigger,
which emits results based on the event time.
Does that solve your problem?
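A minimal sketch of plugging the trigger in (window size, interval, and the
input stream are hypothetical; assumes a DataStream<Tuple2<String, Long>>
named input):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.triggers.ContinuousProcessingTimeTrigger;

input
    .keyBy(value -> value.f0)
    .window(TumblingEventTimeWindows.of(Time.hours(1)))
    // fire every 5 minutes of wall-clock time instead of waiting for the watermark
    .trigger(ContinuousProcessingTimeTrigger.of(Time.minutes(5)))
    .reduce((a, b) -> b); // placeholder: keep the latest element per window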
Best,
Dawid
On
Hello Yun,
Thank you very much for your response, that's what I thought.
However, it does not seem possible to remove only one state using the State
Processor API. We use it a lot, and we can only remove all of an operator's
states, not one specifically.
Am I missing something?
Best Regards,
Bastie
Thanks for the information.
Are there any plans to implement this? It is supported on other docker
images...
On Tue, 8 Dec 2020 at 9:36 PM, Fabian Paul
wrote:
> Hi Narasimha,
>
> I investigated your problem and it is caused by multiple issues. First, vvp in
> general cannot really handle multi
The "true" means the message is an insert/update after, the "false" means
the message is a retraction (for the old record that needs to be modified).
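That flag is the Boolean in the pairs you get back from toRetractStream, e.g.
(a sketch; assumes a StreamTableEnvironment named tableEnv and a Table named
table):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.types.Row;

DataStream<Tuple2<Boolean, Row>> retracts =
    tableEnv.toRetractStream(table, Row.class);
// prints e.g. (true,1,diaper,4) for an insert/update-after and
// (false,1,diaper,3) when that old record is later retracted
retracts.print();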
On Wed, Dec 9, 2020 at 12:20 PM, Appleyuchi wrote:
>
> The complete code is:
> https://paste.ubuntu.com/p/hpWB87kT6P/
>
> The result is:
> 2> (true,1,diaper,4)
>
Hi, Rex Fenley ~
If there is a stateful operator consuming the output of the aggregate
function, then each time the function receives an update (or delete) for the
key, the agg operator would emit 2 messages, one retracting the old record and
one for the new message. For your case, the new message is the
Yes, you understand it correctly.
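For reference, a sketch of how those cache options fit into a lookup join
(all names hypothetical); within the TTL, repeated lookups for the same key
are served from the cache and never reach the database:

// assuming a StreamTableEnvironment named tableEnv
tableEnv.executeSql(
    "CREATE TABLE dim (" +
    "  id BIGINT," +
    "  name STRING" +
    ") WITH (" +
    "  'connector' = 'jdbc'," +
    "  'url' = 'jdbc:postgresql://localhost:5432/db'," +
    "  'table-name' = 'dim'," +
    "  'lookup.cache.max-rows' = '20'," +
    "  'lookup.cache.ttl' = '1min'" +
    ")");

// lookup join keyed on the fact stream's processing-time attribute
tableEnv.executeSql(
    "SELECT f.id, d.name " +
    "FROM facts AS f " +
    "JOIN dim FOR SYSTEM_TIME AS OF f.proc_time AS d " +
    "ON f.id = d.id");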
On Wed, Dec 9, 2020 at 4:23 AM, Marco Villalobos wrote:
> I set up the following lookup cache values:
>
> 'lookup.cache.max-rows' = '20'
> 'lookup.cache.ttl' = '1min'
>
> for a jdbc connector.
>
> This table currently only has about 2 records in it. However,
> since I
Hi, Marco Villalobos ~
It's nice to see that you chose the SQL API, which is more concise and
expressive.
To answer some of your questions:
> Q: Is there a way to control that? I don't want the N + 1 query problem.
No, the SQL evaluates row by row; there may be some internal optimizations
that bu
Hi,
Your code does not show how the Table of `Orders` is created. For how to
specify the time attribute in DDL, you can refer to the official
documentation [1].
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/sql/create.html
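A minimal sketch of such a DDL with an event-time attribute (table, columns,
and connector are hypothetical):

// assuming a StreamTableEnvironment named tableEnv
tableEnv.executeSql(
    "CREATE TABLE Orders (" +
    "  user_id BIGINT," +
    "  amount DOUBLE," +
    "  ts TIMESTAMP(3)," +
    // declares ts as the event-time attribute, allowing 5s of out-of-orderness
    "  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND" +
    ") WITH (" +
    "  'connector' = 'datagen'" +
    ")");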
Best,
Xingbo
On Wed, Dec 9, 2020, Appleyuchi
I issued a PR some time ago at https://github.com/apache/flink/pull/12038 but
Flink committers were busy refactoring that part.. I don't know if it is
still required to have that part in the jdbc connector Flink code, or if,
using the new factories (that use the Java services), you could register
Thanks Roman, I'll look into how I go about doing that.
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Hi,
At first glance I cannot find anything wrong with those settings. If
it was some memory configuration problem that caused this error, I guess it
would be visible as an exception somewhere. It's unlikely a GC issue, as if
some machine froze and stopped responding for a longer period of tim