Hi,
We have a Flink 1.11.0 job running on YARN that reached the FAILED state
because its JobManager lost leadership during a ZooKeeper full GC. After the
ZooKeeper connection recovered, the job was somehow reinitiated, but with no
checkpoints found in ZooKeeper, and hence used an earlier savepoint to restore
I'm learning executeInsert from the documentation.
My code is:
https://paste.ubuntu.com/p/d2TDdcy7GB/
I guess the function createTemporaryView can create the table needed by the
function executeInsert, but I got:
No table was registered under the name
`default_catalog`.`default_database`.`OutOrders`.
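(For illustration, a minimal sketch of that combination, assuming a print sink
and an Order POJO that are not in the original paste: executeInsert() can only
target a table that is registered in the catalog as a sink, e.g. via
executeSql(), while createTemporaryView() only registers a readable view over
a DataStream.)

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class ExecuteInsertSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        DataStream<Order> orders =
            env.fromElements(new Order(1L, "beer"), new Order(2L, "diaper"));

        // Registers a (read-only) view over the stream; it can be queried, not inserted into.
        tEnv.createTemporaryView("Orders", orders);

        // Registers a real sink table; only after this can executeInsert("OutOrders") resolve it.
        tEnv.executeSql(
            "CREATE TABLE OutOrders (id BIGINT, product STRING) WITH ('connector' = 'print')");

        Table result = tEnv.sqlQuery("SELECT id, product FROM Orders");
        result.executeInsert("OutOrders");
    }

    /** Simple POJO used only for this sketch. */
    public static class Order {
        public Long id;
        public String product;

        public Order() {}

        public Order(Long id, String product) {
            this.id = id;
            this.product = product;
        }
    }
}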
@Piotr Nowojski @Nico Kruber
An update.
I was able to figure out the problem code. A change in the Apache Beam code
is causing this problem.
Beam introduced a lock on the "emit" in the Unbounded Source. The lock is
taken on Flink's checkpoint lock. Now the same lock is used by Flink's timer
service
Hi,
Have you dropped or renamed any operator from the original job? If yes, and
you are okay with discarding the state of that operator, you can submit the
job with --allowNonRestoredState or -n.
https://ci.apache.org/projects/flink/flink-docs-stable/ops/state/savepoints.html#allowing-non-restored
Hi Timo,
I figured it out, thanks a lot for your help.
Are there any articles detailing the pre-flight and cluster phases? I
couldn't find anything on ci.apache.org/projects/flink and I think this
behaviour should be documented as a warning/note.
On Thu, Oct 22, 2020 at 6:44 PM Timo Walther wrote:
Hi,
We are trying to save checkpoints for one of the Flink jobs running on Flink
1.9 and tried to resume the same job on Flink 1.11.2. We are getting the
below error when trying to restore the saved checkpoint in the newer Flink
version. Can
Cannot map checkpoint/savepoint
Hi Dan,
I tried with the PR you pointed out and cannot reproduce the problem, so it
should not be related to the PR changes.
I'm running Maven 3.2.5, which is the same version that we use for running
CI tests on AZP for PRs. Your Maven log suggests the Maven version on your
machine is 3.6.3.
I think the issue is that you have to specify a *time* interval for "step".
It would be nice to be able to consider the preceding N minutes as of every
message; you can somewhat approximate that using a very small step.
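(For illustration, a hedged DataStream API sketch with invented tuple input:
a 10-minute window that slides every second is close to "the preceding 10
minutes as of every message", at the cost of many overlapping window panes.)

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.SlidingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class SmallStepSlidingWindowSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Invented input: (key, amount) pairs; the amount (in seconds) doubles as event time.
        DataStream<Tuple2<String, Long>> input = env
            .fromElements(Tuple2.of("a", 1L), Tuple2.of("a", 2L), Tuple2.of("b", 3L))
            .assignTimestampsAndWatermarks(
                WatermarkStrategy.<Tuple2<String, Long>>forMonotonousTimestamps()
                    .withTimestampAssigner((t, ts) -> t.f1 * 1000L));

        input
            .keyBy(new KeySelector<Tuple2<String, Long>, String>() {
                @Override
                public String getKey(Tuple2<String, Long> t) {
                    return t.f0;
                }
            })
            // 10-minute window sliding every second ~= preceding 10 minutes per message
            .window(SlidingEventTimeWindows.of(Time.minutes(10), Time.seconds(1)))
            .sum(1)
            .print();

        env.execute("small-step sliding window sketch");
    }
}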
On Thu, Oct 22, 2020 at 2:29 AM Danny Chan wrote:
> The SLIDING window always triggers as of
Any thoughts? This doesn't seem to create duplicates all the time, or maybe
it's unrelated, as we are still seeing the message and there are no
duplicates...
On Wed., Oct. 21, 2020, 12:09 p.m. John Smith,
wrote:
> And yes my downstream is handling the duplicates in an idempotent way so
> we are good
Hey Roman,
Sorry to miss this -- thanks for the confirmation and making the ticket.
I'm happy to propose a fix if someone is able to assign the ticket to me.
Best,
Austin
On Mon, Oct 19, 2020 at 6:56 AM Khachatryan Roman <
khachatryan.ro...@gmail.com> wrote:
> Hey Austin,
>
> I think you are ri
Danny,
Thanks! I have created a new JIRA issue [1]. I'll look into how hard it is to
put together a patch and unit test myself, although I may need a hand with the
process of making a change to both the master branch and a release branch if it
is desired to get a fix into 1.11.
Regards,
Dylan Forciea
[1
Hi,
sorry for the late reply. I think the problem was in
`tEnv.toAppendStream(result, Order.class).print();`, right?
You can also find a new example here:
https://github.com/apache/flink/blob/master/flink-examples/flink-examples-table/src/main/java/org/apache/flink/table/examples/java/basics/Gett
Hi Manas,
you can use a static variable, but you need to make sure that the logic that
fills the static variable is accessible and executed in all JVMs.
I assume `pipeline.properties` is in the JAR that you submit to the
cluster, right? Then you should be able to access it through a singleton
pattern
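(A minimal sketch of such a singleton, assuming `pipeline.properties` really is
on the classpath of every JVM; the Config class shown here is illustrative, not
the actual code.)

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Every JVM that touches Config loads the file itself, so the values are
// available on the client during pre-flight as well as on every TaskManager.
public final class Config {

    private static final Properties PROPS = load();

    private Config() {}

    private static Properties load() {
        Properties props = new Properties();
        try (InputStream is =
                 Config.class.getClassLoader().getResourceAsStream("pipeline.properties")) {
            if (is != null) {
                props.load(is);
            }
        } catch (IOException e) {
            throw new IllegalStateException("Could not read pipeline.properties", e);
        }
        return props;
    }

    public static String get(String key) {
        return PROPS.getProperty(key);
    }
}

Operator code would then call Config.get("some.key") from open() or
processElement(), i.e. on the cluster side, instead of capturing the values
during the pre-flight phase.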
From my point of view, the value of the NOW() function in SQL is fixed at the
time when the streaming app is launched and will not change with the processing
time. However, as a new Flink user, I'm not so sure of that. By the way, if
my attempt is to keep the time logic updating all the time, what shou
Sorry, I messed up the code snippet in the earlier mail. The correct one is:

public static void main(String[] args) {
    Properties prop = new Properties();
    InputStream is =
        Config.class.getClassLoader().getResourceAsStream("pipeline.properties");
    prop.load(is);
    HashMap strMap = new HashMap
Hi Timo,
Thank you for the explanation, I can start to see why I was getting an
exception.
Are you saying that I cannot use static variables at all when trying to
deploy to a cluster? I would like the variables to remain static and not be
instance-bound as they are accessed from multiple classes.
B
try naming it PythonProgramOptionsITCase; it apparently needs a jar to
be created first, which happens after unit tests (tests suffixed with
Test) are executed.
On 10/22/2020 1:48 PM, Juha Mynttinen wrote:
Hello there,
The PR https://github.com/apache/flink/pull/13322 recently added the
test m
Hello there,
The PR https://github.com/apache/flink/pull/13322 recently added the test
method testConfigurePythonExecution in
org.apache.flink.client.cli.PythonProgramOptionsTest.
"mvn clean verify" fails for me in testConfigurePythonExecution:
...
[INFO] Running org.apache.flink.client.cli.Pytho
Hello,
We are using Apache Flink version 1.11.1. During our security scans, the
following issues are reported by our scan tool.
1. Package: commons_codec-1.10
Severity: Medium
Description:
Apache Commons contains a flaw that is due to the Base32 codec decoding invalid
strings instead of rejecting
Hi,
I'm maintaining the Flink sink connector in the Apache Iceberg community.
Does your classpath include the correct iceberg-flink-runtime.jar?
Please follow the steps here:
https://github.com/apache/iceberg/blob/master/site/docs/flink.md
Thanks.
On Wed, Oct 21, 2020 at 10:54 PM 18717838093 <187
Yes, the current code throws directly for NULLs. Can you log an issue for
that?
On Wed, Oct 21, 2020 at 4:30 AM Dylan Forciea wrote:
> I believe I am getting an error because I have a nullable postgres array
> of text that is set to NULL that I’m reading using the JDBC SQL Connector.
> Is this something that sh
The SLIDING window always triggers as of each step; what do you mean by
"stepless"?
On Wed, Oct 21, 2020 at 1:52 AM Alex Cruise wrote:
> whoops.. as usual, posting led me to find some answers myself. Does this
> make sense given my requirements?
>
> Thanks!
>
> private class MyWindowAssigner(val windowSize:
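(For context, a hypothetical Java sketch of a per-element assigner along the
lines of what "stepless" presumably means here; this is not the actual
MyWindowAssigner from the quoted code.)

import java.util.Collection;
import java.util.Collections;
import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.WindowAssigner;
import org.apache.flink.streaming.api.windowing.triggers.EventTimeTrigger;
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

// Each element gets its own window covering the preceding windowSizeMs
// milliseconds up to and including its own timestamp.
public class PrecedingIntervalWindowAssigner extends WindowAssigner<Object, TimeWindow> {

    private final long windowSizeMs;

    public PrecedingIntervalWindowAssigner(long windowSizeMs) {
        this.windowSizeMs = windowSizeMs;
    }

    @Override
    public Collection<TimeWindow> assignWindows(
            Object element, long timestamp, WindowAssignerContext context) {
        // TimeWindow end is exclusive, so timestamp + 1 includes the element itself.
        return Collections.singletonList(
            new TimeWindow(timestamp - windowSizeMs, timestamp + 1));
    }

    @Override
    public Trigger<Object, TimeWindow> getDefaultTrigger(StreamExecutionEnvironment env) {
        return EventTimeTrigger.create();
    }

    @Override
    public TypeSerializer<TimeWindow> getWindowSerializer(ExecutionConfig executionConfig) {
        return new TimeWindow.Serializer();
    }

    @Override
    public boolean isEventTime() {
        return true;
    }
}

Note that this creates one window per distinct element timestamp, which is
exactly why approximating it with a small fixed step is usually cheaper.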
Hi, dawangli ~
Usually people build the lineage of tables through a self-built platform,
with a database to persist the relationships between the tables. For each
job, you may need to analyze each SQL statement to determine which tables
are sources and which are sinks.
E.g. the INSERT target table is a sink, and the table after
Hi Theo,
this is indeed a difficult use case. The KafkaDeserializationSchema is
actually meant mostly for deserialization and should not contain more
complex logic such as joining with a different topic. That would make the
KafkaDeserializationSchema stateful.
But in your use case, I see no better
Hi,
thanks for letting us know about this shortcoming.
I will link someone from the runtime team in the JIRA issue. Let's
continue the discussion there.
Regards,
Timo
On 22.10.20 05:36, chenkaibit wrote:
Hi everyone:
I met this Exception when a hard disk was damaged:
https://issues.apache
Hi Manas,
you need to make sure to differentiate between what Flink calls
"pre-flight phase" and "cluster phase".
The pre-flight phase is where the pipeline is constructed and all
functions are instantiated. They are then serialized and sent to
the cluster.
If you are reading your pro
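(To make the distinction concrete, a hedged sketch: values computed in a
function's constructor are computed during pre-flight on the client and shipped
via serialization, while open() runs on the cluster in each task's JVM. The
function and property names below are invented for illustration.)

import java.io.InputStream;
import java.util.Properties;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

public class EnrichWithConfig extends RichMapFunction<String, String> {

    // Loaded in open(), i.e. on the TaskManager after deployment,
    // not during pre-flight on the client.
    private transient Properties props;

    @Override
    public void open(Configuration parameters) throws Exception {
        props = new Properties();
        try (InputStream is = getClass().getClassLoader()
                .getResourceAsStream("pipeline.properties")) {
            if (is != null) {
                props.load(is);
            }
        }
    }

    @Override
    public String map(String value) {
        return props.getProperty("prefix", "") + value;
    }
}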
Hi,
I am trying to write some data to a Kafka topic and I have the following
situation:

monitorStateStream
    .process(new IDAP2AlarmEmitter())
    .name(IDAP2_ALARM_EMITTER)
    .uid(IDAP2_ALARM_EMITTER)
    ... // stream that outputs elements of type IDAP2Alarm
    .addSink(getFlinkKafkaProducer(ALARMS_KA