Jark,
Thanks for the heads up! I didn’t see this behavior when running in batch mode
with parallelism turned on. Is it safe to do this kind of join in batch mode
right now, or am I just getting lucky?
Dylan
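
A minimal sketch of what "running in batch mode" means here, against the 1.12-era API the thread is using (class and table names are mine, not from the thread):

```scala
import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}

object BatchJoinSketch {
  def main(args: Array[String]): Unit = {
    // Batch mode: bounded inputs, so the join produces one final result set
    // and no retraction traffic.
    val settings = EnvironmentSettings.newInstance()
      .useBlinkPlanner()
      .inBatchMode()
      .build()
    val tableEnv = TableEnvironment.create(settings)

    // A FULL JOIN over bounded sources is computed once, which is why the
    // streaming-mode count instability would not show up here.
    // tableEnv.executeSql("INSERT INTO sink SELECT ... FULL JOIN ...")
  }
}
```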
From: Jark Wu
Date: Friday, April 16, 2021 at 5:10 AM
To: Dylan Forciea
Cc: Timo
need that in the future.
Dylan
On 4/14/21, 10:27 AM, "Dylan Forciea" wrote:
Timo,
Here is the plan (hopefully I properly cleansed it of company proprietary
info without garbling it)
Dylan
== Abstract Syntax Tree ==
LogicalSink(table=[default_catalog.default_da
share the resulting plan with us? Ideally with the ChangelogMode
detail enabled as well.
statementSet.explain(...)
Maybe this could help.
Regards,
Timo
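
For anyone following along in the archive, the ChangelogMode detail Timo mentions is requested via the `ExplainDetail` parameter; a sketch assuming the 1.12-era API, with `statementSet` being the StatementSet the job's INSERTs were added to:

```scala
import org.apache.flink.table.api.ExplainDetail

// Prints the AST, the optimized plan, and the changelog row kinds
// (+I / -U / +U / -D) each operator emits, which is what we want to inspect.
println(statementSet.explain(ExplainDetail.CHANGELOG_MODE))
```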
On 14.04.21 16:47, Dylan Forciea wrote:
> Piotrek,
>
> I am looking at the count of records
14, 2021 at 9:38 AM
To: Dylan Forciea
Cc: "user@flink.apache.org"
Subject: Re: Nondeterministic results with SQL job when parallelism is > 1
Hi Dylan,
But if you are running your query in Streaming mode, aren't you counting
retractions from the FULL JOIN? AFAIK in Streaming
consistent) of what comes out consistently when parallelism is set to 1.
Dylan
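
To illustrate the point about retractions: in streaming mode the join emits a changelog, so naively counting emitted rows overcounts whenever an update retracts an earlier result. A self-contained sketch of the arithmetic (row kinds are illustrative, not Flink's actual classes):

```scala
object ChangelogCountSketch {
  // Changelog row kinds as Flink prints them:
  // +I insert, -U retract update, +U new update, -D delete.
  sealed trait Kind
  case object Insert extends Kind       // +I
  case object UpdateBefore extends Kind // -U
  case object UpdateAfter extends Kind  // +U
  case object Delete extends Kind       // -D

  def main(args: Array[String]): Unit = {
    // One key that joins before and after the other side's row arrives:
    val changelog = List(Insert, UpdateBefore, UpdateAfter)

    // Counting every emitted row gives 3, and the number of intermediate
    // updates depends on arrival order, hence the nondeterminism.
    val raw = changelog.size

    // The net count (additions minus retractions) is stable: 1.
    val net = changelog.count(k => k == Insert || k == UpdateAfter) -
      changelog.count(k => k == UpdateBefore || k == Delete)

    println(s"raw=$raw net=$net") // raw=3 net=1
  }
}
```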
From: Dylan Forciea
Date: Wednesday, April 14, 2021 at 9:08 AM
To: Piotr Nowojski
Cc: "user@flink.apache.org"
Subject: Re: Nondeterministic results with SQL job when parallelism is > 1
Piotrek,
I
part of the key, that shouldn’t affect the number of records coming out, I wouldn’t think.
Dylan
From: Piotr Nowojski
Date: Wednesday, April 14, 2021 at 9:06 AM
To: Dylan Forciea
Cc: "user@flink.apache.org"
Subject: Re: Nondeterministic results with SQL job when parallelism is > 1
have run across a bug?
Regards,
Dylan Forciea
object Job {
  def main(args: Array[String]): Unit = {
    StreamExecutionEnvironment.setDefaultLocalParallelism(1)
    val settings =
      EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
    val streamEnv
I wanted to report that I tried out your PR, and it does solve my issue. I am
able to create a generic LatestNonNull and it appears to do what is expected.
Thanks,
Dylan Forciea
On 1/21/21, 8:50 AM, "Timo Walther" wrote:
I opened a PR. Feel free to try it out.
https://
rebase to accommodate the code reformatting change that occurred
since). Is there a process for getting somebody to review it? Not sure if,
with the New Year and the 1.12 release and follow-up, it just got lost in the
commotion.
Regards,
Dylan Forciea
[1] https://github.com/apache/flink/pull
Timo,
I converted what I had to Java, and ended up with the exact same issue as
before where it will work if I only ever use it on 1 type, but not if I use it
on multiple. Maybe this is a bug?
Dylan
On 1/20/21, 10:06 AM, "Dylan Forciea" wrote:
Oh, I think I might have a clue
’t static and I get an error from Flink
because of that. I'm going to see whether writing this UDF in Java with an
embedded public static class like you have solves my problems. I'll
report back to let you know what I find. If that works, I'm not quite sure how
to make i
As a side note, I also just tried to unify into a single function registration
and used _ as the type parameter in the classOf calls there and within the
TypeInference definition for the accumulator and still ended up with the exact
same stack trace.
Dylan
On 1/20/21, 9:22 AM, "Dylan Fo
s.FIELD("value", argDataType),
          DataTypes.FIELD("date", DataTypes.DATE()));
        return Optional.of(outputDataType);
      })
      .build();
  }
}
Regar
multiple times and just directly use Long or String
rather than using subclassing, then it works just fine. I appreciate any help I
can get on this!
Regards,
Dylan Forciea
[1]
https://github.com/apache/flink/blob/release-1.12.0/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table
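
The non-generic workaround described above (one concrete function per type instead of a subclassed generic) would look roughly like this; the class names are placeholders of mine, not from the thread:

```scala
// Hypothetical concrete variants of the aggregate, one per result type,
// sidestepping the type-inference issue hit with the generic version.
tableEnv.createTemporarySystemFunction("LatestNonNullLong", classOf[LatestNonNullLong])
tableEnv.createTemporarySystemFunction("LatestNonNullString", classOf[LatestNonNullString])

// SQL then picks the variant matching the column's type explicitly:
// SELECT id, LatestNonNullLong(value_col, date_col) FROM t GROUP BY id
```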
the series of steps that I went through, if
anybody has any suggestions!
Thanks,
Dylan Forciea
From: Dylan Forciea
Date: Monday, December 7, 2020 at 5:33 PM
To: "user@flink.apache.org"
Subject: Re: Batch loading into postgres database
As a follow-up – I’m trying to follow the
ing all
the data through. I tried both join and get on the job status CompletableFuture.
Is there anything I’m missing as far as being able to tell when the job is
complete? Again, this is Flink 1.11.2 that I’m running.
Thanks,
Dylan Forciea
From: Dylan Forciea
Date: Monday, December 7, 2020 at
I am setting up a Flink job that will reload a table in a postgres database
using the Flink SQL functionality. I just wanted to make sure that given the
current feature set I am going about this the correct way. I am currently using
version 1.11.2, but plan on upgrading to 1.12 soon whenever it
orting and help.
[1] https://issues.apache.org/jira/browse/FLINK-20255
Best,
Godfrey
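
For anyone searching the archives, the table-reload setup discussed earlier in this thread looks roughly like the following DDL plus a bounded INSERT; connector options and names are illustrative, not from the thread. Note that the JDBC sink appends by default, so a true reload needs a declared primary key (upsert behavior) or an external truncate first:

```sql
-- Hypothetical JDBC sink pointing at the Postgres table to be reloaded.
CREATE TABLE target_table (
  id BIGINT,
  attr VARCHAR,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:postgresql://localhost:5432/mydb',
  'table-name' = 'target_table'
);

-- Run as a bounded (batch) job; with the primary key declared,
-- rows are upserted rather than blindly appended.
INSERT INTO target_table SELECT id, attr FROM source_table;
```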
Dylan Forciea mailto:dy...@oseberg.io>> wrote on Thursday, November 19, 2020 at 12:10 PM:
Godfrey,
I confirmed that in Flink 1.11.2 and in 1.12-SNAPSHOT I get the stack trace
running exactly this code:
import org.apache.flink.ap
Ah yes, missed the kafka part and just saw the array part. FLINK-19771
definitely was solely in the postgres-specific code.
Dylan
From: Jark Wu
Date: Thursday, November 19, 2020 at 9:12 AM
To: Dylan Forciea
Cc: Danny Chan , Rex Fenley , Flink
ML
Subject: Re: Filter Null in Array in SQL
Do you mean that the array contains values that are null, or that the entire
array itself is null? If it’s the latter, I have an issue written, along with a
PR to fix it that has been pending review [1].
Regards,
Dylan Forciea
[1] https://issues.apache.org/jira/browse/FLINK-19771
From: Danny
FROM (
  SELECT
    attr1,
    attr3,
    ROW_NUMBER() OVER (
      PARTITION BY attr1
      ORDER BY
        attr4 DESC NULLS LAST,
        w.attr2 = attr2 DESC NULLS LAST
    ) AS row_num
  FRO
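
For reference, the truncated fragment above follows Flink's ROW_NUMBER deduplication pattern. A complete query of that shape might look like the following (the source table name is assumed, and the `w.attr2` ordering term is omitted because its alias `w` falls outside the fragment):

```sql
SELECT attr1, attr3
FROM (
  SELECT
    attr1,
    attr3,
    ROW_NUMBER() OVER (
      PARTITION BY attr1
      ORDER BY
        attr4 DESC NULLS LAST
    ) AS row_num
  FROM some_table
)
-- Flink recognizes the row_num = 1 filter as a deduplication/top-1 query.
WHERE row_num = 1;
```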
String = ";"): Unit = {
  if (str != null) {
    str.split(separator).foreach(s => collect(Row.of(s.trim())))
  }
}
}
Removing the lateral table bit in that first table made the original query plan
work correctly.
I greatly appreciate your assistance!
Regards,
Dylan Forciea
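
For context, the truncated UDF above looks like a string-splitting table function. A self-contained sketch of that shape, with the class name and the Row result-type wiring being my assumptions:

```scala
import org.apache.flink.table.annotation.{DataTypeHint, FunctionHint}
import org.apache.flink.table.functions.TableFunction
import org.apache.flink.types.Row

// Splits a delimited string into one row per trimmed element;
// null input produces no rows rather than an error.
@FunctionHint(output = new DataTypeHint("ROW<s STRING>"))
class SplitString extends TableFunction[Row] {
  def eval(str: String, separator: String = ";"): Unit = {
    if (str != null) {
      str.split(separator).foreach(s => collect(Row.of(s.trim())))
    }
  }
}
```

Used from SQL via a lateral table, e.g. `... , LATERAL TABLE(SplitString(col))`, which matches the "lateral table bit" mentioned above.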
.
Regards,
Dylan Forciea
Danny,
Thanks! I have created a new JIRA issue [1]. I’ll look into how hard it is to
get a patch and unit test myself, although I may need a hand on the process of
making a change to both the master branch and a release branch if it is desired
to get a fix into 1.11.
Regards,
Dylan Forciea
ecord(JdbcRowDataInputFormat.java:259)
[error] ... 5 more
Thanks,
Dylan Forciea
environment, but it wasn’t
happy with that.
Thanks,
Dylan Forciea
From: Leonard Xu
Date: Wednesday, October 14, 2020 at 10:20 PM
To: Dylan Forciea
Cc: "user@flink.apache.org"
Subject: Re: Setting JDBC connector options using JdbcCatalog
Hi, Dylan
The table in JdbcCatalog onl
generically set
those properties that would go in the WITH clause of the SQL interface beyond
the JdbcCatalog constructor arguments. Is there any way to set those after the
catalog is created/registered?
Thanks,
Dylan Forciea
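
One commonly suggested workaround for this (options the JdbcCatalog constructor doesn't expose) is to declare the table with explicit DDL instead of going through the catalog, since the WITH clause accepts any option the connector supports. A sketch with illustrative names and options:

```sql
-- Hypothetical explicit declaration of the same table; any JDBC connector
-- WITH-clause option (e.g. fetch size) can be set here directly.
CREATE TABLE my_table (
  id BIGINT,
  name STRING
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:postgresql://localhost:5432/mydb',
  'table-name' = 'my_table',
  'scan.fetch-size' = '1000'
);
```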
at 10:15 PM
To: Dylan Forciea
Cc: Till Rohrmann , dev , Shengkai
Fang , "user@flink.apache.org" ,
Leonard Xu
Subject: Re: autoCommit for postgres jdbc streaming in Table/SQL API
Hi Dylan,
Sorry for the late reply. We've just come back from a long holiday.
Thanks for reporti
eater than <9010>
[ERROR] JMXReporterFactoryTest.testWithoutArgument:60
[INFO]
[ERROR] Tests run: 10, Failures: 2, Errors: 0, Skipped: 0
Thanks,
Dylan Forciea
From: Till Rohrmann
Date: Thursday, October 8, 2020 at 2:29 AM
To: Dylan Forciea
Cc: dev , Shengkai Fang ,
"user@flink.apache.org
Actually… It looks like what I did covers both cases. I’ll see about getting
some unit tests and documentation updated.
Dylan
From: Dylan Forciea
Date: Wednesday, October 7, 2020 at 11:47 AM
To: Till Rohrmann , dev
Cc: Shengkai Fang , "user@flink.apache.org"
, "j...@apache.or
t 7, 2020 at 5:00 PM Dylan Forciea
mailto:dy...@oseberg.io>> wrote:
I appreciate it! Let me know if you want me to submit a PR against the issue
after it is created. It wasn’t a huge amount of code, so it’s probably not a
big deal if you wanted to redo it.
Thanks,
Dylan
From: Shengkai Fang
Date: Wednesday, October 7, 2020 at 9:06 AM
To: Dylan Forciea
Subject
I hadn’t heard a response on this, so I’m going to expand this to the dev email
list.
If this is indeed an issue and not my misunderstanding, I have most of a patch
already coded up. Please let me know, and I can create a JIRA issue and send
out a PR.
Regards,
Dylan Forciea
Oseberg
From
to pull this off?
Regards,
Dylan Forciea
Oseberg