Dian Fu created FLINK-18485:
---
Summary: Kerberized YARN per-job on Docker test failed during
unzip jce_policy-8.zip
Key: FLINK-18485
URL: https://issues.apache.org/jira/browse/FLINK-18485
Project: Flink
Hi Zhijiang,
It will probably be best if we connect next week and discuss the issue
directly since this could be quite difficult to reproduce.
Before the test results on our side come out for your job case, I have
some other questions to confirm for further analysis:
- How much
Hi, Konstantin
> Would we support a temporal join with a changelog stream with
> event time semantics by ignoring DELETE messages, or would it be completely
> unsupported?
I don’t know what percentage of temporal scenarios this feature covers.
Compared to supporting the approximate event-time join by i
Zhang Jianguo created FLINK-18480:
---
Summary: JobManager suspends because of loss of ZK connection
Key: FLINK-18480
URL: https://issues.apache.org/jira/browse/FLINK-18480
Project: Flink
Issue Type
David Anderson created FLINK-18482:
---
Summary: Replace flink-training datasets with data generators
Key: FLINK-18482
URL: https://issues.apache.org/jira/browse/FLINK-18482
Project: Flink
Issu
Danny Cranmer created FLINK-18483:
---
Summary: [EFO] Test coverage improvements for existing connector
Key: FLINK-18483
URL: https://issues.apache.org/jira/browse/FLINK-18483
Project: Flink
Iss
Mans Singh created FLINK-18484:
---
Summary: RowSerializer arity error does not provide specific
information about the mismatch
Key: FLINK-18484
URL: https://issues.apache.org/jira/browse/FLINK-18484
Projec
initsun created FLINK-18481:
---
Summary: Kafka connector can't select data
Key: FLINK-18481
URL: https://issues.apache.org/jira/browse/FLINK-18481
Project: Flink
Issue Type: Bug
Affects Versions:
Hi Thomas,
Thanks a lot for providing this information.
We have decided to try to reproduce the regression on AWS. It would be
really appreciated if you could share some demo code with us; if that is
not convenient, could you give us some more information about the record
type and size, the proces
Hi Leonard,
Thank you for the summary. I don't fully understand the implications of
(3). Would we support a temporal join with a changelog stream with
event time semantics by ignoring DELETE messages, or would it be completely
unsupported? I mean something like the following sequence of statements:
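The message is truncated before the statements; as an illustration, here is a
hypothetical sketch of the kind of sequence meant, written with PyFlink SQL.
The table names, connector options, and the event-time temporal join syntax
(FOR SYSTEM_TIME AS OF an event-time attribute) are assumptions, not quotes
from the thread:

    from pyflink.table import EnvironmentSettings, StreamTableEnvironment

    t_env = StreamTableEnvironment.create(
        environment_settings=EnvironmentSettings.new_instance()
            .in_streaming_mode().use_blink_planner().build())

    # A changelog stream (INSERT/UPDATE/DELETE) of currency rates with an
    # event-time attribute and a primary key; all names are invented.
    t_env.execute_sql("""
        CREATE TABLE rates (
            currency STRING,
            rate DECIMAL(10, 2),
            ts TIMESTAMP(3),
            WATERMARK FOR ts AS ts - INTERVAL '5' SECOND,
            PRIMARY KEY (currency) NOT ENFORCED
        ) WITH (
            'connector' = 'kafka',
            'topic' = 'rates',
            'properties.bootstrap.servers' = 'localhost:9092',
            'format' = 'debezium-json'
        )
    """)

    # An append-only stream of orders, also with event time.
    t_env.execute_sql("""
        CREATE TABLE orders (
            amount DECIMAL(10, 2),
            currency STRING,
            ts TIMESTAMP(3),
            WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
        ) WITH (
            'connector' = 'kafka',
            'topic' = 'orders',
            'properties.bootstrap.servers' = 'localhost:9092',
            'format' = 'json'
        )
    """)

    # The question above: when a DELETE arrives in `rates`, does the
    # event-time temporal join treat the key as gone, ignore the DELETE,
    # or is the whole combination unsupported?
    result = t_env.execute_sql("""
        SELECT o.amount * r.rate AS converted_amount
        FROM orders AS o
        JOIN rates FOR SYSTEM_TIME AS OF o.ts AS r
        ON o.currency = r.currency
    """)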
Hi all,
After a discussion with Max, I added a scenario 2 to the design doc.
Feel free to comment if you want.
https://docs.google.com/document/d/1q0y0aWlJMoUWNW7jjsM8uWfHsy2dM6YmmcmhpQzgLMA/edit?usp=sharing
Best
Etienne
On 25/06/2020 09:56, Etienne Chauchot wrote:
Hi all,
regarding this
Thanks Jingsong, Jark, Knauf, and Seth for sharing your thoughts.
Although we discussed many details of the concept, I think it's worth
clarifying the semantics from the long-term goals. The temporal table
concept was first introduced in SQL:2011; I did some investigation of the
temporal table work mechanism
Hi,
I am trying to build Flink from the source code, but I hit one error; I
have pasted the error message below.
For the development environment, I am using linuxkit 4.9.184, Maven 3.2.5,
and Java 11.0.7.
Before running the mvn command to build the project (mvn clean install), I
cleaned the Maven local
+1 (non-binding)
- checked wheel package consistency with the build from source code
- tested the build from the wheel package on macOS with Python 3.6
- verified the performance of PyFlink UDFs, including Python general UDFs
and Pandas UDFs
- tested Python UDTFs (a sketch follows below)
Best,
Xingbo
Dian Fu wrote on Friday, July 3, 2020, at 8 PM
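For the Python UDTF item above, a minimal sketch of what such a check could
look like, assuming the PyFlink 1.11-era API; the function, names, and data
are invented:

    from pyflink.table import DataTypes, EnvironmentSettings, \
        StreamTableEnvironment
    from pyflink.table.udf import udtf

    t_env = StreamTableEnvironment.create(
        environment_settings=EnvironmentSettings.new_instance()
            .in_streaming_mode().use_blink_planner().build())

    # A table function emitting one row per word in the input line.
    @udtf(input_types=DataTypes.STRING(), result_types=DataTypes.STRING())
    def split(line):
        for word in line.split():
            yield word

    t_env.register_function("split", split)
    source = t_env.from_elements([("to be or not to be",)], ['line'])
    # Lateral join against the table function, then collect to pandas.
    result = source.join_lateral("split(line) as (word)").select("word")
    print(result.to_pandas())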
+1 (non-binding)
- built from source with Scala 2.11 successfully
- checked the signatures and checksums of the binary packages
- installed PyFlink on macOS, Windows, and Linux successfully
- tested the functionality of Pandas UDFs and the conversion between PyFlink
Table and Pandas DataFrame (a sketch follows below)
- verif
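As a reference for the Pandas UDF and Table/DataFrame conversion checks
mentioned in these votes, a minimal sketch assuming the PyFlink 1.11-era API
(the function and data are invented):

    import pandas as pd
    from pyflink.table import DataTypes, EnvironmentSettings, \
        StreamTableEnvironment
    from pyflink.table.udf import udf

    t_env = StreamTableEnvironment.create(
        environment_settings=EnvironmentSettings.new_instance()
            .in_streaming_mode().use_blink_planner().build())

    # A vectorized (Pandas) scalar function: inputs arrive as pd.Series.
    @udf(input_types=[DataTypes.BIGINT(), DataTypes.BIGINT()],
         result_type=DataTypes.BIGINT(), udf_type="pandas")
    def add(a, b):
        return a + b

    t_env.register_function("add", add)

    # Round trip: Pandas DataFrame -> PyFlink Table -> Pandas DataFrame.
    df = pd.DataFrame({'a': [1, 2, 3], 'b': [10, 20, 30]})
    table = t_env.from_pandas(df)
    print(table.select("add(a, b) as s").to_pandas())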
That's incorrect; you need to do the same for filesystems, as one
example. So, build everything -> build filesystems -> build flink-dist.
On 03/07/2020 14:31, Robert Metzger wrote:
We could also build releases by calling "mvn package" again in
"flink-dist". But all these solutions are far from
We could also build releases by calling "mvn package" again in
"flink-dist". But all these solutions are far from elegant.
Ideally the Maven folks have implemented something nicer by now. Let's see
what they say.
On Fri, Jul 3, 2020 at 1:20 PM Chesnay Schepler wrote:
> It's not that *difficult
+1 (binding)
Checks:
- checked wheel package consistency
- tested the build from the wheel package
- checked the signatures and checksums
- pip installed the Python package
`apache_flink-1.11.0-cp37-cp37m-macosx_10_9_x86_64.whl` successfully and
ran a simple word count example successfully (a sketch follows below)
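For context, such a smoke test might look roughly like the following,
assuming the PyFlink 1.11-era Table API (names and data are invented):

    from pyflink.table import EnvironmentSettings, BatchTableEnvironment

    t_env = BatchTableEnvironment.create(
        environment_settings=EnvironmentSettings.new_instance()
            .in_batch_mode().use_blink_planner().build())

    # Build a tiny table of words and count occurrences per word.
    words = [(w,) for w in "to be or not to be".split()]
    t_env.create_temporary_view("words", t_env.from_elements(words, ['word']))
    t_env.execute_sql(
        "SELECT word, COUNT(*) AS cnt FROM words GROUP BY word").print()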
It's not that /difficult/ for us to work around it, mind you; we "just"
have to
a) separate the packaging of the flink-dist jar from the distribution
assembly, so we can have dependencies on the various opt/plugins modules
without pulling dependencies into the flink-dist jar,
+1 (non-binding)
- checked/verified signatures and hashes
- built from source using Scala 2.11 successfully
- went through all issues whose "fixVersion" property is 1.11.0; there are
no blockers
- checked that there are no missing artifacts
- tested SQL connectors Elasticsearch7/JDBC/HBase/Kafka (new conne
Hi all,
Just as an addition to what Dawid asked, I would also like to ask:
1) Which Flink version are you using? The stack trace line numbers do not
match the current master.
2) As a clarification (although maybe not relevant here), there is no
guarantee on the order of the elements, so
th
Hi Thomas,
I tried to reproduce the regression by constructing a job with the same
topology, parallelism, and checkpoint interval (the Kinesis source and sink
were replaced, since we do not have the test environment). But unfortunately,
no regression was observed, either with or without back pressure
Thanks Till for the clarification. I opened
https://github.com/apache/flink/pull/12816
On 03/07/2020 10:15, Till Rohrmann wrote:
> @Dawid I think it would be correct to also include the classifier for the
> org.apache.orc:orc-core:jar:nohive:1.4.3 dependency because it is different
> from the non-
@Dawid I think it would be correct to also include the classifier for the
org.apache.orc:orc-core:jar:nohive:1.4.3 dependency because it is different
from the non-classified artifact. I would not block the release on it,
though, because it is an ASL 2.0 dependency which we are not required to
list.
I just reached out to the users@maven mailing list again to check if
there's any resolution for shading behavior post 3.2.5 [1]
[1]
https://lists.apache.org/thread.html/8b2dcf462de814d06d8e30bafce2c886217c5790a3ee07d33d0b8dfc%40%3Cusers.maven.apache.org%3E
On Thu, Jun 4, 2020 at 3:08 PM Chesnay
We have documented how the licensing works here:
https://cwiki.apache.org/confluence/display/FLINK/Licensing
(There's a section on the "licenses directory")
In this case, I don't think you'll need to include the apache license in
the licenses/ directory (because it's the Apache license)
On Tue, J
Hi Rahul,
Could you verify that the provided code is the one that fails? Something
does not seem right to me in the stack trace: it shows that you call
processElement recursively, but I cannot see that in the code:
com.westpac.itm.eq.pattern.TestProcess.processElement(TestProcess.jav
Since there were no further comments on this discussion, I removed the
"draft" label from the Wiki page and I consider the Jira semantics proposal
agreed upon.
On Mon, Jun 15, 2020 at 9:49 AM Piotr Nowojski wrote:
>
> > On 12 Jun 2020, at 15:44, Robert Metzger wrote:
> >
> > Piotrek, do you agr
For the others on the dev@ list: I responded on SO.
On Tue, Jun 16, 2020 at 7:56 AM Singh Aulakh, Karanpreet KP
wrote:
> Hello!
>
> (Apache Flink1.8 on AWS EMR release label 5.28.x)
>
> Our data source is an AWS Kinesis stream (with 450 shards if that
> matters). We use the FlinkKinesisConsumer