Hi all,
I'd like to start the vote for FLIP-36 [1], which has been discussed in the
thread [2].
The vote will be open for 72h, until May 3, 2020, 07:00 AM UTC, unless
there's an objection.
Best,
Xuannan
[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-36%3A+Support+Interactive+Programming+i
flink-sql-connector-elasticsearch6 isn't bundling com.carrotsearch:hppc,
nor does it have dependencies on org.elasticsearch:elasticsearch-geo,
org.elasticsearch.plugin:lang-mustache-client nor
com.github.spullara.mustache.java:compiler (and thus is also not
bundling them).
You can check this
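(For reference, one way to verify what a sql-connector jar actually bundles
is to list its contents; the jar name below is illustrative of a local
build, not taken from this thread:)

    # list the bundled classes and grep for the dependency in question
    jar tf flink-sql-connector-elasticsearch6_2.11-1.11-SNAPSHOT.jar | grep -i carrotsearch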
Thanks for the tip! I checked it and you are right :)
On Thu, 30 Apr 2020 at 15:08, Chesnay Schepler wrote:
> flink-sql-connector-elasticsearch6 isn't bundling com.carrotsearch:hppc,
> nor does it have dependencies on org.elasticsearch:elasticsearch-geo,
> org.elasticsearch.plugin:lang-mustache-
klion26 commented on pull request #247:
URL: https://github.com/apache/flink-web/pull/247#issuecomment-621676247
@chaojianok thanks for your contribution. Could you please get rid of the
`git merge` commit in the history? You can use `git rebase` or another git
command to achieve this.
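(A sketch of one way to do this, assuming the fork tracks the Apache repo as
the `upstream` remote and the PR branch is based on `master`:)

    git fetch upstream
    git rebase upstream/master          # replays your commits without the merge commit
    git push --force-with-lease origin <your-branch>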
Does anyone know why this kind of thread shows up in development-related
discussions at Flink?
From: GitBox
Sent: Thursday, April 30, 2020 15:55
To: dev@flink.apache.org
Subject: [GitHub] [flink-web] klion26 commented on pull request #247:
[FLINK-13683] Translate "Co
Hey Karim,
I'm sorry that you had such a bad experience contributing to Flink, even
though you are nicely following the rules.
You mentioned that you've already implemented the proposed change. Could
you share a link to a branch here so that we can take a look? That would
make it easier for me to assess the API changes.
Gary Yao created FLINK-17473:
Summary: Remove unused classes ArchivedExecutionVertexBuilder and
ArchivedExecutionJobVertexBuilder
Key: FLINK-17473
URL: https://issues.apache.org/jira/browse/FLINK-17473
Pr
Hi,
I think it's good to contribute the changes to Flink directly since we
already have the RMQ connector in the repository.
I would propose something similar to the Kafka connector, which takes
both the generic DeserializationSchema and a KafkaDeserializationSchema
that is specific to Kafk
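(A rough sketch of what an analogous RMQ-specific schema could look like,
modeled on KafkaDeserializationSchema; the name and signature are
illustrative, not an existing Flink API:)

    import java.io.IOException;
    import java.io.Serializable;
    import com.rabbitmq.client.AMQP;
    import com.rabbitmq.client.Envelope;
    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.util.Collector;

    public interface RMQDeserializationSchema<T> extends Serializable {
        // Gives access to the AMQP envelope and message properties,
        // not just the message body.
        void deserialize(Envelope envelope, AMQP.BasicProperties properties,
                         byte[] body, Collector<T> out) throws IOException;

        // Mirrors DeserializationSchema#isEndOfStream.
        boolean isEndOfStream(T nextElement);

        // Mirrors DeserializationSchema#getProducedType.
        TypeInformation<T> getProducedType();
    }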
Jingsong Lee created FLINK-17474:
Summary: Test and correct case insensitive for parquet and orc in
hive
Key: FLINK-17474
URL: https://issues.apache.org/jira/browse/FLINK-17474
Project: Flink
chun11 created FLINK-17475:
--
Summary: Flink Multiple buffers, IllegalStateException
Key: FLINK-17475
URL: https://issues.apache.org/jira/browse/FLINK-17475
Project: Flink
Issue Type: Bug
Roman Khachatryan created FLINK-17476:
-
Summary: Add tests to check recovery from snapshot created with
different UC mode
Key: FLINK-17476
URL: https://issues.apache.org/jira/browse/FLINK-17476
Pr
zentol commented on a change in pull request #85:
URL: https://github.com/apache/flink-shaded/pull/85#discussion_r417859654
##
File path: flink-shaded-zookeeper-parent/flink-shaded-zookeeper-34/pom.xml
##
@@ -128,4 +128,4 @@ under the License.
-
R
Hello Guys,
Thanks for all the responses. I want to stress that I didn't feel
ignored; I just thought I had forgotten an important step or something.
Since I am a newbie I will follow whatever route you guys suggest :)
and I agree that the RMQ connector still needs a lot of love "which i
Hi Fabian, Aljoscha
Thanks for the feedback.
I agree that we can deal with the primary key as you mentioned.
The type column now contains the nullability attribute, e.g. BIGINT
NOT NULL.
(I'm also OK with using two columns to represent the type, just like MySQL.)
>Why I treat `watermark` as
Hi,
I want to contribute to Apache Flink.
Would you please give me the contributor permission?
My JIRA ID is FLINK-16091;
https://issues.apache.org/jira/browse/FLINK-16091. Thank you.
Piotr Nowojski created FLINK-17477:
--
Summary: resumeConsumption call should happen as quickly as
possible to minimise latency
Key: FLINK-17477
URL: https://issues.apache.org/jira/browse/FLINK-17477
P
Hi,
Welcome to the community!
There is no contributor permission anymore; you can just comment under the
JIRA issue, and a committer will assign the issue to you if no one is
working on it.
Best,
Jark
On Thu, 30 Apr 2020 at 17:36, flinker wrote:
> Hi,
>
> I want to contribute to Apache Flink.
> Would
Hi Godfrey,
The formatting of your example seems to be broken.
Could you send it again, please?
Regarding your points
> because the watermark expression can be a sub-column, just like `f1.q2` in
the example I gave above.
I would put the watermark information in the row of the top-level field and
indicate
Thank you.
Best,
Marshal
At 2020-04-30 17:49:07, "Jark Wu" wrote:
>Hi,
>
>Welcome to the community!
>There is no contributor permission anymore; you can just comment under the
>JIRA issue, and a committer will assign the issue to you if no one is
>working on it.
>
>Best,
>Jark
>
>
>On Thu, 30 Apr 2020 at 17
Hi everyone,
just looping in Austin, as he mentioned to me yesterday that they also ran
into issues due to the inflexibility of the RabbitMQSource.
Cheers,
Konstantin
On Thu, Apr 30, 2020 at 11:23 AM seneg...@gmail.com
wrote:
> Hello Guys,
>
> Thanks for all the responses, i want to stress ou
Gyula Fora created FLINK-17478:
--
Summary: Avro format logical type conversions do not work due to
type mismatch
Key: FLINK-17478
URL: https://issues.apache.org/jira/browse/FLINK-17478
Project: Flink
Thanks for the clarification.
I think you are right that the typed approach does not work with the plugin
mechanism: even if we had the specific ExternalResourceInfo subtype
available, one could not cast it into this type, because the actual instance
has been loaded by a different class loade
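(A minimal illustration of the class loader problem described here; the
class name and jar path are hypothetical:)

    import java.net.URL;
    import java.net.URLClassLoader;

    public class ClassLoaderDemo {
        public static void main(String[] args) throws Exception {
            // A plugin classloader with no parent loads its own copy of GPUInfo.
            URLClassLoader pluginLoader =
                new URLClassLoader(new URL[]{new URL("file:/plugins/gpu.jar")}, null);
            Object info = pluginLoader.loadClass("org.example.GPUInfo")
                .getDeclaredConstructor().newInstance();
            // The cast resolves GPUInfo via the application classloader; even
            // though the names match, it throws ClassCastException at runtime.
            GPUInfo typed = (GPUInfo) info;
        }
    }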
nobleyd created FLINK-17479:
---
Summary: Occasional checkpoint failure due to null pointer
exception in Flink version 1.10
Key: FLINK-17479
URL: https://issues.apache.org/jira/browse/FLINK-17479
Project: Flin
Hi Xuannan,
sorry for not entering the discussion earlier. Could you please update
the FLIP to how it would look after FLIP-84? Your proposal makes
sense to me and aligns well with the other efforts from an API
perspective. However, here are some thoughts from my side:
It would be ni
Canbin Zheng created FLINK-17480:
Summary: Support PyFlink on native Kubernetes setup
Key: FLINK-17480
URL: https://issues.apache.org/jira/browse/FLINK-17480
Project: Flink
Issue Type: New Fe
I agree with Till and Xintong: if the ExternalResourceInfo is only a
holder of properties that doesn't have any subclasses, it can just become
the "properties" itself.
Aljoscha
On 30.04.20 12:49, Till Rohrmann wrote:
Thanks for the clarification.
I think you are right that the typed approach d
klion26 commented on a change in pull request #245:
URL: https://github.com/apache/flink-web/pull/245#discussion_r417948740
##
File path: contributing/code-style-and-quality-preamble.zh.md
##
@@ -1,25 +1,25 @@
---
-title: "Apache Flink Code Style and Quality Guide — Preamble"
morsapaes opened a new pull request #332:
URL: https://github.com/apache/flink-web/pull/332
Adding a blog post to announce Flink's application to [Google Season of
Docs](https://developers.google.com/season-of-docs).
Hi, dear community,
Recently I have been thinking about refactoring the flink-jdbc connector
structure before the 1.11 release.
After the refactoring we can easily introduce a unified pluggable JDBC
dialect for Table and DataStream in the future, and we can have a better
module organization and implementation.
Hi Leonard,
this sounds like a nice refactoring for consistency. +1 from my side.
However, I'm not sure how much backwards compatibility is required.
Maybe others can comment on this.
Thanks,
Timo
On 30.04.20 14:09, Leonard Xu wrote:
Hi, dear community,
Recently I have been thinking about refactoring th
Gyula Fora created FLINK-17481:
--
Summary: Cannot set LocalDateTime column as rowtime when
converting DataStream to Table
Key: FLINK-17481
URL: https://issues.apache.org/jira/browse/FLINK-17481
Project: F
Robert Metzger created FLINK-17482:
--
Summary: KafkaITCase.testMultipleSourcesOnePartition unstable
Key: FLINK-17482
URL: https://issues.apache.org/jira/browse/FLINK-17482
Project: Flink
Issu
Hi everyone,
sorry for reviving this thread at this point in time. Generally, I think
this is a very valuable effort. Have we considered only providing a very
basic data generator (+ discarding and printing sink tables) in Apache
Flink and moving a more comprehensive data generating table source
Thanks for the feedback, @Aljoscha and @Till!
Glad to see that we have reached a consensus on the third proposal.
Regarding the detail of the `ExternalResourceInfo`, I think Till's
proposal might be good enough. It seems these two methods already
fulfill our requirement and using "Properties" itself mig
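(A sketch of the plain property-holder variant being discussed; the two
method names are my guess at what was proposed, not a finalized API:)

    import java.util.Collection;
    import java.util.Optional;

    public interface ExternalResourceInfo {
        // Look up a single property of the external resource, e.g. a GPU index.
        Optional<String> getProperty(String key);

        // Enumerate all available property keys.
        Collection<String> getKeys();
    }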
+1 to what Timo has said.
One more comment about the relation of this FLIP to FLIP-84: in FLIP-84 we
started to deprecate all APIs that buffer table operations or plans.
Think of APIs like `sqlUpdate` and `insertInto`, which are buffering
operations, and all buffered operations w
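(For illustration, the buffering style being deprecated versus its FLIP-84
replacement; table names are made up, and tEnv is assumed to be an existing
TableEnvironment:)

    // Old, buffering style: sqlUpdate only records the statement;
    // nothing runs until execute() is called.
    tEnv.sqlUpdate("INSERT INTO my_sink SELECT * FROM my_source");
    tEnv.execute("my-job");

    // FLIP-84 style: executeSql submits the statement eagerly.
    tEnv.executeSql("INSERT INTO my_sink SELECT * FROM my_source");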
Yu Li created FLINK-17483:
-
Summary: Missing NOTICE file in flink-sql-connector-elasticsearch7
to reflect bundled dependencies
Key: FLINK-17483
URL: https://issues.apache.org/jira/browse/FLINK-17483
Project:
Timo Walther created FLINK-17484:
Summary: Enable type coercion
Key: FLINK-17484
URL: https://issues.apache.org/jira/browse/FLINK-17484
Project: Flink
Issue Type: Sub-task
Component
piyushnarang commented on pull request #85:
URL: https://github.com/apache/flink-shaded/pull/85#issuecomment-621874740
Yeah, let me double check if things continue to work after the excludes.
piyushnarang commented on a change in pull request #85:
URL: https://github.com/apache/flink-shaded/pull/85#discussion_r418035841
##
File path: flink-shaded-zookeeper-parent/flink-shaded-zookeeper-34/pom.xml
##
@@ -128,4 +128,4 @@ under the License.
Hi, there.
The "FLIP-108: Add GPU support in Flink"[1] is now working in
progress. However, we met problems regarding class loader and
dependency. For more details, you could look at the discussion[2]. The
discussion thread is now converged and the solution is changing the
RuntimeContext#getExtern
I'm very happy to see the jdbc connector being normalized in this way. +1
from me.
David
On Thu, Apr 30, 2020 at 2:14 PM Timo Walther wrote:
> Hi Leonard,
>
> this sounds like a nice refactoring for consistency. +1 from my side.
>
> However, I'm not sure how much backwards compatibility is requ
Very big +1 from me
Best,
Flavio
On Thu, Apr 30, 2020 at 4:47 PM David Anderson
wrote:
> I'm very happy to see the jdbc connector being normalized in this way. +1
> from me.
>
> David
>
> On Thu, Apr 30, 2020 at 2:14 PM Timo Walther wrote:
>
> > Hi Leonard,
> >
> > this sounds like a nice refa
Hi Konstantin,
Thanks for the link to Java Faker. It's an interesting project and
could benefit a comprehensive datagen source.
What would the discarding and printing sinks look like in your mind?
1) manually create a table with a `blackhole` or `print` connector, e.g.
CREATE TABLE my_sink (
a I
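(The example above is cut off in the digest; a sketch of what option 1 might
look like, using the connector names from this thread with illustrative
columns:)

    CREATE TABLE my_sink (
      a INT,
      b STRING
    ) WITH (
      'connector' = 'blackhole'   -- or 'print' for a printing sink
    );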
The application to Season of Docs 2020 is close to being finalized. I've
created a PR with the application announcement for the Flink blog [1] (as
required by Google OSS).
Thanks a lot to everyone who pitched in — and special thanks to Aljoscha
and Seth for volunteering as mentors!
I'll send an u
Big +1 from my side.
The new structure and class names look nicer now.
Regarding the compatibility problem, I have looked into the public APIs in
flink-jdbc; there are 3 kinds of APIs now:
1) the newly introduced JdbcSink for DataStream users in 1.11
2) JDBCAppendTableSink, JDBCUpsertTableSink, JDBCTab
Hey all + thanks Konstantin,
As mentioned, we also ran into issues with the RMQ Source's inflexibility. I
think Aljoscha's idea of supporting both would be a nice way to incorporate
new changes without breaking the current API.
We'd definitely benefit from the changes proposed here but have ano
Xingxing Di created FLINK-17485:
---
Summary: Add a thread dump REST API
Key: FLINK-17485
URL: https://issues.apache.org/jira/browse/FLINK-17485
Project: Flink
Issue Type: Improvement
Co
Hi,
I'm in favor of Fabian's proposal.
First, a watermark is not a column but metadata, just like a primary key,
so it shouldn't stand with the columns.
Second, AFAIK, a primary key can only be defined on top-level columns.
Third, I think a watermark can also follow the primary key rather than only
being allowed to be defined on top
Hi Fabian,
the broken example is:
create table MyTable (
  f0 BIGINT NOT NULL,
  f1 ROW,
  f2 VARCHAR<256>,
  f3 AS f0 + 1,
  PRIMARY KEY (f0),
  UNIQUE (f3, f2),
  WATERMARK f1.q2 AS (`f1.q2` - INTERVAL '3' SECOND)
) with (...)

name | type | key | compute column | watermark
f0
Hi Dawid,
I just want to mention one of your responses:
> What you described with
> 'format' = 'csv',
> 'csv.allow-comments' = 'true',
> 'csv.ignore-parse-errors' = 'true'
> would not work though as the `format` prefix is mandatory in the sources
as only the properties with format
> will be passe
Bump.
Please let me know if someone is interested in reviewing this one. I am
willing to start working on it. BTW, a small and new addition to the
list: with FLINK-10114 merged, OrcBulkWriterFactory can also reuse
`SerializableHadoopConfiguration` along with SequenceFileWriterFactory and
Compre
Hi Jark,
my gut feeling is 1), because of its consistency with other connectors
(it does not add two secret keywords), although it is more verbose.
Best,
Konstantin
On Thu, Apr 30, 2020 at 5:01 PM Jark Wu wrote:
> Hi Konstantin,
>
> Thanks for the link to Java Faker. It's an interesting project
Lorenzo Nicora created FLINK-17486:
--
Summary: ClassCastException when copying AVRO SpecificRecord
containing a decimal field
Key: FLINK-17486
URL: https://issues.apache.org/jira/browse/FLINK-17486
Pr
piyushnarang commented on pull request #85:
URL: https://github.com/apache/flink-shaded/pull/85#issuecomment-622144494
@zentol added the updates you requested. I did some basic sanity checking /
testing after excluding spotbugs and jsr305, and it seems to work OK. Do you
know if there's
nobleyd created FLINK-17487:
---
Summary: Do not delete old checkpoints when stopping the job.
Key: FLINK-17487
URL: https://issues.apache.org/jira/browse/FLINK-17487
Project: Flink
Issue Type: Improvemen
Khokhlov Pavel created FLINK-17488:
--
Summary: JdbcSink has to support setting auto-commit mode of DB
Key: FLINK-17488
URL: https://issues.apache.org/jira/browse/FLINK-17488
Project: Flink
Is