Thanks for kicking off this discussion, Yangze.
First, let me try to explain a bit more about this problem. Since we
decided to make the `ExternalResourceDriver` a plugin whose implementation
could be provided by the user, we think it makes sense to leverage Flink's
plugin mechanism and load the driver
Thanks for bringing this up Yangze and Xintong. I see the problem. Help me
to understand how the ExternalResourceInfo is intended to be used by the
user. Will she ask for some properties and then pass them to another
component? Where does this component come from?
Cheers,
Till
On Wed, Apr 29, 2020
>
> Will she ask for some properties and then pass them to another component?
Yes. Take GPU as an example, the property needed is "GPU index", and the
index will be used to tell the OS which GPU should be used for the
computing workload.
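To make the "GPU index" idea above concrete, here is a minimal sketch in Python (the real driver would be Java): it assumes the driver exposes the index under a property key such as "index" (a hypothetical name) and pins the process to that device through the environment variable most GPU runtimes honor.

```python
import os

def apply_gpu_index(properties):
    """Pin the current process to the GPU named by a driver-provided index.

    `properties` stands in for the key/value map an ExternalResourceInfo
    would expose; the key name "index" is an assumption for illustration.
    """
    gpu_index = properties.get("index")
    if gpu_index is not None:
        # CUDA-based runtimes select visible devices via this variable.
        os.environ["CUDA_VISIBLE_DEVICES"] = gpu_index
    return os.environ.get("CUDA_VISIBLE_DEVICES")

print(apply_gpu_index({"index": "1"}))  # → 1
```

In a UDF, the equivalent step would run in `open()` before the computing workload starts.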
> Where does this component come from?
The component coul
Rui Li created FLINK-17456:
--
Summary: Update hive connector tests to execute DDL & DML via
TableEnvironment
Key: FLINK-17456
URL: https://issues.apache.org/jira/browse/FLINK-17456
Project: Flink
Is
Rui Li created FLINK-17457:
--
Summary: Manage Hive metadata via Flink DDL
Key: FLINK-17457
URL: https://issues.apache.org/jira/browse/FLINK-17457
Project: Flink
Issue Type: Sub-task
Compone
For your convenience, I modified the Tokenizer in the "WordCount" case [1]
to show how a UDF leverages GPU info and how we found that problem.
[1]
https://github.com/KarmaGYZ/flink/blob/7c5596e43f6d14c65063ab0917f3c0d4bc0211ed/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/stream
Hi folks,
The FLIP-36 is updated according to the discussion with Becket. In the
meantime, any comments are very welcome.
If there are no further comments, I would like to start the voting
thread by tomorrow.
Thanks,
Xuannan
On Sun, Apr 26, 2020 at 9:34 AM Xuannan Su wrote:
> Hi Becket,
>
>
Hi everyone,
I would like to bring up a discussion about the result type of describe
statement.
In the previous version, we defined the result type of the describe
statement as a single column, as follows:
Statement
Result Schema
Result Value
Result Kind
Examples
DESCRIBE xx
field name: result
fiel
Congxian Qiu(klion26) created FLINK-17458:
-
Summary:
TaskExecutorSubmissionTest#testFailingScheduleOrUpdateConsumers
Key: FLINK-17458
URL: https://issues.apache.org/jira/browse/FLINK-17458
Pro
Hi everyone,
I would like to bring up a discussion about the result type of describe
statement,
which is introduced in FLIP-84[1].
In the previous version, we defined the result type of the `describe`
statement as a single column, as follows:
Statement
Result Schema
Result Value
Result Kind
Examples
> -Original Message-
> From: "Jark Wu"
> Sent: 2020-04-29 14:09:44 (Wednesday)
> To: dev , "Yu Li" , myas...@live.com
> Cc: azagre...@apache.org
> Subject: Re: The use of state ttl incremental cleanup strategy in sql
> deduplication resulting in significant performance degradation
>
> Hi lsyldliu,
>
> Than
Hi Godfrey,
Thanks for starting this discussion!
In my mind, WATERMARK is a property (or constraint) of a field, just like
PRIMARY KEY.
Take this example from MySQL:
mysql> CREATE TABLE people (id INT NOT NULL, name VARCHAR(128) NOT NULL,
age INT, PRIMARY KEY (id));
Query OK, 0 rows affected (0.
We will not advertise snapshot artifacts to users. This is simply not
allowed by ASF rules.
On 29/04/2020 04:08, Yang Wang wrote:
Thanks for starting this discussion.
In general, I am also in favor of making the Docker image release partly
decoupled from the Flink release. However, I think it
Yes, this is a reason to cancel the RC.
Looking at the ES commit, there may also be other dependencies missing,
like mustache and elasticsearch-geo.
On 29/04/2020 08:54, Robert Metzger wrote:
Thanks a lot for creating a release candidate for 1.10.1!
I'm not sure, but I think I found a potenti
Thanks a lot for creating a release candidate for 1.10.1!
+1 from my side
checked
- md5/gpg, ok
- source does not contain any binaries, ok
- pom points to the same version 1.10.1, ok
- README file does not contain anything unexpected, ok
- maven clean package -DskipTests, ok
- maven clean verify,
Thanks for taking a look Chesnay. Then let me officially cancel the release:
-1 (binding)
Another question that I had while checking the release was the
"apache-flink-1.10.1.tar.gz" binary, which I suppose is the python
distribution.
It does not contain a LICENSE and NOTICE file at the root leve
+1 I like the general idea of printing the results as a table.
On the specifics I don't know enough, but Fabian's suggestions seem to
make sense to me.
Aljoscha
On 29.04.20 10:56, Fabian Hueske wrote:
Hi Godfrey,
Thanks for starting this discussion!
In my mind, WATERMARK is a property (or c
ranqiqiang created FLINK-17459:
--
Summary: JDBCAppendTableSink does not support flush by
flushIntervalMills
Key: FLINK-17459
URL: https://issues.apache.org/jira/browse/FLINK-17459
Project: Flink
I
Jingsong Lee created FLINK-17460:
Summary: Create sql-jars for parquet and orc
Key: FLINK-17460
URL: https://issues.apache.org/jira/browse/FLINK-17460
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17461:
---
Summary: Support JSON serialization and deserialization schema for
RowData type
Key: FLINK-17461
URL: https://issues.apache.org/jira/browse/FLINK-17461
Project: Flink
I
Jark Wu created FLINK-17462:
---
Summary: Support CSV serialization and deserialization schema for
RowData type
Key: FLINK-17462
URL: https://issues.apache.org/jira/browse/FLINK-17462
Project: Flink
Is
I am also in favor of option 3, since the Flink FileSystem has a very
similar implementation via the plugin mechanism: it has a map "FS_FACTORIES"
to store the plugin-loaded FileSystem implementations (e.g. S3, AzureFS,
OSS, etc.) and provides some common interfaces.
Best,
Yang
Yangze Guo wrote on Wed, Apr 29, 2020
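The "FS_FACTORIES" pattern mentioned above can be sketched as a small registry: plugin-loaded factories register themselves under a scheme and are looked up through a common interface. This is a Python illustration under assumed names, not Flink's actual Java code.

```python
# A factory map in the spirit of Flink's FileSystem "FS_FACTORIES":
# each plugin registers a factory for its scheme; callers only see the
# common lookup interface. All names here are illustrative.

class FileSystemFactory:
    def __init__(self, scheme, create):
        self.scheme = scheme      # e.g. "s3", "oss"
        self.create = create      # callable producing the file system

FS_FACTORIES = {}

def register_factory(factory):
    FS_FACTORIES[factory.scheme] = factory

def get_file_system(scheme):
    factory = FS_FACTORIES.get(scheme)
    if factory is None:
        raise KeyError(f"No file system registered for scheme '{scheme}'")
    return factory.create()

# Plugins (loaded via the plugin mechanism) would perform the registration:
register_factory(FileSystemFactory("s3", lambda: "S3FileSystem"))
register_factory(FileSystemFactory("oss", lambda: "OSSFileSystem"))

print(get_file_system("s3"))  # → S3FileSystem
```

An `ExternalResourceDriver` registry could follow the same shape, keyed by resource name instead of scheme.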
Even without any code change, I see following test errors:
[ERROR] Errors:
[ERROR]
LocalFileSystemRecoverableWriterTest>AbstractRecoverableWriterTest.testCloseWithNoData:99
» FileSystem
[ERROR]
LocalFileSystemRecoverableWriterTest>AbstractRecoverableWriterTest.testRecoverAfterMultiplePersistsState
Could you give us the entire error?
What OS are you working on?
On 29/04/2020 15:00, Manish G wrote:
Even without any code change, I see following test errors:
[ERROR] Errors:
[ERROR]
LocalFileSystemRecoverableWriterTest>AbstractRecoverableWriterTest.testCloseWithNoData:99
» FileSystem
[ERROR]
Robert Metzger created FLINK-17463:
--
Summary:
BlobCacheCleanupTest.testPermanentBlobCleanup:133->verifyJobCleanup:432 »
FileAlreadyExists
Key: FLINK-17463
URL: https://issues.apache.org/jira/browse/FLINK-17463
Hi everyone,
discussions around ConfigOption seem to be very popular recently. So I
would also like to get some opinions on a different topic.
How do we represent hierarchies in ConfigOption? In FLIP-122, we agreed
on the following DDL syntax:
CREATE TABLE fs_table (
...
) WITH (
'connect
> Therefore, should we advocate instead:
>
> 'format.kind' = 'json',
> 'format.fail-on-missing-field' = 'false'
Yes. That's pretty much it.
This is reasonably important to nail down, as with such violations I
believe we could not actually switch to a standard YAML parser.
On 29/04/2020 16:05,
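The YAML-parser violation discussed above can be demonstrated without any YAML library: if the flat key 'format' holds a scalar while 'format.fail-on-missing-field' implies 'format' is also a section, the keys cannot be nested into one mapping. The sketch below (illustrative, not Flink code) makes that collision explicit, and shows that the 'format.kind' style nests cleanly.

```python
def nest(flat):
    """Turn dotted flat keys into a nested mapping, failing where one key
    is both a scalar and a prefix of another -- exactly the shape a
    standard YAML parser cannot represent."""
    root = {}
    for key, value in flat.items():
        node = root
        parts = key.split(".")
        for part in parts[:-1]:
            node = node.setdefault(part, {})
            if not isinstance(node, dict):
                raise ValueError(f"'{part}' is both a value and a section")
        if isinstance(node.get(parts[-1]), dict):
            raise ValueError(f"'{key}' is both a value and a section")
        node[parts[-1]] = value
    return root

# 'format' used both as a scalar and as a prefix cannot be nested:
try:
    nest({"format": "json", "format.fail-on-missing-field": "false"})
except ValueError as e:
    print("collision:", e)

# The 'format.kind' style nests cleanly into one mapping:
print(nest({"format.kind": "json", "format.fail-on-missing-field": "false"}))
```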
John Lonergan created FLINK-17464:
-
Summary: Standalone HA cluster crashes with non-recoverable cluster
state - need to wipe cluster to recover service
Key: FLINK-17464
URL: https://issues.apache.org/jira/browse/FLIN
From a user's perspective, I prefer the shorter one "format=json", because
it's more concise and straightforward. The "kind" is redundant for users.
Is there a real case that requires representing the configuration in JSON
style? As far as I can see, I don't see such a requirement, and everything
works f
Personally I don't have any preference here. Compliance with a standard
YAML parser is probably more important.
On Wed, Apr 29, 2020 at 5:10 PM Jark Wu wrote:
> From a user's perspective, I prefer the shorter one "format=json", because
> it's more concise and straightforward. The "kind" is redundan
Andrey Zagrebin created FLINK-17465:
---
Summary: Update Chinese user documentation for job manager memory
model
Key: FLINK-17465
URL: https://issues.apache.org/jira/browse/FLINK-17465
Project: Flink
Looks like the ES NOTICE problem is a long-standing problem, because the
ES6 sql connector NOTICE also misses these dependencies.
Best,
Jark
On Wed, 29 Apr 2020 at 17:26, Robert Metzger wrote:
> Thanks for taking a look Chesnay. Then let me officially cancel the
> release:
>
> -1 (binding)
>
>
Gyula Fora created FLINK-17466:
--
Summary: toRetractStream doesn't work correctly with Pojo
conversion class
Key: FLINK-17466
URL: https://issues.apache.org/jira/browse/FLINK-17466
Project: Flink
Roman Khachatryan created FLINK-17467:
-
Summary: Implement aligned savepoint in UC mode
Key: FLINK-17467
URL: https://issues.apache.org/jira/browse/FLINK-17467
Project: Flink
Issue Type:
Piotr Nowojski created FLINK-17468:
--
Summary: Provide more detailed metrics why asynchronous part of
checkpoint is taking long time
Key: FLINK-17468
URL: https://issues.apache.org/jira/browse/FLINK-17468
Hi all,
I also wanted to share my opinion.
When talking about the ConfigOption hierarchy we use for configuring a
Flink cluster, I would be a strong advocate for keeping a yaml/hocon/json/...
compatible style. Those options are primarily read from a file and thus
should at least try to follow common p
Regarding the WatermarkGenerator (WG) interface itself. The proposal is
basically to turn emitting into a "flatMap", we give the
WatermarkGenerator a "collector" (the WatermarkOutput) and the WG can
decide whether to output a watermark or not and can also mark the output
as idle. Changing the i
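The "flatMap"-style generator described above can be sketched as follows. The generator receives an output (the collector) and decides per event whether to emit a watermark or mark the output idle. This is a Python illustration; the interface names mirror the discussion but are not Flink's exact API.

```python
class WatermarkOutput:
    """Stand-in for the collector handed to the generator."""
    def __init__(self):
        self.watermarks = []
        self.idle = False

    def emit_watermark(self, ts):
        self.watermarks.append(ts)
        self.idle = False

    def mark_idle(self):
        self.idle = True

class BoundedOutOfOrdernessGenerator:
    """One possible WG: watermarks trail the highest seen timestamp."""
    def __init__(self, max_out_of_orderness):
        self.max_out_of_orderness = max_out_of_orderness
        self.max_ts = None

    def on_event(self, event_ts, output):
        # The generator itself decides whether to emit: only when the
        # highest observed timestamp advances does a watermark go out.
        if self.max_ts is None or event_ts > self.max_ts:
            self.max_ts = event_ts
            output.emit_watermark(self.max_ts - self.max_out_of_orderness)

out = WatermarkOutput()
gen = BoundedOutOfOrdernessGenerator(5)
for ts in [10, 8, 20]:
    gen.on_event(ts, out)
print(out.watermarks)  # → [5, 15]
```

Note how the late event (8) produces no output at all, which a "return a watermark" interface cannot express as naturally.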
Hi,
We have a use case where we have to demultiplex the incoming stream to
multiple output streams.
We read from 1 Kafka topic and as an output we generate multiple Kafka
topics. The logic of generating each new Kafka topic is different and not
known beforehand. Users of the system keep adding n
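The demultiplexing described above can be sketched as a router whose rules are added at runtime. This Python sketch only models the routing decision; in Flink this logic would typically live in a serialization schema or process function that picks the target Kafka topic per record. All rule and topic names are illustrative.

```python
class Demultiplexer:
    """Route each record to zero or more output topics via runtime rules."""
    def __init__(self):
        self.rules = []  # (predicate, topic) pairs, evaluated in order

    def add_rule(self, predicate, topic):
        # Users of the system can keep adding new rules while it runs.
        self.rules.append((predicate, topic))

    def route(self, record):
        """Return the target topics; a record may match several rules."""
        return [topic for predicate, topic in self.rules if predicate(record)]

demux = Demultiplexer()
demux.add_rule(lambda r: r["type"] == "click", "clicks-topic")
demux.add_rule(lambda r: r["region"] == "eu", "eu-topic")

print(demux.route({"type": "click", "region": "eu"}))
# → ['clicks-topic', 'eu-topic']
```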
John Lonergan created FLINK-17469:
-
Summary: Support override of DEFAULT_JOB_NAME with system property
for StreamExecutionEnvironment
Key: FLINK-17469
URL: https://issues.apache.org/jira/browse/FLINK-17469
Hunter Herman created FLINK-17470:
-
Summary: Flink task executor process permanently hangs on
`flink-daemon.sh stop`, deletes PID file
Key: FLINK-17470
URL: https://issues.apache.org/jira/browse/FLINK-17470
piyushnarang opened a new pull request #85:
URL: https://github.com/apache/flink-shaded/pull/85
Follow up from https://github.com/apache/flink/pull/11938 to commit in the
right project.
Picking up the updated zk dependency allows us to get around an issue in the
`StaticHostProvider` in
>
> Hi ,
>
> We have a use case where we have to demultiplex the incoming stream to
> multiple output streams.
>
> We read from 1 Kafka topic and as an output we generate multiple Kafka
> topics. The logic of generating each new Kafka topic is different and not
> known beforehand. Users of the syst
Seth,
Thanks for the enthusiastic reply.
However, I have some questions ... and concerns :)
1) Create a page on the flink packages website.
I looked at this website and it raises a number of red flags for me:
- There are no instructions anywhere on the site on how to add a listing.
- The
piyushnarang commented on pull request #85:
URL: https://github.com/apache/flink-shaded/pull/85#issuecomment-621507875
cc @zentol - I tried using this version on Flink and I hit the issues that
were captured in https://issues.apache.org/jira/browse/FLINK-11259
We need to make a minor twe
Thanks Timo for starting the discussion.
Generally I like the idea to keep the config aligned with a standard like
json/yaml.
From the user's perspective, I don't use table configs from a config file
like yaml or json for now,
and it's ok to change it to a yaml-like style. Actually we didn't know tha
Thanks Timo for starting the discussion.
I am +1 for "format: 'json'".
Take a look at Dawid's yaml case:
connector: 'filesystem'
path: '...'
format: 'json'
format:
option1: '...'
option2: '...'
option3: '...'
Does this work?
According to my understanding, the 'format' key is the attribute o
Here I have a little doubt. At present, our JSON format only supports
conventional JSON. If we need to implement JSON with BSON, JSON with
Avro, etc., how should we express it?
Do we need something like the following:
'format.name' = 'json',
'format.json.fail-on-missing-field' = 'false'
'format.name
IIUC FLIP-122 already delegates the responsibility for designing and parsing
connector properties to connector developers.
So frankly speaking, no matter which style we choose, there is no strong
guarantee for either of these. So it's also possible
that developers can choose a totally different way
Sorry for the mistake.
I propose:
connector: 'filesystem'
path: '...'
format: 'json'
format.option:
option1: '...'
option2: '...'
option3: '...'
And I think in most cases, users just need to configure the 'format' key;
we should make it convenient for them. There is no big problem in making
fo
Thanks for all the efforts checking the license.
The vote is hereby canceled, will prepare the next RC after license issues
resolved.
Best Regards,
Yu
On Wed, 29 Apr 2020 at 23:29, Jark Wu wrote:
> Looks like the ES NOTICE problem is a long-standing problem, because the
> ES6 sql connector NO
klion26 commented on a change in pull request #245:
URL: https://github.com/apache/flink-web/pull/245#discussion_r417746431
##
File path: contributing/code-style-and-quality-preamble.zh.md
##
@@ -1,25 +1,25 @@
---
-title: "Apache Flink Code Style and Quality Guide — Preamble"
klion26 commented on pull request #245:
URL: https://github.com/apache/flink-web/pull/245#issuecomment-621605899
Seems the original author's account has been deleted. Maybe someone else
can take over this?
XBaith commented on a change in pull request #245:
URL: https://github.com/apache/flink-web/pull/245#discussion_r417749335
##
File path: contributing/code-style-and-quality-preamble.zh.md
##
@@ -1,25 +1,25 @@
---
-title: "Apache Flink Code Style and Quality Guide — Preamble"
Yu Li created FLINK-17471:
-
Summary: Move LICENSE and NOTICE files to root directory of python
distribution
Key: FLINK-17471
URL: https://issues.apache.org/jira/browse/FLINK-17471
Project: Flink
Iss
Hello,
I am new to the mailing list and to contributing to big open-source projects
in general, and I don't know if I did something wrong or should be more
patient :)
I put a topic up for discussion as per the contribution guide
"https://flink.apache.org/contributing/how-to-contribute.html" almost a
Hi Karim,
Sorry you did not have the best first-time experience. You certainly did
everything right, which I definitely appreciate.
The problem in that particular case, as I see it, is that RabbitMQ is
not very actively maintained and therefore it is not easy to find a
committer willing to take o
Hi all,
I'd like to start with a comment that I am ok with the current state of
FLIP-122 if there is a strong preference for it. Nevertheless I
still like the idea of adding `type` to the `format` to have it as
`format.type` = `json`.
I wanted to clarify a few things though:
@Jingsong As far
ES6 isn't bundling these dependencies.
On 29/04/2020 17:29, Jark Wu wrote:
Looks like the ES NOTICE problem is a long-standing problem, because the
ES6 sql connector NOTICE also misses these dependencies.
Best,
Jark
On Wed, 29 Apr 2020 at 17:26, Robert Metzger wrote:
Thanks for taking a loo
RocMarshal created FLINK-17472:
--
Summary: StreamExecutionEnvironment and ExecutionEnvironment in
Yarn mode
Key: FLINK-17472
URL: https://issues.apache.org/jira/browse/FLINK-17472
Project: Flink
Hi Chesnay,
I mean `flink-sql-connector-elasticsearch6`.
Because this dependency change on elasticsearch7 [1] totally follows what
elasticsearch6 does, and they have almost the same dependencies.
Best,
Jark
[1]:
https://github.com/apache/flink/commit/1827e4dddfbac75a533ff2aea2f3e690777a3e5e#d