Hi Niels,
I think Robert was referring to the fact that Apache considers only the
source release to be "the release"; everything else is called a
convenience release.
Best,
Aljoscha
On 27.04.20 19:43, Niels Basjes wrote:
Hi,
In my opinion the docker images are essentially simply differently
Huang Xingbo created FLINK-17423:
Summary: Support Python UDTF in blink planner under batch mode
Key: FLINK-17423
URL: https://issues.apache.org/jira/browse/FLINK-17423
Project: Flink
Issue T
Yu Li created FLINK-17424:
-
Summary: SQL Client end-to-end test (Old planner) Elasticsearch
(v7.5.1) failed due to download error
Key: FLINK-17424
URL: https://issues.apache.org/jira/browse/FLINK-17424
Projec
Hi Yadong,
this sounds like a good solution to me.
Cheers,
Till
On Tue, Apr 28, 2020 at 4:18 AM Forward Xu wrote:
> +1
>
> best,
> Forward
>
> Yadong Xie wrote on Tue, Apr 28, 2020 at 10:03 AM:
>
> > Hi all
> >
> > Sorry, we have an issue that was not discovered in advance.
> >
> > When users run multiple
Hi Pavan,
please post these kinds of questions to the user ML. I've cross-linked it
now.
Image attachments will be filtered out. Consequently, we cannot see what
you have posted. Moreover, it would be good if you could provide the
community with a bit more detail about what the custom way is and what y
Jark Wu created FLINK-17425:
---
Summary: Supports SupportsFilterPushDown in planner
Key: FLINK-17425
URL: https://issues.apache.org/jira/browse/FLINK-17425
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17427:
---
Summary: Support SupportsPartitionPushDown in planner
Key: FLINK-17427
URL: https://issues.apache.org/jira/browse/FLINK-17427
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17426:
---
Summary: Support SupportsLimitPushDown in planner
Key: FLINK-17426
URL: https://issues.apache.org/jira/browse/FLINK-17426
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17428:
---
Summary: Support SupportsProjectionPushDown in planner
Key: FLINK-17428
URL: https://issues.apache.org/jira/browse/FLINK-17428
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17430:
---
Summary: Support SupportsPartitioning in planner
Key: FLINK-17430
URL: https://issues.apache.org/jira/browse/FLINK-17430
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17429:
---
Summary: Support SupportsOverwrite in planner
Key: FLINK-17429
URL: https://issues.apache.org/jira/browse/FLINK-17429
Project: Flink
Issue Type: Sub-task
Com
Rui Li created FLINK-17431:
--
Summary: Implement table DDLs for Hive dialect part 1
Key: FLINK-17431
URL: https://issues.apache.org/jira/browse/FLINK-17431
Project: Flink
Issue Type: Sub-task
David Anderson created FLINK-17432:
--
Summary: Rename Tutorials to Training
Key: FLINK-17432
URL: https://issues.apache.org/jira/browse/FLINK-17432
Project: Flink
Issue Type: Improvement
Hi,
Thank you for clarifying the tradeoffs and choices.
I've updated my pull request for your review; as far as I can tell it
reflects those choices.
Now there are 3 scenarios:
1) There is no properties file --> everything returns a "default" value.
These are the defaults I have chosen:
Version
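Scenario (1) above, where no properties file exists, can be sketched roughly as follows. This is a minimal illustration only; the file name, keys, and default values here are made up and are not the ones from the pull request under discussion:

```python
from pathlib import Path

# Illustrative defaults only; the real keys and values live in the PR.
DEFAULTS = {"project.version": "unknown", "git.commit.id": "unknown"}

def load_build_properties(path):
    """Return properties parsed from `path`, falling back to DEFAULTS
    when the file is missing or a key is absent (scenario 1)."""
    props = dict(DEFAULTS)
    p = Path(path)
    if p.exists():
        for line in p.read_text().splitlines():
            line = line.strip()
            # Skip blanks and comments; parse simple key=value pairs.
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
    return props
```

With no file present, every lookup falls through to the chosen defaults, which matches the "everything returns a default value" behavior described above.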
This would likely solve the issues surrounding the SQL client, so I
would go along with that.
On 17/04/2020 12:16, Aljoscha Krettek wrote:
I think having such tools and/or tailor-made distributions can be nice
but I also think the discussion is missing the main point: The initial
observation/m
It would be good if we could nail down what a slim/fat distribution
would look like, as there are various ideas floating around in this thread.
Like, what is a "slim" distribution? Are we just emptying /opt? Removing
everything larger than 1mb? Are we throwing out the Table API from /lib
for a
Jingsong Lee created FLINK-17433:
Summary: Real time hive integration
Key: FLINK-17433
URL: https://issues.apache.org/jira/browse/FLINK-17433
Project: Flink
Issue Type: Bug
Componen
Jingsong Lee created FLINK-17434:
Summary: Hive partitioned source support streaming read
Key: FLINK-17434
URL: https://issues.apache.org/jira/browse/FLINK-17434
Project: Flink
Issue Type: Su
Jingsong Lee created FLINK-17435:
Summary: Hive non-partitioned source support streaming read
Key: FLINK-17435
URL: https://issues.apache.org/jira/browse/FLINK-17435
Project: Flink
Issue Type
Hi,
While building the project, I get this error:
[ERROR] Failed to execute goal org.apache.rat:apache-rat-plugin:0.12:check
(default) on project flink-parent: Too many files with unapproved license
Any workaround for it?
Manish
Hi Manish,
The error suggests that there are some files in the project without the ASF
license included. This usually happens when you have added some source files
without adding the license, or there are some temporary files (e.g.,
generated by running a local test) that were not cleaned up. The error message sh
The failure means that a file is missing the Apache license header.
This is usually because either
a) you have added a file but forgot the header, in which case the
solution naturally is to add the header
b) you run on Windows, and Maven does not detect that some files are
binary files (which w
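A quick way to locate the offending files yourself is to scan for the ASF header string. This is only a rough sketch of the idea; the rat plugin's real check has its own configured exclusions and recognizes more license forms:

```python
from pathlib import Path

# The phrase that opens the standard ASF license header.
ASF_MARKER = "Licensed to the Apache Software Foundation"

def files_missing_header(root, suffixes=(".java", ".py")):
    """Return files under `root` whose first 30 lines lack the ASF
    header. A heuristic approximation of the rat check, not the
    plugin's actual logic."""
    missing = []
    for path in Path(root).rglob("*"):
        if path.suffix in suffixes and path.is_file():
            head = "\n".join(path.read_text(errors="ignore").splitlines()[:30])
            if ASF_MARKER not in head:
                missing.append(path)
    return missing
```

Anything this flags is either missing the header (case a) or is leftover generated output that should be cleaned up before building.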
I agree with Niels that if a Flink release is made, it should be
accompanied by the Dockerfiles for that version, and they should be released at once.
This makes sense for testing purposes alone, and I view Docker as just
another distribution channel like Maven central.
The reverse isn't necessarily true tho
Wei Zhong created FLINK-17436:
-
Summary: When submitting Python job via "flink run" a
IllegalAccessError will be raised due to the package's private access control
Key: FLINK-17436
URL: https://issues.apache.org/jira/
Thank you all for the feedback.
It seems we have reached a consensus:
- The naming convention should be xyz.[min|max].
- Adding tests/tools checking the pattern of new configuration options
- Tickets for Flink 2.0 to migrate the "wrong" configuration options.
If there is no objection in the nex
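The proposed pattern check could be sketched roughly as below. The heuristic and the option keys in the test are illustrative, not Flink's actual validation:

```python
import re

# Bound-style options should use the agreed xyz.[min|max] suffix form.
BOUND_SUFFIX = re.compile(r"\.(min|max)$")
# Words suggesting an option expresses a bound and should follow the rule.
BOUND_HINTS = ("minimum", "maximum", "min-", "max-")

def violates_convention(key):
    """True if `key` looks like a bound option but does not end in
    '.min' or '.max'. A heuristic sketch, not Flink's real check."""
    if BOUND_SUFFIX.search(key):
        return False
    return any(hint in key for hint in BOUND_HINTS)
```

A test or tooling hook along these lines could flag newly added configuration options that drift from the convention before they ship.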
颖 created FLINK-17438:
-
Summary: Flink StreamingFileSink Chinese garbled
Key: FLINK-17438
URL: https://issues.apache.org/jira/browse/FLINK-17438
Project: Flink
Issue Type: Bug
Components: Forma
Jark Wu created FLINK-17437:
---
Summary: Use StringData instead of BinaryStringData in code
generation
Key: FLINK-17437
URL: https://issues.apache.org/jira/browse/FLINK-17437
Project: Flink
Issue Ty
Chesnay Schepler created FLINK-17439:
Summary: Setup versioned branches in flink-docker
Key: FLINK-17439
URL: https://issues.apache.org/jira/browse/FLINK-17439
Project: Flink
Issue Type:
Piotr Nowojski created FLINK-17440:
--
Summary: Potential Buffer leak in output unspilling for unaligned
checkpoints
Key: FLINK-17440
URL: https://issues.apache.org/jira/browse/FLINK-17440
Project: Fli
> To me this means that the docker images should be released at the same time
> the other artifacts are released. This also includes snapshot releases. So
> the build of the docker images should be an integral part of the build.
This is already the case since the last release, what this thread i
Currently, processes started in the foreground (like in the case of
Docker) output all logging/stdout directly to the console, without
creating any logging files.
The downside of this approach, as outlined in FLIP-111, is that the
WebUI is not able to display the logs since it relies on these
Robert Metzger created FLINK-17441:
--
Summary: FlinkKinesisConsumerTest.testPeriodicWatermark: watermark
count expected:<2> but was:<1>
Key: FLINK-17441
URL: https://issues.apache.org/jira/browse/FLINK-17441
Gyula Fora created FLINK-17442:
--
Summary: Cannot convert String or boxed-primitive arrays to
DataStream using TypeInformation
Key: FLINK-17442
URL: https://issues.apache.org/jira/browse/FLINK-17442
Proje
Piyush Narang created FLINK-17443:
-
Summary: Flink's ZK in HA mode setup is unable to start up if any
of the zk hosts are unreachable
Key: FLINK-17443
URL: https://issues.apache.org/jira/browse/FLINK-17443
Marie May created FLINK-17444:
-
Summary: Flink StreamingFileSink Azure HadoopRecoverableWriter is
missing.
Key: FLINK-17444
URL: https://issues.apache.org/jira/browse/FLINK-17444
Project: Flink
Brandon Bevans created FLINK-17445:
--
Summary: Allow OperatorTransformation to bootstrapWith a Scala
DataSet
Key: FLINK-17445
URL: https://issues.apache.org/jira/browse/FLINK-17445
Project: Flink
Thanks for confirming! Honestly, I'm in favor of treating the timestamp
field as special and restricting modification (the way the DataStream API
does), but I agree the new approach could be more natural for unifying the
semantics of SQL for both batch and stream.
Thanks again,
Jungtaek Lim (HeartSaVioR)
On Tue, Apr 28
Liu created FLINK-17446:
---
Summary: Blink supports cube and window together in group by
Key: FLINK-17446
URL: https://issues.apache.org/jira/browse/FLINK-17446
Project: Flink
Issue Type: New Feature
Thanks for starting this discussion.
In general, I am also in favor of partly decoupling the docker image
release from the Flink release. However, I think it should only happen
when we want to make some changes that are independent of the Flink
major release (e.g. JDK 11, changing the base image, ins
chun11 created FLINK-17447:
--
Summary: Flink CEPOperator StateException
Key: FLINK-17447
URL: https://issues.apache.org/jira/browse/FLINK-17447
Project: Flink
Issue Type: Bug
Compon
Rui Li created FLINK-17448:
--
Summary: Implement table DDLs for Hive dialect part2
Key: FLINK-17448
URL: https://issues.apache.org/jira/browse/FLINK-17448
Project: Flink
Issue Type: Sub-task
klion26 commented on a change in pull request #235:
URL: https://github.com/apache/flink-web/pull/235#discussion_r417031566
##
File path: contributing/how-to-contribute.zh.md
##
@@ -4,136 +4,138 @@ title: "如何参与贡献"
-Apache Flink is developed by an open and friendly communi
Thanks for Chesnay starting this discussion.
In the FLINK-17166 implementation[1], we are trying to use "tee" instead of
introducing stream redirection (redirecting out/err to files). However,
a side effect is that the logging will be duplicated in both the .log and
.out files.
Then it may consume mor
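The "tee" behavior discussed here can be sketched as a minimal Python analogue of the shell utility (FLINK-17166's actual change is in the startup scripts, not in Python):

```python
class Tee:
    """Write-through to multiple streams, mimicking `tee`: anything
    written to the console stream also lands in the log stream, which
    is why entries end up in both the .log and .out files."""

    def __init__(self, *streams):
        self.streams = streams

    def write(self, data):
        # Fan the same payload out to every attached stream.
        for s in self.streams:
            s.write(data)

    def flush(self):
        for s in self.streams:
            s.flush()
```

The duplication side effect mentioned above falls directly out of this design: every byte is written once per attached stream.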
Rui Li created FLINK-17449:
--
Summary: Implement ADD/DROP partitions
Key: FLINK-17449
URL: https://issues.apache.org/jira/browse/FLINK-17449
Project: Flink
Issue Type: Sub-task
Components:
Rui Li created FLINK-17450:
--
Summary: Implement function DDLs for Hive dialect
Key: FLINK-17450
URL: https://issues.apache.org/jira/browse/FLINK-17450
Project: Flink
Issue Type: Sub-task
C
Rui Li created FLINK-17451:
--
Summary: Implement view DDLs for Hive dialect
Key: FLINK-17451
URL: https://issues.apache.org/jira/browse/FLINK-17451
Project: Flink
Issue Type: Sub-task
Compo
Rui Li created FLINK-17452:
--
Summary: Support creating Hive tables with constraints
Key: FLINK-17452
URL: https://issues.apache.org/jira/browse/FLINK-17452
Project: Flink
Issue Type: Sub-task
Jiayi Liao created FLINK-17453:
--
Summary: KyroSerializer throws IndexOutOfBoundsException type
java.util.PriorityQueue
Key: FLINK-17453
URL: https://issues.apache.org/jira/browse/FLINK-17453
Project: Fli
Hi, all!
On the Flink master branch, we have supported state TTL for SQL mini-batch
deduplication using the incremental cleanup strategy on the heap backend;
see FLINK-16581. Because I want to test the performance of this feature, I
compiled the master branch code and deployed the jar to the production envir
klion26 commented on pull request #242:
URL: https://github.com/apache/flink-web/pull/242#issuecomment-620981849
@shining-huang thanks for your contribution. Could you please resolve the
conflict by rebasing onto the latest master?
Piotr Nowojski created FLINK-17454:
--
Summary: test_configuration.py ConfigurationTests::test_add_all
failed on travis
Key: FLINK-17454
URL: https://issues.apache.org/jira/browse/FLINK-17454
Project:
Hi lsyldliu,
Thanks for investigating this.
First of all, if you are using mini-batch deduplication, it doesn't support
state TTL in 1.9. That's why the TPS looks the same as in 1.11 with state
TTL disabled.
We only introduced state TTL for mini-batch deduplication recently.
Regarding the performance
Jingsong Lee created FLINK-17455:
Summary: Move FileSystemFormatFactory to table common
Key: FLINK-17455
URL: https://issues.apache.org/jira/browse/FLINK-17455
Project: Flink
Issue Type: Sub-
Hi, there:
The "FLIP-108: Add GPU support in Flink"[1] work is now in
progress. However, we have hit a problem with
"RuntimeContext#getExternalResourceInfos" when trying to leverage the
Plugin[2] mechanism in Flink.
The interface is:
The problem is now:
public interface RuntimeContext {
/**
Thanks a lot for creating a release candidate for 1.10.1!
I'm not sure, but I think I found a potential issue in the release while
checking dependency changes on the ElasticSearch7 connector:
https://github.com/apache/flink/commit/1827e4dddfbac75a533ff2aea2f3e690777a3e5e#diff-bd2211176ab6e7fa83ffeaa