Robert Metzger created FLINK-16431:
--
Summary: Pass build profile into end to end test script on Azure
Key: FLINK-16431
URL: https://issues.apache.org/jira/browse/FLINK-16431
Project: Flink
I
Niels Basjes created FLINK-16432:
Summary: Building Hive connector gives problems
Key: FLINK-16432
URL: https://issues.apache.org/jira/browse/FLINK-16432
Project: Flink
Issue Type: Bug
Rui Li created FLINK-16433:
--
Summary: TableEnvironment doesn't clear buffered operations when
it fails to translate the operation
Key: FLINK-16433
URL: https://issues.apache.org/jira/browse/FLINK-16433
Proje
Jingsong Lee created FLINK-16434:
Summary: Add document to explain how to pack hive with their own
hive dependencies
Key: FLINK-16434
URL: https://issues.apache.org/jira/browse/FLINK-16434
Project: Fl
Huang Xingbo created FLINK-16435:
Summary: Fix ide static check
Key: FLINK-16435
URL: https://issues.apache.org/jira/browse/FLINK-16435
Project: Flink
Issue Type: Improvement
Compo
Yu Li created FLINK-16436:
-
Summary: Update Apache downloads link due to INFRA structural
changes
Key: FLINK-16436
URL: https://issues.apache.org/jira/browse/FLINK-16436
Project: Flink
Issue Type: T
Thanks for opening this FLIP and summarizing the current state of the
Dockerfiles, Andrey! +1 for this idea.
I have some minor comments / questions:
- Regarding the flink_docker_utils#install_flink function, I think it
should also support building from a local dist and from a
user-defined archive.
- It se
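For illustration, a minimal sketch of what an install_flink that also accepts a local dist could look like (the function name comes from the FLIP; the arguments, layout, and download URL here are assumptions, not the FLIP's actual API):

```shell
# Hypothetical sketch only: flink_docker_utils#install_flink extended to
# accept either a release version or a path to a locally built dist
# archive. Argument handling and target layout are assumptions.
install_flink() {
  src="$1"                      # version string, or path to a local .tgz
  target="${2:-/opt/flink}"     # install location (second arg for testing)
  if [ -f "$src" ]; then
    # build from a local dist / user-defined archive
    mkdir -p "$target"
    tar -xzf "$src" -C "$target" --strip-components=1
  else
    # would fetch the official release tarball instead (not implemented here)
    echo "download https://archive.apache.org/dist/flink/flink-$src/" >&2
  fi
}
```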
Dear devs,
we conducted some POCs and updated the FLIP accordingly [1].
Key changes:
- POC showed that it is viable to spill only on checkpoint (in contrast to
spilling continuously to avoid overload of external systems)
- Greatly revised/refined recovery and rescaling
- Sketched the required com
Hi Andrey,
Thanks for driving this significant FLIP. From the user ML, we can also
see that many users run Flink in container environments, so the Docker
image is a very basic requirement. As you say, we should provide a
unified place for all the various usages (e.g. session,
Xintong Song created FLINK-16437:
Summary: Make SlotManager allocate resource from ResourceManager
at the worker granularity.
Key: FLINK-16437
URL: https://issues.apache.org/jira/browse/FLINK-16437
Pr
Xintong Song created FLINK-16438:
Summary: Make YarnResourceManager starts workers using
WorkerResourceSpec requested by SlotManager
Key: FLINK-16438
URL: https://issues.apache.org/jira/browse/FLINK-16438
Xintong Song created FLINK-16439:
Summary: Make KubernetesResourceManager starts workers using
WorkerResourceSpec requested by SlotManager
Key: FLINK-16439
URL: https://issues.apache.org/jira/browse/FLINK-16439
Xintong Song created FLINK-16440:
Summary: Extend SlotManager metrics and status for dynamic slot
allocation.
Key: FLINK-16440
URL: https://issues.apache.org/jira/browse/FLINK-16440
Project: Flink
+1 to Arvid's proposal.
On Thu, Mar 5, 2020 at 4:14 AM Xingbo Huang wrote:
> Thanks for this proposal.
>
> As a new contributor to Flink, it would be very helpful to have such blogs
> for us to understand the future of Flink and get involved
>
> BTW, I have a question whether the dev blog ne
Gyula Fora created FLINK-16441:
--
Summary: Allow users to override flink-conf parameters from SQL
CLI environment
Key: FLINK-16441
URL: https://issues.apache.org/jira/browse/FLINK-16441
Project: Flink
Xintong Song created FLINK-16442:
Summary: Make MesosResourceManager starts workers using
WorkerResourceSpec requested by SlotManager
Key: FLINK-16442
URL: https://issues.apache.org/jira/browse/FLINK-16442
Hi everyone,
We just noticed that every time a pull request gets merged with the "Squash
and merge" button,
GitHub drops the original authorship information and changes "authored" to
whoever merged the PR.
We found this happened in #11102 [1] and #11302 [2]. It seems that it is a
long outstanding
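The rewritten authorship described above can be checked from any clone; a minimal sketch (using a throwaway repo here so the example is self-contained; in practice you would run the log command against the flink repository):

```shell
# Sketch: scan author emails for GitHub's noreply domain, which is what
# rewritten "Squash and merge" commits end up with. Note a hit may also
# come from a user who enabled "keep email addresses private".
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=Alice -c user.email=alice@users.noreply.github.com \
  commit -q --allow-empty -m "squash-merged commit"
# Count author emails on the noreply domain.
noreply_hits=$(git -C "$repo" log --format='%ae' | grep -c 'users.noreply.github.com')
echo "$noreply_hits"
```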
+1 to Arvid's proposal.
On Thu, 5 Mar 2020 at 18:13, Robert Metzger wrote:
> +1 to Arvid's proposal.
>
>
>
> On Thu, Mar 5, 2020 at 4:14 AM Xingbo Huang wrote:
>
> > Thanks for this proposal.
> >
> > As a new contributor to Flink, it would be very helpful to have such
> blogs
> > for us to un
Do we have more cases of "common Hadoop Utils"?
If yes, does it make sense to create a "flink-hadoop-utils" module with
exactly such classes? It would have an optional dependency on
"flink-shaded-hadoop".
On Wed, Mar 4, 2020 at 9:12 AM Till Rohrmann wrote:
> Hi Sivaprasanna,
>
> we don't upload
+1 to Arvid's proposal
> On Mar 5, 2020, at 6:49 PM, Jark Wu wrote:
>
> +1 to Arvid's proposal.
>
> On Thu, 5 Mar 2020 at 18:13, Robert Metzger wrote:
>
>> +1 to Arvid's proposal.
>>
>>
>>
>> On Thu, Mar 5, 2020 at 4:14 AM Xingbo Huang wrote:
>>
>>> Thanks for this proposal.
>>>
>>> As a new cont
Hi Jark,
Thanks for bringing up this discussion. Good catch. Agree that we can
disable "Squash and merge"(also the other buttons) for now.
There is a guideline on how to do that in
https://help.github.com/en/github/administering-a-repository/configuring-commit-squashing-for-pull-requests
.
Best,
Thanks for the deep investigation.
+1 to disable "Squash and merge" button now.
But I think this is a very serious problem; it affects too many GitHub
users. GitHub should deal with it quickly.
Best,
Jingsong Lee
On Thu, Mar 5, 2020 at 7:21 PM Xingbo Huang wrote:
> Hi Jark,
>
> Thanks for bringi
Hi Jark
There is a conversation about this here:
https://github.community/t5/How-to-use-Git-and-GitHub/Authorship-of-merge-commits-made-by-Github-Apps-changed/td-p/48797
I think GitHub will fix it soon; it is a bug, not a feature :).
On Thu, Mar 5, 2020 at 8:32 PM, Jingsong Li wrote:
> Thanks for deep investiga
Big +1 to disable it.
I have never been a fan; it has always caused problems:
- Merge commits
- weird alias emails
- lost author information
- commit message misses the "This closes #" line to track back
commits to PRs/reviews.
The button goes against best practice, it should go away.
Be
Hi Stephen,
I guess it is a valid point to have something like 'flink-hadoop-utils'.
Maybe a [DISCUSS] thread can be started to understand what the community
thinks?
On Thu, Mar 5, 2020 at 4:22 PM Stephan Ewen wrote:
> Do we have more cases of "common Hadoop Utils"?
>
> If yes, does it make sen
To implement it, file a JIRA ticket with INFRA [1].
Best,
tison.
[1] https://issues.apache.org/jira/projects/INFRA
On Thu, Mar 5, 2020 at 8:57 PM, Stephan Ewen wrote:
> Big +1 to disable it.
>
> I have never been a fan, it has always caused problems:
> - Merge commits
> - weird alias emails
> - lost autho
Hi Dawid,
Thanks for your suggestion.
After some investigation, there are two designs in my mind for how to defer
the instantiation of temporary system functions and temporary catalog functions
to compile time.
1. FunctionCatalog accepts both FunctionDefinitions and uninstantiated
temporary
Hi Jark
I think the GitHub UI cannot disable both the "Squash and merge" and
"Rebase and merge" buttons at the same time if there is any protected
branch in the repository (according to GitHub rules).
If we only leave the "merge and commits" button, it will go against the
rule requiring a linear commit history
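The linear-history requirement mentioned above can be checked mechanically; a minimal sketch in a throwaway repo (plain git commands only; nothing here is GitHub-specific):

```shell
# Sketch: a linear history means no merge commits; fast-forward-only
# merges preserve that. A throwaway repo keeps the example self-contained.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=A -c user.email=a@example.com \
  commit -q --allow-empty -m "base"
main=$(git -C "$repo" symbolic-ref --short HEAD)
git -C "$repo" checkout -q -b feature
git -C "$repo" -c user.name=A -c user.email=a@example.com \
  commit -q --allow-empty -m "change"
git -C "$repo" checkout -q "$main"
git -C "$repo" merge -q --ff-only feature   # fast-forward: no merge commit
# A linear history has zero merge commits.
merge_count=$(git -C "$repo" rev-list --merges --count HEAD)
echo "$merge_count"
```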
Hi Yadong,
Maybe we should first reach out to the INFRA team and see the reply from
their side. Since the actual operator is the INFRA team, on the dev
mailing list we can focus on the motivation and
wait for the reply.
Best,
tison.
On Thu, Mar 5, 2020 at 9:29 PM, Yadong Xie wrote:
> Hi Jark
>
> I think GitHub UI can not disab
Stephan Ewen created FLINK-16443:
Summary: Fix wrong fix for user-code CheckpointExceptions
Key: FLINK-16443
URL: https://issues.apache.org/jira/browse/FLINK-16443
Project: Flink
Issue Type:
Yun Tang created FLINK-16444:
Summary: Count the read/write/seek/next latency of RocksDB as
metrics
Key: FLINK-16444
URL: https://issues.apache.org/jira/browse/FLINK-16444
Project: Flink
Issue T
Hi,
thanks for starting the discussion, Tison!
I'd like to fix this dependency mess sooner rather than later, but we do
have to consider the fact that we are breaking the dependency setup of
users. If they only had a dependency on flink-streaming-java before
but used classes from flink-c
Also from my side +1 to start voting.
Cheers,
Kostas
On Thu, Mar 5, 2020 at 7:45 AM tison wrote:
>
> +1 to start voting.
>
> Best,
> tison.
>
>
> On Thu, Mar 5, 2020 at 2:29 PM, Yang Wang wrote:
>>
>> Hi Peter,
>> Really thanks for your response.
>>
>> Hi all @Kostas Kloudas @Zili Chen @Peter Huang @Rong Rong
+1 to this fix, in general.
If the main issue is that users now have to add "flink-clients" explicitly,
then I think this is okay, if we spell it out prominently in the release
notes, make sure quickstarts etc. are updated, and have a good error
message when client/runtime classes are not fou
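If "flink-clients" becomes an explicit user-facing dependency as discussed, the fix on the user side would be roughly one extra dependency entry (a sketch only; the Scala suffix and version depend on the user's own setup):

```xml
<!-- Hypothetical pom.xml fragment: add flink-clients explicitly.
     The artifact suffix and version shown are examples, not prescriptive. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-clients_2.11</artifactId>
  <version>1.10.0</version>
</dependency>
```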
+1 for disabling "Squash and merge" if feasible to do that.
The possible benefit of this button is saving some effort squashing
intermediate "[fixup]" commits during PR review.
But it brings more potential problems as mentioned below: missing author
information and message of
Thanks for this proposal Arvid!
+1 and looking forward to the wiki structure and more following blogs.
Best,
Zhijiang
--
From:Dian Fu
Send Time:2020 Mar. 5 (Thu.) 19:08
To:dev
Subject:Re: Flink dev blog
+1 to Arvid's proposal
>
+1 for disabling this feature for now.
Thanks a lot for spotting this!
On Thu, Mar 5, 2020 at 3:54 PM Zhijiang
wrote:
> +1 for disabling "Squash and merge" if feasible to do that.
>
> The possible benefit to use this button is for saving some efforts to
> squash some intermediate "[fixup]" comm
Hi all,
Thanks for the feedback. But I want to clarify that the motivation to
disable "Squash and merge" is just the regression/bug of the missing
author information.
If GitHub fixes this later, I think it makes sense to bring this button
back.
Hi Stephan & Zhijiang,
To be honest, I love
Gary Yao created FLINK-16445:
Summary: Raise japicmp.referenceVersion to 1.10.0
Key: FLINK-16445
URL: https://issues.apache.org/jira/browse/FLINK-16445
Project: Flink
Issue Type: Bug
Co
Hi,
If it’s really not preserving ownership (I didn’t notice the problem before),
+1 for removing “squash and merge”.
However -1 for removing “rebase and merge”. I didn’t see any issues with it and
I’m using it constantly.
Piotrek
> On 5 Mar 2020, at 16:40, Jark Wu wrote:
>
> Hi all,
>
> T
Zou created FLINK-16446:
---
Summary: Add rate limiting feature for FlinkKafkaConsumer
Key: FLINK-16446
URL: https://issues.apache.org/jira/browse/FLINK-16446
Project: Flink
Issue Type: Improvement
It looks like this feature still messes up email addresses, for example if
you do a "git log | grep noreply" in the repo.
Don't most PRs consist anyway of multiple commits where we want to
preserve "refactor" and "feature" differentiation in the history, rather
than squash everything?
On Thu, Ma
> I have some hesitation, because the actual version number can better
reflect the actual dependency. For example, if the user also knows the
field hiveVersion[1]. He may enter the wrong hiveVersion because of the
name, or he may have the wrong expectation for the hive built-in functions.
Sorry, I
João Boto created FLINK-16447:
-
Summary: Non serializable field on CompressWriterFactory
Key: FLINK-16447
URL: https://issues.apache.org/jira/browse/FLINK-16447
Project: Flink
Issue Type: Bug
Big +1 also from my side.
This will eliminate some work-arounds used so far to bypass the module
structure (like code using reflection to extract a JobGraph from a
Pipeline).
I agree with Stephan that with proper documentation, release notes and
tooling update, it will hopefully not be a big hass
Bowen Li created FLINK-16448:
Summary: add documentation for Hive table sink parallelism setting
strategy
Key: FLINK-16448
URL: https://issues.apache.org/jira/browse/FLINK-16448
Project: Flink
I
Could we merge the two modules into one?
Sequence files are just another way of compressing files.
On 2020/03/05 13:02:46, Sivaprasanna wrote:
> Hi Stephen,
>
> I guess it is a valid point to have something like 'flink-hadoop-utils'.
> Maybe a [DISCUSS] thread can be started to understand what the
It seems this will be fixed today:
https://twitter.com/natfriedman/status/1235613840659767298?s=19
-Matthias
On 3/5/20 8:37 AM, Stephan Ewen wrote:
> It looks like this feature still messes up email addresses, for
> example if you do a "git log |
Marta Paes Moreira created FLINK-16449:
--
Summary: Deprecated methods in the Table API walkthrough.
Key: FLINK-16449
URL: https://issues.apache.org/jira/browse/FLINK-16449
Project: Flink
Hi,
I agree with Jark. The tool is useful. If there are some problems, I think
we can reach an agreement on certain terms?
GitHub provides:
- "rebase and merge": keeps all commits.
- "squash and merge": squashes all commits into one commit; pull requests
often consist of multiple commits, like "ad
Jingsong Lee created FLINK-16450:
Summary: Integrate parquet columnar row reader to hive
Key: FLINK-16450
URL: https://issues.apache.org/jira/browse/FLINK-16450
Project: Flink
Issue Type: Sub
Hi there,
I would like to kick off a discussion on
https://issues.apache.org/jira/browse/FLINK-16392 and discuss the best
way to move forward. Here is the problem statement and the proposal we
have in mind. Please kindly provide feedback.
The native interval join relies on the state backend (e.g. RocksDB) to insert/fetc
jinfeng created FLINK-16451:
---
Summary: listagg with distinct for over window
Key: FLINK-16451
URL: https://issues.apache.org/jira/browse/FLINK-16451
Project: Flink
Issue Type: Bug
Compon
Hi Bowen,
My idea is to directly provide the really dependent version, such as hive
1.2.2, our jar name is hive 1.2.2, so that users can directly and clearly
know the version. As for which metastore is supported, we can guide it in
the document, otherwise, write 1.0, and the result version is inde
Rui Li created FLINK-16452:
--
Summary: Insert into static partition doesn't support order by or
limit
Key: FLINK-16452
URL: https://issues.apache.org/jira/browse/FLINK-16452
Project: Flink
Issue Typ
Hi Stephan,
> noreply email address.
I investigated this and found some x...@users.noreply.github.com addresses. I
think that's because they enabled "keep email addresses private" on GitHub
[1].
> Don't most PRs consist anyways of multiple commits where we want to
preserve "refactor" and "feature"
Hi Jark,
Thanks for starting this discussion. Personally I also love the "squash and
merge" button. It's very convenient.
Regarding the email address "noreply", it seems that there are two cases:
- The email address in the original commit is already "noreply". In this case,
this issue will s
cpugputpu created FLINK-16453:
-
Summary: A test failure in KafkaTest
Key: FLINK-16453
URL: https://issues.apache.org/jira/browse/FLINK-16453
Project: Flink
Issue Type: Bug
Components: C
Hi Jingsong,
I think I misunderstood you. So your argument is that, to support hive
1.0.0 - 1.2.2, we are actually using Hive 1.2.2 and thus we name the flink
module as "flink-connector-hive-1.2", right? It makes sense to me now.
+1 for this change.
Cheers,
Bowen
On Thu, Mar 5, 2020 at 6:53 PM
Hi Jark,
Thanks for the further investigation.
If the bug of missing author information can be solved by GitHub soon, I am
generally neutral on disabling the "Squash and merge" button, even somewhat
preferring to keep it because it can bring some benefits at times and some
committers are willing to rely on
Zhijiang created FLINK-16454:
Summary: Update the copyright year in NOTICE files
Key: FLINK-16454
URL: https://issues.apache.org/jira/browse/FLINK-16454
Project: Flink
Issue Type: Task
Jingsong Lee created FLINK-16455:
Summary: Introduce flink-sql-connector-hive modules to provide
hive uber jars
Key: FLINK-16455
URL: https://issues.apache.org/jira/browse/FLINK-16455
Project: Flink
That also makes sense but that, I believe, would be a breaking/major
change. If we are okay with merging them together, we can name something
like "flink-hadoop-compress" since SequenceFile is also a Hadoop format and
the existing "flink-compress" module, as of now, deals with Hadoop based
compress
These GitHub buttons can sometimes help me merge commits when the network
from China to GitHub is unstable. It would take me very long to fetch and
reorganize commits locally, fetch master, do some rebasing, and then push.
Each step is time consuming when the network situation is bad.
So I would lik
Hi,
+1 to making flink-streaming-java an API-only module and solving this sooner
rather than later.
It would be clearer to only expose an SDK module for writing jobs.
Another benefit I can see: flink-streaming-java would be Scala-free
if we reverse the dependencies, and this would be really ni
Hi,
> It looks like this feature still messes up email addresses, for example if
> you do a "git log | grep noreply" in the repo.
I’ve checked my appearances on that list (git log | grep noreply) and they
happened a couple of times, when I actually used squash and merge (I wanted to
squash fixup