Implementation question: how do we apply the line length rules?
If we just turn on the checkstyle rule "LineLength", then a huge
effort is required to re-break all the lines that violate the rule. If
we use an auto-formatter here, it may break lines
"just at the position" of the limit, which looks awful.
Is it possible we require only t
+1 non-binding
- built from source with default profile
- manually ran SQL and Table API tests for Flink's metadata integration
with Hive Metastore in local cluster
- manually ran SQL tests for batch capability with Blink planner and Hive
integration (source/sink/udf) in local cluster
- file f
+1 (non-binding)
Reran Jepsen tests 10 times.
On Wed, Aug 21, 2019 at 5:35 AM vino yang wrote:
> +1 (non-binding)
>
> - checkout source code and build successfully
> - started a local cluster and ran some example jobs successfully
> - verified signatures and hashes
> - checked release notes and
+1 (non-binding)
- checkout source code and build successfully
- started a local cluster and ran some example jobs successfully
- verified signatures and hashes
- checked release notes and post
Best,
Vino
On Wed, Aug 21, 2019 at 4:20 AM Stephan Ewen wrote:
> +1 (binding)
>
> - Downloaded the binary release
+1 (non-binding)
- built the source release with Scala 2.12 and Scala 2.11 successfully
- checked/verified signatures and hashes
- checked that all POM files point to the same version
- started a cluster, ran a SQL query doing a temporal join between a Kafka
source and a MySQL JDBC table, and wrote the results to
I filed a JIRA ticket, https://issues.apache.org/jira/browse/FLINK-13807, for this.
Not sure if I will be able to get to it in the near future, so I didn't assign it to
myself. Anyone should feel free to pick it up.
I changed my environment to make this pass for now.
On Tue, Aug 20, 2019 at 4:11 AM Stephan Ewen wrote:
> Tha
Ethan Li created FLINK-13807:
Summary: Flink-avro unit tests fail if the default character encoding in
the environment is not UTF-8
Key: FLINK-13807
URL: https://issues.apache.org/jira/browse/FLINK-13807
+1 (binding)
- Downloaded the binary release tarball
- started a standalone cluster with four nodes
- ran some examples through the Web UI
- checked the logs
- created a project from the Java quickstarts maven archetype
- ran a multi-stage DataSet job in batch mode
- killed a TaskManager a
Stephan Ewen created FLINK-13806:
Summary: Metric Fetcher floods the JM log with errors when TM is
lost
Key: FLINK-13806
URL: https://issues.apache.org/jira/browse/FLINK-13806
Project: Flink
+1 for KCL 1.x changes only.
I also think it would make sense to align FLIP-27 work and KCL 2.x related
changes, since these will require a hardening cycle with extensive testing
that is probably not practical to repeat.
On Tue, Aug 20, 2019 at 10:57 AM Bowen Li wrote:
> @Stephan @Becket kine
Stephan Ewen created FLINK-13805:
Summary: Bad Error Message when TaskManager is lost
Key: FLINK-13805
URL: https://issues.apache.org/jira/browse/FLINK-13805
Project: Flink
Issue Type: Bug
@Stephan @Becket the Kinesis connector is currently using KCL 1.9. Mass changes
are needed if we switch to KCL 2.x. I agree with Dyana that, since KCL 1.x
is also updated to Apache 2.0, we can just focus on upgrading to a newer
KCL 1.x minor version for now.
On Tue, Aug 20, 2019 at 7:52 AM Dyana Rose
I created an umbrella issue for the code style guide effort and a subtask
for this discussion:
https://issues.apache.org/jira/browse/FLINK-13804
I will also submit a PR to flink-web based on the conclusion.
On Mon, Aug 19, 2019 at 6:15 PM Stephan Ewen wrote:
> @Andrey Will you open a PR to add t
Andrey Zagrebin created FLINK-13804:
---
Summary: Collections initial capacity
Key: FLINK-13804
URL: https://issues.apache.org/jira/browse/FLINK-13804
Project: Flink
Issue Type: Sub-task
Yu Li created FLINK-13803:
-
Summary: Introduce SpillableHeapKeyedStateBackend and all
necessities
Key: FLINK-13803
URL: https://issues.apache.org/jira/browse/FLINK-13803
Project: Flink
Issue Type: S
Andrey Zagrebin created FLINK-13802:
---
Summary: Flink code style guide
Key: FLINK-13802
URL: https://issues.apache.org/jira/browse/FLINK-13802
Project: Flink
Issue Type: Task
Compo
Yu Li created FLINK-13801:
-
Summary: Introduce a HybridStateTable to combine everything
together
Key: FLINK-13801
URL: https://issues.apache.org/jira/browse/FLINK-13801
Project: Flink
Issue Type: Su
ok great,
that's done, the PR is rebased and squashed on top of master and is running
through Travis
https://github.com/apache/flink/pull/9494
Dyana
On Tue, 20 Aug 2019 at 15:32, Tzu-Li (Gordon) Tai
wrote:
> Hi Dyana,
>
> Regarding your question on the Chinese docs:
> Since the Chinese counte
Stephan Ewen created FLINK-13799:
Summary: Web Job Submit Page displays a stream of error messages when
web submit is disabled in the config
Key: FLINK-13799
URL: https://issues.apache.org/jira/browse/FLINK-13799
Yu Li created FLINK-13800:
-
Summary: Create a module for spill-able heap backend
Key: FLINK-13800
URL: https://issues.apache.org/jira/browse/FLINK-13800
Project: Flink
Issue Type: Sub-task
Hi Dyana,
Regarding your question on the Chinese docs:
Since the Chinese counterpart of the Kinesis connector documentation
isn't translated yet (see docs/dev/connectors/kinesis.zh.md), for now you
can simply sync whatever changes you made in the English doc to the
Chinese one as well.
Che
Sorry for the lag, but since we reached a consensus days ago, I started a
vote thread, which will have a result by EOD; thus I'm closing this
discussion thread. Thanks all for the participation and the
comments/suggestions!
Best Regards,
Yu
On Fri, 16 Aug 2019 at 09:09, Till Rohrmann wrote:
> +1 for
zhijiang created FLINK-13798:
Summary: Refactor the process of checking stream status while
emitting watermark in source
Key: FLINK-13798
URL: https://issues.apache.org/jira/browse/FLINK-13798
Project: Fl
Fokko Driesprong created FLINK-13797:
Summary: Add missing format argument
Key: FLINK-13797
URL: https://issues.apache.org/jira/browse/FLINK-13797
Project: Flink
Issue Type: Task
Fokko Driesprong created FLINK-13796:
Summary: Remove unused variable
Key: FLINK-13796
URL: https://issues.apache.org/jira/browse/FLINK-13796
Project: Flink
Issue Type: Task
Com
Stephan Ewen created FLINK-13795:
Summary: Web UI logs errors when selecting Checkpoint Tab for
Batch Jobs
Key: FLINK-13795
URL: https://issues.apache.org/jira/browse/FLINK-13795
Project: Flink
I agree with Stephan. It will be good to see if we can align those two
efforts so that we don't write code that will soon be refactored again.
Thanks,
Jiangjie (Becket) Qin
On Tue, Aug 20, 2019 at 10:50 AM Stephan Ewen wrote:
> Just FYI - Becket, Aljoscha, and me are working on fleshing out
TisonKun created FLINK-13794:
Summary: Remove unused field printStatusDuringExecution in
ClusterClient
Key: FLINK-13794
URL: https://issues.apache.org/jira/browse/FLINK-13794
Project: Flink
Issu
Thanks for the clarification.
The idea of a JobDeployer came to my mind when I was muddled about
how to execute per-job mode and session mode with the same user code
and framework code path.
With the JobDeployer concept we come back to the statement that the environment
knows every config of the cluster depl
+1
Legal checks:
- verified signatures and hashes
- New bundled Javascript dependencies for flink-runtime-web are correctly
reflected under licenses-binary and NOTICE file.
- locally built from source (Scala 2.12, without Hadoop)
- No missing artifacts in staging repo
- No binaries in source relea
I second Stephan's summary, and to be more explicit (see the sketch after this message), +1 on:
- Set a hard line length limit
- Allow arguments on the same line if below length limit
- With consistent argument breaking when that length is exceeded
- Developers can break before that if they feel it helps with readability
FWIW, hba
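For illustration, a minimal Java sketch of the argument-breaking convention listed
above (the class and method names are made up for this example, not actual Flink code):

public class LineBreakingExample {

    // Below the line length limit: keep all arguments on the same line.
    private static String shortCall(String host, int port) {
        return host + ":" + port;
    }

    // Above the limit: break after the opening parenthesis and list the
    // arguments on the following lines, consistently.
    private static String buildConnectionDescription(
            String externallyResolvedHostName,
            int externallyConfiguredPort,
            boolean enableTransportLayerSecurity) {
        return externallyResolvedHostName
                + ":" + externallyConfiguredPort
                + (enableTransportLayerSecurity ? " (TLS)" : "");
    }

    public static void main(String[] args) {
        System.out.println(shortCall("localhost", 8081));
        System.out.println(
                buildConnectionDescription("flink-jobmanager.example.org", 6123, true));
    }
}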
Thanks for sharing your thoughts, Thomas, Henry and Stephan. I also think
the committers are supposed to be mature enough to know when a review on
their own patch is needed.
@Henry, just want to confirm, are you +1 on the proposed bylaws?
Thanks,
Jiangjie (Becket) Qin
On Tue, Aug 20, 2019 at 10
Thanks, it looks like you diagnosed it correctly: environment-specific
encoding settings.
Could you open a ticket (maybe a PR) to set the encoding and make the test
stable across environments?
On Mon, Aug 19, 2019 at 9:46 PM Ethan Li wrote:
> It’s probably the encoding problem. The environment I r
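Not knowing the exact flink-avro test involved, here is only a hedged sketch of
the general kind of fix being asked for: decode with an explicit charset instead
of the platform default, so the result no longer depends on the environment's
file.encoding.

import java.nio.charset.StandardCharsets;

public class EncodingExample {

    public static void main(String[] args) {
        // The literal uses Unicode escapes so this source file itself does not
        // depend on any encoding; it spells a word with non-ASCII characters.
        byte[] payload = "sp\u00e4\u00dfe".getBytes(StandardCharsets.UTF_8);

        // Platform-default decoding: the result depends on the JVM's
        // file.encoding and can differ between developer machines and CI.
        String platformDependent = new String(payload);

        // Explicit charset: stable across environments.
        String stable = new String(payload, StandardCharsets.UTF_8);

        System.out.println(platformDependent.equals(stable));
    }
}

Pinning the test JVM's file.encoding (for example -Dfile.encoding=UTF-8 in the
surefire configuration) would be another way to make the tests deterministic.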
I see it somewhat similarly to Henry.
Generally, all committers should go for a review by another committer,
unless it is a trivial comment or style fix. I personally do that, even
though I am one of the committers who have been with the project the longest.
For now, I was hoping though that we have
Just FYI - Becket, Aljoscha, and I are working on fleshing out the
remaining details of FLIP-27 (new source API).
We will share this as soon as we have made some progress on some of the
details.
The Kinesis connector would be one of the first that we would try to also
implement in that new API, a
Till has made some good comments here.
Two things to add:
- The job mode is very nice in the way that it runs the client inside the
cluster (in the same image/process that is the JM) and thus unifies both
applications and what the Spark world calls the "driver mode".
- Another thing I would
Thanks for the summary, Andrey!
I'd also like to adjust my -1 to +0 on using Optional as a parameter for
private methods due to the existence of the very first rule - "Avoid using
Optional in any performance critical code". I'd regard the "possible GC
burden while using Optional as parameter" also
I would not be in favour of getting rid of the per-job mode since it
simplifies the process of running Flink jobs considerably. Moreover, it is
not only well suited for container deployments but also for deployments
where you want to guarantee job isolation. For example, a user could use
the per-jo
I think Dawid raised a very good point here.
One of the outcomes should be that we are consistent in our recommendations
and requests during PR reviews. Otherwise we'll just confuse contributors.
So I would be
+1 for someone to use Optional in a private method if they believe it is
helpful
-1
Nico Kruber created FLINK-13793:
---
Summary: Build different language docs in parallel
Key: FLINK-13793
URL: https://issues.apache.org/jira/browse/FLINK-13793
Project: Flink
Issue Type: Sub-task
@Zili
As far as I know, Timo is drafting a FLIP that has taken the number 55.
There is a running number maintained on the FLIP wiki page [1] that shows
which number should be used for the next FLIP; it should be increased by
whoever takes a number for a new FLIP.
Thank you~
Xintong Song
[1]
ht
Hi Andrey,
Just wanted to quickly elaborate on my opinion. I wouldn't say I am -1,
just -0 for the Optionals in private methods. I am ok with not
forbidding them there. I just think in all cases there is a better
solution than passing the Optionals around, even in private methods. I
just hope the
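To make the trade-off concrete, a hedged Java sketch (hypothetical methods, not
Flink code) of an Optional parameter versus the kind of alternative Dawid is
alluding to, such as a plain overload:

import java.util.Optional;

public class OptionalParameterExample {

    // Style under discussion: Optional as a parameter of a private method.
    private static String greetWithOptional(String name, Optional<String> title) {
        return "Hello, " + title.map(t -> t + " ").orElse("") + name;
    }

    // One commonly suggested alternative: an overload, so callers never have
    // to construct Optional.empty() / Optional.of(...) themselves.
    private static String greet(String name) {
        return "Hello, " + name;
    }

    private static String greet(String name, String title) {
        return "Hello, " + title + " " + name;
    }

    public static void main(String[] args) {
        System.out.println(greetWithOptional("Becket", Optional.empty()));
        System.out.println(greet("Becket"));
        System.out.println(greet("Andrey", "Dr."));
    }
}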
In my opinion the client should not use any environment to get the JobGraph,
because the jar should reside ONLY on the cluster (and not in the client
classpath; otherwise there are always inconsistencies between the client's
and the Flink JobManager's classpaths).
In the YARN, Mesos and Kubernetes scenarios y
zzsmdfj created FLINK-13792:
---
Summary: source and sink support manual rate limit
Key: FLINK-13792
URL: https://issues.apache.org/jira/browse/FLINK-13792
Project: Flink
Issue Type: Improvement
Ahh, brilliant, I had myself on notifications for the streams adapter
releases, but must have missed it. That's great news.
I've got the branch prepped for moving over to Apache 2.0, but staying on
KCL 1.x, which requires the least amount of change.
Considering the large amount of change required
I would like to involve Till & Stephan here to clarify some concept of
per-job mode.
The term per-job is one of the modes a cluster could run in. It is mainly aimed
at spawning a dedicated cluster for a specific job, while the job could be
packaged with Flink itself and thus the cluster initialized with j