zhijiang created FLINK-11035:
Summary: Notify data available to network stack immediately after
finishing BufferBuilder
Key: FLINK-11035
URL: https://issues.apache.org/jira/browse/FLINK-11035
Project: Fli
I agree with your take regarding the superficial stream environment distinction
and the difficulties it introduces for users.
To fix the immediate issue in Beam, it was necessary to duplicate
RemoteStreamEnvironment.executeRemotely:
https://github.com/apache/beam/pull/7169/files#diff-6acb0479d563cfc
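For context, a minimal sketch of the coarse-grained public entry point that is
available today (host, port and jar path below are placeholders); the duplication
was needed because a finer-grained hook like executeRemotely is not part of this
public surface:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RemoteSubmitSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder host, port and jar path; this is the public API as it stands.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
                "jobmanager-host", 8081, "/path/to/user-code.jar");

        env.fromElements(1, 2, 3).print();

        // Submission details stay hidden behind execute(); there is no public hook
        // comparable to RemoteStreamEnvironment.executeRemotely().
        env.execute("remote-submit-sketch");
    }
}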
Wei-Che Wei created FLINK-11034:
---
Summary: Provide "rewriting config" to file system factory
Key: FLINK-11034
URL: https://issues.apache.org/jira/browse/FLINK-11034
Project: Flink
Issue Type:
Yes, it is my mistake. I checked the original e-mail and its subject was
changed to "[RESULT] [VOTE]..".
Something must have gone wrong in my mail client. Sorry for the noise.
Best,
tison.
Thomas Weise wrote on Fri, Nov 30, 2018 at 9:38 AM:
> Looks correct to me. The subject was changed to "[RESULT] [VOTE].."
Looks correct to me. The subject was changed to "[RESULT] [VOTE].."
Thomas
On Thu, Nov 29, 2018 at 4:48 PM Tzu-Li Chen wrote:
> Hi Till,
>
> Maybe the result was wrongly announced in the same thread?
>
> Best,
> tison.
>
>
> Till Rohrmann wrote on Fri, Nov 30, 2018 at 1:28 AM:
>
> > I'm happy to announce that we
Hi Till,
Maybe the result was wrongly announced in the same thread?
Best,
tison.
Till Rohrmann wrote on Fri, Nov 30, 2018 at 1:28 AM:
> I'm happy to announce that we have unanimously approved the 1.7.0 release.
>
> There are 5 approving votes, 4 of which are binding:
> - Chesnay (binding)
> - Timo (binding)
> -
I'm only voicing my opinion here; it does not reflect long-term
directions in any way.
I wouldn't remove the execute() method; it's too important for a
convenient execution of jobs via the CLI/WebUI.
But I would like to get rid of this distinction of environments as their
existence implies
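For context, a minimal sketch of the pattern this refers to (class and job name
are illustrative): the same program runs unchanged from the IDE or when submitted
via the CLI/WebUI only because getExecutionEnvironment() and execute() hide which
concrete environment is in play:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EnvDistinctionSketch {
    public static void main(String[] args) throws Exception {
        // Returns a local environment when run from the IDE and a context-aware
        // environment when the packaged jar is submitted via the CLI/WebUI, so the
        // same main() works in both cases.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("to", "be", "or", "not", "to", "be").print();

        // execute() is the hook the CLI/WebUI relies on to actually run the job.
        env.execute("env-distinction-sketch");
    }
}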
I'm happy to announce that we have unanimously approved the 1.7.0 release.
There are 5 approving votes, 4 of which are binding:
- Chesnay (binding)
- Timo (binding)
- Till (binding)
- Gary (non-binding)
- Gordon (binding)
There are no disapproving votes.
Thanks everyone for the hard work and hel
Thanks everyone for the release testing. Hereby I close the vote. The
result will be announced in a separate thread.
Cheers,
Till
On Thu, Nov 29, 2018 at 6:24 PM Tzu-Li (Gordon) Tai
wrote:
> +1
>
> Functional checks:
>
> - Built Flink from source (`mvn clean verify`) locally, with success
> - R
+1
Functional checks:
- Built Flink from source (`mvn clean verify`) locally, with success
- Ran all nightly end-to-end tests locally 5 times in a loop; no
attempts failed (Hadoop 2.8.3, Scala 2.12)
- Tested Java quickstart + modified to include Kafka 1.0 connector and
Elasticsearch 6.0 conne
+1 (non-binding)
I ran the Jepsen tests against the flink-1.7.0-bin-hadoop28-scala_2.12.tgz
distribution.
Best,
Gary
On Thu, Nov 29, 2018 at 5:51 PM Timo Walther wrote:
> +1
>
> - run `mvn clean verify` locally with success
> - run a couple of end-to-end tests locally with success
>
> Found o
+1
- checked checksums and signatures
- checked LICENSE/NOTICE files for source and binary releases
- built Flink from sources
- Tested standalone mode with multiple job submissions (CLI + web UI)
- Tested plan visualizer
- Tested that Java and Scala quickstarts work with IntelliJ
- Tested SBT qui
+1
- run `mvn clean verify` locally with success
- run a couple of end-to-end tests locally with success
Found one minor thing: the class loading e2e test failed on my machine
(we had a similar bug a while ago); it might be
related to a bug in the script. It runs in every pre-commit test, so I
Till Rohrmann created FLINK-11033:
-
Summary: Elasticsearch (v6.3.1) sink end-to-end test unstable on
Travis
Key: FLINK-11033
URL: https://issues.apache.org/jira/browse/FLINK-11033
Project: Flink
Till Rohrmann created FLINK-11032:
-
Summary: Elasticsearch (v6.3.1) sink end-to-end test unstable on
Travis
Key: FLINK-11032
URL: https://issues.apache.org/jira/browse/FLINK-11032
Project: Flink
Thanks for taking a look.
Are you saying that the longer-term direction is to get rid of the execute
method from StreamExecutionEnvironment and instead construct the cluster
client outside?
That would currently expose even more internals to the user. Considering
the current implementation in Remo
Thanks for the feedback, everyone!
I created a FLIP for these efforts:
https://cwiki.apache.org/confluence/display/FLINK/FLIP-28%3A+Long-term+goal+of+making+flink-table+Scala-free
I will open an umbrella Jira ticket for FLIP-28 with concrete subtasks
shortly.
Thanks,
Timo
On 29.11.18 at 12
Preethi.C created FLINK-11031:
-
Summary: How to consume the Hive data in Flink using Scala
Key: FLINK-11031
URL: https://issues.apache.org/jira/browse/FLINK-11031
Project: Flink
Issue Type: Bug
Hi Jincheng,
Sounds good!
You should have Wiki permissions now.
Thanks, Fabian
On Thu, Nov 29, 2018 at 14:46, jincheng sun <sunjincheng...@gmail.com> wrote:
> Thanks Fabian & Piotrek,
>
> Your feedback sounds very good!
> So far we are on the same page about how to handle group keys. I will up
Thanks Fabian & Piotrek,
Your feedback sounds very good!
So far we are on the same page about how to handle group keys. I will update
the Google doc according to our discussion and I'd like to convert it to a
FLIP. Thus, it would be great if any of you could grant me write access
to Confluence. My Co
Maciej Bryński created FLINK-11030:
--
Summary: Cannot use Avro logical types with
ConfluentRegistryAvroDeserializationSchema
Key: FLINK-11030
URL: https://issues.apache.org/jira/browse/FLINK-11030
Pro
Hi,
Thanks for the clarification Becket!
I have a few thoughts to share / questions:
1) I'd like to know how you plan to implement the feature on a plan /
planner level.
I would imagine the following to happen when Table.cache() is called:
1) immediately optimize the Table and internally conve
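As a rough illustration of the proposal being discussed (cache() is not an
existing Table API method; the "Orders" table and column names below are made
up), the call site might look like this:

import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class CacheProposalSketch {
    // cache() below is the *proposed* method discussed in this thread, not an existing API.
    static void example(TableEnvironment tableEnv) {
        Table orders = tableEnv.scan("Orders"); // assumes a registered table "Orders"

        Table aggregated = orders
                .groupBy("currency")
                .select("currency, amount.sum as total");

        // Under the proposal, cache() would mark the (optimized) sub-plan for
        // materialization so that both downstream queries reuse the cached result
        // instead of re-executing the aggregation.
        Table cached = aggregated.cache(); // hypothetical method

        Table large = cached.filter("total > 100");
        Table currencies = cached.select("currency");
    }
}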
As a small addendum, here is the link to the PR for the release announcement
[1].
[1] https://github.com/apache/flink-web/pull/137
Cheers,
Till
On Thu, Nov 29, 2018 at 2:12 PM Chesnay Schepler wrote:
> +1
>
> - source release contains no binaries
> - checked signatures
> - binary release contai
Yangze Guo created FLINK-11029:
--
Summary: Incorrect parameter in Working with state doc
Key: FLINK-11029
URL: https://issues.apache.org/jira/browse/FLINK-11029
Project: Flink
Issue Type: Bug
+1
- source release contains no binaries
- checked signatures
- binary release contains correct LICENSE/NOTICE file
- all required artifacts are available in the maven repository
- scala versions are set correctly in the deployed poms
On 29.11.2018 01:01, Till Rohrmann wrote:
Hi everyone,
Pleas
Chesnay Schepler created FLINK-11028:
Summary: Disable deployment of flink-fs-tests
Key: FLINK-11028
URL: https://issues.apache.org/jira/browse/FLINK-11028
Project: Flink
Issue Type: Impr
Hi Piotr,
Thanks for sharing your ideas on the method naming. We will think about
your suggestions. But I don't understand why we need to change the return
type of cache().
cache() is a physical operation; it does not change the logic of
the `Table`. At the Table API layer, we should not introduce
Thanks Timo,
That makes sense to me. I left a comment about code generation in the doc.
Looking forward to participating in it!
Best,
Jark
On Thu, 29 Nov 2018 at 16:42, Timo Walther wrote:
> @Kurt: Yes, I don't think that forks of Flink will have a hard time
> keeping up with the porting
Kostas Kloudas created FLINK-11027:
--
Summary:
JobManagerHAProcessFailureRecoveryITCase#testDispatcherProcessFailure failed on
Travis
Key: FLINK-11027
URL: https://issues.apache.org/jira/browse/FLINK-11027
Hi Becket,
Thanks for the response.
1. I wasn't saying that a materialised view must be mutable or not. The same
thing applies to caches as well. On the contrary, I would expect more
consistency and updates from something that is called “cache” vs something
that’s a “materialised view”. In other
Thanks Kostas for the quick reply.
Yes, it is related to my previous question.
When you said "But if you know what operation to push down" -> this is what
I am trying to find in the Flink code. I want to know the operation on the
fly.
The component in Flink that will tell me that there is a filte
Hi Jincheng & Fabian,
+1 from my point of view.
I like the idea that you have to close `flatAggregate` with a `select`
statement. In a way it will be consistent with a normal `groupBy`, and indeed it
solves the problem of mixing table and scalar functions.
I would be against supporting `select('*
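A rough sketch of the proposed pattern, purely for illustration (flatAggregate
and the table aggregate function "top2" are part of the proposal, not of the
current API; column names are made up):

import org.apache.flink.table.api.Table;

public class FlatAggregateSketch {
    // flatAggregate() and the table aggregate function "top2" are the API *proposed*
    // in this thread; they are not available in the current Table API.
    static Table example(Table orders) {
        return orders
                .groupBy("userId")
                .flatAggregate("top2(amount) as (amount, rank)") // proposed call
                .select("userId, amount, rank"); // closed with select: keys + aggregate results
    }
}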
I'm not aware of any plans to expose this in the StreamExecutionEnvironment.
The issue would be that we would start mixing submission details with
the job definition, which results in redundancy and weird semantics,
e.g., which savepoint configuration takes priority if both a job and CLI
job s
Hi again,
I forgot to say that, unfortunately, I am not familiar with Apache Edgent,
but if you can write your filter in Edgent's programming model,
then you can push your data from Edgent to a third-party storage system
(e.g. Kafka, HDFS, etc.) and use Flink's connectors (see the sketch below), instead of
having to impl
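A minimal sketch of the Flink side, assuming the Edgent application publishes its
filtered readings to a Kafka topic (broker address, topic and group id below are
placeholders):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class EdgentToFlinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder broker, group id and topic; the Edgent application on the Pi
        // would publish its pre-filtered sensor readings to this topic.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka-broker:9092");
        props.setProperty("group.id", "proximity-sensor");

        env.addSource(new FlinkKafkaConsumer<>("proximity-readings",
                new SimpleStringSchema(), props))
           .print(); // real analytics would go here instead of print()

        env.execute("edgent-to-flink-sketch");
    }
}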
Hi Felipe,
This seems related to your previous question about a custom scheduler that
knows which task to run on which machine.
As Chesnay said, this is a rather involved and laborious task if you want
to do it as a general framework.
But if you know what operation to push down, then why not dec
Chesnay Schepler created FLINK-11026:
Summary: Rework creation of sql-client connector/format jars
Key: FLINK-11026
URL: https://issues.apache.org/jira/browse/FLINK-11026
Project: Flink
I
Hi,
OK, not supporting select('*) in the first version sounds like a good
first step; +1 for this.
However, I don't think that select('*) returning only the result columns of
the agg function would be a significant break in semantics.
Since aggregate()/flatAggregate() is the last command and (vi
Chesnay Schepler created FLINK-11025:
Summary: Connector shading is inconsistent
Key: FLINK-11025
URL: https://issues.apache.org/jira/browse/FLINK-11025
Project: Flink
Issue Type: Bug
Hi,
I am trying to design a little prototype with Flink and Apache Edgent (
http://edgent.apache.org/) and I would like some help on the direction for
it. I am running Flink on my laptop and Edgent on my Raspberry Pi with a
simple filter for a proximity sensor (
https://github.com/felipegutierrez/
@Kurt: Yes, I don't think that forks of Flink will have a hard time
keeping up with the porting. That is also why I called this `long-term
goal` because I don't see big resources for the porting to happen
quicker. But at least new features, API, and runtime profit from the
Scala-to-Java conver
Sure Shuyu,
What I hope is that we can reach an agreement on the DDL grammar as soon as
possible. There are a few differences between your proposal and ours. Once
Lin and Jark propose our design, we can quickly discuss those
differences and see how far we are from a unified design.
WRT the ext