Sihua Zhou created FLINK-9364:
-
Summary: Add doc for the memory usage in flink
Key: FLINK-9364
URL: https://issues.apache.org/jira/browse/FLINK-9364
Project: Flink
Issue Type: Improvement
Hi,
Let me first clarify a few things so that we are on the same page here:
1. The reason that we are looking into having a new base-module for versions
5.3+ is that
new Elasticsearch BulkProcessor APIs are breaking some of our original base API
assumptions.
This is noticeable from the intended
+dev@beam
Hi dev@flink,
I saw this and forwarded it on to dev@beam for consideration. There was
general agreement that it was interesting, so I thought I'd loop them
together. I tried to wait until both threads had
enough support that combining them wouldn't confuse things.
Beam would also be inter
> this pull request requires a review, please simply write any comment.
Shouldn't the wording of such a comment be known beforehand?
Otherwise, pull requests waiting for committers' review may be mis-classified.
Cheers
On Mon, May 14, 2018 at 7:59 PM, blues zheng wrote:
> +1 for the proposal.
+1 for the proposal.
Best,
blues
On 05/14/2018 20:58, Ufuk Celebi wrote:
Hey Piotr,
thanks for bringing this up. I really like this proposal and also saw
it work successfully at other projects. So +1 from my side.
- I like the approach with a notification one week before
automatically closing t
+1. This could be very useful for "dynamic" UDFs.
Just to clarify, if I understand correctly, we are trying to use an ENUM
indicator to
(1) Replace the current Boolean isExecutable flag.
(2) Provide additional information used by the ExecutionEnvironment to decide
when/where to use the DistributedCache
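The idea in (1) and (2) can be sketched with plain Java. This is an illustrative toy only; names like EntryKind and fromLegacyFlag are hypothetical, not the actual Flink DistributedCache API:

```java
// Illustrative sketch: an enum can replace a two-valued boolean flag while
// leaving room for extra cases and hints that a boolean cannot express.
public class CacheEntryDemo {

    public enum EntryKind {
        PLAIN_FILE,       // formerly isExecutable == false
        EXECUTABLE_FILE,  // formerly isExecutable == true
        DYNAMIC_UDF_JAR   // a new case the boolean could not express
    }

    // Migration helper: map the old boolean flag onto the enum.
    public static EntryKind fromLegacyFlag(boolean isExecutable) {
        return isExecutable ? EntryKind.EXECUTABLE_FILE : EntryKind.PLAIN_FILE;
    }

    public static void main(String[] args) {
        System.out.println(fromLegacyFlag(true));  // EXECUTABLE_FILE
    }
}
```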
It seems to me that if the transport client dependency is removed, the same
module could perform inserts, updates, and deletes via the http bulk API,
and whatever version differences exist with that API could be handled
inside the module without any difference to the classpath of the pipeline.
If
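For context on the HTTP bulk API mentioned above: the Elasticsearch `_bulk` endpoint accepts NDJSON, one action-metadata line followed by one source line per document, with a trailing newline. A minimal sketch of building such a payload (index and id values are placeholders; no HTTP call is made):

```java
// Builds an NDJSON bulk body for the Elasticsearch _bulk endpoint.
// Payload construction only -- transport is out of scope here.
public class BulkPayloadDemo {

    public static String indexAction(String index, String id, String sourceJson) {
        return "{\"index\":{\"_index\":\"" + index + "\",\"_id\":\"" + id + "\"}}\n"
                + sourceJson + "\n";
    }

    public static void main(String[] args) {
        // POST the result to http://<host>:9200/_bulk
        // with Content-Type: application/x-ndjson
        System.out.print(indexAction("my-index", "1", "{\"user\":\"flink\"}"));
    }
}
```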
Ted Yu created FLINK-9363:
-
Summary: Bump up the Jackson version
Key: FLINK-9363
URL: https://issues.apache.org/jira/browse/FLINK-9363
Project: Flink
Issue Type: Improvement
Reporter: Ted
Bowen Li created FLINK-9362:
---
Summary: Document that Flink doesn't guarantee transaction of
modifying state and register timers in processElement()
Key: FLINK-9362
URL: https://issues.apache.org/jira/browse/FLINK-9362
Can you try out mvn 3.5.2?
I don't get the error when running the command line you gave.
BTW, 2.7.3.2.6.2.0-205 is quite an old release.
Cheers
On Mon, May 14, 2018 at 7:15 AM, shashank734 wrote:
> While building from source failing with following error :
>
> Failed to execute goal
> org.apache
Thanks for the reply Timo / Fabian,
Yes, that's what I had in mind. ParameterType can be vague, but the return
type has to be exact.
I can imagine that, depending on the input parameter type, the output type
can be different. But I cannot think of a concrete use case as of now.
I actually created a doc
While building from source, it fails with the following error:
Failed to execute goal
org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce
(dependency-convergence) on project flink-bucketing-sink-test
MVN Version : 3.0.5
Command : mvn clean install -DskipTests -Dscala.version=2.11.7
-Pven
Actually my solution was to recompile Flink on my own PC.
I was just questioning whether the current build system should be considered
OK or not.
On Mon, May 14, 2018 at 4:59 PM, Ted Yu wrote:
> Flavio:
> Can you use the snapshot for 1.5 RC ?
> https://repository.apache.org/content/repositories/org
Flavio:
Can you use the snapshot for 1.5 RC ?
https://repository.apache.org/content/repositories/orgapacheflink-1154/
It was uploaded on Apr 2nd.
FYI
On Mon, May 14, 2018 at 7:54 AM, Fabian Hueske wrote:
> Hi,
>
> I'd assume that we stopped updating 1.5-SNAPSHOT jars when we forked off
> the r
Hi,
I'd assume that we stopped updating 1.5-SNAPSHOT jars when we forked off
the release-1.5 branch and updated the version on master to 1.6-SNAPSHOT.
Best, Fabian
2018-05-14 15:51 GMT+02:00 Flavio Pompermaier :
> Hi to all.
> we were trying to run a 1.5 Flink job and we set the version to
> 1.
Timo Walther created FLINK-9361:
---
Summary: Changing refresh interval in changelog mode leads to
exception
Key: FLINK-9361
URL: https://issues.apache.org/jira/browse/FLINK-9361
Project: Flink
I
Hi to all.
we were trying to run a 1.5 Flink job and we set the version to
1.5-SNAPSHOT.
Unfortunately, the 1.5-SNAPSHOT version uploaded to the Apache snapshot repo
is very old (February 2018). Shouldn't this version be updated as well?
Best,
Flavio
Hey Piotr,
thanks for bringing this up. I really like this proposal and also saw
it work successfully at other projects. So +1 from my side.
- I like the approach with a notification one week before
automatically closing the PR
- I think a bot would be the best option, as these kinds of things are
usu
Andrey Zagrebin created FLINK-9360:
--
Summary: HA end-to-end nightly test takes more than 15 min in
Travis CI
Key: FLINK-9360
URL: https://issues.apache.org/jira/browse/FLINK-9360
Project: Flink
David Anderson created FLINK-9359:
-
Summary: Update quickstart docs to only mention Java 8
Key: FLINK-9359
URL: https://issues.apache.org/jira/browse/FLINK-9359
Project: Flink
Issue Type: Bug
Hi,
I'm trying to copy data from Kafka to HDFS. The data in HDFS is used by
others to do further computations in map/reduce.
If some tasks fail, a ".valid-length" file is created for older Hadoop
versions. The problem is that other people must know how to deal with the
".valid-length" file, othe
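For readers unfamiliar with the side file: a downstream consumer could read only the number of bytes the ".valid-length" file declares as valid. This sketch assumes the side file holds the count as a decimal string; verify the exact format against the BucketingSink of your Flink version:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

// Reads at most `validLength` bytes of a part file, where validLength
// comes from the companion ".valid-length" file.
public class ValidLengthReader {

    public static byte[] readValidBytes(Path partFile, Path validLengthFile) throws IOException {
        long validLength = Long.parseLong(
                new String(Files.readAllBytes(validLengthFile)).trim());
        byte[] all = Files.readAllBytes(partFile);
        // Clamp in case the part file is shorter than the declared length.
        return Arrays.copyOf(all, (int) Math.min(validLength, all.length));
    }
}
```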
Hi Rong,
yes, I think we can improve the type inference at this point. Input
parameter type inference can be more tolerant, but return types should be
as exact as possible.
The change should only touch ScalarSqlFunction and
UserDefinedFunctionUtils#createEvalOperandTypeInference, right?
Re
Till Rohrmann created FLINK-9358:
Summary: Closing of unestablished RM connections can cause NPE
Key: FLINK-9358
URL: https://issues.apache.org/jira/browse/FLINK-9358
Project: Flink
Issue Typ
Chesnay Schepler created FLINK-9357:
---
Summary: Add margins to yarn exception excerpts
Key: FLINK-9357
URL: https://issues.apache.org/jira/browse/FLINK-9357
Project: Flink
Issue Type: Improv
Florian Schmidt created FLINK-9356:
--
Summary: Improve error message for when queryable state not ready
/ reachable
Key: FLINK-9356
URL: https://issues.apache.org/jira/browse/FLINK-9356
Project: Flink
Hey,
We have lots of open pull requests, and quite a few of them are
stale/abandoned/inactive. Often such old PRs are impossible to merge due to
conflicts, and it’s easier to just abandon and rewrite them. In particular,
there are some PRs whose original contributor created them long ago, and someone else
Hi Rong,
I didn't look into the details of the example that you provided, but I
think if we can improve the internal type resolution of scalar UDFs we
should definitely go for it.
There is quite a bit of information available such as the signatures of the
eval() methods but also the argument types
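The "tolerant on input, exact on output" idea can be illustrated with plain reflection. This toy is not Flink's actual UserDefinedFunctionUtils logic; overload matching accepts any Number where the eval() signature says long, but the reported return type is exactly the overload's declared type:

```java
import java.lang.reflect.Method;

public class EvalResolver {

    // Toy UDF with overloaded eval() methods.
    public static class MyUdf {
        public long eval(long x) { return x + 1; }
        public String eval(String s) { return s + "!"; }
    }

    public static Class<?> resolveReturnType(Object arg) throws NoSuchMethodException {
        for (Method m : MyUdf.class.getDeclaredMethods()) {
            if (!m.getName().equals("eval")) continue;
            Class<?> p = m.getParameterTypes()[0];
            // Lenient input: any boxed Number satisfies a long parameter.
            boolean lenientNumeric = (p == long.class && arg instanceof Number);
            if (lenientNumeric || p.isInstance(arg)) {
                return m.getReturnType();  // exact declared return type
            }
        }
        throw new NoSuchMethodException("no eval() overload accepts " + arg.getClass());
    }
}
```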
Stefan Richter created FLINK-9355:
-
Summary: Simplify configuration of local recovery to a simple
on/off
Key: FLINK-9355
URL: https://issues.apache.org/jira/browse/FLINK-9355
Project: Flink
Chesnay Schepler created FLINK-9354:
---
Summary: print execution times for end-to-end tests
Key: FLINK-9354
URL: https://issues.apache.org/jira/browse/FLINK-9354
Project: Flink
Issue Type: Im
Aljoscha Krettek created FLINK-9353:
---
Summary: End-to-end test: Kubernetes integration
Key: FLINK-9353
URL: https://issues.apache.org/jira/browse/FLINK-9353
Project: Flink
Issue Type: Impro
vinoyang created FLINK-9352:
---
Summary: In Standalone checkpoint recover mode many jobs with same
checkpoint interval cause IO pressure
Key: FLINK-9352
URL: https://issues.apache.org/jira/browse/FLINK-9352
P
I think that is a good idea +1.
> Am 11.05.2018 um 20:41 schrieb Stephan Ewen :
>
> Hi!
>
> The configuration option (in flink-conf.yaml) for local recovery is currently
> an enumeration with the values "DISABLED" and "ENABLE_FILE_BASED".
>
> I would suggest to change that, for a few reasons:
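The on/off toggle proposed above would amount to something like this in flink-conf.yaml; the key name is illustrative, not necessarily the one that was adopted:

```yaml
# Boolean toggle replacing the DISABLED / ENABLE_FILE_BASED enumeration
# (key name illustrative)
state.backend.local-recovery: true
```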
Hi Fabian,
thank you very much for the reply; just an alternative: can we implement the
TTL logic in `AbstractStateBackend` and `AbstractState`? The simplest way is to
append the `ts` to the state's value, and we use the backend's `current
time` (it can also be event time or processing time) to ju
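A toy model of this suggestion: the "backend" stamps each value with a timestamp on write and checks it against the current time on read, so no timer service is involved. The names and the Map-based store are illustrative, not Flink's AbstractStateBackend:

```java
import java.util.HashMap;
import java.util.Map;

public class TtlStateDemo {

    static final class Stamped {
        final String value;
        final long ts;
        Stamped(String value, long ts) { this.value = value; this.ts = ts; }
    }

    private final Map<String, Stamped> store = new HashMap<>();
    private final long ttlMillis;

    public TtlStateDemo(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public void put(String key, String value, long now) {
        store.put(key, new Stamped(value, now));
    }

    // "now" stands in for the backend's current time, which could be
    // processing time or event time.
    public String get(String key, long now) {
        Stamped s = store.get(key);
        if (s == null) return null;
        if (now - s.ts > ttlMillis) {
            store.remove(key);  // lazily expire on access
            return null;
        }
        return s.value;
    }
}
```

One trade-off this makes visible: expiry only happens on access, so untouched state lingers until read, which is part of why the thread also discusses timer-based cleanup.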
Hi Sihua,
I think it makes sense to couple state TTL to the timer service. We'll need
some kind of timers to expire state, so I think we should reuse components
that we have instead of implementing another timer service.
Moreover, using the same timer service and using the public state APIs
helps
Hi Bowen,
thanks for your doc! I left some comments on the doc; my main concern is
that it introduces a coupling where the TTL needs to depend on the `timer`.
Because I think the TTL is a property of the state, it should be backed by
the state backend. If we implement the TTL based on th
+1 for Stephan's proposal.
2018-05-14 8:22 GMT+02:00 Shuyi Chen :
> +1 to the proposal. IMO, the current option "ENABLE_FILE_BASED" contains
> too much implementation details and might confuse the simple users. Having
> a simple on/off toggle for majority of the users and an advanced option for
>