Hi guys,
I found that in CodeGenUtils the default values of numeric primitive types are
set to -1. What is the rationale for setting the default values to -1
instead of 0? IMHO 0 would make more sense, although in a DB, if a field is
null, then all operations on this field return null anyway.
Hi Max,
You are right. The uber jar is built in the "install" or "package" stage, and the
YARN test assumes the Flink uber jar already exists. We may have to document the
steps so that they are not missed when someone else tries this in the future.
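For reference, a rough sketch of the steps; the module name and flags below are
assumptions for illustration, not verified against the current build:

# Build Flink first so the uber jar exists in the local repository
# (the YARN tests expect it to be there already).
mvn clean install -DskipTests

# Then run the YARN integration tests from the corresponding module.
mvn verify -pl flink-yarn-tests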
I have also noticed errors with the "CliFrontendAddressConfigu
FLIP ?? Really? :D
http://www.maya.tv/en/character/flip
-Matthias
On 06/28/2016 06:26 PM, Aljoscha Krettek wrote:
> I'm proposing to add a formal process for how we deal with (major)
> improvements to Flink and design docs. This has been mentioned several
> times recently but we never took any
Greg Hogan created FLINK-4129:
-
Summary: HITSAlgorithm should test for element-wise convergence
Key: FLINK-4129
URL: https://issues.apache.org/jira/browse/FLINK-4129
Project: Flink
Issue Type: Bu
Mao, Wei created FLINK-4128:
---
Summary: compile error about git-commit-id-plugin
Key: FLINK-4128
URL: https://issues.apache.org/jira/browse/FLINK-4128
Project: Flink
Issue Type: Bug
Repo
I'm proposing to add a formal process for how we deal with (major)
improvements to Flink and design docs. This has been mentioned several
times recently but we never took any decisive action to actually implement
such a process so here we go.
Right now, we have Jira issues and sometimes we have
Hi Aljoscha,
Thanks a lot for your inputs.
I still don't understand why you say I will not face this issue in the case
of a continuous stream; let's consider the following example:
Assume that the stream runs continuously from Monday to Friday, and on
Friday it stops after 5:00 PM. Will I still fac
Robert Metzger created FLINK-4127:
-
Summary: Clean up configuration and check breaking API changes
Key: FLINK-4127
URL: https://issues.apache.org/jira/browse/FLINK-4127
Project: Flink
Issue T
Hi,
ingestion time can only be used if you don't care about the timestamp in
the elements. So if you have those you should probably use event time.
If your timestamps really are strictly increasing then the ascending
extractor is good. And if you have a continuous stream of incoming elements
you w
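To illustrate the suggestion, here is a minimal sketch. The MyEvent type with a
timestamp field is hypothetical; it uses Flink's AscendingTimestampExtractor,
which is only valid if timestamps never decrease per parallel source.

import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;

public class EventTimeSketch {

    // Hypothetical event type, used only for illustration.
    public static class MyEvent {
        public long timestamp;
        public String payload;
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Event time, because the elements carry their own timestamps.
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStream<MyEvent> events = env.fromElements(new MyEvent());

        // Only correct if timestamps are monotonically increasing per source;
        // otherwise use a watermark assigner that tolerates out-of-order elements.
        DataStream<MyEvent> withTimestamps = events.assignTimestampsAndWatermarks(
            new AscendingTimestampExtractor<MyEvent>() {
                @Override
                public long extractAscendingTimestamp(MyEvent element) {
                    return element.timestamp;
                }
            });

        withTimestamps.print();
        env.execute("event-time sketch");
    }
}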
Hi Marius,
The current chaining code assumes chains will never become too complex,
which is probably true for most jobs but could become an issue
in some cases.
If you want to make sure the chain is broken, you can start a new
chain using `startNewChain()` on all single output operators.
Cheer
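As a small, hedged sketch of what that looks like (the map functions here are
placeholders, not anything from the thread):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ChainBreakSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("a", "b", "c")
            .map(new MapFunction<String, String>() {
                @Override
                public String map(String value) {
                    return value.toUpperCase();
                }
            })
            // Break the chain here: this operator and what follows will not
            // be chained to the preceding map.
            .startNewChain()
            .map(new MapFunction<String, String>() {
                @Override
                public String map(String value) {
                    return value + "!";
                }
            })
            .print();

        env.execute("chain-break sketch");
    }
}

There is also disableChaining() on single output operators if an operator
should not be chained to anything at all.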
Hi Aljoscha,
Thank you for your response.
So do you suggest using a different approach for extracting timestamps (as
given in the documentation) instead of the AscendingTimestampExtractor?
Is that the reason I am seeing this unexpected behaviour? In the case of a
continuous stream, would I not see any data loss?
A
Hi,
first, regarding tumbling windows: even with 5-minute windows it can
happen that elements that are only seconds apart go into different windows.
Consider the following case:
|x | x |
These are two 5-minute windows and the two elements are only seconds apa
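A small worked sketch of that effect (timestamps chosen purely for
illustration): two elements only two seconds apart, one just before and one
just after a 5-minute boundary, end up in different windows because tumbling
windows are aligned to fixed boundaries rather than to the first element.

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class TumblingBoundarySketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        long boundary = 5 * 60 * 1000L;   // a 5-minute boundary, in ms
        long before = boundary - 1000L;   // 1 second before the boundary
        long after = boundary + 1000L;    // 1 second after the boundary

        env.fromElements(
                Tuple2.of("key", before),
                Tuple2.of("key", after))
            .assignTimestampsAndWatermarks(new AscendingTimestampExtractor<Tuple2<String, Long>>() {
                @Override
                public long extractAscendingTimestamp(Tuple2<String, Long> element) {
                    return element.f1;
                }
            })
            .keyBy(0)
            .window(TumblingEventTimeWindows.of(Time.minutes(5)))
            // Each window receives exactly one of the two elements, so the
            // sum simply echoes them back as two separate window results.
            .sum(1)
            .print();

        env.execute("tumbling-window boundary sketch");
    }
}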
Hi Vijay,
It hasn't been failing for me whenever I pull in the latest master.
Neither when I run "mvn clean verify" nor when I execute the test from
within IntelliJ. That said, there are some unstable tests that fail on
Travis.
>YARNSessionFIFOITCase.setup:71->YarnTestBase.startYARNWithConfig:343
Hi Chen,
I'm not sure what you mean by a pipeline, but Flink supports the
submission of multiple jobs to the same cluster (in standalone as well as
Yarn session mode). You simply have to make sure that there are enough
slots for all jobs to be executed at the same time.
Cheers,
Till
On Jun 28, 2
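A hedged illustration of what "enough slots" means in flink-conf.yaml terms
(the numbers are placeholders): the slots available across all TaskManagers
must cover the combined parallelism of the jobs that should run concurrently.

# flink-conf.yaml (values are examples only)
# Slots offered by each TaskManager; total slots = slots per TM * number of TMs.
taskmanager.numberOfTaskSlots: 4

# Default parallelism for jobs that do not set one explicitly.
parallelism.default: 2

With, say, 3 TaskManagers this gives 12 slots in total, which is enough to run
several parallelism-2 jobs at the same time.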