The stream consists of logs from different machines with synchronized
clocks. As a result, timestamps are not strictly increasing, but there is a
bound on how much out of order they can be. (One aim is to detect events
that go out of order by more than a certain amount, indicating some problem
in the system.)
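A bound like this can be checked with a small amount of state: remember the maximum timestamp seen so far and flag any event that lags behind it by more than the allowed amount. The sketch below is plain Java, not Flink API; the class and method names are illustrative only.

```java
// Minimal sketch (not Flink API): flag events whose timestamp lags the
// maximum timestamp seen so far by more than a fixed bound.
class OutOfOrderDetector {
    private final long maxLagMillis;
    private long maxSeen = Long.MIN_VALUE;

    OutOfOrderDetector(long maxLagMillis) {
        this.maxLagMillis = maxLagMillis;
    }

    /** Returns true if the event violates the allowed out-of-order bound. */
    boolean isTooLate(long eventTimestamp) {
        if (eventTimestamp > maxSeen) {
            maxSeen = eventTimestamp;   // new high-water mark, in order
            return false;
        }
        // Event is out of order; check whether it exceeds the bound.
        return maxSeen - eventTimestamp > maxLagMillis;
    }
}
```

In a real job this state would live inside an operator, but the core check is the same single comparison against the high-water mark.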
Hi Martin,
the answer depends, because the current windowing implementation has some
problems. We are working on improving it in the 0.10 release, though.
If your elements arrive with strictly increasing timestamps and you have
parallelism=1 or don't perform any re-partitioning of data (which a
gr
Hi,
no problem. The behavior is not documented and I also needed some time to
figure this out ;)
I'm already preparing a pull request to add a note into the documentation.
On Fri, Aug 28, 2015 at 4:41 PM, LINZ, Arnaud
wrote:
Hi Robert,
As seen together, my mistake was to launch the job in detached mode (-yd) when
my main function was not waiting after execution and was immediately ending.
Sorry for my misunderstanding of this option.
Best regards,
Arnaud
From: Robert Metzger [mailto:rmetz...@apache.org]
Sent: ve
Hi,
Creating a slf4j logger like this:
private static final Logger LOG =
LoggerFactory.getLogger(PimpedKafkaSink.class);
Works for me. The messages also end up in the regular YARN logs. System.out
output should actually end up in YARN as well (when retrieving the logs via
log aggregation).
Regards,
Hi Martin,
you need to implement your own policy. However, this should not be
complicated. Have a look at "TimeTriggerPolicy". You just need to
provide a "Timestamp" implementation that extracts your ts-attribute from
the tuples.
-Matthias
On 08/28/2015 03:58 PM, Martin Neumann wrote:
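Concretely, such a Timestamp implementation is only a few lines. The sketch below is self-contained: the `Timestamp` interface is redeclared here to mirror the single-method shape the Flink 0.9 windowing helpers expect (check the actual package and signature before copying), and the `Event` tuple type is purely illustrative.

```java
// Redeclared locally so the example compiles on its own; in a real job
// you would implement Flink's own Timestamp interface instead.
interface Timestamp<T> extends java.io.Serializable {
    long getTimestamp(T value);
}

// Illustrative tuple type carrying an event-time attribute.
class Event {
    final long eventTime;
    final String payload;

    Event(long eventTime, String payload) {
        this.eventTime = eventTime;
        this.payload = payload;
    }
}

// The extractor Matthias describes: pull the ts-attribute out of each tuple.
class EventTimestamp implements Timestamp<Event> {
    @Override
    public long getTimestamp(Event value) {
        return value.eventTime;
    }
}
```

The extractor is stateless, so the same instance can be shared across windows; all the ordering logic stays in the trigger policy.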
Hej,
I have a stream of timestamped events I want to process in Flink streaming.
Do I have to write my own policies to do so, or can I define time-based
windows that use the timestamps instead of the system time?
cheers Martin
Hi,
I am wondering if it’s possible to get my own logs inside the job functions
(sources, mappers, sinks…). It would be nice if I could get those logs in the
Yarn’s logs, but writing System.out/System.err has no effect.
For now I’m using a “StringBuffer” accumulator but it does not work in
Is the log from 0.9-SNAPSHOT or 0.10-SNAPSHOT?
Can you send me (if you want privately as well) the full log of the yarn
application:
yarn logs -applicationId .
We need to find out why the TaskManagers are shutting down. That is most
likely logged in the TaskManager logs.
On Fri, Aug 28, 2015 a
Hello,
I’ve moved my version from 0.9.0 and tried both 0.9-SNAPSHOT & 0.10-SNAPSHOT to
continue my batch execution on my secured cluster thanks to [FLINK-2555].
My application works nicely in local mode and also in yarn mode using a job
container started with yarn-session.sh, but it fails in
Hey Kristoffer,
sorry for the late reply. I was on vacation.
Here you can find my initial email that also contains a description and
a link to the patch:
http://mail.openjdk.java.net/pipermail/compiler-dev/2015-January/009220.html
The Eclipse JDT team didn't really need a patch. Their compil
Just as a quick update on this: The change has been merged into
0.10-SNAPSHOT.
Flink is now writing the jobmanager connection information into the temp
directory.
On Wed, Aug 26, 2015 at 6:00 PM, Maximilian Michels wrote:
> Nice. More configuration options :)