Thanks Ufuk, this did the trick.
Thanks,
Brian
On Wed, Dec 9, 2015 at 4:37 PM, Ufuk Celebi wrote:
> Hey Brian,
>
> did you follow the S3 setup guide?
> https://ci.apache.org/projects/flink/flink-docs-master/apis/example_connectors.html
>
> You have to set the fs.hdfs.hadoopconf property and add
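For reference, a minimal sketch of what that setup can look like (the path, filesystem scheme, and credential values below are placeholders, not taken from this thread): point fs.hdfs.hadoopconf in flink-conf.yaml at a directory containing a core-site.xml with the S3 credentials, e.g. using Hadoop's s3n filesystem:

# flink-conf.yaml
fs.hdfs.hadoopconf: /path/to/hadoop/conf

<!-- /path/to/hadoop/conf/core-site.xml -->
<configuration>
  <property>
    <name>fs.s3n.awsAccessKeyId</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>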
Hello,
I would like to have directions to make my code work again (thanks in advance).
My code used to work on versions 9.1 and earlier, but when I included 10
or 10.1 I got the following exception:
This type (ObjectArrayTypeInfo<...>) cannot be used as key
I understood that it is related to
Thanks!
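For illustration, the kind of program that hits this error is a keyBy (or groupBy) on an array-typed field, since array types are not valid key types; a minimal, hypothetical Java example (not the poster's code):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ArrayKeyExample {

    // Hypothetical POJOs for illustration only
    public static class Tag {
        public String name;
    }

    public static class Event {
        public String id;
        public Tag[] tags;   // typed as ObjectArrayTypeInfo -- not usable as a key
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Event e = new Event();
        e.id = "a";
        e.tags = new Tag[0];
        DataStream<Event> events = env.fromElements(e);

        // events.keyBy("tags");  // fails: "This type (ObjectArrayTypeInfo<...>) cannot be used as key"
        events.keyBy("id").print();  // keying on a scalar field works

        env.execute("array key example");
    }
}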
On Thu, Dec 10, 2015 at 12:32 PM, Maximilian Michels wrote:
> Hi Cory,
>
> The issue has been fixed in the master and the latest Maven snapshot.
> https://issues.apache.org/jira/browse/FLINK-3143
>
> Cheers,
> Max
>
> On Tue, Dec 8, 2015 at 12:35 PM, Maximilian Michels
> wrote:
> > Than
Hi Cory,
The issue has been fixed in the master and the latest Maven snapshot.
https://issues.apache.org/jira/browse/FLINK-3143
Cheers,
Max
On Tue, Dec 8, 2015 at 12:35 PM, Maximilian Michels wrote:
> Thanks for the stack trace, Cory. Looks like you were on the right
> path with the Spark issue
Hi Niels!
I think there is no clean way to emit data from a trigger right now, you
can only emit data from the window functions.
You can emit two different kinds of data types using an "Either" type. This
is built into Scala; in Java we added it in 1.0-SNAPSHOT:
https://github.com/apache/flink/bl
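A rough Java sketch of that approach, assuming Flink's org.apache.flink.types.Either and a window that emits both a per-window count and the raw records (all other names and types are illustrative guesses):

import org.apache.flink.api.java.tuple.Tuple;
import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.types.Either;
import org.apache.flink.util.Collector;

// Left carries the aggregate, Right carries individual records.
public class CountOrRawWindowFunction
        implements WindowFunction<String, Either<Long, String>, Tuple, TimeWindow> {

    @Override
    public void apply(Tuple key, TimeWindow window, Iterable<String> input,
                      Collector<Either<Long, String>> out) {
        long count = 0;
        for (String value : input) {
            out.collect(Either.Right(value));  // forward the raw record
            count++;
        }
        out.collect(Either.Left(count));       // emit one aggregate per window
    }
}

Downstream operators can then split the stream on either.isLeft() / either.isRight().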
You are right, WindowFunctions collect all data in a window and are
evaluated at once.
Although FoldFunctions could be applied directly to each element that
enters a window, this is not done at the moment.
Only ReduceFunctions are eagerly applied.
If you port your code to a ReduceFunction, you can
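A minimal Java sketch of such an eagerly applied ReduceFunction on a time window (the input stream and its Tuple2<String, Long> element type are placeholders, not from the thread):

DataStream<Tuple2<String, Long>> counts = input
    .keyBy(0)
    .timeWindow(Time.minutes(5))
    .reduce(new ReduceFunction<Tuple2<String, Long>>() {
        @Override
        public Tuple2<String, Long> reduce(Tuple2<String, Long> a, Tuple2<String, Long> b) {
            // applied as elements arrive, so only one value per key and window is buffered
            return new Tuple2<>(a.f0, a.f1 + b.f1);
        }
    });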
I will give this a try.
Though I'm not sure I can switch over to WindowFunction.
I work with potentially huge windows; the fold gives me a minimal and
constant memory footprint. Switching to a WindowFunction will require keeping
the window in memory before it can be processed (at least to my
underst
Sure. You don't need a trigger, but a WindowFunction instead of the
FoldFunction.
Only the WindowFunction has access to the Window object.
Something like this:
poissHostStreams
.timeWindow(Time.of(WINDOW_SIZE, TimeUnit.MILLISECONDS))
.apply(new WindowFunction() {
@overr
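A fuller sketch of that pattern, assuming the stream is keyed and its elements are Tuple2<String, Long> (both assumptions for illustration); the TimeWindow parameter also exposes getStart() and getEnd():

poissHostStreams
    .timeWindow(Time.of(WINDOW_SIZE, TimeUnit.MILLISECONDS))
    .apply(new WindowFunction<Tuple2<String, Long>, Tuple3<Long, Long, Long>, Tuple, TimeWindow>() {
        @Override
        public void apply(Tuple key, TimeWindow window, Iterable<Tuple2<String, Long>> input,
                          Collector<Tuple3<Long, Long, Long>> out) {
            long sum = 0;
            for (Tuple2<String, Long> value : input) {
                sum += value.f1;
            }
            // the window's start and end timestamps come from the TimeWindow object
            out.collect(new Tuple3<>(window.getStart(), window.getEnd(), sum));
        }
    });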
Hi Fabian,
thanks for your answer. Can I do the same in Java using normal time windows
(without an additional trigger)?
My current code looks like this:
poissHostStreams
.timeWindow(Time.of(WINDOW_SIZE, TimeUnit.MILLISECONDS))
.fold(new Tuple2<>("", new HashMap<>()), new
MultiValue
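For reference, a complete fold call of that shape might look roughly like this; the anonymous FoldFunction below is a made-up stand-in for the truncated class, and the assumed input type Tuple2<String, String> is only a guess:

poissHostStreams
    .timeWindow(Time.of(WINDOW_SIZE, TimeUnit.MILLISECONDS))
    .fold(new Tuple2<>("", new HashMap<String, Long>()),
          new FoldFunction<Tuple2<String, String>, Tuple2<String, HashMap<String, Long>>>() {
              @Override
              public Tuple2<String, HashMap<String, Long>> fold(
                      Tuple2<String, HashMap<String, Long>> acc, Tuple2<String, String> value) {
                  acc.f0 = value.f0;  // remember the key
                  Long current = acc.f1.get(value.f1);
                  acc.f1.put(value.f1, current == null ? 1L : current + 1L);  // count occurrences
                  return acc;
              }
          });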
Hi Martin,
you can get the start and end time of a window from the TimeWindow object.
The following Scala code snippet shows how to access the window end time
(start time is equivalent):
.timeWindow(Time.minutes(5))
.trigger(new EarlyCountTrigger(earlyCountThreshold))
.apply { (
key: Int,
win
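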
I've finally fixed the issues identified here in the thread: The blob
manager and the application master/job manager allocate their ports in a
specified range.
You can now whitelist a port range in the firewall and Flink services will
only allocate ports in that range:
https://github.com/apache/fl
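Configuration-wise, this boils down to settings like the following in flink-conf.yaml (the ranges are examples; check the documentation for your version for the exact keys):

# Port range for the BlobServer instead of an ephemeral port
blob.server.port: 50100-50200

# Port (or range) used by the application master / job manager on YARN
yarn.application-master.port: 50100-50200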
Hej,
Is it possible to extract the start and end window timestamps from within
a window operator?
I have an event-time based window that does a simple fold function. I want
to put the output into Elasticsearch and want to preserve the start and end
timestamp of the data so I can directly compare
Hi,
I'm working on something that uses the Flink Window feature.
I have written a custom Trigger to build the Window I need.
I am using the Window feature because I need state and I need to expire
(and clean) this state after a timeout (I use onEventTime to do that).
Because I need the data s
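For context, a trigger along those lines might look roughly like this in Java (a simplified sketch against a later Flink API, not Niels' actual code): it keeps the window state until an event-time timer for the window end fires, then evaluates and purges it.

import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

public class TimeoutTrigger<T> extends Trigger<T, TimeWindow> {

    @Override
    public TriggerResult onElement(T element, long timestamp, TimeWindow window, TriggerContext ctx) {
        // keep collecting until the window's end timestamp
        ctx.registerEventTimeTimer(window.maxTimestamp());
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
        // timeout reached: evaluate the window and clean up its state
        return time == window.maxTimestamp() ? TriggerResult.FIRE_AND_PURGE : TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) {
        ctx.deleteEventTimeTimer(window.maxTimestamp());
    }
}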