Sent: Monday, May 01, 2017 12:56 PM
To: Newport, Billy [Tech]; 'user@flink.apache.org'
Subject: Re: Collector.collect

Oh, you have multiple different output formats; I missed that.
For the Batch API you are, I believe, correct: using a custom output format is the best solution.
In the Streaming API the
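The custom-output-format idea for the Batch API can be sketched without Flink at all. This is a simplified stand-in, not Flink's actual `OutputFormat` interface: the names `SimpleFormat`, `MaskedRecord`, and `DispatchingFormat` are hypothetical, and only the dispatch-on-bitmask idea is the point — one wrapping format routes each record to the real formats whose bit is set, instead of materializing one dataset per output.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for an output format; Flink's real OutputFormat also has
// configure/open/close, this sketch keeps only the write path.
interface SimpleFormat<T> {
    void writeRecord(T record);
}

// A record carrying a bitmask that says which outputs it belongs to.
class MaskedRecord {
    final int mask;
    final String payload;
    MaskedRecord(int mask, String payload) { this.mask = mask; this.payload = payload; }
}

// One custom "output format" that fans each record out to the concrete
// formats whose bit is set in the record's mask.
class DispatchingFormat implements SimpleFormat<MaskedRecord> {
    private final List<SimpleFormat<String>> targets;
    DispatchingFormat(List<SimpleFormat<String>> targets) { this.targets = targets; }
    public void writeRecord(MaskedRecord r) {
        for (int i = 0; i < targets.size(); i++) {
            if ((r.mask & (1 << i)) != 0) {
                targets.get(i).writeRecord(r.payload);
            }
        }
    }
}
```
The same wrapper shape would apply to real `OutputFormat` implementations: the dispatcher forwards `open`/`close` to all targets and `writeRecord` only to the masked ones.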
approaches?

From: Chesnay Schepler [mailto:ches...@apache.org]
Sent: Monday, May 01, 2017 10:41 AM
To: user@flink.apache.org
Subject: Re: Collector.collect

Hello,
@Billy, what prevented you from duplicating/splitting the record, based on the bitmask, in a map function before the sink?
This shouldn't incur any serialization overhead if the sink is chained to the map.
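Chesnay's suggestion can be sketched as a flatMap that emits one tagged copy per set bit, so each downstream sink keeps only its own tag. The `Collector` interface below is a hand-rolled stand-in for Flink's `org.apache.flink.util.Collector`, and `Tagged`/`BitmaskSplitter` are made-up names for illustration; when such a function is chained to the sink, Flink passes records along in the same thread, which is why no serialization is expected.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for Flink's Collector interface.
interface Collector<T> { void collect(T record); }

// Tagged copy of a record: outputIndex says which sink it is destined for.
class Tagged {
    final int outputIndex;
    final String value;
    Tagged(int outputIndex, String value) { this.outputIndex = outputIndex; this.value = value; }
}

class BitmaskSplitter {
    // Emits one copy of the record per set bit in the mask.
    static void flatMap(int mask, String value, Collector<Tagged> out) {
        for (int i = 0; (mask >>> i) != 0; i++) {
            if ((mask & (1 << i)) != 0) {
                out.collect(new Tagged(i, value));
            }
        }
    }
}
```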
much faster.
We're finding a need to do this kind of optimization pretty frequently with Flink.
From: Gaurav Khandelwal [mailto:gaurav671...@gmail.com]
Sent: Saturday, April 29, 2017 4:32 AM
To: user@flink.apache.org
Subject: Collector.collect

Hello,
I am working on a RichProcessFunction and I want to emit multiple records at a time. To achieve this, I am currently doing:

while (condition) {
    collector.collect(new Tuple2<>(...));
}

I was wondering: is this the correct way, or is there another alternative?
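For what it's worth, calling `collect()` once per output record inside `processElement` is the standard way to emit multiple records in Flink. A Flink-free sketch of the pattern, where the `Collector` interface and the `Expander` class are stand-ins for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for Flink's Collector; in a real ProcessFunction the framework
// passes the Collector into processElement for you.
interface Collector<T> { void collect(T record); }

class Expander {
    // One input element, several output records: just call collect() in a loop.
    static void processElement(int n, Collector<String> out) {
        for (int i = 0; i < n; i++) {
            out.collect("record-" + i);
        }
    }
}
```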
My flatMap function is catching and logging the exception, and the try block happens to encompass the call to Collector#collect().
I will move the collect() call outside of the try; that should silence the log message.
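The fix described above can be sketched as keeping only the user logic inside the try, so that an exception thrown by `collect()` itself (for example when the downstream operator is already shut down during job cancellation) propagates to the framework instead of being caught and logged by the function. The parsing logic here is an invented example; only the try-scope pattern is the point.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for Flink's Collector interface.
interface Collector<T> { void collect(T record); }

class SafeFlatMap {
    static void flatMap(String raw, Collector<Integer> out) {
        Integer parsed = null;
        try {
            // Only the fallible user logic lives in the try.
            parsed = Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            // Log and drop the bad record; do NOT wrap collect() in this catch.
        }
        if (parsed != null) {
            out.collect(parsed); // outside the try: framework errors propagate
        }
    }
}
```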
On 9/30/16, 3:51 AM, "Ufuk Celebi" wrote:

On Thu, Sep 29, 2016 at 9:29 PM, Shannon Carey wrote:
> It looks like Flink is disabling the objects that the FlatMap collector
> relies on before disabling the operator itself. Is that expected/normal? Is
> there anything I should change in my FlatMap function or job code to account
> for it?
When I cancel a job, I get many exceptions that look like this:

java.lang.RuntimeException: Could not forward element to next operator
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:376)
    at org.apache.flink.streaming.runtime.tasks.Opera