Hello all,
I'm trying to do a fairly simple task: find the maximum value (a Double)
received so far in a stream. This is what I implemented:
POJO class:
public class Fish {
    public Integer fish_id;
    public Point coordinate; // position
    public Fish() {}
    public Fish(Integer fish_id, double x
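For the running-maximum part of the question, one way to express it is to key the stream on a constant and keep the largest value seen so far in a reduce. The following is a minimal sketch, assuming the Doubles of interest are extracted from the Fish records; the source values and job name are invented for illustration.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RunningMax {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical stand-in for the real source; in the original question the
        // Doubles would presumably be derived from the Fish records (e.g. a coordinate).
        DataStream<Double> values = env.fromElements(1.5, 3.2, 2.7, 9.1, 4.4);

        values
            // Route every element to the same key so a single task sees all values.
            .keyBy(value -> 0)
            // The reduce state keeps the largest value seen so far and emits it
            // every time a new element arrives.
            .reduce((a, b) -> Math.max(a, b))
            .print();

        env.execute("running-max");
    }
}

Keying on a constant forces this step to effective parallelism 1; for a per-fish maximum one would key by fish_id instead, where the built-in max()/maxBy() aggregations are also an option.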
Hi, Fabian
Thanks for replying!
I implemented a custom RichInputFormat that implements
CheckpointableInputFormat. I found that it is executed
through InputFormatSourceFunction, which does not
use the CheckpointableInputFormat interface during execution. If so, how does
checkpointing work here?
I also notice when one
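For reference on the contract being discussed, below is a minimal, hypothetical sketch of such a format (class name and record logic invented) that extends GenericInputFormat, which is itself a RichInputFormat, and implements CheckpointableInputFormat. The getCurrentState()/reopen() methods shown are my recollection of the interface and should be checked against the Flink version in use.

import java.io.IOException;

import org.apache.flink.api.common.io.CheckpointableInputFormat;
import org.apache.flink.api.common.io.GenericInputFormat;
import org.apache.flink.core.io.GenericInputSplit;

// Toy format: emits the numbers 0..99 and can snapshot/restore its position.
public class CountingFormat extends GenericInputFormat<Long>
        implements CheckpointableInputFormat<GenericInputSplit, Long> {

    private long emitted;

    @Override
    public boolean reachedEnd() throws IOException {
        return emitted >= 100;
    }

    @Override
    public Long nextRecord(Long reuse) throws IOException {
        return emitted++;
    }

    @Override
    public Long getCurrentState() throws IOException {
        // Snapshot taken at a checkpoint: how many records were already emitted.
        return emitted;
    }

    @Override
    public void reopen(GenericInputSplit split, Long state) throws IOException {
        // On recovery: re-open the split and continue from the saved position.
        open(split);
        this.emitted = state;
    }
}

Whether getCurrentState()/reopen() are actually invoked depends on the operator that drives the format, which is exactly the question raised above.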
Hi Vishwas,
First of all, 8 GB for 60 cores is not a lot.
You might not be able to utilize all cores when running Flink.
However, the memory usage depends on several things.
Assuming you are using Flink for stream processing, the type of the state
backend is important. If you use the FSStateBack
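To make the state-backend point concrete, here is a small hedged sketch of how the backend is typically chosen in code; the checkpoint path is a placeholder and the comments describe the usual trade-off, not the specifics of this particular setup.

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateBackendChoice {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // FsStateBackend keeps the working state on the JVM heap and writes
        // checkpoints to the configured path, so the state size is bounded by
        // the memory given to the TaskManagers.
        env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints"));

        // Alternatively, the RocksDB backend (flink-statebackend-rocksdb dependency)
        // keeps state on local disk / off-heap, easing heap pressure at the cost
        // of some throughput:
        // env.setStateBackend(
        //     new org.apache.flink.contrib.streaming.state.RocksDBStateBackend(
        //         "hdfs:///flink/checkpoints"));

        // ... job definition would follow here ...
    }
}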
Hi,
I’m trying to implement a failure handler for Elasticsearch, following the example
in the Flink documentation:
DataStream<String> input = ...;
input.addSink(new ElasticsearchSink<>(
    config, transportAddresses,
    new ElasticsearchSinkFunction<String>() {...},
    new ActionRequestFailureHandler() {
        @
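For a complete picture of what the handler can do, here is a self-contained sketch of an ActionRequestFailureHandler that re-queues requests rejected because of a full bulk queue and fails the sink for anything else. The class name is invented and the exact onFailure signature should be checked against the connector version in use; if I remember correctly, Flink also ships a similar ready-made RetryRejectedExecutionFailureHandler.

import org.apache.flink.streaming.connectors.elasticsearch.ActionRequestFailureHandler;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.util.ExceptionUtils;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;

// Hypothetical handler: retry bulk-queue rejections, fail the sink otherwise.
public class RequeueOnRejectionFailureHandler implements ActionRequestFailureHandler {

    @Override
    public void onFailure(ActionRequest action, Throwable failure,
                          int restStatusCode, RequestIndexer indexer) throws Throwable {
        if (ExceptionUtils.findThrowable(failure, EsRejectedExecutionException.class).isPresent()) {
            // The Elasticsearch bulk queue was full: re-add the request so it is retried.
            indexer.add(action);
        } else {
            // Treat everything else as fatal and let the sink fail.
            throw failure;
        }
    }
}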
Hi Bruce, are you able to provide us with the full debug logs? From the
excerpt itself it is hard to tell what is going on.
Cheers,
Till
On Wed, Oct 2, 2019 at 2:24 PM Fabian Hueske wrote:
> Hi Bruce,
>
> I haven't seen such an exception yet, but maybe Till (in CC) can help.
>
> Best,
> Fabian
Hi All,
We have built a Flink job using Scala. In one specific operator
(CoProcessFunction-based) we store data in a MapState. The input streams are
keyed by a value of type ‘Seq[(String, CustomClassHierarchy)]’. When I try to
read a savepoint with the State Processor API I get some ‘Incompatib
Hi Bruce,
I haven't seen such an exception yet, but maybe Till (in CC) can help.
Best,
Fabian
Am Di., 1. Okt. 2019 um 05:51 Uhr schrieb Hanson, Bruce <
bruce.han...@here.com>:
> Hi all,
>
>
>
> We are running some of our Flink jobs with Job Manager High Availability.
> Occasionally we get a clu
I notice
https://ci.apache.org/projects/flink/flink-docs-stable/dev/types_serialization.html#rules-for-pojo-types
says that all non-transient fields need a setter.
That means that the fields cannot be final.
That means that the hashCode() should probably just return a constant value
(otherwise an
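As an illustration of the reasoning above (setters imply non-final fields, and mutable fields mean the hash could change after the object is placed in a hash-based collection), here is a hedged sketch of such a POJO; the class and field names are invented, not taken from the thread.

import java.util.Objects;

// Illustrative Flink-style POJO: public no-arg constructor plus getters and
// setters, so the fields cannot be final.
public class SensorReading {
    private String sensorId;
    private double value;

    public SensorReading() {}

    public String getSensorId() { return sensorId; }
    public void setSensorId(String sensorId) { this.sensorId = sensorId; }

    public double getValue() { return value; }
    public void setValue(double value) { this.value = value; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof SensorReading)) return false;
        SensorReading that = (SensorReading) o;
        return Double.compare(that.value, value) == 0
                && Objects.equals(sensorId, that.sensorId);
    }

    @Override
    public int hashCode() {
        // Constant, as suggested in the mail: the hash stays stable even if the
        // fields are mutated, at the cost of hash collections degrading to scans.
        return 17;
    }
}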
Hi Oliwer,
I think you are right. There seems to be something going wrong.
Just to clarify, you are sure that the growing state size is caused by the
window operator?
From your description I assume that the state size does not depend (solely)
on the number of distinct keys.
Otherwise, the state
Hi,
State is always associated with a single task in Flink.
The state of a task cannot be accessed by other tasks of the same operator
or tasks of other operators.
This is true for every type of state, including broadcast state.
Best, Fabian
Am Di., 1. Okt. 2019 um 08:22 Uhr schrieb Navneeth Kr
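To illustrate the scoping Fabian describes for broadcast state, here is a minimal hedged sketch: each parallel task receives its own copy of the broadcast elements, updates only its local broadcast state in processBroadcastElement, and can only read that local copy in processElement. All names and example data are invented.

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class BroadcastScopeExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> events = env.fromElements("a", "b", "c");
        DataStream<String> rules = env.fromElements("rule-1", "rule-2");

        MapStateDescriptor<String, String> rulesDescriptor =
                new MapStateDescriptor<>("rules", Types.STRING, Types.STRING);

        // Every parallel task of the downstream operator receives its own copy of
        // the broadcast elements and keeps them in its own broadcast state.
        BroadcastStream<String> broadcastRules = rules.broadcast(rulesDescriptor);

        events.connect(broadcastRules)
                .process(new BroadcastProcessFunction<String, String, String>() {
                    @Override
                    public void processBroadcastElement(String rule, Context ctx,
                            Collector<String> out) throws Exception {
                        // Writable here; each task updates only its local copy.
                        ctx.getBroadcastState(rulesDescriptor).put(rule, rule);
                    }

                    @Override
                    public void processElement(String event, ReadOnlyContext ctx,
                            Collector<String> out) throws Exception {
                        // Read-only here; the task sees only its own copy of the state.
                        out.collect(event + "/" + ctx.getBroadcastState(rulesDescriptor).contains(event));
                    }
                })
                .print();

        env.execute("broadcast-scope");
    }
}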