Hi Son
I think it might be because you have not assigned operator IDs to your Filter and
Map functions; you can refer to [1] to assign IDs in your application.
Moreover, if you have removed some operators, please consider adding the
--allowNonRestoredState [2] option.
[1]
https://ci.apache.org/
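The advice above can be sketched as follows. This is a non-runnable fragment assuming a Java DataStream job; the operator names and uid strings are made up for illustration, but `DataStream#uid` and the `--allowNonRestoredState` flag are the real Flink APIs being referred to:

```java
// Giving each stateful operator a stable uid lets Flink match operator state
// when restoring from a savepoint, even after the program changes.
// (Illustrative fragment; MySource/MySink and the uid strings are hypothetical.)
DataStream<Event> events = env.addSource(new MySource())
        .uid("my-source");              // stable ID for the source's state

events.filter(e -> e.isValid())
        .uid("valid-filter")            // ID for the Filter function
        .map(e -> e.toRecord())
        .uid("to-record-map")           // ID for the Map function
        .addSink(new MySink())
        .uid("my-sink");

// When operators were removed since the savepoint was taken, restore with:
//   flink run -s <savepointPath> --allowNonRestoredState <jar>
```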
Thank you Addison, this is very helpful.
On Fri, Mar 22, 2019 at 10:12 AM Addison Higham wrote:
> Our implementation has quite a bit more going on just to deal with
> serialization of types, but here is pretty much the core of what we do in
> (pseudo) scala:
>
> class DynamoSink[...](...) extend
Hi Konstantin,
Thanks for the response. What still concerns me is:
1. Am I able to recover from checkpoints even if I change my program
(for example, changing Filter and Map functions, data objects, etc.)? I was
not able to recover from savepoints when I changed my program.
On Mon, Ma
I have 2 options:
1. A REST-based, in my case a Jetty/REST-based, QueryableStateClient in a
side-car container co-located on the JM (though it could be on all TMs, that
looks like overkill).
2. A REST-based, in my case a Jetty/REST-based, QueryableStateClient in a
side-car container co-located on T
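Option 1 can be sketched with a tiny REST side-car built on the JDK's own HTTP server. This is a minimal, self-contained sketch, not the real Jetty setup: a `ConcurrentHashMap` stands in for the lookups a real `QueryableStateClient` would perform, and the `/state/<key>` route is a made-up convention for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical side-car: answers GET /state/<key> from an in-memory map that
// stands in for a real QueryableStateClient lookup against the Flink cluster.
public class QueryableStateSidecar {

    public static HttpServer start(int port, Map<String, String> state) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/state/", exchange -> {
            // extract the key from the request path
            String key = exchange.getRequestURI().getPath().substring("/state/".length());
            String value = state.get(key);
            byte[] body = (value != null ? value : "not found").getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(value != null ? 200 : 404, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        Map<String, String> state = new ConcurrentHashMap<>();
        state.put("count", "42");
        HttpServer server = start(8080, state); // GET /state/count returns 42
        server.stop(0);
    }
}
```

Running it next to the JM only (rather than on every TM) keeps a single well-known endpoint for clients, which is why the side-car-on-JM variant is the lighter of the two options.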
Hi all!
The ExecutionConfig has some very old settings: forceAvro() and
forceKryo(), which are misleadingly named. They cause POJOs to use
Avro or Kryo rather than the POJO serializer.
I think we no longer have a good case for using Avro for POJOs. POJOs
that are also Avro types go th
Hi Avi,
Good to hear that!
Cheers,
Kostas
On Mon, Mar 25, 2019 at 3:37 PM Avi Levi wrote:
> Thanks, I'll check it out. I got a bit confused by the ingestion time
> being null in tests, but all is OK now. I appreciate that
>
> On Mon, Mar 25, 2019 at 1:01 PM Kostas Kloudas wrote:
>
>> Hi
I think Seed is correct that we don't properly report backpressure from an
AsyncWaitOperator. The problem is that it is not the Task's main execution
thread but the Emitter thread that emits the elements and, thus, gets stuck in
the `requestBufferBuilderBlocking` method. This, however, does not mean that
th
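The effect described above can be illustrated with plain Java (this is not Flink internals, just a sketch of the thread situation): a separate "emitter" thread blocks on a full output buffer while the "main" thread stays runnable, which is why sampling only the main thread's stack under-reports backpressure.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: the emitter thread blocks on a full buffer (analogous
// to requestBufferBuilderBlocking) while the main thread keeps running.
public class EmitterBackpressureDemo {

    public static Thread.State blockedEmitterState() throws InterruptedException {
        BlockingQueue<Integer> outputBuffer = new ArrayBlockingQueue<>(1);
        outputBuffer.put(0); // buffer is already full: no downstream consumer

        Thread emitter = new Thread(() -> {
            try {
                outputBuffer.put(1); // blocks until space is available
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "emitter");
        emitter.start();

        TimeUnit.MILLISECONDS.sleep(200);        // let the emitter reach the blocking put
        Thread.State state = emitter.getState(); // WAITING: stuck on the full buffer
        emitter.interrupt();
        emitter.join();
        return state;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("emitter thread state: " + blockedEmitterState());
        // the main thread was never blocked, so a backpressure sampler that
        // only inspects it would see nothing
    }
}
```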
Thanks, I'll check it out. I got a bit confused by the ingestion time
being null in tests, but all is OK now. I appreciate that
On Mon, Mar 25, 2019 at 1:01 PM Kostas Kloudas wrote:
> Hi Avi,
>
> Just to verify your ITCase, I wrote the following dummy example and it
> seems to be "working"
Hi Avi,
Just to verify your ITCase, I wrote the following dummy example and it
seems to be "working" (i.e. I can see non-null timestamps and timers firing).
StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.In
Hi Kostas,
Thank you.
I'm currently testing my job against a small file, so it finishes before
the checkpointing starts.
But even if it were a larger file and a checkpoint did happen, there would
always be trailing events, starting after the last checkpoint, until the
source has finished.
So woul
Hi Son,
yes, this is possible, but your sink needs to play its part in Flink's
checkpointing mechanism. Depending on the implementation of the sink you
should either:
* implement *CheckpointedFunction* and flush all records to BigQuery in
*snapshotState*. This way, in case of a failure/restart o
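The buffering-and-flush-on-checkpoint pattern described above can be sketched in plain Java. This is a self-contained mock, not the real Flink interface: the `invoke`/`snapshotState` methods mirror the shape of `SinkFunction#invoke` and `CheckpointedFunction#snapshotState`, and a list stands in for BigQuery:

```java
import java.util.ArrayList;
import java.util.List;

// Self-contained sketch of the pattern: buffer records between checkpoints,
// flush everything when a checkpoint is taken. In a real job you would
// implement Flink's CheckpointedFunction and flush inside snapshotState().
public class BufferingBigQuerySink {

    private final List<String> buffer = new ArrayList<>();
    private final List<String> committed = new ArrayList<>(); // stands in for BigQuery

    // called for every record (analogous to SinkFunction#invoke)
    public void invoke(String record) {
        buffer.add(record);
    }

    // called on every checkpoint (analogous to CheckpointedFunction#snapshotState):
    // flushing here means a restart from this checkpoint never loses flushed data
    public void snapshotState() {
        committed.addAll(buffer);
        buffer.clear();
    }

    public List<String> committedRecords() {
        return committed;
    }

    public static void main(String[] args) {
        BufferingBigQuerySink sink = new BufferingBigQuerySink();
        sink.invoke("a");
        sink.invoke("b");
        sink.snapshotState(); // checkpoint: "a" and "b" are now durable
        sink.invoke("c");     // "c" is only buffered; a failure here would replay it
        System.out.println(sink.committedRecords()); // [a, b]
    }
}
```

On restart from the checkpoint, "c" is replayed by the source and re-buffered, which is what keeps the sink consistent with Flink's recovery.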
Hi Jeff,
do you see any log files in the log directory of your Flink installation
directory? If so, please share them.
Cheers,
Konstantin
On Sat, Mar 23, 2019 at 5:36 PM Jeff Crane wrote:
> When downloading the latest 1.7.2 and extracting to (a free) AMZ EC2, the
> daemon (./bin/start-cluster