Hi!
I'm thinking about using a great Flink feature - savepoints. I would
like to be able to stop my streaming application, roll back its state
and restart it (for example to update code, or to fix a bug). Let's say I
would like to travel back in time and reprocess some data.
But what if I had
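For what it's worth, the basic savepoint workflow as far as I recall from
the Flink savepoint docs (the job ID and paths below are placeholders):

  # trigger a savepoint for the running job; the CLI prints the savepoint path
  bin/flink savepoint <jobID>
  # cancel the job, update the code, then resume from the savepoint
  bin/flink run -s <savepointPath> your-updated-job.jar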
Hi,
As I reviewed the Flink source code, if the ExecutionMode is set to
"Pipelined", the DataExchangeMode will be set to "Batch" if the
pipeline-breaking property is true for two-input or iteration situations,
in order to avoid deadlock. When the DataExchangeMode is set to "Batch", the ResultPartiti
Hello Milind,
I'm not entirely sure I fully understood your question, but I'll try
anyway :)
There is no way to provide exactly-once semantics for Cassandra's
counters. As such we (will) only provide exactly-once semantics for a
subset of Cassandra operations: idempotent inserts/updates.
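To illustrate the distinction (hypothetical CQL statements, not the sink's
actual queries):

  // Replaying this insert after a failure rewrites the same row with the
  // same values, so a duplicate is harmless (idempotent):
  val idempotentInsert = "INSERT INTO events (event_id, payload) VALUES (?, ?)"
  // Replaying this update increments the counter a second time, so a
  // duplicate corrupts the count (not idempotent):
  val counterUpdate = "UPDATE counters SET hits = hits + 1 WHERE page_id = ?"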
On Tue, May 10, 2016 at 10:56 AM, wangzhijiang999 wrote:
> As I reviewed the Flink source code, if the ExecutionMode is set to
> "Pipelined", the DataExchangeMode will be set to "Batch" if the
> pipeline-breaking property is true for two-input or iteration situations,
> in order to avoid deadlock. Wh
hmm...
Quite an interesting question. But I think I don't fully understand your
use case - how are your applications coupled? Through Kafka topics, e.g.
the output of one is the input of another?
Or do they consume from the same input?
And why exactly do you want to go back to a specific point in all of
them?
Hi Ufuk,
Thank you for the quick response! I am not very clear on the internal
implementation of iterations, so would you explain in detail why blocking
results cannot be reset after each superstep?
In addition, for the example below, why might it cause a deadlock in
pipelined mode?
DataSet mapped1 =
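(The snippet is cut off in the archive; what follows is a hypothetical
reconstruction of the branching-and-rejoining shape being discussed, not
the original code:)

  import org.apache.flink.api.scala._

  val env = ExecutionEnvironment.getExecutionEnvironment
  val source = env.readTextFile("hdfs:///input") // placeholder path
  val mapped1 = source.map(line => (line, 1))
  val mapped2 = source.map(line => (line, 2))
  // Rejoining two branches of the same source is the classic shape where a
  // fully pipelined exchange could deadlock, which is why Flink switches
  // the exchange to batch mode here.
  val joined = mapped1.join(mapped2).where(0).equalTo(0)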
Hi,
in our more-or-less development environment we're doing something like
this in our main method:
val processName = name_of_our_stream
val configuration = GlobalConfiguration.getConfiguration
val system = JobClient.startJobClientActorSystem(configuration)
val timeout = FiniteDura
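(The last line is truncated; presumably it continues roughly like this -
a sketch, with an arbitrary timeout value:)

  import scala.concurrent.duration.FiniteDuration
  import java.util.concurrent.TimeUnit

  val timeout = FiniteDuration(60, TimeUnit.SECONDS) // 60s chosen arbitrarily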
Hi,
Some shameless self promotion:
You can also checkout:
https://github.com/ottogroup/flink-spector
which has the goal of removing such hurdles when testing Flink programs.
Best,
Alex
thanks Alexander, I'll take a look
On 10 May 2016 at 13:07, lofifnc wrote:
> Hi,
>
> Some shameless self promotion:
>
> You can also checkout:
> https://github.com/ottogroup/flink-spector
> which has the goal of removing such hurdles when testing Flink programs.
>
> Best,
> Alex
Hi there,
With S3 as the state backend, as well as keeping a large chunk of user
state on the heap, I can see the task manager start to fail without showing
an OOM exception. Instead, it shows a generic error message (below) when a
checkpoint is triggered. I assume this has something to do with how state was kep
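(For context, a sketch of the kind of setup described - the bucket path and
checkpoint interval are placeholders:)

  import org.apache.flink.streaming.api.scala._
  import org.apache.flink.runtime.state.filesystem.FsStateBackend

  val env = StreamExecutionEnvironment.getExecutionEnvironment
  // Checkpoints are written to S3, but the working state itself lives on
  // the JVM heap, which is presumably where the memory pressure comes from.
  env.setStateBackend(new FsStateBackend("s3://my-bucket/checkpoints"))
  env.enableCheckpointing(60000) // checkpoint every 60s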
Hi,
I read the following in the Flink docs: "We can explicitly specify a Trigger
to overwrite the default Trigger provided by the WindowAssigner. Note that
specifying a trigger does not add an additional trigger condition but
replaces the current trigger."
So, I tested out the below code with count tri
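(A minimal sketch of the behavior in question; the input and parameters are
illustrative only:)

  import org.apache.flink.streaming.api.scala._
  import org.apache.flink.streaming.api.windowing.time.Time
  import org.apache.flink.streaming.api.windowing.triggers.CountTrigger

  val env = StreamExecutionEnvironment.getExecutionEnvironment
  val stream = env.fromElements(("a", 1), ("b", 2)) // placeholder input
  // CountTrigger REPLACES the window's default event-time trigger, so this
  // window now fires on element count only, no longer on the watermark.
  val counts = stream
    .keyBy(0)
    .timeWindow(Time.seconds(10))
    .trigger(CountTrigger.of(100))
    .sum(1)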
Hi Chesnay,
Sorry for asking the question in a confusing manner. Being new to Flink,
there are many questions swirling around in my head.
Thanks for the details in your answers. Here are the facts, as I see them:
(a) Cassandra counters are not idempotent
(b) The failures, in the context of Cassandra,
Maybe the last example of this blog post is helpful [1].
Best, Fabian
[1]
https://www.mapr.com/blog/essential-guide-streaming-first-processing-apache-flink
2016-05-10 17:24 GMT+02:00 Srikanth :
> Hi,
>
> I read the following in the Flink docs: "We can explicitly specify a Trigger to
> overwrite the d
Yes, that will work.
I was trying another route of having a "finalize & purge trigger" that will:
i) onElement - register an event-time timer but not alter the nested
trigger's TriggerResult
ii) onEventTime - always purge after firing
That will work with CountTrigger and other custom triggers too, right?
(rough sketch below)
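(A rough sketch of that wrapper, assuming the Trigger API of that era; the
class name and structure are mine, untested:)

  import org.apache.flink.streaming.api.windowing.triggers.{Trigger, TriggerResult}
  import org.apache.flink.streaming.api.windowing.windows.Window

  class FinalizePurgeTrigger[T, W <: Window](nested: Trigger[T, W])
      extends Trigger[T, W] {

    override def onElement(element: T, timestamp: Long, window: W,
                           ctx: Trigger.TriggerContext): TriggerResult = {
      // register a timer for the window end; pass the nested result through
      ctx.registerEventTimeTimer(window.maxTimestamp())
      nested.onElement(element, timestamp, window, ctx)
    }

    override def onEventTime(time: Long, window: W,
                             ctx: Trigger.TriggerContext): TriggerResult =
      TriggerResult.FIRE_AND_PURGE // always fire and purge at window end

    override def onProcessingTime(time: Long, window: W,
                                  ctx: Trigger.TriggerContext): TriggerResult =
      nested.onProcessingTime(time, window, ctx)
  }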
On Tue, May 10, 2016 at 5:07 PM, Chen Qin wrote:
> Future, to keep a large key/value space, the wiki points out using RocksDB as
> the backend. My understanding is that using RocksDB will write to local file
> systems instead of syncing to S3. Does Flink support memory->rocksdb(local disk)->s3
> checkpoint state spl
On Tue, May 10, 2016 at 5:36 PM, milind parikh wrote:
> When will the Cassandra sink be released? I am ready to test it out even
> now.
You can work with Chesnay's branch here:
https://github.com/apache/flink/pull/1771
Clone his repo via Git, check out the branch, and then build it from
source
HBase write problem
Hi all.
I have a problem writing to HBase.
I am using a slightly modified example of this class to prove the concept:
https://github.com/apache/flink/blob/master/flink-batch-connectors/flink-hbase/src/test/java/org/apache/flink/addons/hbase/example/HBaseWriteExample.java
How
Do you have the hbase-site.xml available in the classpath?
On 10 May 2016 23:10, "Palle" wrote:
> HBase write problem
>
> Hi all.
>
> I have a problem writing to HBase.
>
> I am using a slightly modified example of this class to prove the concept:
>
> https://github.com/apache/flink/blob/master/f
Hi Ufuk,
Yes, it does help with the RocksDB backend!
After tuning the checkpoint frequency to align with network throughput, the
"task manager released" and "job got cancelled" errors are gone.
Chen
> On May 10, 2016, at 10:33 AM, Ufuk Celebi wrote:
>
>> On Tue, May 10, 2016 at 5:07 PM, Chen Qin wrote:
>> Future, to k
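(For anyone finding this later, a sketch of the RocksDB setup being
discussed - the URI and interval are placeholders, and the constructor
signature is from memory:)

  import org.apache.flink.streaming.api.scala._
  import org.apache.flink.contrib.streaming.state.RocksDBStateBackend

  val env = StreamExecutionEnvironment.getExecutionEnvironment
  // Working state lives in local RocksDB files; completed checkpoints are
  // copied up to the S3 URI below.
  env.setStateBackend(new RocksDBStateBackend("s3://my-bucket/checkpoints"))
  env.enableCheckpointing(120000) // a longer interval, tuned to throughput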