Hi Team,
Will you be able to guide me on this? Is this a known issue with
checkpointing?
CVP
On 22 Sep 2016 15:57, "Chakravarthy varaga" wrote:
PFA, Flink_checkpoint_time.png in relation to this issue.
On Thu, Sep 22, 2016 at 3:38 PM, Chakravarthy varaga <chakravarth...@gmail.com> wrote:
Hi Aljoscha & Fabian,
Finally I got this working. Thanks for your help. In terms of persisting
the state (for S2), I tried to checkpoint every 10 secs using a
FsStateBackend... What I notice is that the checkpoint duration is almost
2 minutes in many cases, while in the other cases it varies…
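For reference, a checkpoint setup like the one described above would look roughly like this in the Flink DataStream API. This is a sketch only: the checkpoint path is a placeholder, not taken from the thread, and FsStateBackend is assumed because the mail names it.

```java
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Sketch: checkpoint every 10 seconds to a filesystem-backed state backend.
// The HDFS URI below is a placeholder, not from the original thread.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints"));
env.enableCheckpointing(10000); // checkpoint interval in milliseconds
```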
Hi,
you don't need the BlockedEventState class, you should be able to just do
this:

    // value type Double is a placeholder here; use your state's actual type
    private transient ValueState<Double> blockedRoads;

    @Override
    public void open(final org.apache.flink.configuration.Configuration parameters)
            throws Exception {
        final ValueStateDescriptor<Double> descriptor =
            new ValueStateDescriptor<>("blockedRoads", Double.class);
        blockedRoads = getRuntimeContext().getState(descriptor);
    }
Hi Fabian,
I'm coding to check if your proposal works and hit with an issue with
ClassCastException.
// Here is my Value that has the state information: an implementation of
// my value state... where the key is a Double value... on connected stream ks2
public class BlockedEventState implements …
Not sure if I got your requirements right, but would this work?

    KeyedStream<T, String> ks1 = ds1.keyBy("*");
    KeyedStream<Tuple2<String, V>, String> ks2 =
        ds2.flatMap(/* split T into k-v pairs */).keyBy(0);
    ks1.connect(ks2).flatMap(X);

X is a CoFlatMapFunction that inserts and removes elements from ks2 into a
key-value state member…
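The CoFlatMapFunction pattern above can be modeled outside Flink with a plain map standing in for the key-value state. The class and method names below are illustrative only, not Flink API: flatMap2 handles elements of the second (keyed) stream, flatMap1 enriches elements of the first stream by lookup.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java model (not Flink API) of the proposed CoFlatMapFunction:
// one side maintains per-key state, the other side reads it.
class ConnectedStateModel {
    // stands in for Flink's key-value state member
    private final Map<String, Double> state = new HashMap<>();

    // Element from ks2: a (key, value) pair updates the state; null deletes.
    public void flatMap2(String key, Double value) {
        if (value == null) state.remove(key);
        else state.put(key, value);
    }

    // Element from ks1: look up its key and emit an enriched record if present.
    public List<String> flatMap1(String key) {
        List<String> out = new ArrayList<>();
        Double v = state.get(key);
        if (v != null) out.add(key + "=" + v);
        return out;
    }

    public static void main(String[] args) {
        ConnectedStateModel fn = new ConnectedStateModel();
        fn.flatMap2("road-1", 3.5);                // insert
        System.out.println(fn.flatMap1("road-1")); // [road-1=3.5]
        fn.flatMap2("road-1", null);               // delete-value record
        System.out.println(fn.flatMap1("road-1")); // []
    }
}
```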
Hi Fabian,
First of all, thanks for all your prompt responses. With regards to 2)
Multiple lookups, I have to clarify what I mean by that...
DS1 elementKeyStream = stream1.map(...); this maps each
of the streaming elements into a string-mapped value...
DS2 = stream2.xxx(); //
= somekafka.createStream
The final goal is to have something like:
otherStream.foreach(kafkamessage => keyedStream.lookup(kafkamessage.key))
~Pushpendra Jaiswal
--
View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Sharing-Java-Collections-within-Flink-Cluster-tp8919p8965.html
Sent from the Apache Flink User Mailing List archive. mailing list archive at Nabble.com.
That depends.
1) Growing/Shrinking: This should work. New entries can always be inserted.
In order to remove entries from the k-v state you have to set the value to
null. Note that you need an explicit delete-value record to trigger the
eviction.
2) Multiple lookups: This only works if all look…
I'm understanding this better with your explanation...
With this use case, each element in DS1 has to look up against a 'bunch
of keys' from DS2, and DS2 could shrink/expand in terms of the number of
keys. Will the key-value shard work in this case?
On Wed, Sep 7, 2016 at 7:44 PM, Fabian Hueske
Operator state is always local in Flink. However, with key-value state, you
can have something which behaves kind of similar to a distributed hashmap,
because each operator holds a different shard/partition of the hashtable.
If you have to do only a single key lookup for each element of DS1, you
should…
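The shard/partition idea above can be sketched with a toy model. Nothing here is Flink internals; it only illustrates why keyed state behaves like a distributed hashmap: each parallel instance owns one shard, and a key is always routed to the same shard by hash partitioning.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model (not Flink internals): keyed state as a sharded hashtable.
class ShardedStateModel {
    private final List<Map<String, String>> shards = new ArrayList<>();

    public ShardedStateModel(int parallelism) {
        for (int i = 0; i < parallelism; i++) shards.add(new HashMap<>());
    }

    // keyBy(): a key is always routed to the same shard.
    private int shardFor(String key) {
        return Math.floorMod(key.hashCode(), shards.size());
    }

    public void put(String key, String value) {
        shards.get(shardFor(key)).put(key, value);
    }

    // A lookup works only when routed by the same key, because no single
    // shard holds the whole table -- hence "single key lookup per element".
    public String get(String key) {
        return shards.get(shardFor(key)).get(key);
    }

    public static void main(String[] args) {
        ShardedStateModel state = new ShardedStateModel(4);
        state.put("road-1", "blocked");
        System.out.println(state.get("road-1")); // blocked
    }
}
```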
certainly, that's what I thought as well...
The output of DataStream2 could be in the 1000s and there are state updates...
reading this topic from the other job, job1, is okie.
However, assuming that we maintain this state in a collection, and
updating the state (by reading from the topic) in this collection…
Is writing DataStream2 to a Kafka topic and reading it from the other job
an option?
2016-09-07 19:07 GMT+02:00 Chakravarthy varaga:
Hi Fabian,
Thanks for your response. Apparently these DataStreams (Job1-DataStream1
& Job2-DataStream2) are from different Flink applications running within
the same cluster.
DataStream2 (from Job2) applies transformations and updates a 'cache'
on which Job1 needs to work.
Our inte…
Hi,
Flink does not provide shared state.
However, you can broadcast a stream to a CoFlatMapFunction, such that each
operator has its own local copy of the state.
If that does not work for you because the state is too large, and if it is
possible to partition the state (and both streams), you can also…
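The broadcast alternative above can also be sketched with a toy model (again illustrative only, not Flink API): every parallel instance of the receiving operator sees every element of the broadcast stream, so each instance ends up with a full local copy of the state, at the cost of state size times parallelism.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model (not Flink API) of broadcasting a stream to an operator:
// every parallel instance receives every update.
class BroadcastModel {
    private final List<Map<String, String>> localCopies = new ArrayList<>();

    public BroadcastModel(int parallelism) {
        for (int i = 0; i < parallelism; i++) localCopies.add(new HashMap<>());
    }

    // broadcast(): the update goes to all instances, not to one shard.
    public void broadcastUpdate(String key, String value) {
        for (Map<String, String> copy : localCopies) copy.put(key, value);
    }

    // Any instance can answer any lookup from its own local copy.
    public String lookupOn(int instance, String key) {
        return localCopies.get(instance).get(key);
    }

    public static void main(String[] args) {
        BroadcastModel b = new BroadcastModel(3);
        b.broadcastUpdate("road-1", "blocked");
        System.out.println(b.lookupOn(2, "road-1")); // blocked
    }
}
```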
Hi Team,
Can someone help me here? Appreciate any response!
Best Regards
Varaga
On Mon, Sep 5, 2016 at 4:51 PM, Chakravarthy varaga <chakravarth...@gmail.com> wrote:
Hi Team,
I'm working on a Flink Streaming application. The data is injected
through Kafka connectors. The payload volume is roughly 100K/sec. The event
payload is a string. Let's call this DataStream1.
This application also uses another DataStream, call it DataStream2,
(consumes events off…