> There isn't really a good way yet to monitor the health
> of containerized jobs directly, so probably your best bet is to watch the
> job's metrics from outside the Flink cluster.
>
> --
> Patrick Lucas
>
> On Wed, Mar 29, 2017 at 10:58 AM, Chakravarthy varaga <
> chakravarth...@gmail.com> wrote:
Hi,
Any updates here? I'm sure many would have faced similar issues like
these; any help here is highly appreciated.
Best Regards
CVP
On Tue, Mar 28, 2017 at 5:47 PM, Chakravarthy varaga <
chakravarth...@gmail.com> wrote:
Hi Team,
If the Flink cluster is containerized and managed by a container
orchestrator,
1. the orchestrator allocates resources for each JM, TM, etc. Say the
container (TM) needs to run with 2G RAM; how should this allocation be
honoured by the TM when its JVM starts? I'm thinki
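One plausible way to honour a 2G container budget (a sketch assuming the Flink 1.x flink-conf.yaml keys; the values are illustrative, not taken from this thread) is to set the TM heap below the container limit so that JVM and network-buffer overhead still fit inside it:

```yaml
# flink-conf.yaml — illustrative values
taskmanager.heap.mb: 1536        # JVM heap; leaves headroom under the 2G container limit
taskmanager.numberOfTaskSlots: 1
```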
; flink.jobmanager-1.weave.local, and so on …
>
> with flink Yarn it’s even simpler, but you have to dockerize a Yarn
> cluster.
>
> It works perfectly on bare metal machines and in the cloud (digital-ocean,
> aws,…).
>
>
>
> On 24 Mar 2017 at 08:50, Chakravarthy varaga
Hi,
I request someone to help here.
Best Regards
CVP
On Thu, Mar 23, 2017 at 10:13 PM, Chakravarthy varaga <
chakravarth...@gmail.com> wrote:
I'm looking forward to hearing some updates on this...
Any help here is highly appreciated !!
On Thu, Mar 23, 2017 at 4:20 PM, Chakravarthy varaga <
chakravarth...@gmail.com> wrote:
Hi Team,
We are doing a PoC to deploy Flink cluster on AWS. All runtime
components will be dockerized.
I have a few questions in relation to discovery & security:
1. How does the Job Manager discover task managers? Do they talk over
TCP?
2. If the runtime components TM, JM a
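On question 1: in a standalone cluster the discovery direction is the reverse of what the question assumes — each TaskManager registers itself with the JobManager over TCP (Akka) at a configured address. A sketch, assuming the standard flink-conf.yaml keys (the hostname is illustrative):

```yaml
# flink-conf.yaml on every TaskManager — hostname is illustrative
jobmanager.rpc.address: flink-jobmanager
jobmanager.rpc.port: 6123   # TMs connect to the JM at this port and register
```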
Hi Team,
We are analysing different deployment options for managing Flink Jobs
on AWS EC2 instances.
Basically, the options (resource managers) in front of us are:
-> Standalone cluster
-> On YARN
-> Deploy using Mesos/Marathon
-> Deploy using Kubernetes/Docker
ther degradation when increasing the parallelism
> from 2 to 4, for example (given that you've increased the number of topic
> partitions to at least the maximum parallelism in your topology)?
>
> Cheers,
> Till
>
> On Tue, Jan 10, 2017 at 11:37 AM, Chakravarthy varaga <
Hi Guys,
I understand that you are extremely busy but any pointers here is
highly appreciated. I can proceed forward towards concluding the activity !
Best Regards
CVP
On Mon, Jan 9, 2017 at 11:43 AM, Chakravarthy varaga <
chakravarth...@gmail.com> wrote:
Anything that I could check or collect for you for investigation ?
On Sat, Jan 7, 2017 at 1:35 PM, Chakravarthy varaga <
chakravarth...@gmail.com> wrote:
> Hi Stephen
>
> . Kafka version is 0.9.0.1; the connector is FlinkKafkaConsumer09
> . The flatMap and coFlatMap are connected by
data based on parallelism)?
>
> Stephan
>
>
> On Fri, Jan 6, 2017 at 7:27 PM, Chen Qin wrote:
>
>> Just noticed there are only two partitions per topic. Regardless of how
>> large the parallelism is set, at most two subtasks will get a partition
>> assigned.
>>
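The point above can be sketched in plain Java (a simplified, illustrative model of round-robin partition-to-subtask assignment — not Flink's actual assignment code):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PartitionAssignment {
    // Simplified round-robin assignment of Kafka partitions to source
    // subtasks: with P partitions and parallelism N > P, only P subtasks
    // ever receive a partition; the remaining N - P subtasks stay idle.
    static Map<Integer, List<Integer>> assign(int partitions, int parallelism) {
        Map<Integer, List<Integer>> bySubtask = new LinkedHashMap<>();
        for (int i = 0; i < parallelism; i++) {
            bySubtask.put(i, new ArrayList<>());
        }
        for (int p = 0; p < partitions; p++) {
            bySubtask.get(p % parallelism).add(p);
        }
        return bySubtask;
    }

    public static void main(String[] args) {
        // 2 partitions, parallelism 4: subtasks 2 and 3 get nothing.
        System.out.println(assign(2, 4)); // → {0=[0], 1=[1], 2=[], 3=[]}
    }
}
```

So raising the job parallelism past the topic's partition count only adds idle source subtasks; the partition count has to be increased as well.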
Hi All,
Any updates on this?
Best Regards
CVP
On Thu, Jan 5, 2017 at 1:21 PM, Chakravarthy varaga <
chakravarth...@gmail.com> wrote:
Hi All,
I have a job as attached.
I have a 16-core blade running RHEL 7. The taskmanager default number of
slots is set to 1. The sources are Kafka streams, and each of the 2
source topics has 2 partitions.
What I notice is that when I deploy a job to run with #parallelism=2, the
total pro
r robustness improvements [1].
> You might want to give 1.1.4 a try as well.
>
> Best, Fabian
>
> [1] http://flink.apache.org/news/2016/12/21/release-1.1.4.html
>
> 2017-01-04 16:51 GMT+01:00 Chakravarthy varaga :
>
>> Hi Stephan, All,
>>
>> I just got a
Tue, Oct 4, 2016 at 6:20 PM, Chakravarthy varaga <
chakravarth...@gmail.com> wrote:
> Thanks for your prompt response Stephan.
>
> I'd wait for Flink 1.1.3 !!!
>
> Best Regards
> Varaga
>
> On Tue, Oct 4, 2016 at 5:36 PM, Stephan Ewen wrote:
>
>
> If you want to test it today, you would need to manually build the
> release-1.1 branch.
>
> Best,
> Stephan
>
>
> On Tue, Oct 4, 2016 at 5:46 PM, Chakravarthy varaga <
> chakravarth...@gmail.com> wrote:
>
>> Hi Gordon,
>>
>> Do I need t
mmunity is discussing to release soon.
>
> Will definitely be helpful if you can provide feedback afterwards!
>
> Best Regards,
> Gordon
>
>
> On October 3, 2016 at 9:40:14 PM, Chakravarthy varaga (
> chakravarth...@gmail.com) wrote:
>
> Hi Stephan,
>
>
Hi Stephan,
Is the async Kafka offset commit released in 1.1.3?
Varaga
On Wed, Sep 28, 2016 at 9:49 AM, Chakravarthy varaga <
chakravarth...@gmail.com> wrote:
> Hi Stephan,
>
> That should be great. Let me know once the fix is done and the
> snapshot version to u
preparing a fix for 1.2, possibly going into 1.1.3 as well.
> Could you try out a snapshot version to see if that fixes your problem?
>
> Greetings,
> Stephan
>
>
>
> On Tue, Sep 27, 2016 at 2:24 PM, Chakravarthy varaga <
> chakravarth...@gmail.com> wrote:
t your Kafka/ZooKeeper setup. One Kafka Input works well,
> the other does not. Do they go against different sets of brokers, or
> different ZooKeepers? Is the metadata for one input bad?
> - In the next Flink version, you may opt-out of committing offsets to
> Kafka/ZooKeeper
Hi Aljoscha & Fabian,
I have a stream application that has 2 stream source as below.
KeyedStream ks1 = ds1.keyBy("*");
KeyedStream, String> ks2 = ds2.flatMap(split T into k-v pairs).keyBy(0);
ks1.connect(ks2).flatMap(X);
//X is a CoFlatMapFunction that inserts and re
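The pattern X implements can be sketched in plain Java (illustrative only: a HashMap stands in for Flink's keyed ValueState, and all names here are assumptions, not the actual job code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch of the CoFlatMapFunction pattern: events from one stream
// maintain a key-value state; events from the other are matched against it.
public class MatcherSketch {
    private final Map<String, String> state = new HashMap<>();

    // Mirrors flatMap2 on ks2: insert or remove an entry in the state.
    public void onControl(String key, String value, boolean remove) {
        if (remove) {
            state.remove(key);
        } else {
            state.put(key, value);
        }
    }

    // Mirrors flatMap1 on ks1: match an incoming event against the state.
    public Optional<String> onEvent(String key) {
        return Optional.ofNullable(state.get(key));
    }

    public static void main(String[] args) {
        MatcherSketch m = new MatcherSketch();
        m.onControl("k1", "v1", false);
        System.out.println(m.onEvent("k1").isPresent()); // prints true
        m.onControl("k1", null, true);
        System.out.println(m.onEvent("k1").isPresent()); // prints false
    }
}
```

In the real job, the HashMap would be a keyed ValueState so that Flink checkpoints it and partitions it by key across parallel operator instances.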
Hi Team,
Will you be able to guide me on this? Is this a known issue with
checkpointing ?
CVP
On 22 Sep 2016 15:57, "Chakravarthy varaga" wrote:
> PFA, Flink_checkpoint_time.png in relation to this issue.
>
> On Thu, Sep 22, 2016 at 3:38 PM, Chakravarthy var
PFA, Flink_checkpoint_time.png in relation to this issue.
On Thu, Sep 22, 2016 at 3:38 PM, Chakravarthy varaga <
chakravarth...@gmail.com> wrote:
> Hi Aljoscha & Fabian,
>
> Finally I got this working. Thanks for your help. In terms of persisting
> the state (for S2), I t
> TypeInformation.of(BlockedRoadInfo.class), null);
> blockedRoads = getRuntimeContext().getState(blockedStateDesc);
>
> };
>
> }
>
> Cheers,
> Aljoscha
>
>
> On Mon, 12 Sep 2016 at 16:24 Chakravarthy varaga
> wrote:
>
>> Hi Fabian,
>>
&
a CoFlatMapFunction that inserts and removes elements from ks2 into a
> key-value state member. Elements from ks1 are matched against that state.
>
> Best, Fabian
>
> 2016-09-08 20:24 GMT+02:00 Chakravarthy varaga :
>
>> Hi Fabian,
>>
>> First of all thanks for a
full local copy in each operator.
>
> Let me add one more thing regarding the upcoming rescaling feature. If
> this is interesting for you, rescaling will also work (maybe not in the
> first version) for broadcasted state, i.e. state which is the same on all
> parallel operator inst
;
> If using the key-value state is possible, I'd go for that.
>
> Best, Fabian
>
> [1] https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/state.html
> [2] https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/state_backe
collection, will
this be replicated across the cluster within this job1?
On Wed, Sep 7, 2016 at 6:33 PM, Fabian Hueske wrote:
> Is writing DataStream2 to a Kafka topic and reading it from the other job
> an option?
>
> 2016-09-07 19:07 GMT+02:00 Chakravarthy varaga :
>
>> Hi Fab
> Finally, you can use an external system like a KeyValue Store or In-Memory
> store like Apache Ignite to hold your distributed collection.
>
> Best, Fabian
>
> 2016-09-07 17:49 GMT+02:00 Chakravarthy varaga :
>
Hi Team,
Can someone help me here? Appreciate any response !
Best Regards
Varaga
On Mon, Sep 5, 2016 at 4:51 PM, Chakravarthy varaga <
chakravarth...@gmail.com> wrote:
Hi Team,
I'm working on a Flink Streaming application. The data is injected
through Kafka connectors. The payload volume is roughly 100K/sec. The event
payload is a string. Let's call this DataStream1.
This application also uses another DataStream, call it DataStream2
(consumes events off