Hi all,
It turns out that there were other factors (actually HBase) influencing my
performance tests. Hence, more consumers than partitions in Flink was not
the problem. Thanks for the help!
On Wednesday, August 3, 2016 5:42 PM, neo21 zerro wrote:
Hello,
I've tried to increase …
On Wednesday, August 3, 2016 12:58 PM, neo21 zerro wrote:
It's the default, ProcessingTime.
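For context, ProcessingTime is Flink's default time characteristic. A
minimal sketch of what switching a 1.0.x job to EventTime would look like
(class name hypothetical; a real EventTime job would also need timestamp
and watermark assignment, which is omitted here):

    import org.apache.flink.streaming.api.TimeCharacteristic;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class TimeCharacteristicSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
            // ProcessingTime is the default; EventTime makes windows and
            // timers fire on record timestamps instead of wall-clock time.
            env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
        }
    }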
On Wed, Aug 3, 2016 at 12:07 PM, Stephan Ewen wrote:
Hi!
Are you running on ProcessingTime or on EventTime?
Thanks,
Stephan
On Wed, Aug 3, 2016 at 11:57 AM, neo21 zerro wrote:
Hi guys,
Thanks for getting back to me.
So to clarify:
Topology-wise: flink kafka source (does avro …
… ention on HBase.
On Aug 3, 2016, at 4:14 AM, neo21 zerro wrote:
Hello everybody,
I'm using Flink Kafka consumer 0.8.x with kafka 0.8.2 and flink 1.0.3 on YARN.
In kafka I have a topic which has 20 partitions and my flink topology reads
from kafka (source) and writes to hbase (sink).
when:
1. flink source has parallelism set to 40 (20 of the tasks are …
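A Kafka source subtask that gets no partition assigned simply idles, so a
parallelism of 40 against a 20-partition topic leaves 20 idle subtasks. A
minimal sketch of the setup described above, against the Flink 1.0.x /
Kafka 0.8 connector API (topic name, addresses, and group id are
placeholders):

    import java.util.Properties;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class KafkaSourceParallelismSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

            Properties props = new Properties();
            props.setProperty("zookeeper.connect", "zk-host:2181"); // placeholder
            props.setProperty("bootstrap.servers", "broker:9092");  // placeholder
            props.setProperty("group.id", "my-group");              // placeholder

            // With a 20-partition topic, only 20 of these 40 source
            // subtasks are assigned partitions; the other 20 stay idle.
            env.addSource(new FlinkKafkaConsumer08<>(
                        "my-topic", new SimpleStringSchema(), props))
               .setParallelism(40)
               .print();

            env.execute("kafka source parallelism sketch");
        }
    }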
Nevermind, I've figured it out.
I was skipping the tuples that were coming from kafka based on some custom
logic.
That custom logic made sure that the kafka operator did not emit any tuples.
Hence, the missing metrics in the flink ui.
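One way to avoid that effect is to drop records in an explicit filter()
after the source instead of inside it, so the source still reports every
tuple it emits. A self-contained sketch (the keep/drop predicate is a
stand-in for the custom logic described above):

    import org.apache.flink.api.common.functions.FilterFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class FilterAfterSourceSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
            DataStream<String> source = env.fromElements("keep:a", "drop:b");

            // Dropping records in a separate operator keeps the source's
            // emitted-records metrics visible in the web UI.
            source.filter(new FilterFunction<String>() {
                @Override
                public boolean filter(String value) {
                    return value.startsWith("keep:"); // stand-in predicate
                }
            }).print();

            env.execute("filter after source sketch");
        }
    }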
On Thursday, April 14, 2016 1:12 AM, neo21 zerro wrote:
Hello everybody,
I have an elasticsearch sink in my flink topology.
My requirement is to write the data in a partitioned fashion to my sink.
For example, I have a Tuple which contains a user id. I want to group all
events by user id and partition all events for one particular user to the
same Es…
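keyBy on the user id gives exactly this routing: all events with the same
key go to the same parallel sink instance. A minimal sketch (the tuple
shape and sample values are assumptions; print() stands in for the
Elasticsearch sink):

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class KeyedSinkSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

            // Assumed shape: (userId, event payload).
            DataStream<Tuple2<String, String>> events = env.fromElements(
                Tuple2.of("user-1", "click"), Tuple2.of("user-2", "view"));

            // keyBy(0) hash-partitions on the user id, so every event for
            // a given user is handled by the same downstream subtask.
            events.keyBy(0)
                  .print(); // replace with the Elasticsearch sink

            env.execute("keyed sink sketch");
        }
    }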
Hello everybody,
I have deployed the latest Flink version 1.0.1 on Yarn 2.5.0-cdh5.3.0.
When I push the WordCount example shipped with the Flink distribution, I can
see metrics (bytes received) in the Flink UI on the corresponding operator.
However, I used the flink kafka connector and when I r…
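Two things worth ruling out when operator metrics look empty: a source
vertex has no incoming network data, so "bytes received" on it is zero by
design, and operator chaining can merge several operators into a single
vertex in the UI. A sketch of disabling chaining to expose per-operator
numbers (at some throughput cost):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class UnchainedMetricsSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
            // Each operator now becomes its own task, so the web UI shows
            // separate records/bytes counters for every operator.
            env.disableOperatorChaining();
        }
    }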