Ravi, have you looked at the I/O operations per second (IOPS) rate of the disk? You can
monitor the IOPS performance and tune it according to your workload.
This helped us in our project when we hit a wall after tuning pretty much all
the other parameters.
Rohan
From: Ravi
Hi Tzu-Li,
Any update on this? It is consistently reproducible.
Same jar - separate source topic to separate destination topic.
This is a blocker for the Flink upgrade. I tried with 1.7.2 but no luck.
org.apache.flink.streaming.connectors.kafka.FlinkKafka011Exception:
Failed to send data to
It is a blocker for exactly-once support from the Flink Kafka producer.
This issue was reported and closed, but it is still reproducible:
https://issues.apache.org/jira/browse/FLINK-10455
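(Not confirmed for this exact case, but one commonly cited cause of "Failed to send data to Kafka" in exactly-once mode is the producer's transaction timeout exceeding the broker's cap, so in-flight transactions get aborted. The Flink Kafka connector docs suggest aligning the two; the 15-minute value below is only illustrative.)

```properties
# Kafka broker (server.properties): raise the cap on transaction timeouts.
transaction.max.timeout.ms=900000

# Producer properties passed to FlinkKafkaProducer011: must stay at or
# below the broker cap above.
transaction.timeout.ms=900000
```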
On Mon, May 6, 2019 at 10:20 AM Slotterback, Chris <
chris_slotterb...@comcast.com> wrote:
> Hey Flink users,
>
>
>
> Curre
Hi All,
I have the following requirement:
1. I have an Avro/JSON message containing {eventid, usage, starttime, endtime}
2. I am reading this from a Kafka source
3. If a record overlaps an hour boundary, split the record by rounding
off to hourly boundaries
4. My objective is a) read the message b) aggr
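For step 3, here is a minimal standalone sketch (plain Python, not the Flink operator itself) of splitting one record at hour boundaries. The field names come from the message above; prorating usage by elapsed time in each hour is my assumption about the intended semantics.

```python
from datetime import datetime, timedelta

def split_hourly(record):
    """Split a {eventid, usage, starttime, endtime} record at hour
    boundaries, prorating usage by the time spent inside each hour."""
    start, end = record["starttime"], record["endtime"]
    total = (end - start).total_seconds()
    pieces = []
    cursor = start
    while cursor < end:
        # Next hour boundary after the cursor (or the record's end).
        boundary = (cursor.replace(minute=0, second=0, microsecond=0)
                    + timedelta(hours=1))
        piece_end = min(boundary, end)
        frac = (piece_end - cursor).total_seconds() / total
        pieces.append({
            "eventid": record["eventid"],
            "usage": record["usage"] * frac,
            "starttime": cursor,
            "endtime": piece_end,
        })
        cursor = piece_end
    return pieces

rec = {"eventid": "e1", "usage": 60.0,
       "starttime": datetime(2018, 1, 12, 4, 30),
       "endtime": datetime(2018, 1, 12, 5, 30)}
parts = split_hourly(rec)
# Splits into 4:30-5:00 and 5:00-5:30, each carrying half the usage.
```

In a Flink job this would sit in a flatMap before the keyed window, so each sub-record falls entirely inside one hourly window.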
> Gary
>
> [1] https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/event_
> timestamp_extractors.html
>
> On Fri, Jan 12, 2018 at 5:30 AM, Rohan Thimmappa <
> rohan.thimma...@gmail.com> wrote:
>
>> Hi All,
>>
>>
>> I have following
om 4:50 - 5:00 to be included in the 4:00 - 5:00 window? What if the
> event had
> an end time of 5:31? Do you then want to ignore the event for the 4:00 -
> 5:00
> window?
>
> Best,
>
> Gary
>
> On Fri, Jan 12, 2018 at 8:45 PM, Rohan Thimmappa <
> rohan.thimma...
ceive any
> events
> at all which could advance the watermark?
>
> I am asking because if you are receiving events for other keys/ids from
> your
> KafkaSource after 5:40, the watermark will still be advanced and fire the
> tumbling window.
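> (The point above can be illustrated with a toy simulation; this is plain
> Python, not the Flink API, and the 5-minute out-of-orderness bound and
> 1-hour window are illustrative assumptions. A watermark derived from the
> maximum event time seen advances on events from *any* key, so a window
> for a key that went quiet can still fire.)

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)
MAX_OUT_OF_ORDER = timedelta(minutes=5)

def window_start(ts):
    return ts.replace(minute=0, second=0, microsecond=0)

def run(events):
    """events: list of (key, event_time), in arrival order.
    Returns (key, window_start) pairs of fired tumbling windows, using
    watermark = max event time seen - allowed out-of-orderness."""
    max_ts = None
    open_windows = {}   # (key, window_start) -> event count
    fired = []
    for key, ts in events:
        w = (key, window_start(ts))
        open_windows[w] = open_windows.get(w, 0) + 1
        max_ts = ts if max_ts is None or ts > max_ts else max_ts
        watermark = max_ts - MAX_OUT_OF_ORDER
        # Fire every window whose end has passed the watermark, no
        # matter which key the watermark-advancing event belonged to.
        for (k, ws) in list(open_windows):
            if ws + WINDOW <= watermark:
                fired.append((k, ws))
                del open_windows[(k, ws)]
    return fired

events = [
    ("id-1", datetime(2018, 1, 12, 4, 50)),  # id-1 goes quiet after 4:50
    ("id-2", datetime(2018, 1, 12, 5, 40)),  # a different key keeps flowing
    ("id-2", datetime(2018, 1, 12, 6, 10)),
]
fired = run(events)
# id-1's 4:00-5:00 window fires even though id-1 sent nothing after 4:50.
```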
>
> Best,
> Gary
>
> On
Hi All,
I have a table containing usage, which is a counter data type. Every time I get
usage for an id, I would like to use the counter data type to increment it.
https://docs.datastax.com/en/cql/3.1/cql/cql_using/use_counter_t.html
Does it support the POJO approach of the Cassandra sink, or do I have to use the SQL appro
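(For reference: Cassandra counter columns can only be modified through an UPDATE that increments them, never through a plain INSERT, so whichever sink mode is used must ultimately issue a statement like the one below. The table and column names here are assumptions taken from the description above.)

```
-- Counter columns require an UPDATE with an increment; INSERT is not allowed.
UPDATE usage_by_id
   SET usage = usage + ?   -- bind the usage delta for this event
 WHERE id = ?;
```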
--
Thanks
Rohan