Thanks Till, your reply answered my questions perfectly.
Regards,
Eric
On Fri, Oct 28, 2016 at 11:00 AM, Till Rohrmann wrote:
> Hi Eric,
>
> concerning your first question. I think that
> AdvertisingTopologyFlinkStateHighKeyCard
> models a different scenario where one tries to count the number
Hi All,
I tried testing fault tolerance of my running Flink application in a
different way (not sure if it is an appropriate way). I ran the
application on YARN and, after it had completed a few checkpoints, killed
the YARN application using:
yarn application -kill application_1476277440022_
Furthe
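For context, a minimal sketch of the job configuration such a kill test
assumes: checkpointing enabled and a restart strategy set. The interval
and retry values below are made up, and this is not the poster's actual
job.

import org.apache.flink.api.common.restartstrategy.RestartStrategies
import org.apache.flink.streaming.api.CheckpointingMode
import org.apache.flink.streaming.api.scala._

object CheckpointedJob {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Checkpoint every 10 s so there is recent state to restore from.
    env.enableCheckpointing(10000L, CheckpointingMode.EXACTLY_ONCE)

    // Restart up to 3 times, 10 s apart, when a failure occurs while
    // the cluster itself is still alive.
    env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, 10000L))

    env.fromElements(1, 2, 3).map(_ * 2).print()
    env.execute("checkpointed-job")
  }
}

Note that yarn application -kill tears down the whole Flink cluster, not
just one task, so the restart strategy alone cannot bring the job back;
that scenario requires resubmitting the job (e.g. from a savepoint).
Killing a single TaskManager container instead exercises the automatic
restore from the latest checkpoint.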
Hi All,
I'm trying to plot Flink application metrics using Grafana backed by
InfluxDB. I need to plot/monitor 'numRecordsIn' & 'numRecordsOut' for
each operator. I'm finding it hard to build the InfluxDB query in Grafana
that would produce this plot.
I am able to plot the
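Not a tested answer, but the query usually ends up shaped like the sketch
below. It assumes the metrics land in InfluxDB as a measurement named
numRecordsIn with a field "value" and an "operator" tag; those names
depend entirely on your reporter configuration. $timeFilter and $interval
are Grafana's built-in variables, and the derivative turns the
ever-growing counter into a rate:

SELECT non_negative_derivative(max("value"), 1s)
FROM "numRecordsIn"
WHERE $timeFilter
GROUP BY time($interval), "operator" fill(null)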
Hi Aljoscha,
I am using a custom trigger with the GlobalWindows window assigner. Do I
still need to override the clear() method and delete the
ProcessingTimeTimer using
triggerContext.deleteProcessingTimeTimer(prevTime)?
Regards,
Anchit
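For what it's worth: yes, with GlobalWindows a pending processing-time
timer should be deleted in clear(), because a GlobalWindow never ends on
its own and a stale timer can fire after the window was purged. A rough
sketch of the pattern (a made-up timeout trigger, not the actual code):

import org.apache.flink.api.common.state.ValueStateDescriptor
import org.apache.flink.streaming.api.windowing.triggers.{Trigger, TriggerResult}
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow

class TimeoutTrigger(timeoutMs: Long) extends Trigger[Any, GlobalWindow] {

  // Per-key/window state remembering the pending timer's timestamp.
  private val timerDesc = new ValueStateDescriptor[java.lang.Long](
    "pending-timer", classOf[java.lang.Long], null)

  override def onElement(element: Any, timestamp: Long, window: GlobalWindow,
                         ctx: Trigger.TriggerContext): TriggerResult = {
    val state = ctx.getPartitionedState(timerDesc)
    if (state.value() == null) {
      val when = System.currentTimeMillis() + timeoutMs
      ctx.registerProcessingTimeTimer(when)
      state.update(when)
    }
    TriggerResult.CONTINUE
  }

  override def onProcessingTime(time: Long, window: GlobalWindow,
                                ctx: Trigger.TriggerContext): TriggerResult = {
    ctx.getPartitionedState(timerDesc).clear()
    TriggerResult.FIRE_AND_PURGE
  }

  override def onEventTime(time: Long, window: GlobalWindow,
                           ctx: Trigger.TriggerContext): TriggerResult =
    TriggerResult.CONTINUE

  override def clear(window: GlobalWindow, ctx: Trigger.TriggerContext): Unit = {
    val state = ctx.getPartitionedState(timerDesc)
    val prevTime = state.value()
    if (prevTime != null) {
      // Remove the dangling timer so it cannot fire for a purged window.
      ctx.deleteProcessingTimeTimer(prevTime)
      state.clear()
    }
  }
}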
Hello,
I am using Flink to write data to Elasticsearch.
Flink version : 1.1.3
Elasticsearch version: 2.4.1
But I am getting the following error:
10/28/2016 18:58:56 Job execution switched to status FAILING.
java.lang.NoSuchMethodError:
org.elasticsearch.common.settings.Settings.settingsBui
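A NoSuchMethodError on Settings.settingsBuilder usually means the
elasticsearch jar on the classpath does not match the one the connector
was compiled against. A build sketch (sbt syntax, versions taken from the
message above) of the pairing to check, flink-connector-elasticsearch2
being the ES 2.x connector:

libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-connector-elasticsearch2" % "1.1.3",
  "org.elasticsearch" % "elasticsearch" % "2.4.1"
)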
Hello Flink community,
I am running into an issue with a custom FileInputFormat class and would
appreciate your help.
My goal is to read all files from a directory as paths:
val env : ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
var source : DataSet[String] = env.rea
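In case it helps, a minimal sketch of one way to do this (class and field
names are made up, and the usage line mirrors the snippet above): a
FileInputFormat that emits one record per file, containing the file's
path instead of its contents.

import org.apache.flink.api.common.io.FileInputFormat
import org.apache.flink.core.fs.FileInputSplit

class FilePathInputFormat extends FileInputFormat[String] {
  private var path: String = _
  private var emitted = false

  override def open(split: FileInputSplit): Unit = {
    super.open(split)
    path = split.getPath.toString
    // If a file is divided into several splits, only the split starting
    // at offset 0 emits the path, so each file is reported exactly once.
    emitted = split.getStart > 0
  }

  override def reachedEnd(): Boolean = emitted

  override def nextRecord(reuse: String): String = {
    emitted = true
    path
  }
}

// Usage:
// val source: DataSet[String] = env.readFile(new FilePathInputFormat, "file:///data/dir")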
Hi Eric,
concerning your first question. I think that
AdvertisingTopologyFlinkStateHighKeyCard models a different scenario where
one tries to count the number of ads per campaign for a large number of
campaigns. In this scenario, the input data already contains the campaign
id for each ad. I think th
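A rough sketch of that scenario (made-up names, not the benchmark's
actual code): since every event already carries its campaign id, a plain
keyed count is enough, and partitioning by campaign id is what makes the
high key cardinality workable.

import org.apache.flink.streaming.api.scala._

case class AdEvent(campaignId: String, adId: String)

object CampaignCounts {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val ads: DataStream[AdEvent] = env.fromElements(
      AdEvent("campaign-1", "ad-a"),
      AdEvent("campaign-1", "ad-b"),
      AdEvent("campaign-2", "ad-c"))

    // One running count per campaign, held in keyed state.
    val counts: DataStream[(String, Long)] = ads
      .map(a => (a.campaignId, 1L))
      .keyBy(0)
      .sum(1)

    counts.print()
    env.execute("campaign-counts")
  }
}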
Hi Fabian,
We have reworked our execution to remove the groupReduce step, replacing
it with a mapPartition, and we're now seeing data flow through more
immediately.
Thanks for your quick reply, it was very useful.
Regards,
Paul
On 26 October 2016 at 19:57, Fabian Hueske wrote:
> Hi Paul,
>
> Fl
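For readers hitting the same thing: a groupReduce can only emit after the
whole group has been materialized and sorted, while mapPartition consumes
its partition as an iterator and may emit as it goes, which is why records
show up downstream sooner. A minimal sketch of the replacement pattern
(not the actual job):

import org.apache.flink.api.scala._
import org.apache.flink.util.Collector

object MapPartitionExample {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val data: DataSet[Int] = env.fromElements(1, 2, 3, 4, 5)

    // Emit each record as soon as it is read from the partition's
    // iterator -- no blocking sort or grouping step first.
    val doubled: DataSet[Int] = data.mapPartition {
      (records: Iterator[Int], out: Collector[Int]) =>
        records.foreach(r => out.collect(r * 2))
    }

    doubled.print() // print() triggers execution for batch jobs
  }
}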
The Spark Cassandra connector does it! But I don't think it really
implements a custom partitioner; I think it just leverages the token-aware
policy and does batch writes within a partition by default, though you can
also batch across partitions that share the same replica.
On Thu, Oct 27, 2016 at 8:41 AM, Shannon Care