I will restrict the maximum value to 1000 so that no memory mishap happens,
and I will tune this maximum based on the memory available to the JobManager
and to my application.
I will also try to explore other solutions in Flink.
Thanks & Regards
Gaurav Luthra
Mob:- +91-9901945206
On Thu, Jan 17, 2019 at 9:40 AM
values of myKey.
Is there any alternative solution? I am looking for a way to achieve the
above functionality and maintain approximately 100 thousand counter metrics
without keeping their references in a map (or any other data
structure).
Thanks & Regards
Gaurav Luthra
Mob:- +91-9901945206
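The bounded-registry idea discussed above (create counters on demand, but cap
how many may exist, e.g. at 1000, to protect memory) can be sketched in plain
Java. This is an illustrative stand-in only, not Flink's metric API; the class
and method names here are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch (NOT Flink API): a registry that creates a counter per
// key lazily, so no per-key references need to be kept by hand, with a
// best-effort cap on how many counters may exist.
public class BoundedCounterRegistry {
    private final int maxCounters;
    private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    public BoundedCounterRegistry(int maxCounters) {
        this.maxCounters = maxCounters;
    }

    /** Increments the counter for the key, creating it lazily; refuses to
     *  create new counters once the configured cap is reached. */
    public boolean inc(String key) {
        LongAdder c = counters.get(key);
        if (c == null) {
            if (counters.size() >= maxCounters) {
                return false; // cap reached: drop updates for unseen keys
            }
            c = counters.computeIfAbsent(key, k -> new LongAdder());
        }
        c.increment();
        return true;
    }

    public long get(String key) {
        LongAdder c = counters.get(key);
        return c == null ? 0L : c.sum();
    }

    public int size() {
        return counters.size();
    }
}
```

The size check and the insert are not atomic, so under heavy concurrency the
cap is best-effort; that is usually acceptable for a memory guard like this.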
ric by storing a reference to it in your function.
> Why do you want to avoid the member variable?
>
> On 08.01.2019 17:24, Gaurav Luthra wrote:
>
> Hi Chesnay,
>
> If removing the metrics is not possible from Flink GUI, while the job is
> running.
> Then kindly tell me
Hi Chesnay,
If removing metrics from the Flink GUI is not possible while the job is
running, then kindly tell me how to update a metric counter.
Explanation:
Suppose I created a metric Counter with key "chesnay" and incremented the
counter to 20, using the code mentioned below.
getRuntimeContext().get
ated using myMetricGroup shall be
removed from Flink GUI.
Thanks & Regards
Gaurav Luthra
Mob:- +91-9901945206
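For reference, the usual pattern for updating a counter is to hold on to the
object returned when the metric is registered and call inc() on it; asking the
group for the same name again yields the same live metric rather than a fresh
one. Below is a minimal plain-Java stand-in illustrating that behavior; it is
a sketch, not Flink's org.apache.flink.metrics API, and the names are
hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for a metric group: counter(name) returns the SAME
// Counter instance for a given name, mirroring how a registered metric keeps
// its value; "updating" a metric means incrementing that held reference.
public class MiniMetricGroup {
    public static class Counter {
        private long count;
        public void inc() { count++; }
        public void inc(long n) { count += n; }
        public long getCount() { return count; }
    }

    private final Map<String, Counter> metrics = new HashMap<>();

    /** Registers a counter on first call; returns the existing one after. */
    public Counter counter(String name) {
        return metrics.computeIfAbsent(name, n -> new Counter());
    }
}
```

Usage sketch: register once, then increment the reference you kept.

```java
MiniMetricGroup group = new MiniMetricGroup();
MiniMetricGroup.Counter c = group.counter("chesnay");
c.inc(20);                       // counter now at 20
group.counter("chesnay").inc();  // same instance, now 21
```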
erstanding the reason for this exception. And kindly tell us how we can
get different transactional IDs for the two jobs.
Thanks & Regards
Gaurav Luthra
Mob:- +91-9901945206
AggregateFunction in the aggregate() method.
So, kindly guide me: how can I create custom metrics in my code?
Note: As we know, we cannot use RichAggregateFunction with the aggregate()
method.
Thanks & Regards
Gaurav Luthra
Mob:- +91-9901945206
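Since the AggregateFunction passed to aggregate() cannot be rich, one
commonly suggested pattern is to keep it pure and observe records in a thin
wrapper, then report the observed count as a metric from a rich function
elsewhere in the pipeline. The sketch below uses minimal stand-in interfaces,
not Flink's classes, and all names are hypothetical:

```java
import java.util.concurrent.atomic.LongAdder;

// Stand-in for the AggregateFunction contract (merge() omitted for brevity).
interface SimpleAggregate<IN, ACC, OUT> {
    ACC createAccumulator();
    ACC add(IN value, ACC acc);
    OUT getResult(ACC acc);
}

// Wraps any aggregate and counts how many records pass through add(),
// exposing the count so it can be reported as a metric elsewhere.
class CountingAggregate<IN, ACC, OUT> implements SimpleAggregate<IN, ACC, OUT> {
    private final SimpleAggregate<IN, ACC, OUT> inner;
    private final LongAdder recordsSeen = new LongAdder();

    CountingAggregate(SimpleAggregate<IN, ACC, OUT> inner) {
        this.inner = inner;
    }

    public ACC createAccumulator() { return inner.createAccumulator(); }

    public ACC add(IN value, ACC acc) {
        recordsSeen.increment();       // observe the record, stay side-effect light
        return inner.add(value, acc);  // delegate the actual aggregation
    }

    public OUT getResult(ACC acc) { return inner.getResult(acc); }

    public long recordsSeen() { return recordsSeen.sum(); }
}
```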
Hi Chesnay,
My end user will be aware of the fields of the "input records"
(GenericRecord). In the configuration my end user will only tell me the names
and number of the fields; based on these fields of the GenericRecord I will
have to partition the DataStream and make a keyed stream.
Currently, I have impl
There is a data stream of some records; let's call them "input records".
Now, I want to partition this data stream by using keyBy(). I want the
partitioning to be based on one or more fields of the "input record", but the
number and types of the fields are not fixed.
So, kindly tell me how I should achieve this partitioning.
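One way to key on a configurable set of fields is a key selector that builds a
composite key string from the configured field names. A plain-Java sketch of
that idea follows, using a Map in place of Avro's GenericRecord (which exposes
a similar get-by-name accessor); the class name and separator are hypothetical:

```java
import java.util.List;
import java.util.Map;
import java.util.StringJoiner;

// Builds a deterministic composite key from the configured field names, so
// the same selector logic works for any number and choice of key fields.
public class CompositeKeyBuilder {
    private final List<String> keyFields; // field names from user configuration

    public CompositeKeyBuilder(List<String> keyFields) {
        this.keyFields = keyFields;
    }

    public String buildKey(Map<String, ?> record) {
        StringJoiner key = new StringJoiner("|");
        for (String field : keyFields) {
            Object value = record.get(field);
            if (value == null) {
                throw new IllegalArgumentException("Missing key field: " + field);
            }
            key.add(value.toString());
        }
        return key.toString();
    }
}
```

Because the key is an ordinary String, equal field values always map to the
same partition regardless of which fields the user configured.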
Hi Fabian,
Thanks for explaining in detail. But, as you also mentioned, there are known
issues with points 1) and 2), so I am continuing with point 3).
Thanks & Regards
Gaurav Luthra
Mob:- +91-9901945206
On Mon, Oct 1, 2018 at 3:11 PM Fabian Hueske wrote:
> Hi,
>
> There are basically three
()
method.
So, I am looking for input from anyone who has tried implementing aggregation
using a ProcessFunction and the process() method, as it is a much needed
capability in Flink.
Thanks and Regards,
Gaurav Luthra
Mob:- +91-9901945206
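The process-function approach mentioned above amounts to keeping a per-key
accumulator that is updated on every element. As a rough illustration, here is
a plain-Java stand-in (not Flink's ProcessFunction API; in Flink the
accumulator would live in keyed ValueState and results would typically be
emitted from a timer):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

// Per-key incremental aggregation: what a keyed process function would keep
// in state is modeled here as a map from key to accumulator. Assumes the
// accumulator type is immutable (e.g. Long), so sharing `initial` is safe.
public class KeyedAggregator<K, IN, ACC> {
    private final ACC initial;
    private final BiFunction<ACC, IN, ACC> add;
    private final Map<K, ACC> state = new HashMap<>();

    public KeyedAggregator(ACC initial, BiFunction<ACC, IN, ACC> add) {
        this.initial = initial;
        this.add = add;
    }

    /** Called once per element, like processElement(): fold it into state. */
    public void processElement(K key, IN value) {
        ACC acc = state.getOrDefault(key, initial);
        state.put(key, add.apply(acc, value));
    }

    /** Current aggregate for a key; in Flink this would be emitted on a timer. */
    public ACC result(K key) {
        return state.getOrDefault(key, initial);
    }
}
```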
On Sun, Sep 30, 2018 at 5:12 AM Ken Krugler
wrote:
> Hi Gau
ot my point.
Thanks & Regards
Gaurav Luthra
On Fri, Sep 28, 2018 at 4:22 PM Chesnay Schepler wrote:
> Please see: https://issues.apache.org/jira/browse/FLINK-10250
>
> On 28.09.2018 11:27, vino yang wrote:
>
> Hi Gaurav,
>
> Yes, you are right. It is really not allow
ReturnType(
        function, input.getType(), null, false);
    return aggregate(function, accumulatorType, resultType);
}
Kindly check the above snippet of Flink's aggregate() method, which is
applied on a windowed stream.
Thanks & Regards
Gaurav Luthra
Mob:- +91-9901945206
On Fri, Sep 28, 2018 at 1:40 PM vino y
to implement it. So, kindly share
your feedback on this, as I need to implement it.
Thanks & Regards
Gaurav Luthra
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/