Got it. Thanks.
On Mon, Jan 7, 2019 at 5:32 PM Dominik Wosiński wrote:
> Hey,
> AFAIK, Flink currently supports dynamic properties only on YARN, not in
> standalone mode.
> If you are using YARN it should indeed be possible to set such
> configuration. If not, then I am afraid it is not possible.
Hi Chesnay,
I do not want to store the metric counter in a reference variable, because I
want to create a metric counter for every key of the keyed stream.
There can be n keys, and I do not want to hold n references.
On Tue, 8 Jan 2019, 11:01 PM Chesnay Schepler wrote:
> What you're trying to do ...
Got it, thanks.
Hi Bruno,
there are multiple reasons why one of the subtasks can take longer to
checkpoint. It looks as if there is not much data skew, since the state
sizes are relatively equal. It also looks as if the individual tasks all
start checkpointing at the same time, which indicates that the ...
What you're trying to do is not possible. Even if you close the group
/it still exists/, and is returned by subsequent calls to
addGroup("mygroup").
However, since it is closed, all registration calls will be ignored, which
is why the value isn't updating.
You can only update a metric by storing a reference to it.
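For the per-key case raised earlier in this thread, a common pattern is to
register each counter lazily the first time its key is seen and cache the
reference, so registration happens only once per key. A minimal sketch
(class name, group name, and String event type are illustrative, not from
the thread):

import java.util.HashMap;
import java.util.Map;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class PerKeyCounterFunction
        extends ProcessWindowFunction<String, String, String, TimeWindow> {

    // One registered Counter per key; transient because metric objects
    // are not serializable state.
    private transient Map<String, Counter> counters;

    @Override
    public void open(Configuration parameters) {
        counters = new HashMap<>();
    }

    @Override
    public void process(String key, Context ctx, Iterable<String> elements,
                        Collector<String> out) {
        // Register the counter on first sight of the key, reuse it afterwards.
        Counter counter = counters.computeIfAbsent(key,
                k -> getRuntimeContext().getMetricGroup()
                        .addGroup("myGroup")
                        .counter(k));
        for (String element : elements) {
            counter.inc();
            out.collect(element);
        }
    }
}

Note that this still keeps one reference per key (per subtask); as far as I
know there is no way to update a registered metric without holding some
reference to it.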
Hi Chesnay,
If removing metrics from the Flink GUI is not possible while the job is
running, then kindly tell me how to update a metric counter.
Explanation:
Suppose I created a metric counter with the key "chesnay" and incremented
the counter to 20, using the code mentioned below.
getRuntimeContext().get...
Metrics for a given job will be available in the GUI until the job has
finished.
On 08.01.2019 17:08, Gaurav Luthra wrote:
Hi,
I am using ProcessWindowFunction, and in the process() function I am
adding a user-scoped group as mentioned below.
MetricGroup myMetricGroup =
getRuntimeContext().getMetricGroup().addGroup("myGroup");
Hi,
I am using ProcessWindowFunction, and in the process() function I am adding
a user-scoped group as mentioned below.
MetricGroup myMetricGroup =
getRuntimeContext().getMetricGroup().addGroup("myGroup");
Now, I am creating counter metrics using myMetricGroup, and I am able to
see these counters in the Flink GUI ...
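Creating and incrementing a counter from such a group typically looks like
the lines below (the counter name "myCounter" is illustrative, not from the
thread):

import org.apache.flink.metrics.Counter;

Counter myCounter = myMetricGroup.counter("myCounter"); // hypothetical name
myCounter.inc(); // the counter then shows up under the user scope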
Sure, I will do that.
On Tue, Jan 8, 2019 at 7:25 PM Hequn Cheng wrote:
> Hi Puneet,
>
> Can you explain it in more detail? Do you mean the job is finished before
> you call ctx.timerService()?
> Maybe you have to let your source run for a longer time.
>
> It's better to show us the whole pipeline ...
Hi Wenrui,
the exception now occurs while finishing the connection creation. I'm not
sure whether this is so different. Could it be that your network is
overloaded or not very reliable? Have you tried running your Flink job
outside of AthenaX?
Cheers,
Till
On Tue, Jan 8, 2019 at 2:50 PM Wenrui M...
Hi Puneet,
Can you explain it in more detail? Do you mean the job is finished before
you call ctx.timerService()?
Maybe you have to let your source run for a longer time.
It's better to show us the whole pipeline of your job. For example, write
some sample code (or provide a git link) that can reproduce ...
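For reference, reaching the timer service from a keyed function usually
looks like the sketch below (class name and the 10-second delay are
illustrative); the source has to keep running long enough for the timer to
fire:

import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class TimerExample extends KeyedProcessFunction<String, String, String> {

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) {
        // Register a processing-time timer 10 seconds from now.
        long fireAt = ctx.timerService().currentProcessingTime() + 10_000L;
        ctx.timerService().registerProcessingTimeTimer(fireAt);
        out.collect(value);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) {
        // Fires only if the job is still running at this point in time.
        out.collect("timer fired at " + timestamp);
    }
}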
Hi,
A user-defined table sink that prints is helpful. I think a user-defined
function (UDF) that prints is another workaround.
Hope this helps.
Best, Hequn
On Tue, Jan 8, 2019 at 1:45 PM yinhua.dai wrote:
> In our case, we wrote a console table sink which prints everything to the
> console, and use "insert into" to write ...
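A minimal sketch of such a print UDF (class and function names are
illustrative, not from the thread) could look like this:

import org.apache.flink.table.functions.ScalarFunction;

public class PrintUdf extends ScalarFunction {
    public String eval(String value) {
        System.out.println(value); // side effect: write the field to stdout
        return value;              // pass the value through unchanged
    }
}

// Registered e.g. via tableEnv.registerFunction("printUdf", new PrintUdf())
// and then used as: SELECT printUdf(col) FROM myTable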
Hi Till,
Thanks for your reply. Our cluster is a YARN cluster. I found that if we
decrease the total parallelism, the timeout issue can be avoided. But we do
need that number of TaskManagers to process the data. In addition, once I
increased the netty server threads to 128, the error changed to the
following ...
Hi,
We are using Flink 1.6.1 at the moment, and we have a streaming job
configured to create a checkpoint every 10 seconds. Looking at the
checkpointing times in the UI, we can see that one subtask is much slower
creating the checkpoint, at least in its "End to End Duration", and this
seems caused by a lo...
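For context, the checkpoint interval described above is typically configured
like this (a minimal sketch; job details omitted):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(10_000L); // one checkpoint every 10 seconds
        // ... sources, operators, sinks ...
        // env.execute("job");
    }
}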
Currently, this functionality is hard-coded in the aggregation
translation, namely in
`org.apache.flink.table.runtime.aggregate.AggregateUtil#transformToAggregateFunctions`
[1].
Timo
[1]
https://github.com/apache/flink/blob/master/flink-libraries/flink-table/src/main/scala/org/apache/flink/t
Thanks Timo for the suggested solution. We will go with the idea of an
artificial key for our use case.
Gagan
On Mon, Jan 7, 2019 at 10:21 PM Timo Walther wrote:
> Hi Gagan,
>
> a typical solution to such a problem is to introduce an artificial key
> (enrichment id + some additional suffix); you can then keyBy ...
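A minimal sketch of that artificial-key idea (the tuple layout and bucket
count are assumptions, not from the thread): spread a hot enrichment id
across several keys by appending a small suffix derived from the record.

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;

public class ArtificialKeySelector
        implements KeySelector<Tuple2<String, Long>, String> {

    private static final int SUFFIX_BUCKETS = 8; // hypothetical fan-out

    @Override
    public String getKey(Tuple2<String, Long> record) {
        // f0 = enrichment id, f1 = an attribute used to derive the suffix
        int suffix = Math.floorMod(record.f1.hashCode(), SUFFIX_BUCKETS);
        return record.f0 + "#" + suffix;
    }
}

// Usage: stream.keyBy(new ArtificialKeySelector())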