One more suggestion is to run the same job on a regular two-node cluster and
see whether you get the same exception, so that you can narrow the issue
down more easily.
Regards
Bhaskar
On Mon, Sep 23, 2019 at 7:50 AM Zili Chen wrote:
> Hi Debasish,
>
> As mentioned by Dian, it is an internal exception …
Hi Jiayi,
We have faced the same challenge, as we deal with IoT units that do not
necessarily share the same timestamp. A watermark per key would be a perfect
match here. We tried to work around it by handling late events as a special
case with side outputs (see the sketch below), but that isn't a perfect
solution.
My conclusion is …
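A minimal sketch of the side-output workaround, assuming Flink 1.9-era APIs;
the (deviceId, timestamp) tuple input, the names, and the window size are
made up for illustration:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.OutputTag;

public class LateEventsSideOutput {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        // Hypothetical input: (deviceId, eventTimestampMillis).
        DataStream<Tuple2<String, Long>> readings = env
            .fromElements(Tuple2.of("device-1", 60_000L), Tuple2.of("device-2", 500L))
            .assignTimestampsAndWatermarks(
                new BoundedOutOfOrdernessTimestampExtractor<Tuple2<String, Long>>(Time.seconds(5)) {
                    @Override
                    public long extractTimestamp(Tuple2<String, Long> reading) {
                        return reading.f1;
                    }
                });

        // Tag for records arriving after the watermark has passed their window.
        final OutputTag<Tuple2<String, Long>> lateTag =
            new OutputTag<Tuple2<String, Long>>("late-readings") {};

        SingleOutputStreamOperator<Tuple2<String, Long>> summed = readings
            .keyBy(0)
            .window(TumblingEventTimeWindows.of(Time.minutes(1)))
            .sideOutputLateData(lateTag) // route late events instead of dropping them
            .sum(1);

        summed.print();
        summed.getSideOutput(lateTag).print(); // handle late events as a special case
        env.execute("late-events-side-output");
    }
}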
Hi Dipanjan,
I think you are right that submitting a job to a cluster via the SQL client
is already supported; see [1]. Besides, I think it is not configured in the
YAML; it's specified in the CLI options when you start up the SQL client CLI,
as in the example below.
[1] https://issues.apache.org …
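For example, the client can be started with its options on the command line
(the file names here are placeholders):

./bin/sql-client.sh embedded -d conf/sql-client-defaults.yaml -e my-session-env.yaml

where -d points at the defaults environment file and -e at a session-specific
environment file.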
Hi all,
Currently, watermarks can only be supported at the task level (or partition
level), which means that data belonging to a faster key has to share the same
watermark as data belonging to a slower key in the same key group of a
KeyedStream. This leads to two problems:
1. …
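To make the problem concrete, here is a hypothetical probe (assuming a stream
keyed by a String device id): every key processed by the same task instance
observes the same operator-level watermark, no matter how far ahead or behind
that key's own event time is.

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Emits, for each record, the watermark its key currently sees.
public class WatermarkProbe
        extends KeyedProcessFunction<String, Tuple2<String, Long>, String> {
    @Override
    public void processElement(Tuple2<String, Long> value, Context ctx,
                               Collector<String> out) throws Exception {
        // currentWatermark() is task-level: faster and slower keys in the
        // same task share this single value.
        out.collect(value.f0 + " sees watermark "
            + ctx.timerService().currentWatermark());
    }
}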
The situation is as Dian said: Flink identifies jobs by job id instead of
job name.
However, I think it is still a valid question whether, as an alternative,
Flink could identify jobs by job name and leave the work of distinguishing
jobs by name to users. The advantages of this approach include a readable
display …
Steven,
Thanks for the information. If we can determine that this is a common issue,
we can solve it in Flink core.
To get to that state, I have two questions where I need your help:
1. Why is a gauge not good for alerting? The metric "fullRestart" is a
Gauge. Does the metric reporter you use report Counter a…
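For reference, a sketch of how the two metric types are registered (the
metric names and the function are made up for illustration); a Counter is
monotonic within its lifetime, so many backends can alert on its rate, while
a Gauge only exposes whatever its callback returns at scrape time:

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;
import org.apache.flink.metrics.Gauge;

public class RestartMetricsExample extends RichMapFunction<String, String> {
    private transient Counter restartCounter;
    private long lastObservedCount;

    @Override
    public void open(Configuration parameters) {
        // Counter: monotonic; reporters can derive a rate for alerting.
        restartCounter = getRuntimeContext().getMetricGroup().counter("myRestarts");
        // Gauge: reports the callback's current value; the alerting system
        // must compute deltas itself.
        getRuntimeContext().getMetricGroup().gauge("myRestartsGauge",
            (Gauge<Long>) () -> lastObservedCount);
    }

    @Override
    public String map(String value) {
        restartCounter.inc(); // counting map calls only to exercise the metric
        lastObservedCount = restartCounter.getCount();
        return value;
    }
}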
Hi Dipanjan,
I just looked through the Flink SQL client code and reached the same
conclusion as you.
Looking forward to other comments.
Best,
Terry Wang
> On Sep 22, 2019, at 11:53 PM, Dipanjan Mazumder wrote:
>
> Hi,
> Thanks again for responding to my earlier queries.
> I was again going through the Flink SQL client code …
Hi David,
Internally, Flink identifies jobs by job id, not by job name, so it will only
check whether there are two jobs with the same job id.
If you submit the job via the CLI [1], I'm afraid there is still no built-in
way provided, as the job id is currently generated randomly when submitting …
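One workaround is to ask the JobManager's REST API before submitting: GET
/jobs/overview lists jobs with their names. A naive sketch; the REST address
and job name are placeholders, the match is a crude substring check, and
finished jobs also appear in the overview, so a real check should parse the
JSON and filter on state:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class JobNameCheck {

    // Naive substring match against the /jobs/overview JSON response.
    static boolean jobNameExists(String restBase, String jobName) throws Exception {
        URL url = new URL(restBase + "/jobs/overview");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        StringBuilder body = new StringBuilder();
        try (BufferedReader in =
                 new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        return body.toString().contains("\"name\":\"" + jobName + "\"");
    }

    public static void main(String[] args) throws Exception {
        // "http://localhost:8081" is an assumed JobManager REST address.
        if (jobNameExists("http://localhost:8081", "my-job")) {
            throw new IllegalStateException("a job named my-job already exists");
        }
        // ...submit the job here...
    }
}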
Thanks Terry.
I would need some volunteers to speak about their use cases and the best
practices they have been following around Flink.
—DK
On Sun, 22 Sep 2019 at 5:36 PM, Terry Wang wrote:
> Hi, Deepak~
>
> I appreciate your idea and cc to dev mail too.
>
> Best,
> Terry Wang
>
>
>
> On Sep …, 2019
Hi Debasish,
As mentioned by Dian, it is an internal exception that should always be
caught by Flink internally. I would suggest you share the job (abstractly).
Generally it happens because you use
StreamPlanEnvironment/OptimizerPlanEnvironment directly; see the sketch
below.
Best,
tison.
On Sep 23, 2019, Austin Cawley-Edwards wrote: …
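For illustration, a sketch of the recommended pattern; the job body and name
are placeholders. The point is to obtain the environment only through the
factory method, so Flink can substitute its internal plan environment during
plan extraction:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MyJob {
    public static void main(String[] args) throws Exception {
        // Do not instantiate StreamPlanEnvironment/OptimizerPlanEnvironment
        // yourself; let Flink decide which environment backs this call.
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(1, 2, 3).print();
        env.execute("my-job");
    }
}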
Have you reached out to the FlinkK8sOperator team on Slack? They're usually
pretty active there.
Here’s the link:
https://join.slack.com/t/flinkk8soperator/shared_invite/enQtNzIxMjc5NDYxODkxLTEwMThmN2I0M2QwYjM3ZDljYTFhMGRiNDUzM2FjZGYzNTRjYWNmYTE1NzNlNWM2YWM5NzNiNGFhMTkxZjA4OGU
Best,
Austin
Hi,
What is the best way to prevent launching two jobs with the same name
concurrently?
Instead of doing a check in the script that starts the Flink job, I would
prefer the submission to fail if another job with the same name is already in
progress (an exception or something like that).
David
The problem is that I am submitting Flink jobs to a Kubernetes cluster using
a Flink Operator, so it's difficult to debug in the traditional sense of the
term. All I get is the exception that I reported:
Caused by:
org.apache.flink.client.program.OptimizerPlanEnvironment$ProgramAbortException
at …
Dear community,
Happy to share this week's community update, with a FLIP for the Pulsar
Connector contribution, three FLIPs for the SQL Ecosystem (plugin system,
computed columns, extended support for views), and a bit more. Enjoy!
Flink Development
==
* [connectors] After some discu…
Hi,
Thanks again for responding to my earlier queries.
I was again going through the Flink SQL client code and came across the
default custom command line. A few days back I came to know that the Flink
SQL client is not supported in a full-fledged cluster with different resource
managers like …
Hi,
I have a Flink 1.9.0 cluster deployed on AWS ECS. The cluster is running, but
metrics are not showing in the UI.
For other services (RPC / data) it works, because the connection is initiated
from the TM to the JM through a load balancer. But it does not work for
metrics, where the JM tries to initiate …
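If the cause is that the JM cannot reach the TMs' metric query service on its
(by default random) port, one possible fix to verify is pinning the port
range in flink-conf.yaml and opening it in the relevant security groups; the
range below is only an example:

# flink-conf.yaml
metrics.internal.query-service.port: 6170-6180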