Hi Yang,
Thanks for your help. That command worked, so we connected a remote debugger
and found that the root exception was a timeout exception from okhttp. The
timeout increases you mentioned worked.
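For anyone hitting this later, a rough sketch of one way to attach a remote
debugger to the JobManager (env.java.opts.jobmanager is a standard Flink option;
the JDWP port here is only an example):

  # flink-conf.yaml: open a JDWP debug port on the JobManager JVM
  env.java.opts.jobmanager: -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005

  # forward the port locally, then attach the IDE debugger to localhost:5005
  kubectl port-forward <jobmanager-pod-name> 5005:5005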
Thanks again for all the help!
Best,
kevin
On 2020/06/19 03:46:36, Yang Wang <d...@gmail.com> wrote:
Thanks for sharing the DEBUG level log.
I carefully checked the logs and found that the kubernetes-client discovered the
api server address and token successfully. However, it could not contact the
api server (10.100.0.1:443). Could you check whether your api server is
configured to allow accessing w...
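A quick way to test reachability from inside the pod is something along these
lines (the pod name is a placeholder, the paths are the standard in-pod
service-account locations, and this assumes curl is available in the image):

  kubectl exec -it <jobmanager-pod-name> -- sh -c \
    'curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
      -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
      https://10.100.0.1:443/version'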
Hi Kevin,
Sorry for not noticing your last response.
Could you share your full DEBUG level jobmanager logs? I will try to figure out
whether it is an issue of Flink or K8s, because I could not reproduce your
situation with my local K8s cluster.
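If the jobmanager is not already logging at DEBUG, a minimal way to turn it on
(assuming the default conf/log4j.properties shipped with 1.10.x) is to raise the
root logger level before starting the session:

  # conf/log4j.properties
  log4j.rootLogger=DEBUG, file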
Best,
Yang
On Mon, Jun 8, 2020 at 11:02 AM, Yang Wang wrote:
> Hi Kevin, ...
Hi Yang,
I'm using DEBUG level; do you know what to search for to find the K8s apiserver
address discovered by the kubernetes-client? I don't see anything useful so far.
Best
kevin
On 2020/06/08 16:02:07, "Bohinski, Kevin" <k...@comcast.com> wrote:
> Hi Yang,
> Thanks again for your help so far.
> I t...
Hi Yang,
Thanks again for your help so far.
I tried your suggestion, but still no luck.
Attached are the logs; please let me know if there are more I should send.
Best
kevin
On 2020/06/08 03:02:40, Yang Wang <d...@gmail.com> wrote:
> Hi Kevin,
> It may because the characters length li...
Hi Kevin,
It may be because of the character length limitation of K8s (no more than 63) [1],
so the pod name cannot be too long. I notice that you are using the automatically
generated cluster-id from the client. It may cause problems; could you set a
meaningful cluster-id for your Flink session? For example,
k...
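A sketch of what that looks like when starting a native K8s session
(kubernetes.cluster-id is the documented option; the value here is only
illustrative):

  ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-flink-session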
Thanks Yang for the suggestion. I have tried it and I'm still getting the
same exception. Is it possible it's due to the null pod name? Operation:
[create] for kind: [Pod] with name: [null] in namespace: [default]
failed.
Best,
kevin
If you have created the role binding "flink-role-binding-default"
successfully, then it should not be the RBAC issue.
It seems that the kubernetes-client in the JobManager pod could not contact the
K8s apiserver due to an okhttp issue with Java 8u252. Could you add the following
config option to disable http2 ...
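A sketch of one way to do that, assuming the switch in question is the fabric8
kubernetes-client's HTTP2_DISABLE environment variable (the exact option in the
original mail was cut off, so treat this as illustrative):

  # flink-conf.yaml, or pass as -D options when starting the session
  containerized.master.env.HTTP2_DISABLE: true
  containerized.taskmanager.env.HTTP2_DISABLE: true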
Thanks!
I do not see any pods of the form `flink-taskmanager-1-1`, so I tried the
exec suggestion.
The logs are attached below. Is there a quick RBAC check I could perform? I
followed the command on the docs page linked (kubectl create
clusterrolebinding flink-role-binding-default --clusterrole=ed...).
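A quick check for this, assuming the pods run as the default service account in
the default namespace, is kubectl's built-in authorization query:

  kubectl auth can-i create pods --as=system:serviceaccount:default:default
  kubectl auth can-i create deployments --as=system:serviceaccount:default:default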
I second Yangze's suggestion. You need to get the jobmanager log first; then
it will be easier to find the root cause. I know that it is not convenient
for users to access the log via kubectl, and we already have a ticket for this [1].
Usually, the reason that the Flink resourcemanager could not allocat...
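A sketch of the usual way to grab the jobmanager log (the pod name is a
placeholder; /opt/flink/log is the log directory in the official image and may
differ for custom images):

  kubectl get pods                          # find the jobmanager pod of your session
  kubectl logs <jobmanager-pod-name>        # stdout, if logging goes to the console
  kubectl exec -it <jobmanager-pod-name> -- ls /opt/flink/log   # log files inside the pod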
Amend: for release 1.10.1, please refer to this guide [1].
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/native_kubernetes.html#log-files
Best,
Yangze Guo
On Thu, Jun 4, 2020 at 9:52 AM Yangze Guo wrote:
> Hi, Kevin,
> Regarding logs, you could follow this ...
Hi, Kevin,
Regarding logs, you could follow this guide [1].
BTW, you could execute "kubectl get pod" to get the current pods. If
there is something like "flink-taskmanager-1-1", you could execute
"kubectl describe pod flink-taskmanager-1-1" to see its status.
[1]
https://ci.apache.org/pro
Hi,
We are using 1.10.1 with native K8s. While the service appears to be created and
I can submit a job and see it via the Web UI, the TM pods are never created, so
the jobs never start.
org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException:
Could not allocate the required slot wit...