Hi Harry,

A significant improvement to OOM was pushed on Friday; see the attached email
below. This new error refers to missing values in the
/oom/kubernetes/config/onap-parameters.yaml configuration file.

Cheers,
Roger

From: [email protected] 
[mailto:[email protected]] On Behalf Of huangxiangyu
Sent: Monday, September 11, 2017 3:42 AM
To: Michael O'Brien; Tina Tsou
Cc: [email protected]; [email protected]
Subject: [onap-discuss] Re: [opnfv-tech-discuss] [Auto] Error when creating onap
pods using kubernetes

Michael

Those two nginx containers were created manually for testing purposes. I'm afraid
I don't have any Ubuntu 16 environment right now, so I have to continue testing
with Ubuntu 14. I pulled the oom repo today and a new error appears when running
./createConfig.sh -n onap.
kubectl logs:
Validating onap-parameters.yaml has been populated
Error: OPENSTACK_UBUNTU_14_IMAGE must be set in onap-parameters.yaml

However, creating the config pod was not always successful even before I pulled
the oom repo. Usually the config pod stays in the creating state and never
completes. I ran deleteAll.bash and deleted /dockerdata-nfs just as the wiki
instructed. So am I missing a step?

Thanks
Harry

From: Michael O'Brien [mailto:[email protected]]
Sent: September 8, 2017 11:59
To: huangxiangyu <[email protected]>; Tina Tsou <[email protected]>
Cc: [email protected]; [email protected]
Subject: RE: [opnfv-tech-discuss] [Auto] Error when creating onap pods using kubernetes

Harry,
   Hi, some comments.
   The nginx containers: we are not deploying these, and since they are in the
default namespace they do not look like part of a normal Rancher setup either.
I am curious where these 2 pods came from and why you would need a reverse
proxy. Also verify you are running Rancher on 8880 to avoid 80/8080 conflicts
with these.
    I am suspicious of this reverse proxy; I would expect your forwarding
issues stem from these 2 servers, which are normally used to manage a private
subnet. They might not be handling IPv6 traffic (I remember an issue with this
in general from a couple of years ago).

default               nginx-deployment-431080787-6chxv       1/1       Running            0          1d
default               nginx-deployment-431080787-9nswb       1/1       Running            0          1d

    I would recommend you run on Ubuntu 16.04, not 14. It may work OK, but all
our examples assume 16 and we have not tested 14 (note that I had issues with 17).
    In a build before last Saturday I did see the same 3 pods fail to come up
(vnc-portal, sdc-be and sdc-fe). Make sure you have pulled from master this week
(try a git pull), then reset your config pod via the following if any new
content comes in.
https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-Delete/Rerunconfig-initcontainerfor/dockerdata-nfsrefresh
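
In rough terms, the reset boils down to the following (a sketch of the wiki
steps above; the script locations are assumptions, so verify against the page):

    # tear down the existing ONAP pods
    cd oom/kubernetes/oneclick && ./deleteAll.bash -n onap
    # remove the stale shared configuration
    sudo rm -rf /dockerdata-nfs
    # rebuild /dockerdata-nfs/onap from the refreshed repo
    cd ../config && ./createConfig.sh -n onap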

    The aai-gremlin pod is not required and can be ignored for now, but it
normally starts up; see the OOM-Kubernetes status section on

https://wiki.onap.org/display/DW/ONAP+master+branch+Stabilization#ONAPmasterbranchStabilization-20170907:OOMKubernetes

     thank you
     /michael



From: huangxiangyu [mailto:[email protected]]
Sent: Thursday, September 7, 2017 22:56
To: Michael O'Brien <[email protected]>; Tina Tsou <[email protected]>
Cc: [email protected]; [email protected]
Subject: Re: [opnfv-tech-discuss] [Auto] Error when creating onap pods using kubernetes

Hi Michael

Here is the environment info:
Distributor ID: Ubuntu
Description:    Ubuntu 14.04.5 LTS
Release:        14.04
Codename:       trusty

I didn't check the pod status after I saw the error message. It seems this IPv6
error didn't stop the pod creation process; maybe it only results in no IPv6
address for each pod.
Here is the pod status:
NAMESPACE             NAME                                   READY     STATUS             RESTARTS   AGE
default               nginx-deployment-431080787-6chxv       1/1       Running            0          1d
default               nginx-deployment-431080787-9nswb       1/1       Running            0          1d
kube-system           heapster-4285517626-7vnf3              1/1       Running            0          7d
kube-system           kube-dns-646531078-x4h83               3/3       Running            0          7d
kube-system           kubernetes-dashboard-716739405-6pz6n   1/1       Running            20         7d
kube-system           monitoring-grafana-3552275057-03wqg    1/1       Running            0          7d
kube-system           monitoring-influxdb-4110454889-527nw   1/1       Running            0          7d
kube-system           tiller-deploy-737598192-5d3hc          1/1       Running            0          7d
onap                  config-init                            0/1       Completed          0          1d
onap-aai              aai-dmaap-522748218-2jc3p              1/1       Running            0          1d
onap-aai              aai-kafka-2485280328-3kd9x             1/1       Running            0          1d
onap-aai              aai-resources-353718113-3h81b          1/1       Running            0          1d
onap-aai              aai-service-3321436576-twmwf           1/1       Running            0          1d
onap-aai              aai-traversal-338636328-vxxd3          1/1       Running            0          1d
onap-aai              aai-zookeeper-1010977228-xtflg         1/1       Running            0          1d
onap-aai              data-router-1397019010-k0w40           1/1       Running            0          1d
onap-aai              elasticsearch-2660384851-gnn3w         1/1       Running            0          1d
onap-aai              gremlin-3971586470-rn4p1               0/1       CrashLoopBackOff   246        1d
onap-aai              hbase-3880914143-kj4r2                 1/1       Running            0          1d
onap-aai              model-loader-service-226363973-g5bdv   1/1       Running            0          1d
onap-aai              search-data-service-1212351515-88hwk   1/1       Running            0          1d
onap-aai              sparky-be-2088640323-k7vvs             1/1       Running            0          1d
onap-appc             appc-1972362106-m9hxv                  1/1       Running            0          1d
onap-appc             appc-dbhost-2280647936-4bv02           1/1       Running            0          1d
onap-appc             appc-dgbuilder-2616852186-l39zn        1/1       Running            0          1d
onap-message-router   dmaap-3565545912-6dvhm                 1/1       Running            0          1d
onap-message-router   global-kafka-701218468-nlqf2           1/1       Running            0          1d
onap-message-router   zookeeper-555686225-qsnsw              1/1       Running            0          1d
onap-mso              mariadb-2814112212-sd34k               1/1       Running            0          1d
onap-mso              mso-2505152907-8jqrm                   1/1       Running            0          1d
onap-policy           brmsgw-362208961-nvpsn                 1/1       Running            0          1d
onap-policy           drools-3066421234-cb2gs                1/1       Running            0          1d
onap-policy           mariadb-2520934092-xvj8r               1/1       Running            0          1d
onap-policy           nexus-3248078429-qhqrc                 1/1       Running            0          1d
onap-policy           pap-4199568361-j4sf6                   1/1       Running            0          1d
onap-policy           pdp-785329082-qs52n                    1/1       Running            0          1d
onap-policy           pypdp-3381312488-lgf9d                 1/1       Running            0          1d
onap-portal           portalapps-2799319019-m30kb            1/1       Running            0          1d
onap-portal           portaldb-1564561994-bn70l              1/1       Running            0          1d
onap-portal           portalwidgets-1728801515-kvlzt         1/1       Running            0          1d
onap-portal           vnc-portal-700404418-g6vsq             0/1       Init:2/5           142        1d
onap-robot            robot-349535534-5q1zz                  1/1       Running            0          1d
onap-sdc              sdc-be-628593118-dvvm5                 0/1       Running            0          1d
onap-sdc              sdc-cs-2640808243-v35fh                1/1       Running            0          1d
onap-sdc              sdc-es-227943957-lwrpb                 1/1       Running            0          1d
onap-sdc              sdc-fe-1609420241-r1n3d                0/1       Init:0/1           143        1d
onap-sdc              sdc-kb-1998598941-7p4n3                1/1       Running            0          1d
onap-sdnc             sdnc-250717546-qbt17                   1/1       Running            0          1d
onap-sdnc             sdnc-dbhost-3807967487-l7lq1           1/1       Running            0          1d
onap-sdnc             sdnc-dgbuilder-3446959187-dr427        1/1       Running            0          1d
onap-sdnc             sdnc-portal-4253352894-x6gh7           1/1       Running            0          1d
onap-vid              vid-mariadb-2932072366-t8sn1           1/1       Running            0          1d
onap-vid              vid-server-377438368-6fpr1             1/1       Running            0          1d

Here are the IPv6 settings of my server anyway:
net.ipv6.conf.all.accept_dad = 1
net.ipv6.conf.all.accept_ra = 1
net.ipv6.conf.all.accept_ra_defrtr = 1
net.ipv6.conf.all.accept_ra_from_local = 0
net.ipv6.conf.all.accept_ra_pinfo = 1
net.ipv6.conf.all.accept_ra_rt_info_max_plen = 0
net.ipv6.conf.all.accept_ra_rtr_pref = 1
net.ipv6.conf.all.accept_redirects = 1
net.ipv6.conf.all.accept_source_route = 0
net.ipv6.conf.all.autoconf = 1
net.ipv6.conf.all.dad_transmits = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.force_mld_version = 0
net.ipv6.conf.all.force_tllao = 0
net.ipv6.conf.all.forwarding = 0
net.ipv6.conf.all.hop_limit = 64
net.ipv6.conf.all.max_addresses = 16
net.ipv6.conf.all.max_desync_factor = 600
net.ipv6.conf.all.mc_forwarding = 0
net.ipv6.conf.all.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.all.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.all.mtu = 1280
net.ipv6.conf.all.ndisc_notify = 0
net.ipv6.conf.all.proxy_ndp = 0
net.ipv6.conf.all.regen_max_retry = 3
net.ipv6.conf.all.router_probe_interval = 60
net.ipv6.conf.all.router_solicitation_delay = 1
net.ipv6.conf.all.router_solicitation_interval = 4
net.ipv6.conf.all.router_solicitations = 3
net.ipv6.conf.all.suppress_frag_ndisc = 1
net.ipv6.conf.all.temp_prefered_lft = 86400
net.ipv6.conf.all.temp_valid_lft = 604800
net.ipv6.conf.all.use_tempaddr = 2

Thanks
Harry

From: Michael O'Brien [mailto:[email protected]]
Sent: September 7, 2017 19:28
To: Tina Tsou <[email protected]>; huangxiangyu <[email protected]>
Cc: [email protected]; [email protected]
Subject: RE: [opnfv-tech-discuss] [Auto] Error when creating onap pods using kubernetes

Tina, Harry,
   Hi, sorry to hear that. Could you post your environment (Ubuntu 16.04?)
   We can compare network setups then, as I have not seen an IPv6 issue yet.
   Are you seeing all 6 pods of the k8s/rancher stack?
   Post your results from:
   kubectl get pods --all-namespaces -a
   You should see all 1/1 or 3/3.
   There are sometimes issues with a clustered rancher setup where the dns pod
is 0/1 above.

    Thank you
    /michael

From: Tina Tsou [mailto:[email protected]]
Sent: Thursday, September 7, 2017 03:31
To: huangxiangyu <[email protected]>; Michael O'Brien <[email protected]>
Cc: [email protected]
Subject: Re: [opnfv-tech-discuss] [Auto] Error when creating onap pods using kubernetes

Dear Frank,

Would you like to help here?

Thank you,
Tina

On Sep 6, 2017, at 7:03 PM, huangxiangyu <[email protected]> wrote:
Hi Tina

Here is the error log of the main issue I met when performing the ONAP
deployment according to https://wiki.onap.org/display/DW/ONAP+on+Kubernetes.

Creating deployments and services **********
E0907 01:24:17.232548   42374 portforward.go:209] Unable to create listener: Error listen tcp6 [::1]:51444: bind: cannot assign requested address
NAME:   mso
LAST DEPLOYED: Thu Sep  7 01:24:38 2017
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME     CLUSTER-IP    EXTERNAL-IP  PORT(S)                                                                      AGE
mariadb  10.43.13.178  <nodes>      3306:30252/TCP                                                               0s
mso      10.43.53.224  <nodes>      8080:30223/TCP,3904:30225/TCP,3905:30224/TCP,9990:30222/TCP,8787:30250/TCP  0s

==> extensions/v1beta1/Deployment
NAME     DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
mariadb  1        1        1           0          0s
mso      1        1        1           0          0s

I tried to enable IPv6 on the host server but still can't fix the error. Maybe
you can find some help with this?
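
One quick check worth trying (a sketch; the bind failure above is on ::1, so
the loopback interface settings matter, not only the net.ipv6.conf.all ones):

    ip -6 addr show lo                      # expect an inet6 ::1/128 entry
    sysctl net.ipv6.conf.lo.disable_ipv6    # expect 0
    # if it reports 1, re-enable IPv6 on loopback for the current boot:
    sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=0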

Thanks
Harry
--- Begin Message ---
OOM users,

I’ve just pushed a change that requires a re-build of the /dockerdata-nfs/onap/ 
mount on your K8s host.



Basically, what I've tried to do is port over the heat stack version of ONAP's
configuration mechanism. The heat way of running ONAP writes files to
/opt/config/ based on the stack's environment file, which has the details
related to each user's environment. These values are then swapped into the
various VMs' containers using scripts.



Now that we are using helm for OOM, I was able to do something similar in order 
to start trying to run the vFW/vLB demo use cases.

This story tracks the functionality that was needed: 
https://jira.onap.org/browse/OOM-277



I have also been made aware that this change requires K8s 1.6+, as I am making
use of "envFrom"
(https://kubernetes.io/docs/api-reference/v1.6/#container-v1-core). We stated
earlier that we are setting minimum requirements of K8s 1.7 and Rancher 1.6 for
OOM, so hopefully this isn't a big issue.
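
For reference, envFrom injects every key of a ConfigMap as container
environment variables in one shot; a minimal pod spec excerpt (the names here
are illustrative, not necessarily what OOM uses):

    containers:
    - name: example
      image: nginx
      envFrom:
      - configMapRef:
          name: onap-parameters   # hypothetical ConfigMap name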



It boils down to this:

/oom/kubernetes/config/onap-parameters.yaml is roughly the equivalent of the
"onap_openstackRC.env" file, and you will need to define some required values
in it, otherwise the config pod deployment will fail.



A sample can be found here:

/oom/kubernetes/config/onap-parameters-sample.yaml

Note: if you don't care about interacting with OpenStack to launch VNFs, you
can just use the sample file contents.
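
For example, a minimal way to start (a sketch, assuming the sample file covers
all the required keys for a run without OpenStack):

    cd oom/kubernetes/config
    cp onap-parameters-sample.yaml onap-parameters.yaml
    # edit onap-parameters.yaml and set your own OpenStack values
    # (e.g. OPENSTACK_UBUNTU_14_IMAGE) if you plan to launch VNFs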



Continue to run createConfig.sh -n onap and it will install the config files
and swap in your environment-specific values before it completes.



Then run createAll.bash -n onap to recreate your ONAP K8s environment and go
from there.
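
Putting the two steps together (the oneclick directory is an assumption about
the repo layout; adjust to your clone):

    cd oom/kubernetes/config
    ./createConfig.sh -n onap     # installs config, swaps in your values
    cd ../oneclick
    ./createAll.bash -n onap      # recreates the ONAP K8s environment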



Thx,

Mandeep

--




--- End Message ---
