From cboylan at sapwetik.org Mon Nov 5 18:36:27 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Mon, 05 Nov 2018 10:36:27 -0800
Subject: [Airship-discuss] Community Infrastructure Berlin Summit Onboarding Session
Message-ID: <1541442987.3761150.1566423440.2A7B1A3E@webmail.messagingengine.com>

Hello everyone,

My apologies for cross posting, but I wanted to make sure the various developer groups saw this.

Rather than use the Infrastructure Onboarding session in Berlin [0] for infrastructure sysadmin/developer onboarding, I thought we could use the time for user onboarding. We've had quite a few new groups interacting with us recently, and it would probably be useful to have a session on what we do, how people can take advantage of it, and so on.

I've been brainstorming ideas on this etherpad [1]. If you think you'll attend the session and find any of these subjects useful, please +1 them. Also feel free to add additional topics. I expect this will be an informal session that directly targets the interests of those attending.

Please do drop by if you have any interest in using this infrastructure at all. This is your chance to better understand Zuul job configuration, the test environments themselves, the metrics and data we collect, and basically anything else related to the community developer infrastructure.

[0] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22950/infrastructure-project-onboarding
[1] https://etherpad.openstack.org/p/openstack-infra-berlin-onboarding

Hope to see you there,
Clark

From ks3019 at att.com Tue Nov 6 20:28:23 2018
From: ks3019 at att.com (SKELS, KASPARS)
Date: Tue, 6 Nov 2018 20:28:23 +0000
Subject: [Airship-discuss] Airship installation Questions
In-Reply-To:
References:
Message-ID: <2ADBF0C373B7E84E944B1E06D3CDDFC91E6164B6@MOKSCY3MSGUSRGI.ITServices.sbc.com>

Hi Qiaolin/Maxwell,

it seems like you may be using 1 control node for your deployment. The ceph-mgr chart is set to have 2 replicas and is trying to launch both (hence the port clash you see). Generally speaking, the manifests are (by default) targeted at a 3-control-node (HA) configuration, including Ceph, to reflect a production-like HA deployment:
https://github.com/openstack/airship-treasuremap/blob/master/global/software/charts/ucp/ceph/ceph-client-update.yaml#L145

That said, you may adjust the replica count at the site level (set 1 instead of 2), as well as the OSD count, by doing overrides (matching how many OSDs you actually have):
https://github.com/openstack/airship-treasuremap/blob/master/site/airship-seaworthy/software/charts/ucp/ceph/ceph-client-update.yaml
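For illustration, a minimal sketch of such a site-level override, assuming the Deckhand merge-action pattern used elsewhere in treasuremap (the document name, parentSelector labels, and the osd target value below are placeholders to be checked against the files linked above, not the exact contents of those files):

schema: armada/Chart/v1
metadata:
  schema: metadata/Document/v1
  name: ucp-ceph-client-update
  layeringDefinition:
    abstract: false
    layer: site
    parentSelector:
      name: ucp-ceph-client-update-global   # placeholder: match the global document's labels
    actions:
      - method: merge
        path: .
  storagePolicy: cleartext
data:
  values:
    pod:
      replicas:
        mgr: 1          # one ceph-mgr replica for a single control node
    conf:
      pool:
        target:
          osd: 8        # match the number of OSDs actually deployed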
Also, adjustments in other charts may be required to set this up for a single control-plane node; there is more info on the reference site here:
https://airship-treasuremap.readthedocs.io/en/latest/seaworthy.html

Kindly, Kaspars

From: Kaspars Skels [mailto:kaspars.skels at gmail.com]
Sent: Tuesday, November 6, 2018 2:10 PM
To: qiaolin.tu at nokia-sbell.com
Cc: MEADOWS, ALAN; MONTEIRO, FELIPE C; BARTRA, RICK; REDDY, CHINASUBBA; GORSHUNOV, ROMAN; KHUNTIA, SOUMITRA; KABANOV, DMITRII; Mark Burnett; VOLKOV, ANDREY; avolkov at mirantis.com; Chris Wedgwood; PALLAV GUPTA; GUPTA, SANGEET; zuul at review01.openstack.org; HUSSEY, SCOTT T; PACHECO, RODOLFO J; maxwell.li at nokia-sbell.com; SKELS, KASPARS
Subject: Re: Airship installation Questions

On Tue, Nov 6, 2018 at 3:42 AM Tu, Qiaolin (NSB - CN/Hangzhou) wrote:

Hi,

I also found that the pod (ceph-mgr-6dc44fc75b-25n24) is always in the Pending state. It reports the error below:

Warning  FailedScheduling  1m (x125 over 36m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.

root at cab23-r720-11:~# kubectl get pods --all-namespaces
NAMESPACE  NAME                                                        READY  STATUS     RESTARTS  AGE
ceph       airship-ucp-ceph-provisioners-ceph-ns-key-generator-n5vzj   0/1    Completed  0         20m
ceph       ceph-cephfs-client-key-generator-656rd                      0/1    Completed  0         20m
ceph       ceph-cephfs-provisioner-676684f6bd-trf5h                    1/1    Running    0         20m
ceph       ceph-cephfs-provisioner-676684f6bd-zfk2s                    1/1    Running    0         20m
ceph       ceph-mds-6bfb74d9c7-btbt5                                   1/1    Running    0         37m
ceph       ceph-mds-6bfb74d9c7-qw4xl                                   1/1    Running    0         37m
ceph       ceph-mds-keyring-generator-dlqjt                            0/1    Completed  0         53m
ceph       ceph-mgr-6dc44fc75b-25n24                                   0/1    Pending    0         37m
ceph       ceph-mgr-6dc44fc75b-42mqz                                   1/1    Running    0         37m
ceph       ceph-mgr-keyring-generator-7dftx                            0/1    Completed  0         53m
ceph       ceph-mon-cfrf6                                              1/1    Running    0         41m
ceph       ceph-mon-check-6db6b569b6-67vvm                             1/1    Running    0         53m
ceph       ceph-mon-keyring-generator-55ps8                            0/1    Completed  0         53m
root at cab23-r720-11:~# kubectl describe pod ceph-mgr-6dc44fc75b-25n24 -n ceph
Name:           ceph-mgr-6dc44fc75b-25n24
Namespace:      ceph
Node:
Labels:         application=ceph
                component=mgr
                pod-template-hash=2870097316
                release_group=airship-ucp-ceph-client
Annotations:    configmap-bin-hash=d23610f5cc67014b596eb45b9d47ddb97997d6ec5fd3289fb264429a260f39a9
                configmap-etc-client-hash=0174be85f29398aa5cea5fccb9537e0e0f25ea2d2f280182e9e2506a1be1b115
Status:         Pending
IP:
Controlled By:  ReplicaSet/ceph-mgr-6dc44fc75b
Init Containers:
  init:
    Image:      quay.io/stackanetes/kubernetes-entrypoint:v0.3.1
    Port:
    Host Port:
    Command:    kubernetes-entrypoint
    Environment:
      POD_NAME:              ceph-mgr-6dc44fc75b-25n24 (v1:metadata.name)
      NAMESPACE:             ceph (v1:metadata.namespace)
      INTERFACE_NAME:        eth0
      PATH:                  /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/
      DEPENDENCY_SERVICE:    ceph:ceph-mon
      DEPENDENCY_JOBS:       ceph-storage-keys-generator,ceph-mgr-keyring-generator
      DEPENDENCY_DAEMONSET:
      DEPENDENCY_CONTAINER:
      DEPENDENCY_POD_JSON:
      COMMAND:               echo done
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from ceph-mgr-token-pgbwp (ro)
  ceph-init-dirs:
    Image:      docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04
    Port:
    Host Port:
    Command:    /tmp/init-dirs.sh
    Environment:
      CLUSTER:  ceph
    Mounts:
      /etc/ceph from pod-etc-ceph (rw)
      /run from pod-run (rw)
      /tmp/init-dirs.sh from ceph-client-bin (ro)
      /var/lib/ceph from pod-var-lib-ceph (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from ceph-mgr-token-pgbwp (ro)
Containers:
  ceph-mgr:
    Image:       docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04
    Ports:       7000/TCP, 9283/TCP
    Host Ports:  7000/TCP, 9283/TCP
    Command:     /mgr-start.sh
    Liveness:    exec [/tmp/mgr-check.sh liveness] delay=30s timeout=5s period=10s #success=1 #failure=3
    Readiness:   exec [/tmp/mgr-check.sh readiness] delay=30s timeout=5s period=10s #success=1 #failure=3
    Environment:
      CLUSTER:          ceph
      ENABLED_MODULES:  restful status prometheus
    Mounts:
      /etc/ceph from pod-etc-ceph (rw)
      /etc/ceph/ceph.client.admin.keyring from ceph-client-admin-keyring (ro)
      /etc/ceph/ceph.conf from ceph-client-etc (ro)
      /mgr-start.sh from ceph-client-bin (ro)
      /run from pod-run (rw)
      /tmp/mgr-check.sh from ceph-client-bin (ro)
      /var/lib/ceph from pod-var-lib-ceph (rw)
      /var/lib/ceph/bootstrap-mgr/ceph.keyring from ceph-bootstrap-mgr-keyring (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from ceph-mgr-token-pgbwp (ro)
Conditions:
  Type          Status
  PodScheduled  False
Volumes:
  pod-etc-ceph:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  ceph-client-bin:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      ceph-client-bin
    Optional:  false
  ceph-client-etc:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      ceph-client-etc
    Optional:  false
  pod-var-lib-ceph:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  pod-run:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  Memory
  ceph-client-admin-keyring:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ceph-client-admin-keyring
    Optional:    false
  ceph-bootstrap-mgr-keyring:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ceph-bootstrap-mgr-keyring
    Optional:    false
  ceph-mgr-token-pgbwp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ceph-mgr-token-pgbwp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  ceph-mgr=enabled
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  1m (x125 over 36m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.

Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com

From: Tu, Qiaolin (NSB - CN/Hangzhou)
Sent: Tuesday, November 06, 2018 1:46 PM
To: 'MEADOWS, ALAN'; Kaspars Skels
Cc: MONTEIRO, FELIPE C; BARTRA, RICK; REDDY, CHINASUBBA; GORSHUNOV, ROMAN; KHUNTIA, SOUMITRA; KABANOV, DMITRII; mark.m.burnett at gmail.com; VOLKOV, ANDREY; avolkov at mirantis.com; cw at f00f.org; pallavgupta84 at gmail.com; GUPTA, SANGEET; pete at port.direct; zuul at review01.openstack.org; HUSSEY, SCOTT T; PACHECO, RODOLFO J; Li, Maxwell (NSB - CN/Hangzhou)
Subject: Airship installation Questions

Hi,

Thanks very much for your kind support. I also encountered a problem when installing multi-node Airship. It is blocked by two ceph-rgw pods, which crash and restart again and again.
Pod error log as below:

root at cab23-r720-11:~# kubectl describe pod ceph-rgw-6ff5ff866d-4x66n -n ceph
Events:
  Type     Reason                 Age               From                    Message
  ----     ------                 ----              ----                    -------
  Normal   Scheduled              1m                default-scheduler       Successfully assigned ceph-rgw-6ff5ff866d-4x66n to cab23-r720-11
  Normal   SuccessfulMountVolume  1m                kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "pod-etc-ceph"
  Normal   SuccessfulMountVolume  1m                kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "pod-var-lib-ceph"
  Normal   SuccessfulMountVolume  1m                kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "pod-run"
  Normal   SuccessfulMountVolume  1m                kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "ceph-rgw-etc"
  Normal   SuccessfulMountVolume  1m                kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "ceph-rgw-bin"
  Normal   SuccessfulMountVolume  1m                kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "ceph-bootstrap-rgw-keyring"
  Normal   SuccessfulMountVolume  1m                kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "ceph-rgw-token-sjg9x"
  Normal   Pulled                 1m                kubelet, cab23-r720-11  Container image "quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine
  Normal   Started                1m                kubelet, cab23-r720-11  Started container
  Normal   Pulled                 1m                kubelet, cab23-r720-11  Container image "docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04" already present on machine
  Normal   Created                1m                kubelet, cab23-r720-11  Created container
  Normal   Started                1m                kubelet, cab23-r720-11  Started container
  Warning  Unhealthy              18s (x6 over 1m)  kubelet, cab23-r720-11  Readiness probe failed: Get http://10.97.38.105:8088/: dial tcp 10.97.38.105:8088: getsockopt: connection refused

Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com

From: MEADOWS, ALAN
Sent: Saturday, November 03, 2018 5:39 AM
To: Kaspars Skels; Li, Maxwell (NSB - CN/Hangzhou)
Cc: MONTEIRO, FELIPE C; BARTRA, RICK; REDDY, CHINASUBBA; GORSHUNOV, ROMAN; KHUNTIA, SOUMITRA; KABANOV, DMITRII; mark.m.burnett at gmail.com; VOLKOV, ANDREY; avolkov at mirantis.com; cw at f00f.org; pallavgupta84 at gmail.com; GUPTA, SANGEET; pete at port.direct; zuul at review01.openstack.org; HUSSEY, SCOTT T; Tu, Qiaolin (NSB - CN/Hangzhou); PACHECO, RODOLFO J
Subject: RE: Airship Version Questions

Hi Maxwell!

Great to see additional people trying Airship out.

Treasuremap does change quite frequently. In fact, this comes from a lot of the great work Kaspars has been doing – building a CI/CD pipeline that not only keeps treasuremap fresh, referencing the latest working versions of the Airship components, but also validates that the YAML documents are able to provision a multi-node baremetal physical environment by way of third-party gates. This gives us a high degree of confidence that whatever version of treasuremap you decide to start working with to deploy your own environments, it should work.

To be sure, we do understand that when getting started it is nice to work with a stable target that isn't constantly changing. Kaspars has introduced tagging to provide a human-readable reference (using dates).
These are not on any particular cadence, but they usually follow a more in-depth tested release, beyond just the automated CI/CD runs using the manifests:
https://github.com/openstack/airship-treasuremap/releases

For right now, with regard to pegleg, the best way to leverage the version of pegleg that has been tested with the treasuremap manifests is to pull the one used in the baremetal CI/CD gate, referenced directly in the target tag's Jenkinsfile. For example, for v18.11.01:
https://github.com/openstack/airship-treasuremap/blob/v18.11.01/tools/gate/Jenkinsfile#L15

Please feel free to get on the airship mailing list (airshipit.org) to connect with a wider audience, or our IRC channel!

Alan Meadows

From: Kaspars Skels [mailto:kaspars.skels at gmail.com]
Sent: Friday, November 02, 2018 2:27 PM
To: maxwell.li at nokia-sbell.com
Cc: MONTEIRO, FELIPE C; BARTRA, RICK; REDDY, CHINASUBBA; GORSHUNOV, ROMAN; KHUNTIA, SOUMITRA; KABANOV, DMITRII; mark.m.burnett at gmail.com; VOLKOV, ANDREY; avolkov at mirantis.com; cw at f00f.org; pallavgupta84 at gmail.com; GUPTA, SANGEET; pete at port.direct; zuul at review01.openstack.org; HUSSEY, SCOTT T; qiaolin.tu at nokia-sbell.com; MEADOWS, ALAN; PACHECO, RODOLFO J
Subject: Re: Airship Version Questions

+ Alan/Rodolfo

On Thu, Nov 1, 2018 at 3:37 AM Li, Maxwell (NSB - CN/Hangzhou) wrote:

Hi Airship Team:

I have a problem when I deploy Airship on my server. There are new commits in airship-pegleg and airship-treasuremap almost every day, and there are no tags or branches in these code repositories. Could I get a commit ID for these repositories so that I can deploy Airship? Thanks a lot!

Best Regards!
Maxwell Li

From bluejay.ahn at gmail.com Wed Nov 7 05:49:16 2018
From: bluejay.ahn at gmail.com (Jaesuk Ahn)
Date: Wed, 7 Nov 2018 14:49:16 +0900
Subject: [Airship-discuss] Status of writing Specification for a Switch Fabric Controller
Message-ID:

Hi Airship developers,

At the design call held on Oct 18th, SK Telecom presented network fabric automation, and there was an action item to write a specification for a switch fabric controller (Rodolfo).
- https://etherpad.openstack.org/p/Airship_OpenDesignDiscussions
You can find the meeting recording and presentation slides at the link above.

I know that everyone is very busy, especially at this time of year. I just want to check the current status, and am politely asking for even short feedback on our presentation. If the Airship community can give us a list of requirements or use-case scenarios, it would help us a lot to engage more proactively on this topic. We are willing to work together to build "network fabric automation" under the Airship project. :)

Thank you.

Jaesuk Ahn, Team Lead
Virtualization SW Lab, SW R&D Center
SK Telecom

From qiaolin.tu at nokia-sbell.com Thu Nov 8 07:25:09 2018
From: qiaolin.tu at nokia-sbell.com (Tu, Qiaolin (NSB - CN/Hangzhou))
Date: Thu, 8 Nov 2018 07:25:09 +0000
Subject: [Airship-discuss] Airship installation Questions
In-Reply-To: <2ADBF0C373B7E84E944B1E06D3CDDFC91E6164B6@MOKSCY3MSGUSRGI.ITServices.sbc.com>
References: <2ADBF0C373B7E84E944B1E06D3CDDFC91E6164B6@MOKSCY3MSGUSRGI.ITServices.sbc.com>
Message-ID:

Hi,

Thanks very much for your reply.
I adjusted the replica count at the site level (set 1 instead of 2), and the ceph-rgw pod still fails with a getsockopt: connection refused error. Below are the log details:

root at cab23-r720-11:~# kubectl describe pod ceph-rgw-5b6645c456-rsv4k -n ceph
Name:           ceph-rgw-5b6645c456-rsv4k
Namespace:      ceph
Node:           cab23-r720-11/10.23.22.11
Start Time:     Wed, 07 Nov 2018 23:07:06 +0000
Labels:         application=ceph
                component=rgw
                pod-template-hash=1622017012
                release_group=airship-ucp-ceph-rgw
Annotations:    configmap-bin-hash=aee34fa624622fb03c48b85c014e805a1012f74f772bc0f3a1e91597839ce49e
                configmap-etc-client-hash=38eb0b2eb23d31bcd3c4238c2bf9f6b8acd335659d94b3fcfff6c333ed1339b2
Status:         Running
IP:             10.97.38.77
Controlled By:  ReplicaSet/ceph-rgw-5b6645c456
Events:
  Type     Reason     Age                  From                    Message
  ----     ------     ----                 ----                    -------
  Warning  BackOff    5m (x737 over 7h)    kubelet, cab23-r720-11  Back-off restarting failed container
  Warning  Unhealthy  39s (x1154 over 7h)  kubelet, cab23-r720-11  Readiness probe failed: Get http://10.97.38.77:8088/: dial tcp 10.97.38.77:8088: getsockopt: connection refused

root at cab23-r720-11:~# kubectl logs -f ceph-rgw-5b6645c456-rsv4k -n ceph
+ export LC_ALL=C
+ LC_ALL=C
+ : 0
++ uname -n
+ : ceph-rgw-5b6645c456-rsv4k
+ : ''
+ : ''
+ : 0
+ : 9000
+ : 0.0.0.0
+ : /etc/ceph/ceph.client.admin.keyring
+ : /var/lib/ceph/radosgw/ceph-rgw-5b6645c456-rsv4k/keyring
+ : /var/lib/ceph/bootstrap-rgw/ceph.keyring
+ [[ ! -e /etc/ceph/ceph.conf ]]
+ '[' 0 -eq 1 ']'
+ '[' '!' -e /var/lib/ceph/radosgw/ceph-rgw-5b6645c456-rsv4k/keyring ']'
+ RGW_FRONTENDS='civetweb port=8088'
+ '[' 0 -eq 1 ']'
+ /usr/bin/radosgw --cluster ceph --setuser ceph --setgroup ceph -d -n client.rgw.ceph-rgw-5b6645c456-rsv4k -k /var/lib/ceph/radosgw/ceph-rgw-5b6645c456-rsv4k/keyring --rgw-socket-path= --rgw-zonegroup= --rgw-zone= '--rgw-frontends=civetweb port=8088'
2018-11-08 06:53:53.769321 7f3c7826de80  0 deferred set uid:gid to 64045:64045 (ceph:ceph)
2018-11-08 06:53:53.769371 7f3c7826de80  0 ceph version 12.2.3 (2dab17a455c09584f2a85e6b10888337d1ec8949) luminous (stable), process (unknown), pid 8
root at cab23-r720-11:~# kubectl get pods -n ceph
NAME                                                        READY  STATUS     RESTARTS  AGE
airship-ucp-ceph-provisioners-ceph-ns-key-generator-8vm8b   0/1    Completed  0         20h
ceph-bootstrap-mp5tn                                        0/1    Completed  0         22h
ceph-cephfs-client-key-generator-4sqrd                      0/1    Completed  0         20h
ceph-cephfs-provisioner-676684f6bd-48fq8                    1/1    Running    0         20h
ceph-cephfs-provisioner-676684f6bd-mqwnj                    1/1    Running    0         20h
ceph-mds-6bfb74d9c7-gqgtl                                   1/1    Running    0         22h
ceph-mds-6bfb74d9c7-sk4pw                                   1/1    Running    0         22h
ceph-mds-keyring-generator-m2rjj                            0/1    Completed  0         22h
ceph-mgr-6dc44fc75b-jtv7w                                   0/1    Pending    0         22h
ceph-mgr-6dc44fc75b-jvstm                                   1/1    Running    0         22h
ceph-mgr-keyring-generator-m56p5                            0/1    Completed  0         22h
ceph-mon-check-6db6b569b6-flg76                             1/1    Running    0         22h
ceph-mon-h2gsm                                              1/1    Running    0         21h
ceph-mon-keyring-generator-5phjm                            0/1    Completed  0         22h
ceph-osd-default-64779b8c-c74ms                             1/1    Running    0         22h
ceph-osd-default-6ea9de2c-hn4tm                             1/1    Running    0         22h
ceph-osd-default-70a54190-t54l7                             1/1    Running    0         22h
ceph-osd-default-7544b6da-mxfj6                             1/1    Running    1         22h
ceph-osd-default-7cfc44c1-vbqs4                             1/1    Running    0         22h
ceph-osd-default-83945928-66m9s                             1/1    Running    0         22h
ceph-osd-default-be8e8cc4-2vnxb                             1/1    Running    0         22h
ceph-osd-default-f9249fa9-8gw25                             1/1    Running    0         22h
ceph-osd-keyring-generator-fnrgk                            0/1    Completed  0         22h
ceph-rbd-pool-4dzjz                                         0/1    Completed  0         22h
ceph-rbd-provisioner-84bc5c88c7-smfcn                       1/1    Running    0         20h
ceph-rbd-provisioner-84bc5c88c7-vtc5g                       1/1    Running    0         20h
ceph-rgw-5b6645c456-rsv4k                                   0/1    Running    74        8h
ceph-rgw-5b6645c456-sgzpt                                   0/1    Running    74        8h
ceph-rgw-storage-init-pqf6q                                 0/1    Completed  0         8h
ceph-storage-keys-generator-9fx4k                           0/1    Completed  0         22h
ingress-65dc849968-9zlqn                                    1/1    Running    0         22h
ingress-65dc849968-cc8xp                                    1/1    Running    0         22h
ingress-error-pages-796b76c856-dnpcp                        1/1    Running    0         22h
ingress-error-pages-796b76c856-tt9x4                        1/1    Running    0         22h

Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com
From ks3019 at att.com Thu Nov 8 07:49:06 2018
From: ks3019 at att.com (SKELS, KASPARS)
Date: Thu, 8 Nov 2018 07:49:06 +0000
Subject: [Airship-discuss] Airship installation Questions
In-Reply-To:
References: <2ADBF0C373B7E84E944B1E06D3CDDFC91E6164B6@MOKSCY3MSGUSRGI.ITServices.sbc.com>
Message-ID: <2ADBF0C373B7E84E944B1E06D3CDDFC91E616E83@MOKSCY3MSGUSRGI.ITServices.sbc.com>

Hey!

Have a look at whether Ceph is healthy in general with the following command (cluster status):

kubectl exec -it -n ceph ceph-mon-h2gsm -- ceph -s

I still see that you have 2 instances of ceph-mgr in your terminal output:

ceph-mgr-6dc44fc75b-jtv7w    0/1    Pending    0    22h
ceph-mgr-6dc44fc75b-jvstm    1/1    Running    0    22h

There are 2 ceph-client charts: one for the initial single-node/genesis state, and ceph-client-update for the final state (when additional nodes join). Feel free to explore:
https://github.com/openstack/airship-treasuremap/tree/master/global/software/charts/ucp/ceph

I think the simplest way for you to get Ceph running (if you are using a single control plane) would be to just stay with the initial ceph chart, as it will also keep crush rules and other settings as needed (the final state requires 3 domains/hosts). I would say change
https://github.com/openstack/airship-treasuremap/blob/master/global/software/manifests/full-site.yaml#L22
from `ucp-ceph-update` to just `ucp-ceph` instead; this will keep using the initial single-node/genesis Ceph and keep it healthy.
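As a rough sketch, the relevant part of that manifest would then read as follows (abbreviated; the exact chart-group list, document metadata, and release_prefix must be taken from the actual full-site.yaml linked above, and are assumptions here):

schema: armada/Manifest/v1
metadata:
  schema: metadata/Document/v1
  name: full-site
  layeringDefinition:
    abstract: false
    layer: global
data:
  release_prefix: ucp   # assumption: keep whatever prefix the existing manifest declares
  chart_groups:
    # ... preceding chart groups unchanged ...
    - ucp-ceph          # was: ucp-ceph-update
    # ... remaining chart groups unchanged ...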
For RGW it is similar – you will need to set it to 1 replica (right now there is no site override, so you would need to craft one or set it at the global level):
https://github.com/openstack/airship-treasuremap/blob/master/global/software/charts/ucp/ceph/ceph-rgw.yaml#L133
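Such an override could follow the same merge pattern as the ceph-client one; a minimal sketch (again, the document name and parentSelector labels are placeholders to be matched against the global ceph-rgw chart, not its literal contents):

schema: armada/Chart/v1
metadata:
  schema: metadata/Document/v1
  name: ucp-ceph-rgw
  layeringDefinition:
    abstract: false
    layer: site
    parentSelector:
      name: ucp-ceph-rgw-global   # placeholder: match the global document's labels
    actions:
      - method: merge
        path: .
  storagePolicy: cleartext
data:
  values:
    pod:
      replicas:
        rgw: 1        # single RGW instance for a one-node control plane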
Kindly, Kaspars
From qiaolin.tu at nokia-sbell.com Thu Nov 8 08:04:47 2018
From: qiaolin.tu at nokia-sbell.com (Tu, Qiaolin (NSB - CN/Hangzhou))
Date: Thu, 8 Nov 2018 08:04:47 +0000
Subject: [Airship-discuss] Airship installation Questions
In-Reply-To: <2ADBF0C373B7E84E944B1E06D3CDDFC91E616E83@MOKSCY3MSGUSRGI.ITServices.sbc.com>
References: <2ADBF0C373B7E84E944B1E06D3CDDFC91E6164B6@MOKSCY3MSGUSRGI.ITServices.sbc.com> <2ADBF0C373B7E84E944B1E06D3CDDFC91E616E83@MOKSCY3MSGUSRGI.ITServices.sbc.com>
Message-ID:

Hi,

root at cab23-r720-11:~# kubectl exec -it -n ceph ceph-mon-h2gsm -- ceph -s
  cluster:
    id:     7b7576f4-3358-4668-9112-100440079807
    health: HEALTH_WARN
            Reduced data availability: 338 pgs inactive
            Degraded data redundancy: 338 pgs undersized

  services:
    mon: 1 daemons, quorum cab23-r720-11
    mgr: cab23-r720-11(active)
    mds: cephfs-1/1/1 up {0=mds-ceph-mds-6bfb74d9c7-gqgtl=up:creating}, 1 up:standby
    osd: 8 osds: 8 up, 8 in

  data:
    pools:   18 pools, 338 pgs
    objects: 0 objects, 0 bytes
    usage:   2973 MB used, 8185 GB / 8187 GB avail
    pgs:     100.000% pgs not active
             338 undersized+peered

Does the part I marked above (HEALTH_WARN with the inactive and undersized pgs) mean something is wrong with the Ceph state, and is that what leads to the readiness probe failure (Get http://10.97.38.77:8088/: dial tcp 10.97.38.77:8088: getsockopt: connection refused)?

Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com
Have a look if ceph is healthy in general with following command (cluster status) kubectl exec –it –n ceph ceph-mon-h2gsm -- ceph -s I still see that you have 2 instances of ceph-mgr from your terminal outputs ceph-mgr-6dc44fc75b-jtv7w 0/1 Pending 0 22h ceph-mgr-6dc44fc75b-jvstm 1/1 Running 0 22h There are 2 ceph-client charts, 1 for initial single node/genesis, and ceph-client-update for final state (when additional nodes join). Feel free to explore https://github.com/openstack/airship-treasuremap/tree/master/global/software/charts/ucp/ceph I think simplest way for you to get ceph running (if you are using single control plane) would be to just stay with initial ceph as it will also keep crush rules and other settings as needed (final state requires 3 domains/hosts) I would say change https://github.com/openstack/airship-treasuremap/blob/master/global/software/manifests/full-site.yaml#L22 From `ucp-ceph-update` to just `ucp-ceph` instead; this will keep using initial single node/genesis ceph and keep it healthy. For RGW – is similar – you will need to set it to 1 replica (right now there is no site override so you would need to craft one or set on global level) https://github.com/openstack/airship-treasuremap/blob/master/global/software/charts/ucp/ceph/ceph-rgw.yaml#L133 Kindly, Kaspars From: Tu, Qiaolin (NSB - CN/Hangzhou) [mailto:qiaolin.tu at nokia-sbell.com] Sent: Thursday, November 8, 2018 1:25 AM To: SKELS, KASPARS >; Li, Maxwell (NSB - CN/Hangzhou) > Cc: GORSHUNOV, ROMAN >; MEADOWS, ALAN >; MCEUEN, MATT >; PACHECO, RODOLFO J >; 'airship-discuss at lists.airshipit.org' > Subject: RE: Airship installation Questions Hi, Thanks very much for your reply. I adjust the replica count on site level (set 1, instead of 2) and ceph-rgw pod still have getsockopt: connection refused error. Below is the log detail: root at cab23-r720-11:~# kubectl describe pod ceph-rgw-5b6645c456-rsv4k -n ceph Name: ceph-rgw-5b6645c456-rsv4k Namespace: ceph Node: cab23-r720-11/10.23.22.11 Start Time: Wed, 07 Nov 2018 23:07:06 +0000 Labels: application=ceph component=rgw pod-template-hash=1622017012 release_group=airship-ucp-ceph-rgw Annotations: configmap-bin-hash=aee34fa624622fb03c48b85c014e805a1012f74f772bc0f3a1e91597839ce49e configmap-etc-client-hash=38eb0b2eb23d31bcd3c4238c2bf9f6b8acd335659d94b3fcfff6c333ed1339b2 Status: Running IP: 10.97.38.77 Controlled By: ReplicaSet/ceph-rgw-5b6645c456 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning BackOff 5m (x737 over 7h) kubelet, cab23-r720-11 Back-off restarting failed container Warning Unhealthy 39s (x1154 over 7h) kubelet, cab23-r720-11 Readiness probe failed: Get http://10.97.38.77:8088/: dial tcp 10.97.38.77:8088: getsockopt: connection refused root at cab23-r720-11:~# kubectl logs -f ceph-rgw-5b6645c456-rsv4k -n ceph + export LC_ALL=C + LC_ALL=C + : 0 ++ uname -n + : ceph-rgw-5b6645c456-rsv4k + : '' + : '' + : 0 + : 9000 + : 0.0.0.0 + : /etc/ceph/ceph.client.admin.keyring + : /var/lib/ceph/radosgw/ceph-rgw-5b6645c456-rsv4k/keyring + : /var/lib/ceph/bootstrap-rgw/ceph.keyring + [[ ! -e /etc/ceph/ceph.conf ]] + '[' 0 -eq 1 ']' + '[' '!' 
-e /var/lib/ceph/radosgw/ceph-rgw-5b6645c456-rsv4k/keyring ']' + RGW_FRONTENDS='civetweb port=8088' + '[' 0 -eq 1 ']' + /usr/bin/radosgw --cluster ceph --setuser ceph --setgroup ceph -d -n client.rgw.ceph-rgw-5b6645c456-rsv4k -k /var/lib/ceph/radosgw/ceph-rgw-5b6645c456-rsv4k/keyring --rgw-socket-path= --rgw-zonegroup= --rgw-zone= '--rgw-frontends=civetweb port=8088' 2018-11-08 06:53:53.769321 7f3c7826de80 0 deferred set uid:gid to 64045:64045 (ceph:ceph) 2018-11-08 06:53:53.769371 7f3c7826de80 0 ceph version 12.2.3 (2dab17a455c09584f2a85e6b10888337d1ec8949) luminous (stable), process (unknown), pid 8 root at cab23-r720-11:~# kubectl get pods -n ceph NAME READY STATUS RESTARTS AGE airship-ucp-ceph-provisioners-ceph-ns-key-generator-8vm8b 0/1 Completed 0 20h ceph-bootstrap-mp5tn 0/1 Completed 0 22h ceph-cephfs-client-key-generator-4sqrd 0/1 Completed 0 20h ceph-cephfs-provisioner-676684f6bd-48fq8 1/1 Running 0 20h ceph-cephfs-provisioner-676684f6bd-mqwnj 1/1 Running 0 20h ceph-mds-6bfb74d9c7-gqgtl 1/1 Running 0 22h ceph-mds-6bfb74d9c7-sk4pw 1/1 Running 0 22h ceph-mds-keyring-generator-m2rjj 0/1 Completed 0 22h ceph-mgr-6dc44fc75b-jtv7w 0/1 Pending 0 22h ceph-mgr-6dc44fc75b-jvstm 1/1 Running 0 22h ceph-mgr-keyring-generator-m56p5 0/1 Completed 0 22h ceph-mon-check-6db6b569b6-flg76 1/1 Running 0 22h ceph-mon-h2gsm 1/1 Running 0 21h ceph-mon-keyring-generator-5phjm 0/1 Completed 0 22h ceph-osd-default-64779b8c-c74ms 1/1 Running 0 22h ceph-osd-default-6ea9de2c-hn4tm 1/1 Running 0 22h ceph-osd-default-70a54190-t54l7 1/1 Running 0 22h ceph-osd-default-7544b6da-mxfj6 1/1 Running 1 22h ceph-osd-default-7cfc44c1-vbqs4 1/1 Running 0 22h ceph-osd-default-83945928-66m9s 1/1 Running 0 22h ceph-osd-default-be8e8cc4-2vnxb 1/1 Running 0 22h ceph-osd-default-f9249fa9-8gw25 1/1 Running 0 22h ceph-osd-keyring-generator-fnrgk 0/1 Completed 0 22h ceph-rbd-pool-4dzjz 0/1 Completed 0 22h ceph-rbd-provisioner-84bc5c88c7-smfcn 1/1 Running 0 20h ceph-rbd-provisioner-84bc5c88c7-vtc5g 1/1 Running 0 20h ceph-rgw-5b6645c456-rsv4k 0/1 Running 74 8h ceph-rgw-5b6645c456-sgzpt 0/1 Running 74 8h ceph-rgw-storage-init-pqf6q 0/1 Completed 0 8h ceph-storage-keys-generator-9fx4k 0/1 Completed 0 22h ingress-65dc849968-9zlqn 1/1 Running 0 22h ingress-65dc849968-cc8xp 1/1 Running 0 22h ingress-error-pages-796b76c856-dnpcp 1/1 Running 0 22h ingress-error-pages-796b76c856-tt9x4 1/1 Running 0 22h Best Regards! Qiaolin Tu NSB MN 5G ECE HZ CN2 SG04 Mobile: +86 138 057 59684 E-Mail: qiaolin.tu at nokia-sbell.com From: SKELS, KASPARS > Sent: Wednesday, November 07, 2018 4:28 AM To: Tu, Qiaolin (NSB - CN/Hangzhou) >; Li, Maxwell (NSB - CN/Hangzhou) > Cc: GORSHUNOV, ROMAN >; MEADOWS, ALAN >; MCEUEN, MATT >; PACHECO, RODOLFO J >; 'airship-discuss at lists.airshipit.org' > Subject: RE: Airship installation Questions Hi Qiaolin/Maxwell, it seems like you may be using 1 control node for your deployment. The ceph-mgr chart is set to have 2 replicas and is trying to launch them (here you get a port clash). 
Generally speaking, the manifests (by default) are targeted for 3 control node (HA) configuration, including ceph to reflect production-like HA deployment https://github.com/openstack/airship-treasuremap/blob/master/global/software/charts/ucp/ceph/ceph-client-update.yaml#L145 That said, you may adjust the replica count on site level (set 1, instead of 2) as well, as well as OSD count here by doing overrides (matching how many OSDs you have) https://github.com/openstack/airship-treasuremap/blob/master/site/airship-seaworthy/software/charts/ucp/ceph/ceph-client-update.yaml And also, adjustments in other charts may be required to set this up for a single control plane node, here is more info on reference site https://airship-treasuremap.readthedocs.io/en/latest/seaworthy.html Kindly, Kaspars From: Kaspars Skels [mailto:kaspars.skels at gmail.com] Sent: Tuesday, November 6, 2018 2:10 PM To: qiaolin.tu at nokia-sbell.com Cc: MEADOWS, ALAN >; MONTEIRO, FELIPE C >; BARTRA, RICK >; REDDY, CHINASUBBA >; GORSHUNOV, ROMAN >; KHUNTIA, SOUMITRA >; KABANOV, DMITRII >; Mark Burnett >; VOLKOV, ANDREY >; avolkov at mirantis.com; Chris Wedgwood >; PALLAV GUPTA >; GUPTA, SANGEET >; zuul at review01.openstack.org; HUSSEY, SCOTT T >; PACHECO, RODOLFO J >; maxwell.li at nokia-sbell.com; SKELS, KASPARS > Subject: Re: Airship installation Questions On Tue, Nov 6, 2018 at 3:42 AM Tu, Qiaolin (NSB - CN/Hangzhou) > wrote: Hi, I also found that pod(ceph-mgr-6dc44fc75b-25n24) is always on the pending state. It report error as below: Warning FailedScheduling 1m (x125 over 36m) default-scheduler 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. root at cab23-r720-11:~# kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE ceph airship-ucp-ceph-provisioners-ceph-ns-key-generator-n5vzj 0/1 Completed 0 20m ceph ceph-cephfs-client-key-generator-656rd 0/1 Completed 0 20m ceph ceph-cephfs-provisioner-676684f6bd-trf5h 1/1 Running 0 20m ceph ceph-cephfs-provisioner-676684f6bd-zfk2s 1/1 Running 0 20m ceph ceph-mds-6bfb74d9c7-btbt5 1/1 Running 0 37m ceph ceph-mds-6bfb74d9c7-qw4xl 1/1 Running 0 37m ceph ceph-mds-keyring-generator-dlqjt 0/1 Completed 0 53m ceph ceph-mgr-6dc44fc75b-25n24 0/1 Pending 0 37m ceph ceph-mgr-6dc44fc75b-42mqz 1/1 Running 0 37m ceph ceph-mgr-keyring-generator-7dftx 0/1 Completed 0 53m ceph ceph-mon-cfrf6 1/1 Running 0 41m ceph ceph-mon-check-6db6b569b6-67vvm 1/1 Running 0 53m ceph ceph-mon-keyring-generator-55ps8 0/1 Completed 0 53m root at cab23-r720-11:~# kubectl describe pod ceph-mgr-6dc44fc75b-25n24 -n ceph Name: ceph-mgr-6dc44fc75b-25n24 Namespace: ceph Node: Labels: application=ceph component=mgr pod-template-hash=2870097316 release_group=airship-ucp-ceph-client Annotations: configmap-bin-hash=d23610f5cc67014b596eb45b9d47ddb97997d6ec5fd3289fb264429a260f39a9 configmap-etc-client-hash=0174be85f29398aa5cea5fccb9537e0e0f25ea2d2f280182e9e2506a1be1b115 Status: Pending IP: Controlled By: ReplicaSet/ceph-mgr-6dc44fc75b Init Containers: init: Image: quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 Port: Host Port: Command: kubernetes-entrypoint Environment: POD_NAME: ceph-mgr-6dc44fc75b-25n24 (v1:metadata.name) NAMESPACE: ceph (v1:metadata.namespace) INTERFACE_NAME: eth0 PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/ DEPENDENCY_SERVICE: ceph:ceph-mon DEPENDENCY_JOBS: ceph-storage-keys-generator,ceph-mgr-keyring-generator DEPENDENCY_DAEMONSET: DEPENDENCY_CONTAINER: DEPENDENCY_POD_JSON: COMMAND: echo done Mounts: 
/var/run/secrets/kubernetes.io/serviceaccount from ceph-mgr-token-pgbwp (ro) ceph-init-dirs: Image: docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04 Port: Host Port: Command: /tmp/init-dirs.sh Environment: CLUSTER: ceph Mounts: /etc/ceph from pod-etc-ceph (rw) /run from pod-run (rw) /tmp/init-dirs.sh from ceph-client-bin (ro) /var/lib/ceph from pod-var-lib-ceph (rw) /var/run/secrets/kubernetes.io/serviceaccount from ceph-mgr-token-pgbwp (ro) Containers: ceph-mgr: Image: docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04 Ports: 7000/TCP, 9283/TCP Host Ports: 7000/TCP, 9283/TCP Command: /mgr-start.sh Liveness: exec [/tmp/mgr-check.sh liveness] delay=30s timeout=5s period=10s #success=1 #failure=3 Readiness: exec [/tmp/mgr-check.sh readiness] delay=30s timeout=5s period=10s #success=1 #failure=3 Environment: CLUSTER: ceph ENABLED_MODULES: restful status prometheus Mounts: /etc/ceph from pod-etc-ceph (rw) /etc/ceph/ceph.client.admin.keyring from ceph-client-admin-keyring (ro) /etc/ceph/ceph.conf from ceph-client-etc (ro) /mgr-start.sh from ceph-client-bin (ro) /run from pod-run (rw) /tmp/mgr-check.sh from ceph-client-bin (ro) /var/lib/ceph from pod-var-lib-ceph (rw) /var/lib/ceph/bootstrap-mgr/ceph.keyring from ceph-bootstrap-mgr-keyring (rw) /var/run/secrets/kubernetes.io/serviceaccount from ceph-mgr-token-pgbwp (ro) Conditions: Type Status PodScheduled False Volumes: pod-etc-ceph: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: ceph-client-bin: Type: ConfigMap (a volume populated by a ConfigMap) Name: ceph-client-bin Optional: false ceph-client-etc: Type: ConfigMap (a volume populated by a ConfigMap) Name: ceph-client-etc Optional: false pod-var-lib-ceph: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: pod-run: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: Memory ceph-client-admin-keyring: Type: Secret (a volume populated by a Secret) SecretName: ceph-client-admin-keyring Optional: false ceph-bootstrap-mgr-keyring: Type: Secret (a volume populated by a Secret) SecretName: ceph-bootstrap-mgr-keyring Optional: false ceph-mgr-token-pgbwp: Type: Secret (a volume populated by a Secret) SecretName: ceph-mgr-token-pgbwp Optional: false QoS Class: BestEffort Node-Selectors: ceph-mgr=enabled Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 1m (x125 over 36m) default-scheduler 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. Best Regards! Qiaolin Tu NSB MN 5G ECE HZ CN2 SG04 Mobile: +86 138 057 59684 E-Mail: qiaolin.tu at nokia-sbell.com From: Tu, Qiaolin (NSB - CN/Hangzhou) Sent: Tuesday, November 06, 2018 1:46 PM To: 'MEADOWS, ALAN' >; Kaspars Skels > Cc: MONTEIRO, FELIPE C >; BARTRA, RICK >; REDDY, CHINASUBBA >; GORSHUNOV, ROMAN >; KHUNTIA, SOUMITRA >; KABANOV, DMITRII >; mark.m.burnett at gmail.com; VOLKOV, ANDREY >; avolkov at mirantis.com; cw at f00f.org; pallavgupta84 at gmail.com; GUPTA, SANGEET >; pete at port.direct; zuul at review01.openstack.org; HUSSEY, SCOTT T >; PACHECO, RODOLFO J >; Li, Maxwell (NSB - CN/Hangzhou) > Subject: Airship installation Questions Hi, Thanks very much for your kind support. I also encountered a problem when installing multi-node Airship. It is blocked by 2 ceph-rgw pods, which crash and restart again and again.
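For a crash-looping pod like this, the state, the scheduling events, and the logs of the previously crashed container can be pulled with commands along these lines (the pod name is just the one from this environment):

kubectl get pods -n ceph -o wide
kubectl describe pod ceph-rgw-6ff5ff866d-4x66n -n ceph
kubectl logs ceph-rgw-6ff5ff866d-4x66n -n ceph --previous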
Pod error logs as below: root at cab23-r720-11:~# kubectl describe pod ceph-rgw-6ff5ff866d-4x66n -n ceph Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 1m default-scheduler Successfully assigned ceph-rgw-6ff5ff866d-4x66n to cab23-r720-11 Normal SuccessfulMountVolume 1m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "pod-etc-ceph" Normal SuccessfulMountVolume 1m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "pod-var-lib-ceph" Normal SuccessfulMountVolume 1m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "pod-run" Normal SuccessfulMountVolume 1m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "ceph-rgw-etc" Normal SuccessfulMountVolume 1m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "ceph-rgw-bin" Normal SuccessfulMountVolume 1m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "ceph-bootstrap-rgw-keyring" Normal SuccessfulMountVolume 1m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "ceph-rgw-token-sjg9x" Normal Pulled 1m kubelet, cab23-r720-11 Container image "quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine Normal Started 1m kubelet, cab23-r720-11 Started container Normal Pulled 1m kubelet, cab23-r720-11 Container image "docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04" already present on machine Normal Created 1m kubelet, cab23-r720-11 Created container Normal Started 1m kubelet, cab23-r720-11 Started container Warning Unhealthy 18s (x6 over 1m) kubelet, cab23-r720-11 Readiness probe failed: Get http://10.97.38.105:8088/: dial tcp 10.97.38.105:8088: getsockopt: connection refused Best Regards! Qiaolin Tu NSB MN 5G ECE HZ CN2 SG04 Mobile: +86 138 057 59684 E-Mail: qiaolin.tu at nokia-sbell.com From: MEADOWS, ALAN > Sent: Saturday, November 03, 2018 5:39 AM To: Kaspars Skels >; Li, Maxwell (NSB - CN/Hangzhou) > Cc: MONTEIRO, FELIPE C >; BARTRA, RICK >; REDDY, CHINASUBBA >; GORSHUNOV, ROMAN >; KHUNTIA, SOUMITRA >; KABANOV, DMITRII >; mark.m.burnett at gmail.com; VOLKOV, ANDREY >; avolkov at mirantis.com; cw at f00f.org; pallavgupta84 at gmail.com; GUPTA, SANGEET >; pete at port.direct; zuul at review01.openstack.org; HUSSEY, SCOTT T >; Tu, Qiaolin (NSB - CN/Hangzhou) >; PACHECO, RODOLFO J > Subject: RE: Airship Version Questions Hi Maxwell! Great to see additional people trying Airship out. Treasuremap does change quite frequently. In fact, this is from a lot of the great work Kaspars has been doing: building a CI/CD pipeline to not only keep treasuremap fresh and referencing the latest working versions of Airship components, but also validating that the YAML documents are able to provision a multi-node baremetal physical environment by way of third-party gates. This gives us a high degree of confidence that, no matter what version of treasuremap you have decided to start working with to deploy your own environments, it should work. To be sure, we do understand that when getting started it is nice to work with a stable target that isn't constantly changing. Kaspars has introduced tagging to provide a human-readable reference (using dates).
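For instance, pinning a checkout to one of those tags is an ordinary git operation (v18.11.01 is the tag used as an example further down in this message):

git clone https://github.com/openstack/airship-treasuremap
cd airship-treasuremap
git checkout v18.11.01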
These are not on any particular cadence, but usually follow more in-depth testing beyond just the automated CI/CD using the manifests: https://github.com/openstack/airship-treasuremap/releases For right now, with regard to pegleg, the best way to leverage the version of pegleg that has been tested with the treasuremap manifests is to pull the one used in the baremetal CI/CD gate test directly from the target tag's Jenkinsfile. For example, for v18.11.01: https://github.com/openstack/airship-treasuremap/blob/v18.11.01/tools/gate/Jenkinsfile#L15 Please feel free to get on the airship mailing list (airshipit.org) or our IRC channel to connect with a wider audience! Alan Meadows From: Kaspars Skels [mailto:kaspars.skels at gmail.com] Sent: Friday, November 02, 2018 2:27 PM To: maxwell.li at nokia-sbell.com Cc: MONTEIRO, FELIPE C >; BARTRA, RICK >; REDDY, CHINASUBBA >; GORSHUNOV, ROMAN >; KHUNTIA, SOUMITRA >; KABANOV, DMITRII >; mark.m.burnett at gmail.com; VOLKOV, ANDREY >; avolkov at mirantis.com; cw at f00f.org; pallavgupta84 at gmail.com; GUPTA, SANGEET >; pete at port.direct; zuul at review01.openstack.org; HUSSEY, SCOTT T >; qiaolin.tu at nokia-sbell.com; MEADOWS, ALAN >; PACHECO, RODOLFO J > Subject: Re: Airship Version Questions + Alan/Rodolfo On Thu, Nov 1, 2018 at 3:37 AM Li, Maxwell (NSB - CN/Hangzhou) > wrote: Hi Airship Team: I have a problem when I deploy airship on my server. There are new commits in airship-pegleg and airship-treasuremap almost every day, and there are no tags or branches in these code repositories. Could I get a commit ID for these repositories so that I can deploy Airship? Thanks a lot! Best Regards! Maxwell Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Thu Nov 8 19:42:19 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 08 Nov 2018 11:42:19 -0800 Subject: [Airship-discuss] OpenDev, the future of OpenStack Infra Message-ID: <1541706139.393458.1570439656.675141A3@webmail.messagingengine.com> Hello everyone, Sorry for another cross post so soon. In the land before time we had Stackforge. Stackforge gave non-OpenStack projects a place to live with their own clearly defined "not OpenStack" namespacing. As the wheel of time spun we realized that many Stackforge projects were becoming OpenStack projects and we would have to migrate them. This involved Gerrit downtimes to rename things safely. To ease the pain of this, the TC decided that all projects developed in the OpenStack Infrastructure could live under the OpenStack git namespace to simplify migrations. Unfortunately this had the effect of creating confusion over which projects were officially a part of OpenStack, and whether or not projects that were not OpenStack could use our project hosting. Stackforge lived on under a different name, "unofficial project hosting", but many potential infrastructure users either didn't understand this or didn't want that strong association to OpenStack for their project hosting [0]. It turns out that we want to be able to host OpenStack and non-OpenStack projects together, without confusion, in a way that makes all of the projects involved happy. In an effort to make this a reality the OpenStack Infra team has been working through a process to rename itself to make it clear that our awesome project infrastructure and open collaboration tooling is community run, not just for OpenStack, but for others that want to be involved.
To this end we've acquired the opendev.org domain which will allow us to host services under a neutral name as the OpenDev Infrastructure team. The OpenStack community will continue to be the largest and a primary user for the OpenDev Infrastructure team, but our hope in making our infrastructure services more inclusive is that we'll also attract new contributors, which will ultimately benefit OpenStack and other open infrastructure projects. Our goals for OpenDev are to: * Encourage additional active infrastructure contributors to help us scale. Make it clear that this is community-run tooling & infrastructure and everyone can get involved. * Make open source collaboration tools and project infrastructure more accessible to those that want it. * Have exposure to and dogfooding of OpenStack clouds as viable open source cloud providers. * Enable more projects to take advantage of the OpenStack-pioneered model of development and collaboration, including recommended practices like code review and gating. * Help build relationships with new and adjacent open source projects and create an inclusive space for collaboration and open source development. Much of this is still in the early planning stages. This is the infrastructure team's current thinking on the subject, but understand we have an existing community from which we'd like to see buy-in and involvement. To that end we have tried to compile a list of expected FAQ/Q&A information below, but feel free to follow up either on this thread or with me directly for anything we haven't considered already. Any transition will be slow and considered, so don't expect everything to change overnight. But don't be surprised if you run into some new HTTP redirects as we reorganize the names under which services run. We'll also be sure to keep you informed on any major (and probably minor) transition steps so that they won't surprise you. Thank you, Clark [0] It should be noted that some projects did not mind this and hosted with OpenStack Infra anyway. ARA is an excellent example of this. FAQ * What is OpenDev? OpenDev is community-run tools and infrastructure services for collaboratively developing open source software. The OpenDev infrastructure team is the community of people who operate the infrastructure under the umbrella of the OpenStack Foundation. * What services are you offering? What is the expected timeline? In the near-term we expect to transition simple services like etherpad hosting to the OpenDev domain. It will take us months and potentially up to a year to transition key infrastructure pieces like Git and Gerrit. Example services managed by the team today include etherpad, wiki, the zuul and nodepool CI system, git and gerrit, and other minor systems like pbx conferencing and survey tools. * Where will these services live? We've acquired opendev.org and are planning to set up DNS hosting very soon. We will post a simple information page and FAQ on the website and build it out as necessary over time. * Why are you changing from the OpenStack infrastructure team to the OpenDev infrastructure team? In the same way we want to signal that our services are not strictly for OpenStack projects, and that not every project using our services is an official part of OpenStack, we want to make it clear that our team also serves this larger community. * Who should use OpenDev services? Does it have to be projects related to OpenStack, or any open source projects?
In short, open source contributors who share our community values, especially those who might want to help contribute to improving and maintaining OpenDev infrastructure over time. Projects using OpenDev hosted git and gerrit services should have an OSI-approved license. * Will the OpenStack projects live at git.opendev.org? All projects hosted with OpenDev will live at git.opendev.org. For backwards compatibility reasons, at the very least git.openstack.org will be an alias for git.opendev.org for the foreseeable future. The same is true of the other existing whitelabel git domains such as git.starlingx.io and git.zuul-ci.org. Whether or not other 'whitelabel' domains are created is an open question. Given a neutral domain name, the desire for such sites may not be as strong. * Does this mean the infrastructure team will be spending less time on OpenStack? OpenStack will continue to be the largest and a primary user for the OpenDev Infrastructure team, and we expect that our work will benefit all users. There will be additional effort required as we transition to the new namespace and reorganize, but over the long term we hope this inclusive approach will help us attract new contributors and ultimately benefit OpenStack. * Are OpenStack cloud test resources the only resources that will be used? At the present time all of the donated resources come to us from a combination of OpenStack Public and Private clouds. Nobody from any of the proprietary clouds has asked to donate resources to us. It is conceivable that the shift to OpenDev could open the door to those cloud providers wanting to donate some cloud resources. Assuming nodepool supports talking to those clouds, it is certainly a possibility, but at the moment it's all speculation. * Is this name associated with the OpenDev Conferences (opendevconf.com) that OpenStack Foundation has previously organized? Yes! They are related, albeit indirectly. To quote from the conference's promotional site, "the focus is on bringing together composable open infrastructure technologies across communities and industries." The possibility of cross-promotional tie-ins could prove synergistic, since the conference and the collaboratory we're building share a lot of similar values and goals, and are ultimately supported by the same donors, community and foundation. * How will OpenDev be governed? Will the OpenStack TC retain oversight over it? The OpenDev governance discussion is just getting started, but like all OSF-supported initiatives, OpenDev follows the Four Opens, so it will ultimately be directly governed by OpenDev contributors. While it won't be under the sole oversight of the OpenStack Technical Committee anymore, OpenDev users (in particular OpenStack) should be represented in the governance model so that they can feed back their requirements to the OpenDev team. From matthew.heler at hotmail.com Thu Nov 8 22:18:17 2018 From: matthew.heler at hotmail.com (Matthew H) Date: Thu, 8 Nov 2018 22:18:17 +0000 Subject: [Airship-discuss] Airship installation Questions Message-ID: Greetings, Could you run the following commands from a MON pod: ceph osd tree ceph osd dump Also, how many nodes did you deploy on? One node, or more than one? Thanks, -------------- next part -------------- An HTML attachment was scrubbed...
URL: From qiaolin.tu at nokia-sbell.com Fri Nov 9 09:51:46 2018 From: qiaolin.tu at nokia-sbell.com (Tu, Qiaolin (NSB - CN/Hangzhou)) Date: Fri, 9 Nov 2018 09:51:46 +0000 Subject: [Airship-discuss] Airship installation Questions In-Reply-To: References: Message-ID: Hi, I deployed only 1 master node(1 genesis node + 1 master node), attachment is my yaml files. Thanks very much! root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 8.00000 root default -2 8.00000 host cab23-r720-11 0 hdd 1.00000 osd.0 up 1.00000 1.00000 1 hdd 1.00000 osd.1 up 1.00000 1.00000 2 hdd 1.00000 osd.2 up 1.00000 1.00000 3 hdd 1.00000 osd.3 up 1.00000 1.00000 4 hdd 1.00000 osd.4 up 1.00000 1.00000 5 hdd 1.00000 osd.5 up 1.00000 1.00000 6 hdd 1.00000 osd.6 up 1.00000 1.00000 7 hdd 1.00000 osd.7 up 1.00000 1.00000 root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd dump epoch 231 fsid 7b7576f4-3358-4668-9112-100440079807 created 2018-11-07 09:08:39.208517 modified 2018-11-09 09:40:10.639284 flags sortbitwise,recovery_deletes,purged_snapdirs crush_version 21 full_ratio 0.95 backfillfull_ratio 0.9 nearfull_ratio 0.85 require_min_compat_client jewel min_compat_client hammer require_osd_release luminous pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 40 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rbd pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 223 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs pool 3 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 223 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs pool 4 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 72 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 5 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 83 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 6 'default.rgw.data.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 93 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 7 'default.rgw.gc' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 104 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 8 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 114 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 9 'default.rgw.intent-log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 124 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 10 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 135 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 11 'default.rgw.usage' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 146 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 
12 'default.rgw.users.keys' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 156 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 13 'default.rgw.users.email' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 167 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 14 'default.rgw.users.swift' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 177 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 15 'default.rgw.users.uid' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 188 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 16 'default.rgw.buckets.extra' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 199 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 17 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 211 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 18 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 221 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw max_osd 8 osd.0 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [8,228) 10.23.23.11:6800/15964 10.23.23.11:6814/2015964 10.23.23.11:6822/2015964 10.23.23.11:6823/2015964 exists,up fea47975-0810-47c9-ad43-e76ce81764a1 osd.1 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [9,228) 10.23.23.11:6808/16162 10.23.23.11:6807/2016162 10.23.23.11:6819/2016162 10.23.23.11:6801/2016162 exists,up cec98e14-83d5-4785-b8a7-a6f201170ac4 osd.2 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [9,228) 10.23.23.11:6804/16160 10.23.23.11:6806/2016160 10.23.23.11:6811/2016160 10.23.23.11:6834/2016160 exists,up 97315996-1cb9-4942-9786-8edc5a3862e3 osd.3 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [10,228) 10.23.23.11:6812/16588 10.23.23.11:6815/2016588 10.23.23.11:6805/2016588 10.23.23.11:6817/2016588 exists,up 49082e4c-7827-4c4c-85c9-16ea134289b4 osd.4 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [13,228) 10.23.23.11:6816/17053 10.23.23.11:6803/2017053 10.23.23.11:6813/2017053 10.23.23.11:6821/2017053 exists,up 8f9a5a7d-c97d-40c6-912e-33b6ab68d9e7 osd.5 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [16,228) 10.23.23.11:6820/17600 10.23.23.11:6810/2017600 10.23.23.11:6809/2017600 10.23.23.11:6818/2017600 exists,up b4602bfb-075f-4303-9f76-946576c4ef43 osd.6 up in weight 1 up_from 16 up_thru 212 down_at 0 last_clean_interval [0,0) 10.23.23.11:6824/17601 10.23.23.11:6825/17601 10.23.23.11:6826/17601 10.23.23.11:6827/17601 exists,up 2a853bad-7d97-43de-85f3-96e0f9e16c0d osd.7 up in weight 1 up_from 20 up_thru 212 down_at 0 last_clean_interval [0,0) 10.23.23.11:6828/18682 10.23.23.11:6829/18682 10.23.23.11:6830/18682 10.23.23.11:6831/18682 exists,up dfee9a9c-7587-421b-a0dc-eda2314174d9 root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph health detail HEALTH_WARN Reduced data availability: 338 pgs inactive; Degraded data redundancy: 338 pgs undersized PG_AVAILABILITY Reduced data availability: 
338 pgs inactive pg 1.47 is stuck inactive for 174532.928425, current state undersized+peered, last acting [0] pg 1.48 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.49 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.4a is stuck inactive for 174532.928425, current state undersized+peered, last acting [4] pg 1.4b is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.4c is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.4d is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.4e is stuck inactive for 174532.928425, current state undersized+peered, last acting [4] pg 1.4f is stuck inactive for 174532.928425, current state undersized+peered, last acting [6] pg 1.50 is stuck inactive for 174532.928425, current state undersized+peered, last acting [0] pg 1.51 is stuck inactive for 174532.928425, current state undersized+peered, last acting [2] pg 1.52 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.53 is stuck inactive for 174532.928425, current state undersized+peered, last acting [6] pg 1.54 is stuck inactive for 174532.928425, current state undersized+peered, last acting [4] pg 1.55 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.56 is stuck inactive for 174532.928425, current state undersized+peered, last acting [2] pg 1.57 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.58 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.59 is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.5a is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.5b is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.5c is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.5d is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.5e is stuck inactive for 174532.928425, current state undersized+peered, last acting [0] pg 1.5f is stuck inactive for 174532.928425, current state undersized+peered, last acting [6] pg 18.40 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.41 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.42 is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.43 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.44 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.45 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.46 is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.47 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.48 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.49 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.4a is stuck inactive for 174337.349457, current state undersized+peered, last acting [0] pg 18.4b is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.4c is stuck inactive for 174337.349457, 
current state undersized+peered, last acting [6] pg 18.4d is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.4e is stuck inactive for 174337.349457, current state undersized+peered, last acting [0] pg 18.4f is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.54 is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.55 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.58 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.59 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.5a is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.5b is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.5c is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.5d is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.5e is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.5f is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] PG_DEGRADED Degraded data redundancy: 338 pgs undersized pg 1.47 is stuck undersized for 175.198010, current state undersized+peered, last acting [0] pg 1.48 is stuck undersized for 175.208624, current state undersized+peered, last acting [5] pg 1.49 is stuck undersized for 175.220652, current state undersized+peered, last acting [3] pg 1.4a is stuck undersized for 175.187294, current state undersized+peered, last acting [4] pg 1.4b is stuck undersized for 175.208051, current state undersized+peered, last acting [5] pg 1.4c is stuck undersized for 174531.317358, current state undersized+peered, last acting [7] pg 1.4d is stuck undersized for 174531.318742, current state undersized+peered, last acting [7] pg 1.4e is stuck undersized for 175.202431, current state undersized+peered, last acting [4] pg 1.4f is stuck undersized for 174531.331123, current state undersized+peered, last acting [6] pg 1.50 is stuck undersized for 175.207213, current state undersized+peered, last acting [0] pg 1.51 is stuck undersized for 175.215944, current state undersized+peered, last acting [2] pg 1.52 is stuck undersized for 175.209013, current state undersized+peered, last acting [5] pg 1.53 is stuck undersized for 174531.331587, current state undersized+peered, last acting [6] pg 1.54 is stuck undersized for 175.202884, current state undersized+peered, last acting [4] pg 1.55 is stuck undersized for 175.222033, current state undersized+peered, last acting [3] pg 1.56 is stuck undersized for 175.215670, current state undersized+peered, last acting [2] pg 1.57 is stuck undersized for 175.196356, current state undersized+peered, last acting [5] pg 1.58 is stuck undersized for 175.218886, current state undersized+peered, last acting [3] pg 1.59 is stuck undersized for 174531.316843, current state undersized+peered, last acting [7] pg 1.5a is stuck undersized for 175.209613, current state undersized+peered, last acting [5] pg 1.5b is stuck undersized for 175.219138, current state undersized+peered, last acting [3] pg 1.5c is stuck undersized for 174531.319395, current state undersized+peered, last acting [7] pg 1.5d is stuck undersized for 175.219426, current state undersized+peered, last acting [3] pg 1.5e is stuck undersized for 175.219873, current 
state undersized+peered, last acting [0] pg 1.5f is stuck undersized for 174531.331739, current state undersized+peered, last acting [6] pg 18.40 is stuck undersized for 175.211281, current state undersized+peered, last acting [1] pg 18.41 is stuck undersized for 174335.530906, current state undersized+peered, last acting [6] pg 18.42 is stuck undersized for 175.189316, current state undersized+peered, last acting [5] pg 18.43 is stuck undersized for 174335.529402, current state undersized+peered, last acting [6] pg 18.44 is stuck undersized for 174335.520749, current state undersized+peered, last acting [7] pg 18.45 is stuck undersized for 174335.520646, current state undersized+peered, last acting [7] pg 18.46 is stuck undersized for 175.204679, current state undersized+peered, last acting [4] pg 18.47 is stuck undersized for 175.211629, current state undersized+peered, last acting [1] pg 18.48 is stuck undersized for 175.213907, current state undersized+peered, last acting [1] pg 18.49 is stuck undersized for 174335.521334, current state undersized+peered, last acting [7] pg 18.4a is stuck undersized for 175.218699, current state undersized+peered, last acting [0] pg 18.4b is stuck undersized for 174335.527174, current state undersized+peered, last acting [7] pg 18.4c is stuck undersized for 174335.528996, current state undersized+peered, last acting [6] pg 18.4d is stuck undersized for 175.204937, current state undersized+peered, last acting [4] pg 18.4e is stuck undersized for 175.219027, current state undersized+peered, last acting [0] pg 18.4f is stuck undersized for 174335.531066, current state undersized+peered, last acting [6] pg 18.54 is stuck undersized for 175.189185, current state undersized+peered, last acting [5] pg 18.55 is stuck undersized for 174335.531222, current state undersized+peered, last acting [6] pg 18.58 is stuck undersized for 174335.530357, current state undersized+peered, last acting [6] pg 18.59 is stuck undersized for 175.204978, current state undersized+peered, last acting [1] pg 18.5a is stuck undersized for 175.192362, current state undersized+peered, last acting [5] pg 18.5b is stuck undersized for 174335.531432, current state undersized+peered, last acting [6] pg 18.5c is stuck undersized for 174335.530149, current state undersized+peered, last acting [6] pg 18.5d is stuck undersized for 175.191412, current state undersized+peered, last acting [5] pg 18.5e is stuck undersized for 175.205141, current state undersized+peered, last acting [4] pg 18.5f is stuck undersized for 174335.527472, current state undersized+peered, last acting [7] root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph -s cluster: id: 7b7576f4-3358-4668-9112-100440079807 health: HEALTH_WARN Reduced data availability: 338 pgs inactive Degraded data redundancy: 338 pgs undersized services: mon: 1 daemons, quorum cab23-r720-11 mgr: cab23-r720-11(active) mds: cephfs-1/1/1 up {0=mds-ceph-mds-6bfb74d9c7-gqgtl=up:creating}, 1 up:standby osd: 8 osds: 8 up, 8 in data: pools: 18 pools, 338 pgs objects: 0 objects, 0 bytes usage: 3229 MB used, 8184 GB / 8187 GB avail pgs: 100.000% pgs not active 338 undersized+peered Best Regards! 
Qiaolin Tu NSB MN 5G ECE HZ CN2 SG04 Mobile: +86 138 057 59684 E-Mail: qiaolin.tu at nokia-sbell.com From: Matthew H Sent: Friday, November 09, 2018 6:18 AM To: airship-discuss at lists.airshipit.org Cc: Tu, Qiaolin (NSB - CN/Hangzhou) Subject: Re: [Airship-discuss] Airship installation Questions Greetings, Could you run the following commands from a MON pod: ceph osd tree ceph osd dump Also, how many nodes did you deploy on? One node, or more than one? Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: deploy_1_master.rar Type: application/octet-stream Size: 4791973 bytes Desc: deploy_1_master.rar URL: From matthew.heler at hotmail.com Fri Nov 9 14:42:47 2018 From: matthew.heler at hotmail.com (Matthew H) Date: Fri, 9 Nov 2018 14:42:47 +0000 Subject: [Airship-discuss] Airship installation Questions In-Reply-To: References: , Message-ID: Thanks, From what I can see, you need additional overrides set to run Ceph on a single node. The overrides you need are here [1]. Let me know if this helps get you in the right direction. [1] https://github.com/openstack/airship-in-a-bottle/blob/master/deployment_files/site/gate-multinode/software/charts/ucp/storage_provisioner/ceph.yaml#L173-L250 ________________________________ From: Tu, Qiaolin (NSB - CN/Hangzhou) Sent: Friday, November 9, 2018 4:51 AM To: Matthew H; airship-discuss at lists.airshipit.org Cc: Li, Maxwell (NSB - CN/Hangzhou) Subject: RE: [Airship-discuss] Airship installation Questions Hi, I deployed only 1 master node(1 genesis node + 1 master node), attachment is my yaml files. Thanks very much! root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 8.00000 root default -2 8.00000 host cab23-r720-11 0 hdd 1.00000 osd.0 up 1.00000 1.00000 1 hdd 1.00000 osd.1 up 1.00000 1.00000 2 hdd 1.00000 osd.2 up 1.00000 1.00000 3 hdd 1.00000 osd.3 up 1.00000 1.00000 4 hdd 1.00000 osd.4 up 1.00000 1.00000 5 hdd 1.00000 osd.5 up 1.00000 1.00000 6 hdd 1.00000 osd.6 up 1.00000 1.00000 7 hdd 1.00000 osd.7 up 1.00000 1.00000 root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd dump epoch 231 fsid 7b7576f4-3358-4668-9112-100440079807 created 2018-11-07 09:08:39.208517 modified 2018-11-09 09:40:10.639284 flags sortbitwise,recovery_deletes,purged_snapdirs crush_version 21 full_ratio 0.95 backfillfull_ratio 0.9 nearfull_ratio 0.85 require_min_compat_client jewel min_compat_client hammer require_osd_release luminous pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 40 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rbd pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 223 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs pool 3 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 223 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs pool 4 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 72 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 5 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 83 flags
hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 6 'default.rgw.data.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 93 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 7 'default.rgw.gc' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 104 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 8 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 114 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 9 'default.rgw.intent-log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 124 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 10 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 135 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 11 'default.rgw.usage' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 146 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 12 'default.rgw.users.keys' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 156 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 13 'default.rgw.users.email' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 167 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 14 'default.rgw.users.swift' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 177 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 15 'default.rgw.users.uid' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 188 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 16 'default.rgw.buckets.extra' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 199 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 17 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 211 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 18 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 221 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw max_osd 8 osd.0 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [8,228) 10.23.23.11:6800/15964 10.23.23.11:6814/2015964 10.23.23.11:6822/2015964 10.23.23.11:6823/2015964 exists,up fea47975-0810-47c9-ad43-e76ce81764a1 osd.1 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [9,228) 10.23.23.11:6808/16162 10.23.23.11:6807/2016162 10.23.23.11:6819/2016162 10.23.23.11:6801/2016162 exists,up cec98e14-83d5-4785-b8a7-a6f201170ac4 osd.2 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [9,228) 10.23.23.11:6804/16160 10.23.23.11:6806/2016160 10.23.23.11:6811/2016160 10.23.23.11:6834/2016160 exists,up 97315996-1cb9-4942-9786-8edc5a3862e3 osd.3 up in weight 1 up_from 229 up_thru 
229 down_at 228 last_clean_interval [10,228) 10.23.23.11:6812/16588 10.23.23.11:6815/2016588 10.23.23.11:6805/2016588 10.23.23.11:6817/2016588 exists,up 49082e4c-7827-4c4c-85c9-16ea134289b4 osd.4 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [13,228) 10.23.23.11:6816/17053 10.23.23.11:6803/2017053 10.23.23.11:6813/2017053 10.23.23.11:6821/2017053 exists,up 8f9a5a7d-c97d-40c6-912e-33b6ab68d9e7 osd.5 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [16,228) 10.23.23.11:6820/17600 10.23.23.11:6810/2017600 10.23.23.11:6809/2017600 10.23.23.11:6818/2017600 exists,up b4602bfb-075f-4303-9f76-946576c4ef43 osd.6 up in weight 1 up_from 16 up_thru 212 down_at 0 last_clean_interval [0,0) 10.23.23.11:6824/17601 10.23.23.11:6825/17601 10.23.23.11:6826/17601 10.23.23.11:6827/17601 exists,up 2a853bad-7d97-43de-85f3-96e0f9e16c0d osd.7 up in weight 1 up_from 20 up_thru 212 down_at 0 last_clean_interval [0,0) 10.23.23.11:6828/18682 10.23.23.11:6829/18682 10.23.23.11:6830/18682 10.23.23.11:6831/18682 exists,up dfee9a9c-7587-421b-a0dc-eda2314174d9 root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph health detail HEALTH_WARN Reduced data availability: 338 pgs inactive; Degraded data redundancy: 338 pgs undersized PG_AVAILABILITY Reduced data availability: 338 pgs inactive pg 1.47 is stuck inactive for 174532.928425, current state undersized+peered, last acting [0] pg 1.48 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.49 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.4a is stuck inactive for 174532.928425, current state undersized+peered, last acting [4] pg 1.4b is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.4c is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.4d is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.4e is stuck inactive for 174532.928425, current state undersized+peered, last acting [4] pg 1.4f is stuck inactive for 174532.928425, current state undersized+peered, last acting [6] pg 1.50 is stuck inactive for 174532.928425, current state undersized+peered, last acting [0] pg 1.51 is stuck inactive for 174532.928425, current state undersized+peered, last acting [2] pg 1.52 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.53 is stuck inactive for 174532.928425, current state undersized+peered, last acting [6] pg 1.54 is stuck inactive for 174532.928425, current state undersized+peered, last acting [4] pg 1.55 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.56 is stuck inactive for 174532.928425, current state undersized+peered, last acting [2] pg 1.57 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.58 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.59 is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.5a is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.5b is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.5c is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.5d is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.5e is stuck inactive for 174532.928425, current 
state undersized+peered, last acting [0] pg 1.5f is stuck inactive for 174532.928425, current state undersized+peered, last acting [6] pg 18.40 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.41 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.42 is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.43 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.44 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.45 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.46 is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.47 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.48 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.49 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.4a is stuck inactive for 174337.349457, current state undersized+peered, last acting [0] pg 18.4b is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.4c is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.4d is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.4e is stuck inactive for 174337.349457, current state undersized+peered, last acting [0] pg 18.4f is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.54 is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.55 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.58 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.59 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.5a is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.5b is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.5c is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.5d is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.5e is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.5f is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] PG_DEGRADED Degraded data redundancy: 338 pgs undersized pg 1.47 is stuck undersized for 175.198010, current state undersized+peered, last acting [0] pg 1.48 is stuck undersized for 175.208624, current state undersized+peered, last acting [5] pg 1.49 is stuck undersized for 175.220652, current state undersized+peered, last acting [3] pg 1.4a is stuck undersized for 175.187294, current state undersized+peered, last acting [4] pg 1.4b is stuck undersized for 175.208051, current state undersized+peered, last acting [5] pg 1.4c is stuck undersized for 174531.317358, current state undersized+peered, last acting [7] pg 1.4d is stuck undersized for 174531.318742, current state undersized+peered, last acting [7] pg 1.4e is stuck undersized for 175.202431, current state undersized+peered, last acting [4] pg 1.4f is stuck undersized for 174531.331123, current state undersized+peered, last acting [6] pg 1.50 is stuck undersized for 175.207213, 
current state undersized+peered, last acting [0] pg 1.51 is stuck undersized for 175.215944, current state undersized+peered, last acting [2] pg 1.52 is stuck undersized for 175.209013, current state undersized+peered, last acting [5] pg 1.53 is stuck undersized for 174531.331587, current state undersized+peered, last acting [6] pg 1.54 is stuck undersized for 175.202884, current state undersized+peered, last acting [4] pg 1.55 is stuck undersized for 175.222033, current state undersized+peered, last acting [3] pg 1.56 is stuck undersized for 175.215670, current state undersized+peered, last acting [2] pg 1.57 is stuck undersized for 175.196356, current state undersized+peered, last acting [5] pg 1.58 is stuck undersized for 175.218886, current state undersized+peered, last acting [3] pg 1.59 is stuck undersized for 174531.316843, current state undersized+peered, last acting [7] pg 1.5a is stuck undersized for 175.209613, current state undersized+peered, last acting [5] pg 1.5b is stuck undersized for 175.219138, current state undersized+peered, last acting [3] pg 1.5c is stuck undersized for 174531.319395, current state undersized+peered, last acting [7] pg 1.5d is stuck undersized for 175.219426, current state undersized+peered, last acting [3] pg 1.5e is stuck undersized for 175.219873, current state undersized+peered, last acting [0] pg 1.5f is stuck undersized for 174531.331739, current state undersized+peered, last acting [6] pg 18.40 is stuck undersized for 175.211281, current state undersized+peered, last acting [1] pg 18.41 is stuck undersized for 174335.530906, current state undersized+peered, last acting [6] pg 18.42 is stuck undersized for 175.189316, current state undersized+peered, last acting [5] pg 18.43 is stuck undersized for 174335.529402, current state undersized+peered, last acting [6] pg 18.44 is stuck undersized for 174335.520749, current state undersized+peered, last acting [7] pg 18.45 is stuck undersized for 174335.520646, current state undersized+peered, last acting [7] pg 18.46 is stuck undersized for 175.204679, current state undersized+peered, last acting [4] pg 18.47 is stuck undersized for 175.211629, current state undersized+peered, last acting [1] pg 18.48 is stuck undersized for 175.213907, current state undersized+peered, last acting [1] pg 18.49 is stuck undersized for 174335.521334, current state undersized+peered, last acting [7] pg 18.4a is stuck undersized for 175.218699, current state undersized+peered, last acting [0] pg 18.4b is stuck undersized for 174335.527174, current state undersized+peered, last acting [7] pg 18.4c is stuck undersized for 174335.528996, current state undersized+peered, last acting [6] pg 18.4d is stuck undersized for 175.204937, current state undersized+peered, last acting [4] pg 18.4e is stuck undersized for 175.219027, current state undersized+peered, last acting [0] pg 18.4f is stuck undersized for 174335.531066, current state undersized+peered, last acting [6] pg 18.54 is stuck undersized for 175.189185, current state undersized+peered, last acting [5] pg 18.55 is stuck undersized for 174335.531222, current state undersized+peered, last acting [6] pg 18.58 is stuck undersized for 174335.530357, current state undersized+peered, last acting [6] pg 18.59 is stuck undersized for 175.204978, current state undersized+peered, last acting [1] pg 18.5a is stuck undersized for 175.192362, current state undersized+peered, last acting [5] pg 18.5b is stuck undersized for 174335.531432, current state undersized+peered, last acting 
[6] pg 18.5c is stuck undersized for 174335.530149, current state undersized+peered, last acting [6] pg 18.5d is stuck undersized for 175.191412, current state undersized+peered, last acting [5] pg 18.5e is stuck undersized for 175.205141, current state undersized+peered, last acting [4] pg 18.5f is stuck undersized for 174335.527472, current state undersized+peered, last acting [7] root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph -s cluster: id: 7b7576f4-3358-4668-9112-100440079807 health: HEALTH_WARN Reduced data availability: 338 pgs inactive Degraded data redundancy: 338 pgs undersized services: mon: 1 daemons, quorum cab23-r720-11 mgr: cab23-r720-11(active) mds: cephfs-1/1/1 up {0=mds-ceph-mds-6bfb74d9c7-gqgtl=up:creating}, 1 up:standby osd: 8 osds: 8 up, 8 in data: pools: 18 pools, 338 pgs objects: 0 objects, 0 bytes usage: 3229 MB used, 8184 GB / 8187 GB avail pgs: 100.000% pgs not active 338 undersized+peered Best Regards! Qiaolin Tu NSB MN 5G ECE HZ CN2 SG04 Mobile: +86 138 057 59684 E-Mail: qiaolin.tu at nokia-sbell.com From: Matthew H Sent: Friday, November 09, 2018 6:18 AM To: airship-discuss at lists.airshipit.org Cc: Tu, Qiaolin (NSB - CN/Hangzhou) Subject: Re: [Airship-discuss] Airship installation Questions Greetings, Could you run the following commands from a MON pod: ceph osd tree ceph osd dump Also how many nodes did you deploy on? one or one or more nodes? Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From MM9745 at att.com Fri Nov 9 23:53:14 2018 From: MM9745 at att.com (MCEUEN, MATT) Date: Fri, 9 Nov 2018 23:53:14 +0000 Subject: [Airship-discuss] Fwd: Issue with setting up airship-in-a-bottle on single node In-Reply-To: References: Message-ID: <7C64A75C21BB8D43BD75BB18635E4D896CD8ED5C@MOSTLS1MSGUSRFF.ITServices.sbc.com> Hi Pawan, can you please try again with the latest airship-in-a-bottle code? It should work now. Sorry for the hiccup, and let us know if you see any further issues. Thanks to the folks who contributed fixes! Matt From: Pawan Singh Pal Sent: Wednesday, October 31, 2018 4:22 AM To: airship-discuss at lists.airshipit.org Subject: [Airship-discuss] Fwd: Issue with setting up airship-in-a-bottle on single node Hi, I'm facing an issue while setting up airship-in-a-bottle on single node. Below is the error log. === Generating updated certificates === + cp /root/deploy/collected/deployment_files.yaml /root/deploy/genesis ++ ls /root/deploy/genesis + docker run --rm -t -e http_proxy= -e https_proxy= -e no_proxy= -w /target -e PROMENADE_DEBUG=false -v /root/deploy/genesis:/target quay.io/airshipit/promenade:cfb8aa498c294c2adbc369ba5aaee19b49550d22 promenade generate-certs -o /target deployment_files.yaml Status: Downloaded newer image for quay.io/airshipit/promenade:cfb8aa498c294c2adbc369ba5aaee19b49550d22 + PORT=9000 + UWSGI_TIMEOUT=300 + PROMENADE_THREADS=1 + PROMENADE_WORKERS=4 + '[' promenade = server ']' + exec promenade generate-certs -o /target deployment_files.yaml /usr/local/lib/python3.6/site-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.23) or chardet (3.0.4) doesn't match a supported version! 
  RequestsDependencyWarning)
Traceback (most recent call last):
  File "/usr/local/bin/promenade", line 10, in <module>
    sys.exit(promenade())
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/opt/promenade/promenade/cli.py", line 67, in genereate_certs
    validate=False)
  File "/opt/promenade/promenade/config.py", line 60, in from_streams
    return cls(documents=documents, **kwargs)
  File "/opt/promenade/promenade/config.py", line 31, in __init__
    fail_on_missing_sub_src=not allow_missing_substitutions)
  File "/usr/local/lib/python3.6/site-packages/deckhand/engine/layering.py", line 426, in __init__
    self._pre_validate_documents(documents)
  File "/usr/local/lib/python3.6/site-packages/deckhand/engine/layering.py", line 369, in _pre_validate_documents
    'error: %s.', e['schema'], e['layer'], e['name'],
KeyError: 'schema'
+ cp /root/deploy/genesis/certificates.yaml /root/deploy/airship-in-a-bottle/deployment_files/site/demo/secrets
cp: cannot stat '/root/deploy/genesis/certificates.yaml': No such file or directory
+ error 'setting up certs with Promenade'
+ set +x
Error when setting up certs with Promenade.
+ exit 1
+ clean
+ set +x
To remove files generated during this script's execution, delete /root/deploy.

Could someone please help me out with this issue? Thanks in advance.

Regards,
Pawan

Disclaimer: The contents of this email and any attachments are confidential. They are intended for the named recipient(s) only. If you have received this email by mistake, please notify the sender immediately and do not disclose the contents to anyone or make copies thereof.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
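For context on the KeyError: 'schema' above: Deckhand's pre-validation walks every rendered document and expects the standard document envelope, so this crash usually traces back to a document in deployment_files.yaml missing its top-level schema key (the error-reporting path then trips over the same missing key). Below is a minimal sketch of the envelope shape pre-validation expects; the schema, name, and layer values are hypothetical and not taken from the site manifests:

    # Hypothetical minimal Deckhand-style envelope; every rendered document
    # needs a top-level "schema" plus a metadata section of this shape.
    cat > /tmp/example-doc.yaml <<'EOF'
    ---
    schema: armada/Chart/v1
    metadata:
      schema: metadata/Document/v1
      name: example-chart
      layeringDefinition:
        abstract: false
        layer: site
      storagePolicy: cleartext
    data: {}
    EOF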
From qiaolin.tu at nokia-sbell.com  Mon Nov 12 08:25:38 2018
From: qiaolin.tu at nokia-sbell.com (Tu, Qiaolin (NSB - CN/Hangzhou))
Date: Mon, 12 Nov 2018 08:25:38 +0000
Subject: [Airship-discuss] Airship installation Questions
In-Reply-To:
References:
Message-ID:

Hi,

Thanks very much for your help. After modifying the Ceph replication parameters, the Ceph pods deployed successfully. The UCP-related pods are deployed next, and they hit the errors below. Please check the attached log for details, thanks very much!

ucp   airship-ucp-rabbitmq-rabbitmq-0                0/1   Init:0/2   0   1m
ucp   ingress-6cd5b89d5d-nmwpt                       1/1   Running    0   18m
ucp   ingress-6cd5b89d5d-nr65b                       1/1   Running    0   18m
ucp   ingress-error-pages-5c97bb46bb-2mvgm           1/1   Running    0   18m
ucp   ingress-error-pages-5c97bb46bb-wzzdz           1/1   Running    0   18m
ucp   mariadb-ingress-85b8556fbc-xpvwc               0/1   Running    0   1m
ucp   mariadb-ingress-85b8556fbc-zv72k               0/1   Running    0   1m
ucp   mariadb-ingress-error-pages-64f89dc697-2trh9   1/1   Running    0   1m
ucp   mariadb-server-0                               0/1   Init:0/2   0   1m
ucp   postgresql-0                                   0/1   Init:0/1   0   1m

root at cab23-r720-11:~# kubectl describe pod mariadb-ingress-85b8556fbc-xpvwc -n ucp
Name:         mariadb-ingress-85b8556fbc-xpvwc
Namespace:    ucp
Node:         cab23-r720-11/10.23.22.11
Start Time:   Mon, 12 Nov 2018 08:05:00 +0000
Labels:       application=mariadb
              component=ingress
              pod-template-hash=4164112967
              release_group=airship-ucp-mariadb
Annotations:  configmap-bin-hash=eb36d47d8f7d7097cf6d488a61145f76dbfe5e558edf5b802153a00fc3389f0b
              configmap-etc-hash=3f45f1d8d3ddf5a09fbcd3036cb23bffb939cfa1225f8f1a0d79b390877710c1
Status:       Running
IP:           10.97.38.125
Events:
  Type     Reason                 Age                From                    Message
  ----     ------                 ----               ----                    -------
  Normal   Scheduled              3m                 default-scheduler       Successfully assigned mariadb-ingress-85b8556fbc-xpvwc to cab23-r720-11
  Normal   SuccessfulMountVolume  3m                 kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "airship-ucp-mariadb-ingress-token-htf82"
  Normal   SuccessfulMountVolume  3m                 kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "mariadb-etc"
  Normal   SuccessfulMountVolume  3m                 kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "mariadb-bin"
  Normal   Pulled                 3m                 kubelet, cab23-r720-11  Container image "quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine
  Normal   Created                3m                 kubelet, cab23-r720-11  Created container
  Normal   Started                3m                 kubelet, cab23-r720-11  Started container
  Normal   Pulled                 3m                 kubelet, cab23-r720-11  Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0" already present on machine
  Normal   Created                3m                 kubelet, cab23-r720-11  Created container
  Normal   Started                3m                 kubelet, cab23-r720-11  Started container
  Warning  Unhealthy              26s (x16 over 2m)  kubelet, cab23-r720-11  Readiness probe failed: dial tcp 10.97.38.125:3306: getsockopt: connection refused
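An aside for anyone triaging the same state: Init:0/2 generally means the kubernetes-entrypoint init container is still waiting on its declared dependencies, and the mariadb-ingress readiness failures are downstream of mariadb-server-0 (the actual database) never starting, so the storage claims are the first thing to check. A rough sketch, reusing the ucp namespace and the openstack-helm init-container name seen in this thread:

    # A Pending PVC or a failed attach here usually explains the Init:0/x pods
    kubectl get pvc -n ucp
    kubectl describe pvc -n ucp

    # The dependency-wait init container is named "init"; its log names the
    # service or job it is still blocked on
    kubectl logs -n ucp mariadb-server-0 -c init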
root at cab23-r720-11:~# kubectl describe pod postgresql-0 -n ucp
Name:         postgresql-0
Namespace:    ucp
Node:         cab23-r720-11/10.23.22.11
Start Time:   Mon, 12 Nov 2018 08:04:56 +0000
Labels:       application=postgresql
              component=server
              controller-revision-hash=postgresql-566fd45fd7
              release_group=airship-ucp-postgresql
              statefulset.kubernetes.io/pod-name=postgresql-0
Events:
  Type     Reason                  Age               From                     Message
  ----     ------                  ----              ----                     -------
  Normal   SuccessfulAttachVolume  4m                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-c66d1b6f-e64a-11e8-bb43-080027f45d2a"
  Normal   Scheduled               4m                default-scheduler        Successfully assigned postgresql-0 to cab23-r720-11
  Normal   SuccessfulMountVolume   4m                kubelet, cab23-r720-11   MountVolume.SetUp succeeded for volume "postgresql-bin"
  Normal   SuccessfulMountVolume   4m                kubelet, cab23-r720-11   MountVolume.SetUp succeeded for volume "postgresql-token-rmkq9"
  Warning  FailedMount             2m                kubelet, cab23-r720-11   MountVolume.WaitForAttach failed for volume "pvc-c66d1b6f-e64a-11e8-bb43-080027f45d2a" : fail to check rbd image status with: (exit status 22), rbd output: (2018-11-12 16:07:01.400015 7fcc31018100 -1 did not load config file, using default settings.
server name not found: ceph-mon.ceph.svc.cluster.local (Name or service not known)
unable to parse addrs in 'ceph-mon.ceph.svc.cluster.local:6789'
rbd: couldn't connect to the cluster!
)
  Warning  FailedMount             19s (x2 over 2m)  kubelet, cab23-r720-11   Unable to mount volumes for pod "postgresql-0_ucp(a46bc160-e651-11e8-bb43-080027f45d2a)": timeout expired waiting for volumes to attach or mount for pod "ucp"/"postgresql-0". list of unmounted volumes=[postgresql-data]. list of unattached volumes=[postgresql-data postgresql-bin postgresql-token-rmkq9]

Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com

From: Matthew H
Sent: Friday, November 09, 2018 10:43 PM
To: Tu, Qiaolin (NSB - CN/Hangzhou); airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: Re: [Airship-discuss] Airship installation Questions

Thanks,

From what I can see you need additional overrides set to run Ceph on a single node. The overrides you need are here [1]. Let me know if this helps get you in the right direction.

[1] https://github.com/openstack/airship-in-a-bottle/blob/master/deployment_files/site/gate-multinode/software/charts/ucp/storage_provisioner/ceph.yaml#L173-L250
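To make that pointer concrete: single-node operation means every pool's replication has to fit on one host, which is what the linked overrides arrange at chart level (this thread later confirms that changing the replication parameters got Ceph healthy). As a runtime stopgap only, the same effect can be approximated from a MON pod; this is a sketch using the pool and pod names from this thread, and note the osd dump above shows the pools were created with the nosizechange flag, which has to be cleared before a resize:

    # Current replica count for the rbd pool
    kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd pool get rbd size

    # Clear the nosizechange guard, then drop to a single replica
    kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd pool set rbd nosizechange false
    kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd pool set rbd size 1
    kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd pool set rbd min_size 1
    # Lab-only: size 1 means no data redundancy at all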
pool 6 'default.rgw.data.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 93 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 7 'default.rgw.gc' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 104 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 8 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 114 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 9 'default.rgw.intent-log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 124 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 10 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 135 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 11 'default.rgw.usage' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 146 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 12 'default.rgw.users.keys' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 156 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 13 'default.rgw.users.email' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 167 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 14 'default.rgw.users.swift' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 177 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 15 'default.rgw.users.uid' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 188 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 16 'default.rgw.buckets.extra' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 199 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 17 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 211 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 18 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 221 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw max_osd 8 osd.0 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [8,228) 10.23.23.11:6800/15964 10.23.23.11:6814/2015964 10.23.23.11:6822/2015964 10.23.23.11:6823/2015964 exists,up fea47975-0810-47c9-ad43-e76ce81764a1 osd.1 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [9,228) 10.23.23.11:6808/16162 10.23.23.11:6807/2016162 10.23.23.11:6819/2016162 10.23.23.11:6801/2016162 exists,up cec98e14-83d5-4785-b8a7-a6f201170ac4 osd.2 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [9,228) 10.23.23.11:6804/16160 10.23.23.11:6806/2016160 10.23.23.11:6811/2016160 10.23.23.11:6834/2016160 exists,up 97315996-1cb9-4942-9786-8edc5a3862e3 osd.3 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [10,228) 10.23.23.11:6812/16588 
10.23.23.11:6815/2016588 10.23.23.11:6805/2016588 10.23.23.11:6817/2016588 exists,up 49082e4c-7827-4c4c-85c9-16ea134289b4 osd.4 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [13,228) 10.23.23.11:6816/17053 10.23.23.11:6803/2017053 10.23.23.11:6813/2017053 10.23.23.11:6821/2017053 exists,up 8f9a5a7d-c97d-40c6-912e-33b6ab68d9e7 osd.5 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [16,228) 10.23.23.11:6820/17600 10.23.23.11:6810/2017600 10.23.23.11:6809/2017600 10.23.23.11:6818/2017600 exists,up b4602bfb-075f-4303-9f76-946576c4ef43 osd.6 up in weight 1 up_from 16 up_thru 212 down_at 0 last_clean_interval [0,0) 10.23.23.11:6824/17601 10.23.23.11:6825/17601 10.23.23.11:6826/17601 10.23.23.11:6827/17601 exists,up 2a853bad-7d97-43de-85f3-96e0f9e16c0d osd.7 up in weight 1 up_from 20 up_thru 212 down_at 0 last_clean_interval [0,0) 10.23.23.11:6828/18682 10.23.23.11:6829/18682 10.23.23.11:6830/18682 10.23.23.11:6831/18682 exists,up dfee9a9c-7587-421b-a0dc-eda2314174d9 root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph health detail HEALTH_WARN Reduced data availability: 338 pgs inactive; Degraded data redundancy: 338 pgs undersized PG_AVAILABILITY Reduced data availability: 338 pgs inactive pg 1.47 is stuck inactive for 174532.928425, current state undersized+peered, last acting [0] pg 1.48 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.49 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.4a is stuck inactive for 174532.928425, current state undersized+peered, last acting [4] pg 1.4b is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.4c is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.4d is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.4e is stuck inactive for 174532.928425, current state undersized+peered, last acting [4] pg 1.4f is stuck inactive for 174532.928425, current state undersized+peered, last acting [6] pg 1.50 is stuck inactive for 174532.928425, current state undersized+peered, last acting [0] pg 1.51 is stuck inactive for 174532.928425, current state undersized+peered, last acting [2] pg 1.52 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.53 is stuck inactive for 174532.928425, current state undersized+peered, last acting [6] pg 1.54 is stuck inactive for 174532.928425, current state undersized+peered, last acting [4] pg 1.55 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.56 is stuck inactive for 174532.928425, current state undersized+peered, last acting [2] pg 1.57 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.58 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.59 is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.5a is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.5b is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.5c is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.5d is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.5e is stuck inactive for 174532.928425, current state undersized+peered, last acting [0] pg 1.5f is stuck inactive 
for 174532.928425, current state undersized+peered, last acting [6] pg 18.40 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.41 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.42 is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.43 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.44 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.45 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.46 is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.47 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.48 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.49 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.4a is stuck inactive for 174337.349457, current state undersized+peered, last acting [0] pg 18.4b is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.4c is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.4d is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.4e is stuck inactive for 174337.349457, current state undersized+peered, last acting [0] pg 18.4f is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.54 is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.55 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.58 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.59 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.5a is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.5b is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.5c is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.5d is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.5e is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.5f is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] PG_DEGRADED Degraded data redundancy: 338 pgs undersized pg 1.47 is stuck undersized for 175.198010, current state undersized+peered, last acting [0] pg 1.48 is stuck undersized for 175.208624, current state undersized+peered, last acting [5] pg 1.49 is stuck undersized for 175.220652, current state undersized+peered, last acting [3] pg 1.4a is stuck undersized for 175.187294, current state undersized+peered, last acting [4] pg 1.4b is stuck undersized for 175.208051, current state undersized+peered, last acting [5] pg 1.4c is stuck undersized for 174531.317358, current state undersized+peered, last acting [7] pg 1.4d is stuck undersized for 174531.318742, current state undersized+peered, last acting [7] pg 1.4e is stuck undersized for 175.202431, current state undersized+peered, last acting [4] pg 1.4f is stuck undersized for 174531.331123, current state undersized+peered, last acting [6] pg 1.50 is stuck undersized for 175.207213, current state undersized+peered, last acting [0] pg 1.51 is stuck 
undersized for 175.215944, current state undersized+peered, last acting [2] pg 1.52 is stuck undersized for 175.209013, current state undersized+peered, last acting [5] pg 1.53 is stuck undersized for 174531.331587, current state undersized+peered, last acting [6] pg 1.54 is stuck undersized for 175.202884, current state undersized+peered, last acting [4] pg 1.55 is stuck undersized for 175.222033, current state undersized+peered, last acting [3] pg 1.56 is stuck undersized for 175.215670, current state undersized+peered, last acting [2] pg 1.57 is stuck undersized for 175.196356, current state undersized+peered, last acting [5] pg 1.58 is stuck undersized for 175.218886, current state undersized+peered, last acting [3] pg 1.59 is stuck undersized for 174531.316843, current state undersized+peered, last acting [7] pg 1.5a is stuck undersized for 175.209613, current state undersized+peered, last acting [5] pg 1.5b is stuck undersized for 175.219138, current state undersized+peered, last acting [3] pg 1.5c is stuck undersized for 174531.319395, current state undersized+peered, last acting [7] pg 1.5d is stuck undersized for 175.219426, current state undersized+peered, last acting [3] pg 1.5e is stuck undersized for 175.219873, current state undersized+peered, last acting [0] pg 1.5f is stuck undersized for 174531.331739, current state undersized+peered, last acting [6] pg 18.40 is stuck undersized for 175.211281, current state undersized+peered, last acting [1] pg 18.41 is stuck undersized for 174335.530906, current state undersized+peered, last acting [6] pg 18.42 is stuck undersized for 175.189316, current state undersized+peered, last acting [5] pg 18.43 is stuck undersized for 174335.529402, current state undersized+peered, last acting [6] pg 18.44 is stuck undersized for 174335.520749, current state undersized+peered, last acting [7] pg 18.45 is stuck undersized for 174335.520646, current state undersized+peered, last acting [7] pg 18.46 is stuck undersized for 175.204679, current state undersized+peered, last acting [4] pg 18.47 is stuck undersized for 175.211629, current state undersized+peered, last acting [1] pg 18.48 is stuck undersized for 175.213907, current state undersized+peered, last acting [1] pg 18.49 is stuck undersized for 174335.521334, current state undersized+peered, last acting [7] pg 18.4a is stuck undersized for 175.218699, current state undersized+peered, last acting [0] pg 18.4b is stuck undersized for 174335.527174, current state undersized+peered, last acting [7] pg 18.4c is stuck undersized for 174335.528996, current state undersized+peered, last acting [6] pg 18.4d is stuck undersized for 175.204937, current state undersized+peered, last acting [4] pg 18.4e is stuck undersized for 175.219027, current state undersized+peered, last acting [0] pg 18.4f is stuck undersized for 174335.531066, current state undersized+peered, last acting [6] pg 18.54 is stuck undersized for 175.189185, current state undersized+peered, last acting [5] pg 18.55 is stuck undersized for 174335.531222, current state undersized+peered, last acting [6] pg 18.58 is stuck undersized for 174335.530357, current state undersized+peered, last acting [6] pg 18.59 is stuck undersized for 175.204978, current state undersized+peered, last acting [1] pg 18.5a is stuck undersized for 175.192362, current state undersized+peered, last acting [5] pg 18.5b is stuck undersized for 174335.531432, current state undersized+peered, last acting [6] pg 18.5c is stuck undersized for 174335.530149, current state 
undersized+peered, last acting [6] pg 18.5d is stuck undersized for 175.191412, current state undersized+peered, last acting [5] pg 18.5e is stuck undersized for 175.205141, current state undersized+peered, last acting [4] pg 18.5f is stuck undersized for 174335.527472, current state undersized+peered, last acting [7]

root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph -s
  cluster:
    id:     7b7576f4-3358-4668-9112-100440079807
    health: HEALTH_WARN
            Reduced data availability: 338 pgs inactive
            Degraded data redundancy: 338 pgs undersized

  services:
    mon: 1 daemons, quorum cab23-r720-11
    mgr: cab23-r720-11(active)
    mds: cephfs-1/1/1 up {0=mds-ceph-mds-6bfb74d9c7-gqgtl=up:creating}, 1 up:standby
    osd: 8 osds: 8 up, 8 in

  data:
    pools:   18 pools, 338 pgs
    objects: 0 objects, 0 bytes
    usage:   3229 MB used, 8184 GB / 8187 GB avail
    pgs:     100.000% pgs not active
             338 undersized+peered

Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com

From: Matthew H
Sent: Friday, November 09, 2018 6:18 AM
To: airship-discuss at lists.airshipit.org
Cc: Tu, Qiaolin (NSB - CN/Hangzhou)
Subject: Re: [Airship-discuss] Airship installation Questions

Greetings,

Could you run the following commands from a MON pod:

ceph osd tree
ceph osd dump

Also, how many nodes did you deploy on? One, or more than one?

Thanks,
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: log.txt
URL:
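One note on the FailedMount in the message above: the rbd attach is performed by kubelet on the host, outside pod networking, so "server name not found: ceph-mon.ceph.svc.cluster.local" points at the host's resolver rather than at in-cluster DNS. A quick comparison sketch, assuming a busybox image is pullable:

    # On the host running kubelet -- this must resolve for the rbd attach to work
    nslookup ceph-mon.ceph.svc.cluster.local

    # Compare with in-cluster resolution from a throwaway pod
    kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
        nslookup ceph-mon.ceph.svc.cluster.local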
From qiaolin.tu at nokia-sbell.com  Mon Nov 12 09:39:25 2018
From: qiaolin.tu at nokia-sbell.com (Tu, Qiaolin (NSB - CN/Hangzhou))
Date: Mon, 12 Nov 2018 09:39:25 +0000
Subject: [Airship-discuss] Airship installation Questions
In-Reply-To:
References:
Message-ID:

Hi,

Add ceph rbd image related logs.

root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd ls
kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd
kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd
kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd

root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd
rbd image 'kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd':
        size 5120 MB in 1280 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.113b74b0dc51
        format: 2
        features: layering
        flags:
        create_timestamp: Mon Nov 12 09:06:39 2018

root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd
rbd image 'kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd':
        size 256 MB in 64 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.113c74b0dc51
        format: 2
        features: layering
        flags:
        create_timestamp: Mon Nov 12 09:06:40 2018

root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd
rbd image 'kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd':
        size 5120 MB in 1280 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.113d74b0dc51
        format: 2
        features: layering
        flags:
        create_timestamp: Mon Nov 12 09:06:40 2018

root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd
Watchers: none
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd
Watchers: none
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd
Watchers: none

Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com
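"Watchers: none" means no client currently has these images mapped, which is consistent with the FailedMount errors: the volumes were provisioned but kubelet never attached them. If ceph-common and the admin keyring are available on the host (an assumption, not shown in this thread), kubelet's attach path can be reproduced by hand; a failure here should mirror the mount error above:

    # Image name taken from the rbd ls output; -m must resolve from the host
    rbd map kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd \
        --pool rbd -m ceph-mon.ceph.svc.cluster.local:6789 \
        --id admin --keyring /etc/ceph/ceph.client.admin.keyring

    # Once mapped somewhere, the watcher shows up here
    kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd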
From: Tu, Qiaolin (NSB - CN/Hangzhou)
Sent: Monday, November 12, 2018 5:27 PM
To: Matthew H; airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: RE: [Airship-discuss] Airship installation Questions

Hi,

Add ceph-mon logs and yaml files.

root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph -s
  cluster:
    id:     7b7576f4-3358-4668-9112-100440079807
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum cab23-r720-11
    mgr: cab23-r720-11(active)
    mds: cephfs-1/1/1 up {0=mds-ceph-mds-5f547b6fd7-sg49g=up:active}
    osd: 1 osds: 1 up, 1 in
    rgw: 1 daemon active

  data:
    pools:   18 pools, 93 pgs
    objects: 1164 objects, 3407 bytes
    usage:   374 MB used, 1023 GB / 1023 GB avail
    pgs:     93 active+clean

root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph osd tree
ID CLASS WEIGHT  TYPE NAME               STATUS REWEIGHT PRI-AFF
-1       1.00000 root default
-2       1.00000     host cab23-r720-11
 0   hdd 1.00000         osd.0               up  1.00000 1.00000

root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph osd dump
epoch 219
fsid 7b7576f4-3358-4668-9112-100440079807
created 2018-11-12 08:53:17.281208
modified 2018-11-12 09:06:40.314892
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 6
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client hammer
require_osd_release luminous
pool 1 'rbd' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 last_change 219 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rbd
        removed_snaps [1~5]
pool 2 'cephfs_metadata' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 4 pgp_num 4 last_change 216 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs
pool 3 'cephfs_data' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 216 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs
pool 4 '.rgw.root' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 56 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 5 'default.rgw.control' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 68 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 6 'default.rgw.data.root' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 79 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 7 'default.rgw.gc' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 89 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 8 'default.rgw.log' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 102 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 9 'default.rgw.intent-log' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 113 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 10 'default.rgw.meta' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 124 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 11 'default.rgw.usage' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 134 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 12 'default.rgw.users.keys' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 145 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 13 'default.rgw.users.email' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 155 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 14 'default.rgw.users.swift' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 168 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 15 'default.rgw.users.uid' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 179 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 16 'default.rgw.buckets.extra' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 191 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 17 'default.rgw.buckets.index' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 4 pgp_num 4 last_change 202 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 18 'default.rgw.buckets.data' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 last_change 214 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
max_osd 1
osd.0 up in weight 1 up_from 5 up_thru 209 down_at 0 last_clean_interval [0,0) 10.23.23.11:6800/6766 10.23.23.11:6801/6766 10.23.23.11:6802/6766 10.23.23.11:6803/6766 exists,up 02d8f692-709a-45ea-9f2c-75486e16e82b

Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com
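For completeness: the dump above is what a healthy single-node configuration looks like here, with every pool at replicated size 1 and crush_rule 1, and the cluster settling at HEALTH_OK. Two quick spot-checks along the same lines, reusing the MON pod name from this thread:

    # All pools should report "replicated size 1" on this topology
    kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph osd pool ls detail

    # Placement groups at a glance; expect all active+clean
    kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph pg stat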
ucp airship-ucp-rabbitmq-rabbitmq-0 0/1 Init:0/2 0 1m ucp ingress-6cd5b89d5d-nmwpt 1/1 Running 0 18m ucp ingress-6cd5b89d5d-nr65b 1/1 Running 0 18m ucp ingress-error-pages-5c97bb46bb-2mvgm 1/1 Running 0 18m ucp ingress-error-pages-5c97bb46bb-wzzdz 1/1 Running 0 18m ucp mariadb-ingress-85b8556fbc-xpvwc 0/1 Running 0 1m ucp mariadb-ingress-85b8556fbc-zv72k 0/1 Running 0 1m ucp mariadb-ingress-error-pages-64f89dc697-2trh9 1/1 Running 0 1m ucp mariadb-server-0 0/1 Init:0/2 0 1m ucp postgresql-0 0/1 Init:0/1 0 1m root at cab23-r720-11:~# kubectl describe pod mariadb-ingress-85b8556fbc-xpvwc -n ucp Name: mariadb-ingress-85b8556fbc-xpvwc Namespace: ucp Node: cab23-r720-11/10.23.22.11 Start Time: Mon, 12 Nov 2018 08:05:00 +0000 Labels: application=mariadb component=ingress pod-template-hash=4164112967 release_group=airship-ucp-mariadb Annotations: configmap-bin-hash=eb36d47d8f7d7097cf6d488a61145f76dbfe5e558edf5b802153a00fc3389f0b configmap-etc-hash=3f45f1d8d3ddf5a09fbcd3036cb23bffb939cfa1225f8f1a0d79b390877710c1 Status: Running IP: 10.97.38.125 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 3m default-scheduler Successfully assigned mariadb-ingress-85b8556fbc-xpvwc to cab23-r720-11 Normal SuccessfulMountVolume 3m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "airship-ucp-mariadb-ingress-token-htf82" Normal SuccessfulMountVolume 3m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "mariadb-etc" Normal SuccessfulMountVolume 3m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "mariadb-bin" Normal Pulled 3m kubelet, cab23-r720-11 Container image "quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine Normal Created 3m kubelet, cab23-r720-11 Created container Normal Started 3m kubelet, cab23-r720-11 Started container Normal Pulled 3m kubelet, cab23-r720-11 Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0" already present on machine Normal Created 3m kubelet, cab23-r720-11 Created container Normal Started 3m kubelet, cab23-r720-11 Started container Warning Unhealthy 26s (x16 over 2m) kubelet, cab23-r720-11 Readiness probe failed: dial tcp 10.97.38.125:3306: getsockopt: connection refused root at cab23-r720-11:~# kubectl describe pod postgresql-0 -n ucp Name: postgresql-0 Namespace: ucp Node: cab23-r720-11/10.23.22.11 Start Time: Mon, 12 Nov 2018 08:04:56 +0000 Labels: application=postgresql component=server controller-revision-hash=postgresql-566fd45fd7 release_group=airship-ucp-postgresql statefulset.kubernetes.io/pod-name=postgresql-0 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulAttachVolume 4m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-c66d1b6f-e64a-11e8-bb43-080027f45d2a" Normal Scheduled 4m default-scheduler Successfully assigned postgresql-0 to cab23-r720-11 Normal SuccessfulMountVolume 4m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "postgresql-bin" Normal SuccessfulMountVolume 4m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "postgresql-token-rmkq9" Warning FailedMount 2m kubelet, cab23-r720-11 MountVolume.WaitForAttach failed for volume "pvc-c66d1b6f-e64a-11e8-bb43-080027f45d2a" : fail to check rbd image status with: (exit status 22), rbd output: (2018-11-12 16:07:01.400015 7fcc31018100 -1 did not load config file, using default settings. 
server name not found: ceph-mon.ceph.svc.cluster.local (Name or service not known) unable to parse addrs in 'ceph-mon.ceph.svc.cluster.local:6789' rbd: couldn't connect to the cluster! ) Warning FailedMount 19s (x2 over 2m) kubelet, cab23-r720-11 Unable to mount volumes for pod "postgresql-0_ucp(a46bc160-e651-11e8-bb43-080027f45d2a)": timeout expired waiting for volumes to attach or mount for pod "ucp"/"postgresql-0". list of unmounted volumes=[postgresql-data]. list of unattached volumes=[postgresql-data postgresql-bin postgresql-token-rmkq9] Best Regards! Qiaolin Tu NSB MN 5G ECE HZ CN2 SG04 Mobile: +86 138 057 59684 E-Mail: qiaolin.tu at nokia-sbell.com From: Matthew H > Sent: Friday, November 09, 2018 10:43 PM To: Tu, Qiaolin (NSB - CN/Hangzhou) >; airship-discuss at lists.airshipit.org Cc: Li, Maxwell (NSB - CN/Hangzhou) > Subject: Re: [Airship-discuss] Airship installation Questions Thanks, >From what I can see you need additional overrides set to run Ceph on a single node. The overrides you need are here [1]. Let me know if this helps get you in the right direction. [1] https://github.com/openstack/airship-in-a-bottle/blob/master/deployment_files/site/gate-multinode/software/charts/ucp/storage_provisioner/ceph.yaml#L173-L250 ________________________________ From: Tu, Qiaolin (NSB - CN/Hangzhou) > Sent: Friday, November 9, 2018 4:51 AM To: Matthew H; airship-discuss at lists.airshipit.org Cc: Li, Maxwell (NSB - CN/Hangzhou) Subject: RE: [Airship-discuss] Airship installation Questions Hi, I deployed only 1 master node(1 genesis node + 1 master node), attachment is my yaml files. Thanks very much! root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 8.00000 root default -2 8.00000 host cab23-r720-11 0 hdd 1.00000 osd.0 up 1.00000 1.00000 1 hdd 1.00000 osd.1 up 1.00000 1.00000 2 hdd 1.00000 osd.2 up 1.00000 1.00000 3 hdd 1.00000 osd.3 up 1.00000 1.00000 4 hdd 1.00000 osd.4 up 1.00000 1.00000 5 hdd 1.00000 osd.5 up 1.00000 1.00000 6 hdd 1.00000 osd.6 up 1.00000 1.00000 7 hdd 1.00000 osd.7 up 1.00000 1.00000 root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd dump epoch 231 fsid 7b7576f4-3358-4668-9112-100440079807 created 2018-11-07 09:08:39.208517 modified 2018-11-09 09:40:10.639284 flags sortbitwise,recovery_deletes,purged_snapdirs crush_version 21 full_ratio 0.95 backfillfull_ratio 0.9 nearfull_ratio 0.85 require_min_compat_client jewel min_compat_client hammer require_osd_release luminous pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 40 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rbd pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 223 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs pool 3 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 223 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs pool 4 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 72 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 5 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 83 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application 
rgw pool 6 'default.rgw.data.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 93 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 7 'default.rgw.gc' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 104 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 8 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 114 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 9 'default.rgw.intent-log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 124 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 10 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 135 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 11 'default.rgw.usage' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 146 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 12 'default.rgw.users.keys' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 156 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 13 'default.rgw.users.email' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 167 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 14 'default.rgw.users.swift' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 177 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 15 'default.rgw.users.uid' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 188 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 16 'default.rgw.buckets.extra' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 199 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 17 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 211 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 18 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 221 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw max_osd 8 osd.0 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [8,228) 10.23.23.11:6800/15964 10.23.23.11:6814/2015964 10.23.23.11:6822/2015964 10.23.23.11:6823/2015964 exists,up fea47975-0810-47c9-ad43-e76ce81764a1 osd.1 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [9,228) 10.23.23.11:6808/16162 10.23.23.11:6807/2016162 10.23.23.11:6819/2016162 10.23.23.11:6801/2016162 exists,up cec98e14-83d5-4785-b8a7-a6f201170ac4 osd.2 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [9,228) 10.23.23.11:6804/16160 10.23.23.11:6806/2016160 10.23.23.11:6811/2016160 10.23.23.11:6834/2016160 exists,up 97315996-1cb9-4942-9786-8edc5a3862e3 osd.3 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [10,228) 10.23.23.11:6812/16588 
10.23.23.11:6815/2016588 10.23.23.11:6805/2016588 10.23.23.11:6817/2016588 exists,up 49082e4c-7827-4c4c-85c9-16ea134289b4 osd.4 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [13,228) 10.23.23.11:6816/17053 10.23.23.11:6803/2017053 10.23.23.11:6813/2017053 10.23.23.11:6821/2017053 exists,up 8f9a5a7d-c97d-40c6-912e-33b6ab68d9e7 osd.5 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [16,228) 10.23.23.11:6820/17600 10.23.23.11:6810/2017600 10.23.23.11:6809/2017600 10.23.23.11:6818/2017600 exists,up b4602bfb-075f-4303-9f76-946576c4ef43 osd.6 up in weight 1 up_from 16 up_thru 212 down_at 0 last_clean_interval [0,0) 10.23.23.11:6824/17601 10.23.23.11:6825/17601 10.23.23.11:6826/17601 10.23.23.11:6827/17601 exists,up 2a853bad-7d97-43de-85f3-96e0f9e16c0d osd.7 up in weight 1 up_from 20 up_thru 212 down_at 0 last_clean_interval [0,0) 10.23.23.11:6828/18682 10.23.23.11:6829/18682 10.23.23.11:6830/18682 10.23.23.11:6831/18682 exists,up dfee9a9c-7587-421b-a0dc-eda2314174d9 root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph health detail HEALTH_WARN Reduced data availability: 338 pgs inactive; Degraded data redundancy: 338 pgs undersized PG_AVAILABILITY Reduced data availability: 338 pgs inactive pg 1.47 is stuck inactive for 174532.928425, current state undersized+peered, last acting [0] pg 1.48 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.49 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.4a is stuck inactive for 174532.928425, current state undersized+peered, last acting [4] pg 1.4b is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.4c is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.4d is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.4e is stuck inactive for 174532.928425, current state undersized+peered, last acting [4] pg 1.4f is stuck inactive for 174532.928425, current state undersized+peered, last acting [6] pg 1.50 is stuck inactive for 174532.928425, current state undersized+peered, last acting [0] pg 1.51 is stuck inactive for 174532.928425, current state undersized+peered, last acting [2] pg 1.52 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.53 is stuck inactive for 174532.928425, current state undersized+peered, last acting [6] pg 1.54 is stuck inactive for 174532.928425, current state undersized+peered, last acting [4] pg 1.55 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.56 is stuck inactive for 174532.928425, current state undersized+peered, last acting [2] pg 1.57 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.58 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.59 is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.5a is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.5b is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.5c is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.5d is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.5e is stuck inactive for 174532.928425, current state undersized+peered, last acting [0] pg 1.5f is stuck inactive 
for 174532.928425, current state undersized+peered, last acting [6] pg 18.40 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.41 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.42 is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.43 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.44 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.45 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.46 is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.47 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.48 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.49 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.4a is stuck inactive for 174337.349457, current state undersized+peered, last acting [0] pg 18.4b is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.4c is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.4d is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.4e is stuck inactive for 174337.349457, current state undersized+peered, last acting [0] pg 18.4f is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.54 is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.55 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.58 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.59 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.5a is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.5b is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.5c is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.5d is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.5e is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.5f is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] PG_DEGRADED Degraded data redundancy: 338 pgs undersized pg 1.47 is stuck undersized for 175.198010, current state undersized+peered, last acting [0] pg 1.48 is stuck undersized for 175.208624, current state undersized+peered, last acting [5] pg 1.49 is stuck undersized for 175.220652, current state undersized+peered, last acting [3] pg 1.4a is stuck undersized for 175.187294, current state undersized+peered, last acting [4] pg 1.4b is stuck undersized for 175.208051, current state undersized+peered, last acting [5] pg 1.4c is stuck undersized for 174531.317358, current state undersized+peered, last acting [7] pg 1.4d is stuck undersized for 174531.318742, current state undersized+peered, last acting [7] pg 1.4e is stuck undersized for 175.202431, current state undersized+peered, last acting [4] pg 1.4f is stuck undersized for 174531.331123, current state undersized+peered, last acting [6] pg 1.50 is stuck undersized for 175.207213, current state undersized+peered, last acting [0] pg 1.51 is stuck 
undersized for 175.215944, current state undersized+peered, last acting [2]
pg 1.52 is stuck undersized for 175.209013, current state undersized+peered, last acting [5]
pg 1.53 is stuck undersized for 174531.331587, current state undersized+peered, last acting [6]
pg 1.54 is stuck undersized for 175.202884, current state undersized+peered, last acting [4]
pg 1.55 is stuck undersized for 175.222033, current state undersized+peered, last acting [3]
pg 1.56 is stuck undersized for 175.215670, current state undersized+peered, last acting [2]
pg 1.57 is stuck undersized for 175.196356, current state undersized+peered, last acting [5]
pg 1.58 is stuck undersized for 175.218886, current state undersized+peered, last acting [3]
pg 1.59 is stuck undersized for 174531.316843, current state undersized+peered, last acting [7]
pg 1.5a is stuck undersized for 175.209613, current state undersized+peered, last acting [5]
pg 1.5b is stuck undersized for 175.219138, current state undersized+peered, last acting [3]
pg 1.5c is stuck undersized for 174531.319395, current state undersized+peered, last acting [7]
pg 1.5d is stuck undersized for 175.219426, current state undersized+peered, last acting [3]
pg 1.5e is stuck undersized for 175.219873, current state undersized+peered, last acting [0]
pg 1.5f is stuck undersized for 174531.331739, current state undersized+peered, last acting [6]
pg 18.40 is stuck undersized for 175.211281, current state undersized+peered, last acting [1]
pg 18.41 is stuck undersized for 174335.530906, current state undersized+peered, last acting [6]
pg 18.42 is stuck undersized for 175.189316, current state undersized+peered, last acting [5]
pg 18.43 is stuck undersized for 174335.529402, current state undersized+peered, last acting [6]
pg 18.44 is stuck undersized for 174335.520749, current state undersized+peered, last acting [7]
pg 18.45 is stuck undersized for 174335.520646, current state undersized+peered, last acting [7]
pg 18.46 is stuck undersized for 175.204679, current state undersized+peered, last acting [4]
pg 18.47 is stuck undersized for 175.211629, current state undersized+peered, last acting [1]
pg 18.48 is stuck undersized for 175.213907, current state undersized+peered, last acting [1]
pg 18.49 is stuck undersized for 174335.521334, current state undersized+peered, last acting [7]
pg 18.4a is stuck undersized for 175.218699, current state undersized+peered, last acting [0]
pg 18.4b is stuck undersized for 174335.527174, current state undersized+peered, last acting [7]
pg 18.4c is stuck undersized for 174335.528996, current state undersized+peered, last acting [6]
pg 18.4d is stuck undersized for 175.204937, current state undersized+peered, last acting [4]
pg 18.4e is stuck undersized for 175.219027, current state undersized+peered, last acting [0]
pg 18.4f is stuck undersized for 174335.531066, current state undersized+peered, last acting [6]
pg 18.54 is stuck undersized for 175.189185, current state undersized+peered, last acting [5]
pg 18.55 is stuck undersized for 174335.531222, current state undersized+peered, last acting [6]
pg 18.58 is stuck undersized for 174335.530357, current state undersized+peered, last acting [6]
pg 18.59 is stuck undersized for 175.204978, current state undersized+peered, last acting [1]
pg 18.5a is stuck undersized for 175.192362, current state undersized+peered, last acting [5]
pg 18.5b is stuck undersized for 174335.531432, current state undersized+peered, last acting [6]
pg 18.5c is stuck undersized for 174335.530149, current state undersized+peered, last acting [6]
pg 18.5d is stuck undersized for 175.191412, current state undersized+peered, last acting [5]
pg 18.5e is stuck undersized for 175.205141, current state undersized+peered, last acting [4]
pg 18.5f is stuck undersized for 174335.527472, current state undersized+peered, last acting [7]

root@cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph -s
  cluster:
    id:     7b7576f4-3358-4668-9112-100440079807
    health: HEALTH_WARN
            Reduced data availability: 338 pgs inactive
            Degraded data redundancy: 338 pgs undersized
  services:
    mon: 1 daemons, quorum cab23-r720-11
    mgr: cab23-r720-11(active)
    mds: cephfs-1/1/1 up {0=mds-ceph-mds-6bfb74d9c7-gqgtl=up:creating}, 1 up:standby
    osd: 8 osds: 8 up, 8 in
  data:
    pools:   18 pools, 338 pgs
    objects: 0 objects, 0 bytes
    usage:   3229 MB used, 8184 GB / 8187 GB avail
    pgs:     100.000% pgs not active
             338 undersized+peered

Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com

From: Matthew H
Sent: Friday, November 09, 2018 6:18 AM
To: airship-discuss at lists.airshipit.org
Cc: Tu, Qiaolin (NSB - CN/Hangzhou)
Subject: Re: [Airship-discuss] Airship installation Questions

Greetings,

Could you run the following commands from a MON pod:

ceph osd tree
ceph osd dump

Also, how many nodes did you deploy on? One, or more than one?

Thanks,
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From qiaolin.tu at nokia-sbell.com  Mon Nov 12 09:26:45 2018
From: qiaolin.tu at nokia-sbell.com (Tu, Qiaolin (NSB - CN/Hangzhou))
Date: Mon, 12 Nov 2018 09:26:45 +0000
Subject: [Airship-discuss] Airship installation Questions
In-Reply-To:
References: ,
Message-ID:

Hi,

Add ceph-mon logs and yaml files.

root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph -s
  cluster:
    id:     7b7576f4-3358-4668-9112-100440079807
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum cab23-r720-11
    mgr: cab23-r720-11(active)
    mds: cephfs-1/1/1 up {0=mds-ceph-mds-5f547b6fd7-sg49g=up:active}
    osd: 1 osds: 1 up, 1 in
    rgw: 1 daemon active
  data:
    pools:   18 pools, 93 pgs
    objects: 1164 objects, 3407 bytes
    usage:   374 MB used, 1023 GB / 1023 GB avail
    pgs:     93 active+clean

root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph osd tree
ID CLASS WEIGHT  TYPE NAME              STATUS REWEIGHT PRI-AFF
-1       1.00000 root default
-2       1.00000     host cab23-r720-11
 0   hdd 1.00000         osd.0              up  1.00000 1.00000

root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph osd dump
epoch 219
fsid 7b7576f4-3358-4668-9112-100440079807
created 2018-11-12 08:53:17.281208
modified 2018-11-12 09:06:40.314892
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 6
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client hammer
require_osd_release luminous
pool 1 'rbd' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 last_change 219 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rbd
        removed_snaps [1~5]
pool 2 'cephfs_metadata' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 4 pgp_num 4 last_change 216 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs
pool 3 'cephfs_data' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 216 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs
pool 4 '.rgw.root' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 56 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 5 'default.rgw.control' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 68 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 6 'default.rgw.data.root' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 79 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 7 'default.rgw.gc' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 89 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 8 'default.rgw.log' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 102 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 9 'default.rgw.intent-log' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 113 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 10 'default.rgw.meta' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 124 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 11 'default.rgw.usage' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 134 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 12 'default.rgw.users.keys' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 145 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 13 'default.rgw.users.email' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 155 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 14 'default.rgw.users.swift' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 168 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 15 'default.rgw.users.uid' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 179 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 16 'default.rgw.buckets.extra' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 191 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 17 'default.rgw.buckets.index' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 4 pgp_num 4 last_change 202 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 18 'default.rgw.buckets.data' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 last_change 214 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
max_osd 1
osd.0 up in weight 1 up_from 5 up_thru 209 down_at 0 last_clean_interval [0,0) 10.23.23.11:6800/6766 10.23.23.11:6801/6766 10.23.23.11:6802/6766 10.23.23.11:6803/6766 exists,up 02d8f692-709a-45ea-9f2c-75486e16e82b

Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com
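A note for anyone who hits the same HEALTH_WARN shown earlier in this thread: with every pool at replicated size 3 and all eight OSDs on a single host, a CRUSH rule that places each replica on a distinct host can never find a second host, so every PG stays undersized+peered. A quick way to confirm this failure mode from the MON pod (pod name taken from the transcripts above; substitute your own):

# Replication target for a pool -- a size of 3 cannot be met by one host.
kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd pool get rbd size
# Dump the CRUSH rules; a chooseleaf step of type "host" means replicas
# must land on distinct hosts.
kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd crush rule dump
# Count the hosts actually backing the OSDs (only cab23-r720-11 here).
kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd tree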
From: Tu, Qiaolin (NSB - CN/Hangzhou)
Sent: Monday, November 12, 2018 4:25 PM
To: 'Matthew H'; airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: RE: [Airship-discuss] Airship installation Questions

Hi,

Thanks very much for your help. After modifying the ceph replication parameters, the ceph pods deployed successfully. The deployment then moves on to the UCP pods, which fail with the errors below. Please check the attached log for details, thanks very much!

ucp   airship-ucp-rabbitmq-rabbitmq-0                0/1   Init:0/2   0   1m
ucp   ingress-6cd5b89d5d-nmwpt                       1/1   Running    0   18m
ucp   ingress-6cd5b89d5d-nr65b                       1/1   Running    0   18m
ucp   ingress-error-pages-5c97bb46bb-2mvgm           1/1   Running    0   18m
ucp   ingress-error-pages-5c97bb46bb-wzzdz           1/1   Running    0   18m
ucp   mariadb-ingress-85b8556fbc-xpvwc               0/1   Running    0   1m
ucp   mariadb-ingress-85b8556fbc-zv72k               0/1   Running    0   1m
ucp   mariadb-ingress-error-pages-64f89dc697-2trh9   1/1   Running    0   1m
ucp   mariadb-server-0                               0/1   Init:0/2   0   1m
ucp   postgresql-0                                   0/1   Init:0/1   0   1m

root@cab23-r720-11:~# kubectl describe pod mariadb-ingress-85b8556fbc-xpvwc -n ucp
Name:           mariadb-ingress-85b8556fbc-xpvwc
Namespace:      ucp
Node:           cab23-r720-11/10.23.22.11
Start Time:     Mon, 12 Nov 2018 08:05:00 +0000
Labels:         application=mariadb
                component=ingress
                pod-template-hash=4164112967
                release_group=airship-ucp-mariadb
Annotations:    configmap-bin-hash=eb36d47d8f7d7097cf6d488a61145f76dbfe5e558edf5b802153a00fc3389f0b
                configmap-etc-hash=3f45f1d8d3ddf5a09fbcd3036cb23bffb939cfa1225f8f1a0d79b390877710c1
Status:         Running
IP:             10.97.38.125
Events:
  Type     Reason                 Age                From                    Message
  ----     ------                 ----               ----                    -------
  Normal   Scheduled              3m                 default-scheduler       Successfully assigned mariadb-ingress-85b8556fbc-xpvwc to cab23-r720-11
  Normal   SuccessfulMountVolume  3m                 kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "airship-ucp-mariadb-ingress-token-htf82"
  Normal   SuccessfulMountVolume  3m                 kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "mariadb-etc"
  Normal   SuccessfulMountVolume  3m                 kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "mariadb-bin"
  Normal   Pulled                 3m                 kubelet, cab23-r720-11  Container image "quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine
  Normal   Created                3m                 kubelet, cab23-r720-11  Created container
  Normal   Started                3m                 kubelet, cab23-r720-11  Started container
  Normal   Pulled                 3m                 kubelet, cab23-r720-11  Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0" already present on machine
  Normal   Created                3m                 kubelet, cab23-r720-11  Created container
  Normal   Started                3m                 kubelet, cab23-r720-11  Started container
  Warning  Unhealthy              26s (x16 over 2m)  kubelet, cab23-r720-11  Readiness probe failed: dial tcp 10.97.38.125:3306: getsockopt: connection refused

root@cab23-r720-11:~# kubectl describe pod postgresql-0 -n ucp
Name:           postgresql-0
Namespace:      ucp
Node:           cab23-r720-11/10.23.22.11
Start Time:     Mon, 12 Nov 2018 08:04:56 +0000
Labels:         application=postgresql
                component=server
                controller-revision-hash=postgresql-566fd45fd7
                release_group=airship-ucp-postgresql
                statefulset.kubernetes.io/pod-name=postgresql-0
Events:
  Type     Reason                  Age               From                     Message
  ----     ------                  ----              ----                     -------
  Normal   SuccessfulAttachVolume  4m                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-c66d1b6f-e64a-11e8-bb43-080027f45d2a"
  Normal   Scheduled               4m                default-scheduler        Successfully assigned postgresql-0 to cab23-r720-11
  Normal   SuccessfulMountVolume   4m                kubelet, cab23-r720-11   MountVolume.SetUp succeeded for volume "postgresql-bin"
  Normal   SuccessfulMountVolume   4m                kubelet, cab23-r720-11   MountVolume.SetUp succeeded for volume "postgresql-token-rmkq9"
  Warning  FailedMount             2m                kubelet, cab23-r720-11   MountVolume.WaitForAttach failed for volume "pvc-c66d1b6f-e64a-11e8-bb43-080027f45d2a" : fail to check rbd image status with: (exit status 22), rbd output: (2018-11-12 16:07:01.400015 7fcc31018100 -1 did not load config file, using default settings. server name not found: ceph-mon.ceph.svc.cluster.local (Name or service not known) unable to parse addrs in 'ceph-mon.ceph.svc.cluster.local:6789' rbd: couldn't connect to the cluster!)
  Warning  FailedMount             19s (x2 over 2m)  kubelet, cab23-r720-11   Unable to mount volumes for pod "postgresql-0_ucp(a46bc160-e651-11e8-bb43-080027f45d2a)": timeout expired waiting for volumes to attach or mount for pod "ucp"/"postgresql-0". list of unmounted volumes=[postgresql-data]. list of unattached volumes=[postgresql-data postgresql-bin postgresql-token-rmkq9]

Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com

From: Matthew H
Sent: Friday, November 09, 2018 10:43 PM
To: Tu, Qiaolin (NSB - CN/Hangzhou); airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: Re: [Airship-discuss] Airship installation Questions

Thanks,

From what I can see you need additional overrides set to run Ceph on a single node. The overrides you need are here [1]. Let me know if this helps get you in the right direction.

[1] https://github.com/openstack/airship-in-a-bottle/blob/master/deployment_files/site/gate-multinode/software/charts/ucp/storage_provisioner/ceph.yaml#L173-L250

________________________________
From: Tu, Qiaolin (NSB - CN/Hangzhou)
Sent: Friday, November 9, 2018 4:51 AM
To: Matthew H; airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: RE: [Airship-discuss] Airship installation Questions

Hi,

I deployed only 1 master node (1 genesis node + 1 master node); the attachment contains my yaml files. Thanks very much!
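The overrides in [1] apply at deployment time. If replication has to be relaxed on a cluster that is already running, a per-pool equivalent can be done from the MON pod; note that the pools in the osd dump above carry the nosizechange flag, which has to be cleared first. A sketch using the rbd pool (repeat for each pool; a single replica means no redundancy, so this is for lab use only):

# Clear the flag that blocks size changes (set on every pool here).
kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd pool set rbd nosizechange false
# Drop to one replica so PGs can go active+clean on a single host.
kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd pool set rbd size 1
kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd pool set rbd min_size 1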
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: airship-treasuremap.rar
Type: application/octet-stream
Size: 2268836 bytes
Desc: airship-treasuremap.rar
URL:

From MM9745 at att.com  Mon Nov 12 20:42:31 2018
From: MM9745 at att.com (MCEUEN, MATT)
Date: Mon, 12 Nov 2018 20:42:31 +0000
Subject: [Airship-discuss] Berlin Airship Forums
Message-ID: <7C64A75C21BB8D43BD75BB18635E4D896CD9DCE8@MOSTLS1MSGUSRFF.ITServices.sbc.com>

I wanted to make sure that all interested folks are aware of the Airship-related Forums that will be held on Tuesday:

Cross-project container security discussion:
https://etherpad.openstack.org/p/BER-container-security

Airship Quality Assurance use cases:
https://etherpad.openstack.org/p/BER-airship-qa

Airship Bare Metal provisioning brainstorming & design:
https://etherpad.openstack.org/p/BER-airship-bare-metal

We welcome all participation and discussion - please add any topics you'd like to discuss to the etherpads! I look forward to some good sessions tomorrow.

Thanks,
Matt
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matthew.heler at hotmail.com  Mon Nov 12 18:32:32 2018
From: matthew.heler at hotmail.com (Matthew H)
Date: Mon, 12 Nov 2018 18:32:32 +0000
Subject: [Airship-discuss] Airship installation Questions
In-Reply-To:
References: , ,
Message-ID:

Greetings,

From your master k8s node can you resolve ceph-mon.ceph.svc.cluster.local? Please also send the output of 'cat /etc/resolv.conf' from your k8s nodes (genesis and master node).

Thxs
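Two checks cover both halves of that question. The failed rbd mount in the earlier FailedMount event is executed by kubelet on the host, so the host's own resolver has to handle cluster service names, not only the in-cluster DNS. A sketch (the cluster DNS ClusterIP is commonly 10.96.0.10 in these manifests, but verify it with the first command):

# Find the cluster DNS service address and confirm ceph-mon exists.
kubectl -n kube-system get svc
kubectl -n ceph get svc ceph-mon
# Resolve the mon name against the cluster DNS directly.
nslookup ceph-mon.ceph.svc.cluster.local 10.96.0.10
# On the genesis/master host itself: this resolver is what the rbd
# client invoked by kubelet will use.
cat /etc/resolv.conf
getent hosts ceph-mon.ceph.svc.cluster.local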
________________________________
From: Tu, Qiaolin (NSB - CN/Hangzhou)
Sent: Monday, November 12, 2018 4:39 AM
To: Matthew H; airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: RE: [Airship-discuss] Airship installation Questions

Hi,

Add ceph rbd image related logs.

root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd ls
kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd
kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd
kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd

root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd
rbd image 'kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd':
        size 5120 MB in 1280 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.113b74b0dc51
        format: 2
        features: layering
        flags:
        create_timestamp: Mon Nov 12 09:06:39 2018

root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd
rbd image 'kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd':
        size 256 MB in 64 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.113c74b0dc51
        format: 2
        features: layering
        flags:
        create_timestamp: Mon Nov 12 09:06:40 2018

root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd
rbd image 'kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd':
        size 5120 MB in 1280 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.113d74b0dc51
        format: 2
        features: layering
        flags:
        create_timestamp: Mon Nov 12 09:06:40 2018

root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd
Watchers: none

root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd
Watchers: none

root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd
Watchers: none

Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com
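Which of those images postgresql-0 is actually waiting on is recorded in the bound PersistentVolume, along with the monitor address that kubelet tries to resolve. A sketch using the volume name from the describe output further up (substitute your own PV name):

# List the UCP claims and the volumes they are bound to.
kubectl -n ucp get pvc
# Read the rbd image name and monitor list out of the PV spec; the
# monitors field is exactly the name that failed to resolve above.
kubectl get pv pvc-c66d1b6f-e64a-11e8-bb43-080027f45d2a \
  -o jsonpath='{.spec.rbd.image} {.spec.rbd.monitors}'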
undersized+peered, last acting [6] pg 18.5d is stuck undersized for 175.191412, current state undersized+peered, last acting [5] pg 18.5e is stuck undersized for 175.205141, current state undersized+peered, last acting [4] pg 18.5f is stuck undersized for 174335.527472, current state undersized+peered, last acting [7]

root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph -s
  cluster:
    id:     7b7576f4-3358-4668-9112-100440079807
    health: HEALTH_WARN
            Reduced data availability: 338 pgs inactive
            Degraded data redundancy: 338 pgs undersized

  services:
    mon: 1 daemons, quorum cab23-r720-11
    mgr: cab23-r720-11(active)
    mds: cephfs-1/1/1 up {0=mds-ceph-mds-6bfb74d9c7-gqgtl=up:creating}, 1 up:standby
    osd: 8 osds: 8 up, 8 in

  data:
    pools:   18 pools, 338 pgs
    objects: 0 objects, 0 bytes
    usage:   3229 MB used, 8184 GB / 8187 GB avail
    pgs:     100.000% pgs not active
             338 undersized+peered

Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com

From: Matthew H >
Sent: Friday, November 09, 2018 6:18 AM
To: airship-discuss at lists.airshipit.org
Cc: Tu, Qiaolin (NSB - CN/Hangzhou) >
Subject: Re: [Airship-discuss] Airship installation Questions

Greetings,

Could you run the following commands from a MON pod:

ceph osd tree
ceph osd dump

Also, how many nodes did you deploy on? One node, or more than one?

Thanks,
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
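(A minimal sketch of how Matthew's two commands can be run in place, reusing the kubectl exec pattern and the mon pod name from Qiaolin's listing above; if the pod name differs in your deployment, substitute whatever kubectl get pods -n ceph reports:)

kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd tree
kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd dump

ceph osd tree prints the CRUSH hierarchy, showing whether all eight OSDs sit under a single host bucket; since the default replicated CRUSH rule places replicas on distinct hosts, a single-host layout would explain every PG staying undersized+peered.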
From ebell at me.com Mon Nov 12 23:29:36 2018
From: ebell at me.com (Eric Bell)
Date: Mon, 12 Nov 2018 17:29:36 -0600
Subject: [Airship-discuss] Airship-in-a-bottle, install from scratch issue
Message-ID: <67ABCF4B-8AC7-4BD0-B87B-AC748B27334E@me.com>

Hello

First time caller ;-)

I've spent over a week now trying to install airship-in-a-bottle on a fresh machine, and for the past few tries it has stalled in the same place. (Each time I've tried this it's been on a fresh Ubuntu 16.04.5 LTS install.) Below is the full output.

Thanks
Eric

unknownd0817ad6b6f2:.ssh ericbell$ rm known_hosts
unknownd0817ad6b6f2:.ssh ericbell$ cd ..
unknownd0817ad6b6f2:~ ericbell$ ssh eric at 192.168.1.177
Warning: Permanently added '192.168.1.177' (ECDSA) to the list of known hosts.
eric at 192.168.1.177's password:
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-131-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
73 packages can be updated. 48 updates are security updates.
eric at asus:~$
eric at asus:~$
eric at asus:~$
eric at asus:~$
eric at asus:~$ sudo -i
[sudo] password for eric:
root at asus:~# apt update
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Hit:2 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease
Hit:3 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease
Hit:4 http://security.ubuntu.com/ubuntu xenial-security InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
67 packages can be upgraded. Run 'apt list --upgradable' to see them.
root at asus:~# apt upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed: linux-headers-4.4.0-138 linux-headers-4.4.0-138-generic linux-image-4.4.0-138-generic linux-image-extra-4.4.0-138-generic linux-signed-image-4.4.0-138-generic ubuntu-advantage-tools
The following packages will be upgraded: apparmor apt apt-transport-https apt-utils bind9-host cloud-initramfs-copymods cloud-initramfs-dyn-netconf curl distro-info-data dnsutils dpkg friendly-recovery gettext-base git git-man gnupg gpgv grub-legacy-ec2 initramfs-tools initramfs-tools-bin initramfs-tools-core intel-microcode kmod libapparmor-perl libapparmor1 libapt-inst2.0 libapt-pkg5.0 libasprintf0v5 libbind9-140 libcurl3-gnutls libdns-export162 libdns162 libglib2.0-0 libglib2.0-data libisc-export160 libisc160 libisccc140 libisccfg140 libkmod2 liblwres141 libmspack0 libpam-systemd libsystemd0 libudev1 libx11-6 libx11-data libxml2 linux-headers-generic linux-signed-generic linux-signed-image-generic open-iscsi openssh-client openssh-server openssh-sftp-server overlayroot python3-requests python3-update-manager python3-urllib3 squashfs-tools systemd systemd-sysv tzdata ubuntu-minimal ubuntu-server ubuntu-standard udev update-manager-core
67 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
Need to get 92.0 MB of archives.
After this operation, 303 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 dpkg amd64 1.18.4ubuntu1.5 [2,085 kB]
Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libapt-pkg5.0 amd64 1.2.29 [707 kB]
Get:3 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libapt-inst2.0 amd64 1.2.29 [55.5 kB]
Get:4 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 apt amd64 1.2.29 [1,041 kB]
Get:5 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 apt-utils amd64 1.2.29 [196 kB]
Get:6 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 gpgv amd64 1.4.20-1ubuntu3.3 [165 kB]
Get:7 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 gnupg amd64 1.4.20-1ubuntu3.3 [626 kB]
Get:8 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libsystemd0 amd64 229-4ubuntu21.8 [204 kB]
Get:9 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpam-systemd amd64 229-4ubuntu21.8 [115 kB]
Get:10 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 systemd amd64 229-4ubuntu21.8 [3,775 kB]
Get:11 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 udev amd64 229-4ubuntu21.8 [993 kB]
Get:12 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 kmod amd64 22-1ubuntu5.1 [88.4 kB]
Get:13 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkmod2 amd64 22-1ubuntu5.1 [39.8 kB]
Get:14 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libudev1 amd64 229-4ubuntu21.8 [54.1 kB]
Get:15 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 friendly-recovery all 0.2.31ubuntu2 [9,662 B]
Get:16 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 initramfs-tools all 0.122ubuntu8.13 [8,936 B]
Get:17 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 initramfs-tools-core all 0.122ubuntu8.13 [44.7 kB]
Get:18 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 initramfs-tools-bin amd64 0.122ubuntu8.13 [9,742 B]
Get:19 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 systemd-sysv amd64 229-4ubuntu21.8 [11.6 kB]
Get:20 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libapparmor1 amd64
2.10.95-0ubuntu2.10 [29.7 kB] Get:21 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libglib2.0-0 amd64 2.48.2-0ubuntu4.1 [1,120 kB] Get:22 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 open-iscsi amd64 2.0.873+git0.3b4b4500-14ubuntu3.6 [334 kB] Get:23 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 tzdata all 2018g-0ubuntu0.16.04 [166 kB] Get:24 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 distro-info-data all 0.28ubuntu0.9 [4,534 B] Get:25 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libisc-export160 amd64 1:9.10.3.dfsg.P4-8ubuntu1.11 [153 kB] Get:26 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libdns-export162 amd64 1:9.10.3.dfsg.P4-8ubuntu1.11 [667 kB] Get:27 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 ubuntu-advantage-tools all 10ubuntu0.16.04.1 [11.5 kB] Get:28 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 ubuntu-minimal amd64 1.361.2 [2,662 B] Get:29 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libapparmor-perl amd64 2.10.95-0ubuntu2.10 [31.6 kB] Get:30 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 apparmor amd64 2.10.95-0ubuntu2.10 [451 kB] Get:31 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 curl amd64 7.47.0-1ubuntu2.11 [139 kB] Get:32 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcurl3-gnutls amd64 7.47.0-1ubuntu2.11 [185 kB] Get:33 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 apt-transport-https amd64 1.2.29 [26.2 kB] Get:34 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libxml2 amd64 2.9.3+dfsg1-1ubuntu0.6 [697 kB] Get:35 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 bind9-host amd64 1:9.10.3.dfsg.P4-8ubuntu1.11 [38.4 kB] Get:36 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 dnsutils amd64 1:9.10.3.dfsg.P4-8ubuntu1.11 [89.2 kB] Get:37 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libisc160 amd64 1:9.10.3.dfsg.P4-8ubuntu1.11 [215 kB] Get:38 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libdns162 amd64 1:9.10.3.dfsg.P4-8ubuntu1.11 [881 kB] Get:39 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libisccc140 amd64 1:9.10.3.dfsg.P4-8ubuntu1.11 [16.3 kB] Get:40 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libisccfg140 amd64 1:9.10.3.dfsg.P4-8ubuntu1.11 [40.4 kB] Get:41 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 liblwres141 amd64 1:9.10.3.dfsg.P4-8ubuntu1.11 [33.7 kB] Get:42 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libbind9-140 amd64 1:9.10.3.dfsg.P4-8ubuntu1.11 [23.6 kB] Get:43 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libasprintf0v5 amd64 0.19.7-2ubuntu3.1 [6,568 B] Get:44 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 gettext-base amd64 0.19.7-2ubuntu3.1 [48.0 kB] Get:45 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libglib2.0-data all 2.48.2-0ubuntu4.1 [132 kB] Get:46 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libx11-data all 2:1.6.3-1ubuntu2.1 [113 kB] Get:47 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libx11-6 amd64 2:1.6.3-1ubuntu2.1 [570 kB] Get:48 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 openssh-sftp-server amd64 1:7.2p2-4ubuntu2.6 [38.8 kB] Get:49 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 openssh-server amd64 1:7.2p2-4ubuntu2.6 [335 kB] Get:50 http://us.archive.ubuntu.com/ubuntu 
xenial-updates/main amd64 openssh-client amd64 1:7.2p2-4ubuntu2.6 [584 kB] Get:51 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-update-manager all 1:16.04.15 [33.4 kB] Get:52 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 update-manager-core all 1:16.04.15 [5,498 B] Get:53 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 ubuntu-standard amd64 1.361.2 [2,710 B] Get:54 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 git-man all 1:2.7.4-0ubuntu1.5 [736 kB] Get:55 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 git amd64 1:2.7.4-0ubuntu1.5 [2,714 kB] Get:56 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libmspack0 amd64 0.5-1ubuntu0.16.04.3 [37.4 kB] Get:57 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-headers-4.4.0-138 all 4.4.0-138.164 [9,963 kB] Get:58 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-headers-4.4.0-138-generic amd64 4.4.0-138.164 [833 kB] Get:59 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-image-4.4.0-138-generic amd64 4.4.0-138.164 [22.1 MB] Get:60 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-signed-image-4.4.0-138-generic amd64 4.4.0-138.164 [4,014 B] Get:61 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-image-extra-4.4.0-138-generic amd64 4.4.0-138.164 [36.6 MB] Get:62 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 intel-microcode amd64 3.20180807a.0ubuntu0.16.04.1 [1,275 kB] Get:63 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-signed-generic amd64 4.4.0.138.144 [1,814 B] Get:64 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-signed-image-generic amd64 4.4.0.138.144 [2,420 B] Get:65 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-headers-generic amd64 4.4.0.138.144 [2,336 B] Get:66 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-urllib3 all 1.13.1-2ubuntu0.16.04.2 [58.1 kB] Get:67 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-requests all 2.9.1-3ubuntu0.1 [55.8 kB] Get:68 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 squashfs-tools amd64 1:4.3-3ubuntu2.16.04.3 [105 kB] Get:69 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 cloud-initramfs-copymods all 0.27ubuntu1.6 [4,380 B] Get:70 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 cloud-initramfs-dyn-netconf all 0.27ubuntu1.6 [6,892 B] Get:71 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 grub-legacy-ec2 all 18.4-0ubuntu1~16.04.2 [26.1 kB] Get:72 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 overlayroot all 0.27ubuntu1.6 [15.7 kB] Get:73 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 ubuntu-server amd64 1.361.2 [2,560 B] Fetched 92.0 MB in 2s (39.0 MB/s) Extracting templates from packages: 100% Preconfiguring packages ... (Reading database ... 60367 files and directories currently installed.) Preparing to unpack .../dpkg_1.18.4ubuntu1.5_amd64.deb ... Unpacking dpkg (1.18.4ubuntu1.5) over (1.18.4ubuntu1.4) ... Setting up dpkg (1.18.4ubuntu1.5) ... Processing triggers for man-db (2.7.5-1) ... (Reading database ... 60367 files and directories currently installed.) Preparing to unpack .../libapt-pkg5.0_1.2.29_amd64.deb ... Unpacking libapt-pkg5.0:amd64 (1.2.29) over (1.2.27) ... Processing triggers for libc-bin (2.23-0ubuntu10) ... Setting up libapt-pkg5.0:amd64 (1.2.29) ... 
Processing triggers for libc-bin (2.23-0ubuntu10) ... (Reading database ... 60367 files and directories currently installed.) Preparing to unpack .../libapt-inst2.0_1.2.29_amd64.deb ... Unpacking libapt-inst2.0:amd64 (1.2.29) over (1.2.27) ... Preparing to unpack .../archives/apt_1.2.29_amd64.deb ... Unpacking apt (1.2.29) over (1.2.27) ... Processing triggers for libc-bin (2.23-0ubuntu10) ... Processing triggers for man-db (2.7.5-1) ... Setting up apt (1.2.29) ... Installing new version of config file /etc/apt/apt.conf.d/01autoremove ... Processing triggers for libc-bin (2.23-0ubuntu10) ... (Reading database ... 60367 files and directories currently installed.) Preparing to unpack .../apt-utils_1.2.29_amd64.deb ... Unpacking apt-utils (1.2.29) over (1.2.27) ... Preparing to unpack .../gpgv_1.4.20-1ubuntu3.3_amd64.deb ... Unpacking gpgv (1.4.20-1ubuntu3.3) over (1.4.20-1ubuntu3.2) ... Processing triggers for man-db (2.7.5-1) ... Setting up gpgv (1.4.20-1ubuntu3.3) ... (Reading database ... 60367 files and directories currently installed.) Preparing to unpack .../gnupg_1.4.20-1ubuntu3.3_amd64.deb ... Unpacking gnupg (1.4.20-1ubuntu3.3) over (1.4.20-1ubuntu3.2) ... Processing triggers for man-db (2.7.5-1) ... Processing triggers for install-info (6.1.0.dfsg.1-5) ... Setting up gnupg (1.4.20-1ubuntu3.3) ... (Reading database ... 60367 files and directories currently installed.) Preparing to unpack .../libsystemd0_229-4ubuntu21.8_amd64.deb ... Unpacking libsystemd0:amd64 (229-4ubuntu21.8) over (229-4ubuntu21.4) ... Processing triggers for libc-bin (2.23-0ubuntu10) ... Setting up libsystemd0:amd64 (229-4ubuntu21.8) ... Processing triggers for libc-bin (2.23-0ubuntu10) ... (Reading database ... 60367 files and directories currently installed.) Preparing to unpack .../libpam-systemd_229-4ubuntu21.8_amd64.deb ... Unpacking libpam-systemd:amd64 (229-4ubuntu21.8) over (229-4ubuntu21.4) ... Preparing to unpack .../systemd_229-4ubuntu21.8_amd64.deb ... Unpacking systemd (229-4ubuntu21.8) over (229-4ubuntu21.4) ... Processing triggers for man-db (2.7.5-1) ... Processing triggers for dbus (1.10.6-1ubuntu3.3) ... Processing triggers for ureadahead (0.100.0-19) ... ureadahead will be reprofiled on next reboot Setting up systemd (229-4ubuntu21.8) ... addgroup: The group `systemd-journal' already exists as a system group. Exiting. [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line for path "/var/log", ignoring. (Reading database ... 60367 files and directories currently installed.) Preparing to unpack .../udev_229-4ubuntu21.8_amd64.deb ... Unpacking udev (229-4ubuntu21.8) over (229-4ubuntu21.4) ... Preparing to unpack .../kmod_22-1ubuntu5.1_amd64.deb ... Unpacking kmod (22-1ubuntu5.1) over (22-1ubuntu5) ... Preparing to unpack .../libkmod2_22-1ubuntu5.1_amd64.deb ... Unpacking libkmod2:amd64 (22-1ubuntu5.1) over (22-1ubuntu5) ... Processing triggers for man-db (2.7.5-1) ... Processing triggers for ureadahead (0.100.0-19) ... Processing triggers for libc-bin (2.23-0ubuntu10) ... Setting up libkmod2:amd64 (22-1ubuntu5.1) ... Processing triggers for systemd (229-4ubuntu21.8) ... Processing triggers for libc-bin (2.23-0ubuntu10) ... (Reading database ... 60367 files and directories currently installed.) Preparing to unpack .../libudev1_229-4ubuntu21.8_amd64.deb ... Unpacking libudev1:amd64 (229-4ubuntu21.8) over (229-4ubuntu21.4) ... Processing triggers for libc-bin (2.23-0ubuntu10) ... Setting up libudev1:amd64 (229-4ubuntu21.8) ... Processing triggers for libc-bin (2.23-0ubuntu10) ... (Reading database ... 
60367 files and directories currently installed.) Preparing to unpack .../friendly-recovery_0.2.31ubuntu2_all.deb ... Removed symlink /etc/systemd/system/sysinit.target.wants/friendly-recovery.service. Unpacking friendly-recovery (0.2.31ubuntu2) over (0.2.31ubuntu1) ... Preparing to unpack .../initramfs-tools_0.122ubuntu8.13_all.deb ... Unpacking initramfs-tools (0.122ubuntu8.13) over (0.122ubuntu8.11) ... Preparing to unpack .../initramfs-tools-core_0.122ubuntu8.13_all.deb ... Unpacking initramfs-tools-core (0.122ubuntu8.13) over (0.122ubuntu8.11) ... Preparing to unpack .../initramfs-tools-bin_0.122ubuntu8.13_amd64.deb ... Unpacking initramfs-tools-bin (0.122ubuntu8.13) over (0.122ubuntu8.11) ... Preparing to unpack .../systemd-sysv_229-4ubuntu21.8_amd64.deb ... Unpacking systemd-sysv (229-4ubuntu21.8) over (229-4ubuntu21.4) ... Processing triggers for ureadahead (0.100.0-19) ... Processing triggers for man-db (2.7.5-1) ... Setting up systemd-sysv (229-4ubuntu21.8) ... (Reading database ... 60369 files and directories currently installed.) Preparing to unpack .../libapparmor1_2.10.95-0ubuntu2.10_amd64.deb ... Unpacking libapparmor1:amd64 (2.10.95-0ubuntu2.10) over (2.10.95-0ubuntu2.9) ... Processing triggers for libc-bin (2.23-0ubuntu10) ... Setting up libapparmor1:amd64 (2.10.95-0ubuntu2.10) ... Processing triggers for libc-bin (2.23-0ubuntu10) ... (Reading database ... 60369 files and directories currently installed.) Preparing to unpack .../libglib2.0-0_2.48.2-0ubuntu4.1_amd64.deb ... Unpacking libglib2.0-0:amd64 (2.48.2-0ubuntu4.1) over (2.48.2-0ubuntu3) ... Preparing to unpack .../open-iscsi_2.0.873+git0.3b4b4500-14ubuntu3.6_amd64.deb ... Unpacking open-iscsi (2.0.873+git0.3b4b4500-14ubuntu3.6) over (2.0.873+git0.3b4b4500-14ubuntu3.4) ... Preparing to unpack .../tzdata_2018g-0ubuntu0.16.04_all.deb ... Unpacking tzdata (2018g-0ubuntu0.16.04) over (2017c-0ubuntu0.16.04) ... Preparing to unpack .../distro-info-data_0.28ubuntu0.9_all.deb ... Unpacking distro-info-data (0.28ubuntu0.9) over (0.28ubuntu0.8) ... Preparing to unpack .../libisc-export160_1%3a9.10.3.dfsg.P4-8ubuntu1.11_amd64.deb ... Unpacking libisc-export160 (1:9.10.3.dfsg.P4-8ubuntu1.11) over (1:9.10.3.dfsg.P4-8ubuntu1.10) ... Preparing to unpack .../libdns-export162_1%3a9.10.3.dfsg.P4-8ubuntu1.11_amd64.deb ... Unpacking libdns-export162 (1:9.10.3.dfsg.P4-8ubuntu1.11) over (1:9.10.3.dfsg.P4-8ubuntu1.10) ... Selecting previously unselected package ubuntu-advantage-tools. Preparing to unpack .../ubuntu-advantage-tools_10ubuntu0.16.04.1_all.deb ... Unpacking ubuntu-advantage-tools (10ubuntu0.16.04.1) ... Preparing to unpack .../ubuntu-minimal_1.361.2_amd64.deb ... Unpacking ubuntu-minimal (1.361.2) over (1.361.1) ... Preparing to unpack .../libapparmor-perl_2.10.95-0ubuntu2.10_amd64.deb ... Unpacking libapparmor-perl (2.10.95-0ubuntu2.10) over (2.10.95-0ubuntu2.9) ... Preparing to unpack .../apparmor_2.10.95-0ubuntu2.10_amd64.deb ... Unpacking apparmor (2.10.95-0ubuntu2.10) over (2.10.95-0ubuntu2.9) ... Preparing to unpack .../curl_7.47.0-1ubuntu2.11_amd64.deb ... Unpacking curl (7.47.0-1ubuntu2.11) over (7.47.0-1ubuntu2.8) ... Preparing to unpack .../libcurl3-gnutls_7.47.0-1ubuntu2.11_amd64.deb ... Unpacking libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.11) over (7.47.0-1ubuntu2.8) ... Preparing to unpack .../apt-transport-https_1.2.29_amd64.deb ... Unpacking apt-transport-https (1.2.29) over (1.2.27) ... Preparing to unpack .../libxml2_2.9.3+dfsg1-1ubuntu0.6_amd64.deb ... 
Unpacking libxml2:amd64 (2.9.3+dfsg1-1ubuntu0.6) over (2.9.3+dfsg1-1ubuntu0.5) ... Preparing to unpack .../bind9-host_1%3a9.10.3.dfsg.P4-8ubuntu1.11_amd64.deb ... Unpacking bind9-host (1:9.10.3.dfsg.P4-8ubuntu1.11) over (1:9.10.3.dfsg.P4-8ubuntu1.10) ... Preparing to unpack .../dnsutils_1%3a9.10.3.dfsg.P4-8ubuntu1.11_amd64.deb ... Unpacking dnsutils (1:9.10.3.dfsg.P4-8ubuntu1.11) over (1:9.10.3.dfsg.P4-8ubuntu1.10) ... Preparing to unpack .../libisc160_1%3a9.10.3.dfsg.P4-8ubuntu1.11_amd64.deb ... Unpacking libisc160:amd64 (1:9.10.3.dfsg.P4-8ubuntu1.11) over (1:9.10.3.dfsg.P4-8ubuntu1.10) ... Preparing to unpack .../libdns162_1%3a9.10.3.dfsg.P4-8ubuntu1.11_amd64.deb ... Unpacking libdns162:amd64 (1:9.10.3.dfsg.P4-8ubuntu1.11) over (1:9.10.3.dfsg.P4-8ubuntu1.10) ... Preparing to unpack .../libisccc140_1%3a9.10.3.dfsg.P4-8ubuntu1.11_amd64.deb ... Unpacking libisccc140:amd64 (1:9.10.3.dfsg.P4-8ubuntu1.11) over (1:9.10.3.dfsg.P4-8ubuntu1.10) ... Preparing to unpack .../libisccfg140_1%3a9.10.3.dfsg.P4-8ubuntu1.11_amd64.deb ... Unpacking libisccfg140:amd64 (1:9.10.3.dfsg.P4-8ubuntu1.11) over (1:9.10.3.dfsg.P4-8ubuntu1.10) ... Preparing to unpack .../liblwres141_1%3a9.10.3.dfsg.P4-8ubuntu1.11_amd64.deb ... Unpacking liblwres141:amd64 (1:9.10.3.dfsg.P4-8ubuntu1.11) over (1:9.10.3.dfsg.P4-8ubuntu1.10) ... Preparing to unpack .../libbind9-140_1%3a9.10.3.dfsg.P4-8ubuntu1.11_amd64.deb ... Unpacking libbind9-140:amd64 (1:9.10.3.dfsg.P4-8ubuntu1.11) over (1:9.10.3.dfsg.P4-8ubuntu1.10) ... Preparing to unpack .../libasprintf0v5_0.19.7-2ubuntu3.1_amd64.deb ... Unpacking libasprintf0v5:amd64 (0.19.7-2ubuntu3.1) over (0.19.7-2ubuntu3) ... Preparing to unpack .../gettext-base_0.19.7-2ubuntu3.1_amd64.deb ... Unpacking gettext-base (0.19.7-2ubuntu3.1) over (0.19.7-2ubuntu3) ... Preparing to unpack .../libglib2.0-data_2.48.2-0ubuntu4.1_all.deb ... Unpacking libglib2.0-data (2.48.2-0ubuntu4.1) over (2.48.2-0ubuntu3) ... Preparing to unpack .../libx11-data_2%3a1.6.3-1ubuntu2.1_all.deb ... Unpacking libx11-data (2:1.6.3-1ubuntu2.1) over (2:1.6.3-1ubuntu2) ... Preparing to unpack .../libx11-6_2%3a1.6.3-1ubuntu2.1_amd64.deb ... Unpacking libx11-6:amd64 (2:1.6.3-1ubuntu2.1) over (2:1.6.3-1ubuntu2) ... Preparing to unpack .../openssh-sftp-server_1%3a7.2p2-4ubuntu2.6_amd64.deb ... Unpacking openssh-sftp-server (1:7.2p2-4ubuntu2.6) over (1:7.2p2-4ubuntu2.4) ... Preparing to unpack .../openssh-server_1%3a7.2p2-4ubuntu2.6_amd64.deb ... Unpacking openssh-server (1:7.2p2-4ubuntu2.6) over (1:7.2p2-4ubuntu2.4) ... Preparing to unpack .../openssh-client_1%3a7.2p2-4ubuntu2.6_amd64.deb ... Unpacking openssh-client (1:7.2p2-4ubuntu2.6) over (1:7.2p2-4ubuntu2.4) ... Preparing to unpack .../python3-update-manager_1%3a16.04.15_all.deb ... Unpacking python3-update-manager (1:16.04.15) over (1:16.04.13) ... Preparing to unpack .../update-manager-core_1%3a16.04.15_all.deb ... Unpacking update-manager-core (1:16.04.15) over (1:16.04.13) ... Preparing to unpack .../ubuntu-standard_1.361.2_amd64.deb ... Unpacking ubuntu-standard (1.361.2) over (1.361.1) ... Preparing to unpack .../git-man_1%3a2.7.4-0ubuntu1.5_all.deb ... Unpacking git-man (1:2.7.4-0ubuntu1.5) over (1:2.7.4-0ubuntu1.4) ... Preparing to unpack .../git_1%3a2.7.4-0ubuntu1.5_amd64.deb ... Unpacking git (1:2.7.4-0ubuntu1.5) over (1:2.7.4-0ubuntu1.4) ... Preparing to unpack .../libmspack0_0.5-1ubuntu0.16.04.3_amd64.deb ... Unpacking libmspack0:amd64 (0.5-1ubuntu0.16.04.3) over (0.5-1ubuntu0.16.04.1) ... Selecting previously unselected package linux-headers-4.4.0-138. 
Preparing to unpack .../linux-headers-4.4.0-138_4.4.0-138.164_all.deb ... Unpacking linux-headers-4.4.0-138 (4.4.0-138.164) ... Selecting previously unselected package linux-headers-4.4.0-138-generic. Preparing to unpack .../linux-headers-4.4.0-138-generic_4.4.0-138.164_amd64.deb ... Unpacking linux-headers-4.4.0-138-generic (4.4.0-138.164) ... Selecting previously unselected package linux-image-4.4.0-138-generic. Preparing to unpack .../linux-image-4.4.0-138-generic_4.4.0-138.164_amd64.deb ... Examining /etc/kernel/preinst.d/ run-parts: executing /etc/kernel/preinst.d/intel-microcode 4.4.0-138-generic /boot/vmlinuz-4.4.0-138-generic Done. Unpacking linux-image-4.4.0-138-generic (4.4.0-138.164) ... Selecting previously unselected package linux-signed-image-4.4.0-138-generic. Preparing to unpack .../linux-signed-image-4.4.0-138-generic_4.4.0-138.164_amd64.deb ... Unpacking linux-signed-image-4.4.0-138-generic (4.4.0-138.164) ... Selecting previously unselected package linux-image-extra-4.4.0-138-generic. Preparing to unpack .../linux-image-extra-4.4.0-138-generic_4.4.0-138.164_amd64.deb ... Unpacking linux-image-extra-4.4.0-138-generic (4.4.0-138.164) ... Preparing to unpack .../intel-microcode_3.20180807a.0ubuntu0.16.04.1_amd64.deb ... Unpacking intel-microcode (3.20180807a.0ubuntu0.16.04.1) over (3.20180425.1~ubuntu0.16.04.2) ... Preparing to unpack .../linux-signed-generic_4.4.0.138.144_amd64.deb ... Unpacking linux-signed-generic (4.4.0.138.144) over (4.4.0.131.137) ... Preparing to unpack .../linux-signed-image-generic_4.4.0.138.144_amd64.deb ... Unpacking linux-signed-image-generic (4.4.0.138.144) over (4.4.0.131.137) ... Preparing to unpack .../linux-headers-generic_4.4.0.138.144_amd64.deb ... Unpacking linux-headers-generic (4.4.0.138.144) over (4.4.0.131.137) ... Preparing to unpack .../python3-urllib3_1.13.1-2ubuntu0.16.04.2_all.deb ... Unpacking python3-urllib3 (1.13.1-2ubuntu0.16.04.2) over (1.13.1-2ubuntu0.16.04.1) ... Preparing to unpack .../python3-requests_2.9.1-3ubuntu0.1_all.deb ... Unpacking python3-requests (2.9.1-3ubuntu0.1) over (2.9.1-3) ... Preparing to unpack .../squashfs-tools_1%3a4.3-3ubuntu2.16.04.3_amd64.deb ... Unpacking squashfs-tools (1:4.3-3ubuntu2.16.04.3) over (1:4.3-3ubuntu2.16.04.2) ... Preparing to unpack .../cloud-initramfs-copymods_0.27ubuntu1.6_all.deb ... Unpacking cloud-initramfs-copymods (0.27ubuntu1.6) over (0.27ubuntu1.5) ... Preparing to unpack .../cloud-initramfs-dyn-netconf_0.27ubuntu1.6_all.deb ... Unpacking cloud-initramfs-dyn-netconf (0.27ubuntu1.6) over (0.27ubuntu1.5) ... Preparing to unpack .../grub-legacy-ec2_18.4-0ubuntu1~16.04.2_all.deb ... Leaving 'diversion of /usr/sbin/grub-set-default to /usr/sbin/grub-set-default.real by grub-legacy-ec2' Unpacking grub-legacy-ec2 (18.4-0ubuntu1~16.04.2) over (18.2-4-g05926e48-0ubuntu1~16.04.2) ... Preparing to unpack .../overlayroot_0.27ubuntu1.6_all.deb ... Unpacking overlayroot (0.27ubuntu1.6) over (0.27ubuntu1.5) ... Preparing to unpack .../ubuntu-server_1.361.2_amd64.deb ... Unpacking ubuntu-server (1.361.2) over (1.361.1) ... Processing triggers for libc-bin (2.23-0ubuntu10) ... Processing triggers for systemd (229-4ubuntu21.8) ... Processing triggers for ureadahead (0.100.0-19) ... Processing triggers for man-db (2.7.5-1) ... Processing triggers for ufw (0.35-0ubuntu2) ... Setting up libapt-inst2.0:amd64 (1.2.29) ... Setting up apt-utils (1.2.29) ... Setting up libpam-systemd:amd64 (229-4ubuntu21.8) ... Setting up udev (229-4ubuntu21.8) ... 
addgroup: The group `input' already exists as a system group. Exiting. update-initramfs: deferring update (trigger activated) Setting up kmod (22-1ubuntu5.1) ... Installing new version of config file /etc/modprobe.d/blacklist.conf ... Setting up friendly-recovery (0.2.31ubuntu2) ... Generating grub configuration file ... Found linux image: /boot/vmlinuz-4.4.0-138-generic Found linux image: /boot/vmlinuz-4.4.0-131-generic Found initrd image: /boot/initrd.img-4.4.0-131-generic Adding boot menu entry for EFI firmware configuration done Setting up initramfs-tools-bin (0.122ubuntu8.13) ... Setting up initramfs-tools-core (0.122ubuntu8.13) ... Setting up initramfs-tools (0.122ubuntu8.13) ... update-initramfs: deferring update (trigger activated) Setting up libglib2.0-0:amd64 (2.48.2-0ubuntu4.1) ... No schema files found: doing nothing. Setting up open-iscsi (2.0.873+git0.3b4b4500-14ubuntu3.6) ... Setting up tzdata (2018g-0ubuntu0.16.04) ... Current default time zone: 'America/Chicago' Local time is now: Mon Nov 12 14:52:15 CST 2018. Universal Time is now: Mon Nov 12 20:52:15 UTC 2018. Run 'dpkg-reconfigure tzdata' if you wish to change it. Setting up distro-info-data (0.28ubuntu0.9) ... Setting up libisc-export160 (1:9.10.3.dfsg.P4-8ubuntu1.11) ... Setting up libdns-export162 (1:9.10.3.dfsg.P4-8ubuntu1.11) ... Setting up ubuntu-advantage-tools (10ubuntu0.16.04.1) ... Setting up ubuntu-minimal (1.361.2) ... Setting up libapparmor-perl (2.10.95-0ubuntu2.10) ... Setting up apparmor (2.10.95-0ubuntu2.10) ... Installing new version of config file /etc/apparmor.d/abstractions/private-files ... Installing new version of config file /etc/apparmor.d/abstractions/private-files-strict ... Installing new version of config file /etc/apparmor.d/abstractions/ubuntu-browsers.d/user-files ... update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd Setting up libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.11) ... Setting up curl (7.47.0-1ubuntu2.11) ... Setting up apt-transport-https (1.2.29) ... Setting up libxml2:amd64 (2.9.3+dfsg1-1ubuntu0.6) ... Setting up libisc160:amd64 (1:9.10.3.dfsg.P4-8ubuntu1.11) ... Setting up libdns162:amd64 (1:9.10.3.dfsg.P4-8ubuntu1.11) ... Setting up libisccc140:amd64 (1:9.10.3.dfsg.P4-8ubuntu1.11) ... Setting up libisccfg140:amd64 (1:9.10.3.dfsg.P4-8ubuntu1.11) ... Setting up libbind9-140:amd64 (1:9.10.3.dfsg.P4-8ubuntu1.11) ... Setting up liblwres141:amd64 (1:9.10.3.dfsg.P4-8ubuntu1.11) ... Setting up bind9-host (1:9.10.3.dfsg.P4-8ubuntu1.11) ... Setting up dnsutils (1:9.10.3.dfsg.P4-8ubuntu1.11) ... Setting up libasprintf0v5:amd64 (0.19.7-2ubuntu3.1) ... Setting up gettext-base (0.19.7-2ubuntu3.1) ... Setting up libglib2.0-data (2.48.2-0ubuntu4.1) ... Setting up libx11-data (2:1.6.3-1ubuntu2.1) ... Setting up libx11-6:amd64 (2:1.6.3-1ubuntu2.1) ... Setting up openssh-client (1:7.2p2-4ubuntu2.6) ... Setting up openssh-sftp-server (1:7.2p2-4ubuntu2.6) ... Setting up openssh-server (1:7.2p2-4ubuntu2.6) ... Setting up python3-update-manager (1:16.04.15) ... Setting up update-manager-core (1:16.04.15) ... Setting up ubuntu-standard (1.361.2) ... Setting up git-man (1:2.7.4-0ubuntu1.5) ... Setting up git (1:2.7.4-0ubuntu1.5) ... Setting up libmspack0:amd64 (0.5-1ubuntu0.16.04.3) ... Setting up linux-headers-4.4.0-138 (4.4.0-138.164) ... Setting up linux-headers-4.4.0-138-generic (4.4.0-138.164) ... Setting up linux-image-4.4.0-138-generic (4.4.0-138.164) ... Running depmod. 
update-initramfs: deferring update (hook will be called later) Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 4.4.0-138-generic /boot/vmlinuz-4.4.0-138-generic run-parts: executing /etc/kernel/postinst.d/initramfs-tools 4.4.0-138-generic /boot/vmlinuz-4.4.0-138-generic update-initramfs: Generating /boot/initrd.img-4.4.0-138-generic W: mdadm: /etc/mdadm/mdadm.conf defines no arrays. run-parts: executing /etc/kernel/postinst.d/unattended-upgrades 4.4.0-138-generic /boot/vmlinuz-4.4.0-138-generic run-parts: executing /etc/kernel/postinst.d/update-notifier 4.4.0-138-generic /boot/vmlinuz-4.4.0-138-generic run-parts: executing /etc/kernel/postinst.d/x-grub-legacy-ec2 4.4.0-138-generic /boot/vmlinuz-4.4.0-138-generic Searching for GRUB installation directory ... found: /boot/grub Searching for default file ... found: /boot/grub/default Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst Searching for splash image ... none found, skipping ... Found kernel: /boot/vmlinuz-4.4.0-131-generic Ignoring non-Xen Kernel on Xen domU host: vmlinuz-4.4.0-131-generic.efi.signed Found kernel: /boot/vmlinuz-4.4.0-138-generic Found kernel: /boot/vmlinuz-4.4.0-131-generic Replacing config file /run/grub/menu.lst with new version Updating /boot/grub/menu.lst ... done run-parts: executing /etc/kernel/postinst.d/zz-update-grub 4.4.0-138-generic /boot/vmlinuz-4.4.0-138-generic Generating grub configuration file ... Found linux image: /boot/vmlinuz-4.4.0-138-generic Found initrd image: /boot/initrd.img-4.4.0-138-generic Found linux image: /boot/vmlinuz-4.4.0-131-generic Found initrd image: /boot/initrd.img-4.4.0-131-generic Adding boot menu entry for EFI firmware configuration done Setting up linux-signed-image-4.4.0-138-generic (4.4.0-138.164) ... Generating grub configuration file ... Found linux image: /boot/vmlinuz-4.4.0-138-generic Found initrd image: /boot/initrd.img-4.4.0-138-generic Found linux image: /boot/vmlinuz-4.4.0-131-generic Found initrd image: /boot/initrd.img-4.4.0-131-generic Adding boot menu entry for EFI firmware configuration done Setting up linux-image-extra-4.4.0-138-generic (4.4.0-138.164) ... run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 4.4.0-138-generic /boot/vmlinuz-4.4.0-138-generic run-parts: executing /etc/kernel/postinst.d/initramfs-tools 4.4.0-138-generic /boot/vmlinuz-4.4.0-138-generic update-initramfs: Generating /boot/initrd.img-4.4.0-138-generic W: mdadm: /etc/mdadm/mdadm.conf defines no arrays. run-parts: executing /etc/kernel/postinst.d/unattended-upgrades 4.4.0-138-generic /boot/vmlinuz-4.4.0-138-generic run-parts: executing /etc/kernel/postinst.d/update-notifier 4.4.0-138-generic /boot/vmlinuz-4.4.0-138-generic run-parts: executing /etc/kernel/postinst.d/x-grub-legacy-ec2 4.4.0-138-generic /boot/vmlinuz-4.4.0-138-generic Searching for GRUB installation directory ... found: /boot/grub Searching for default file ... found: /boot/grub/default Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst Searching for splash image ... none found, skipping ... Found kernel: /boot/vmlinuz-4.4.0-138-generic Found kernel: /boot/vmlinuz-4.4.0-131-generic Ignoring non-Xen Kernel on Xen domU host: vmlinuz-4.4.0-138-generic.efi.signed Ignoring non-Xen Kernel on Xen domU host: vmlinuz-4.4.0-131-generic.efi.signed Found kernel: /boot/vmlinuz-4.4.0-138-generic Found kernel: /boot/vmlinuz-4.4.0-131-generic Updating /boot/grub/menu.lst ... 
done run-parts: executing /etc/kernel/postinst.d/zz-update-grub 4.4.0-138-generic /boot/vmlinuz-4.4.0-138-generic Generating grub configuration file ... Found linux image: /boot/vmlinuz-4.4.0-138-generic Found initrd image: /boot/initrd.img-4.4.0-138-generic Found linux image: /boot/vmlinuz-4.4.0-131-generic Found initrd image: /boot/initrd.img-4.4.0-131-generic Adding boot menu entry for EFI firmware configuration done Setting up intel-microcode (3.20180807a.0ubuntu0.16.04.1) ... update-initramfs: deferring update (trigger activated) intel-microcode: microcode will be updated at next boot Setting up linux-signed-image-generic (4.4.0.138.144) ... Setting up linux-headers-generic (4.4.0.138.144) ... Setting up linux-signed-generic (4.4.0.138.144) ... Setting up python3-urllib3 (1.13.1-2ubuntu0.16.04.2) ... Setting up python3-requests (2.9.1-3ubuntu0.1) ... Setting up squashfs-tools (1:4.3-3ubuntu2.16.04.3) ... Setting up cloud-initramfs-copymods (0.27ubuntu1.6) ... Setting up cloud-initramfs-dyn-netconf (0.27ubuntu1.6) ... Setting up grub-legacy-ec2 (18.4-0ubuntu1~16.04.2) ... Searching for GRUB installation directory ... found: /boot/grub Searching for default file ... found: /boot/grub/default Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst Searching for splash image ... none found, skipping ... Found kernel: /boot/vmlinuz-4.4.0-138-generic Found kernel: /boot/vmlinuz-4.4.0-131-generic Ignoring non-Xen Kernel on Xen domU host: vmlinuz-4.4.0-138-generic.efi.signed Ignoring non-Xen Kernel on Xen domU host: vmlinuz-4.4.0-131-generic.efi.signed Found kernel: /boot/vmlinuz-4.4.0-138-generic Found kernel: /boot/vmlinuz-4.4.0-131-generic Updating /boot/grub/menu.lst ... done Setting up overlayroot (0.27ubuntu1.6) ... Processing triggers for initramfs-tools (0.122ubuntu8.13) ... update-initramfs: Generating /boot/initrd.img-4.4.0-138-generic W: mdadm: /etc/mdadm/mdadm.conf defines no arrays. Setting up ubuntu-server (1.361.2) ... Processing triggers for libc-bin (2.23-0ubuntu10) ...
root at asus:~# mkdir -p /root/deploy && cd "$_"
root at asus:~/deploy# git clone https://github.com/openstack/airship-in-a-bottle
Cloning into 'airship-in-a-bottle'... remote: Enumerating objects: 93, done. remote: Counting objects: 100% (93/93), done. remote: Compressing objects: 100% (82/82), done. remote: Total 2223 (delta 28), reused 63 (delta 9), pack-reused 2130 Receiving objects: 100% (2223/2223), 474.33 KiB | 0 bytes/s, done. Resolving deltas: 100% (1112/1112), done. Checking connectivity... done.
root at asus:~/deploy# cd /root/deploy/airship-in-a-bottle/manifests/dev_single_node
root at asus:~/deploy/airship-in-a-bottle/manifests/dev_single_node# ./airship-in-a-bottle.sh

Welcome to Airship in a Bottle
[ASCII-art bottle banner]

A prototype example of deploying the Airship suite on a single VM. This example will run through:
- Setup
- Genesis of Airship (Kubernetes)
- Basic deployment of Openstack (including Nova, Neutron, and Horizon using Openstack Helm)
- VM creation automation using Heat
The expected runtime of this script is greater than 1 hour
The minimum recommended size of the Ubuntu 16.04 VM is 4 vCPUs, 20GB of RAM with 32GB disk space.
Let's collect some information about your VM to get started.
Is your HOST IFACE enp4s0? (Y/n) Y
Is your LOCAL IP 192.168.1.177?
(Y/n) Y ++ hostname -s + export SHORT_HOSTNAME=asus + SHORT_HOSTNAME=asus + set +x Updating /etc/hosts with: 192.168.1.177 asus 192.168.1.177 asus + export HOSTIP=192.168.1.177 + HOSTIP=192.168.1.177 + export HOSTCIDR=192.168.1.177/32 + HOSTCIDR=192.168.1.177/32 + export NODE_NET_IFACE=enp4s0 + NODE_NET_IFACE=enp4s0 + export TARGET_SITE=demo + TARGET_SITE=demo + set +x Using DNS servers 192.168.1.254 and 192.168.1.254. Starting Airship deployment... + LAST_STEP_NAME=demo + [[ demo == \c\o\l\l\e\c\t ]] + [[ demo == \g\e\n\e\s\i\s ]] + [[ demo == \d\e\p\l\o\y ]] + [[ demo == \d\e\m\o ]] + STEP_BREAKPOINT=40 + export WORKSPACE=/root/deploy + WORKSPACE=/root/deploy + TARGET_SITE=demo + http_proxy= + https_proxy= + no_proxy= + SHORT_HOSTNAME=asus + HOSTIP=192.168.1.177 + HOSTCIDR=192.168.1.177/32 + NODE_NET_IFACE=enp4s0 + POST_GENESIS_DELAY=60 + AIRSHIP_IN_A_BOTTLE_REPO=https://git.openstack.org/openstack/airship-in-a-bottle + AIRSHIP_IN_A_BOTTLE_REFSPEC= + PEGLEG_REPO=https://git.openstack.org/openstack/airship-pegleg.git + PEGLEG_REFSPEC= + SHIPYARD_REPO=https://git.openstack.org/openstack/airship-shipyard.git + SHIPYARD_REFSPEC= + PEGLEG_IMAGE=quay.io/airshipit/pegleg:ac6297eae6c51ab2f13a96978abaaa10cb46e3d6 + PROMENADE_IMAGE=quay.io/airshipit/promenade:master + PEGLEG=/root/deploy/airship-pegleg/tools/pegleg.sh + trap clean EXIT + check_preconditions + set +x + configure_apt + [[ ! -z '' ]] + [[ ! -z '' ]] + setup_workspace + mkdir -p /root/deploy/collected + mkdir -p /root/deploy/genesis + chmod -R 777 /root/deploy/genesis + setup_repos + get_repo airship-pegleg https://git.openstack.org/openstack/airship-pegleg.git + cd /root/deploy + '[' '!' -d airship-pegleg ']' + git clone https://git.openstack.org/openstack/airship-pegleg.git Cloning into 'airship-pegleg'... remote: Counting objects: 1397, done. remote: Compressing objects: 100% (870/870), done. remote: Total 1397 (delta 812), reused 896 (delta 399) Receiving objects: 100% (1397/1397), 294.02 KiB | 0 bytes/s, done. Resolving deltas: 100% (812/812), done. Checking connectivity... done. + '[' -n '' ']' + get_repo airship-in-a-bottle https://git.openstack.org/openstack/airship-in-a-bottle + cd /root/deploy + '[' '!' -d airship-in-a-bottle ']' + get_repo airship-shipyard https://git.openstack.org/openstack/airship-shipyard.git + cd /root/deploy + '[' '!' -d airship-shipyard ']' + git clone https://git.openstack.org/openstack/airship-shipyard.git Cloning into 'airship-shipyard'... remote: Counting objects: 5503, done. remote: Compressing objects: 100% (2224/2224), done. remote: Total 5503 (delta 3630), reused 4748 (delta 3000) Receiving objects: 100% (5503/5503), 1.54 MiB | 0 bytes/s, done. Resolving deltas: 100% (3630/3630), done. Checking connectivity... done. + '[' -n '' ']' + configure_dev_configurables + cat + install_dependencies + apt -qq update All packages are up to date. + apt -y install --no-install-recommends docker.io jq nmap Reading package lists... Done Building dependency tree Reading state information... Done The following additional packages will be installed: libblas-common libblas3 liblinear3 liblua5.2-0 libonig2 lua-lpeg Suggested packages: aufs-tools debootstrap docker-doc rinse zfs-fuse | zfsutils liblinear-tools liblinear-dev Recommended packages: cgroupfs-mount | cgroup-lite ubuntu-fan ndiff The following NEW packages will be installed: docker.io jq libblas-common libblas3 liblinear3 liblua5.2-0 libonig2 lua-lpeg nmap 0 upgraded, 9 newly installed, 0 to remove and 0 not upgraded. Need to get 22.3 MB of archives. 
After this operation, 113 MB of additional disk space will be used. Get:1 http://us.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 docker.io amd64 17.03.2-0ubuntu2~16.04.1 [17.1 MB] Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 libonig2 amd64 5.9.6-1ubuntu0.1 [86.7 kB] Get:3 http://us.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 jq amd64 1.5+dfsg-1ubuntu0.1 [144 kB] Get:4 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 libblas-common amd64 3.6.0-2ubuntu2 [5,342 B] Get:5 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 libblas3 amd64 3.6.0-2ubuntu2 [147 kB] Get:6 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 liblinear3 amd64 2.1.0+dfsg-1 [39.3 kB] Get:7 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 liblua5.2-0 amd64 5.2.4-1ubuntu1 [106 kB] Get:8 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 lua-lpeg amd64 0.12.2-1 [28.3 kB] Get:9 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 nmap amd64 7.01-2ubuntu2 [4,638 kB] Fetched 22.3 MB in 0s (23.9 MB/s) Preconfiguring packages ... Selecting previously unselected package docker.io. (Reading database ... 92807 files and directories currently installed.) Preparing to unpack .../docker.io_17.03.2-0ubuntu2~16.04.1_amd64.deb ... Unpacking docker.io (17.03.2-0ubuntu2~16.04.1) ... Selecting previously unselected package libonig2:amd64. Preparing to unpack .../libonig2_5.9.6-1ubuntu0.1_amd64.deb ... Unpacking libonig2:amd64 (5.9.6-1ubuntu0.1) ... Selecting previously unselected package jq. Preparing to unpack .../jq_1.5+dfsg-1ubuntu0.1_amd64.deb ... Unpacking jq (1.5+dfsg-1ubuntu0.1) ... Selecting previously unselected package libblas-common. Preparing to unpack .../libblas-common_3.6.0-2ubuntu2_amd64.deb ... Unpacking libblas-common (3.6.0-2ubuntu2) ... Selecting previously unselected package libblas3. Preparing to unpack .../libblas3_3.6.0-2ubuntu2_amd64.deb ... Unpacking libblas3 (3.6.0-2ubuntu2) ... Selecting previously unselected package liblinear3:amd64. Preparing to unpack .../liblinear3_2.1.0+dfsg-1_amd64.deb ... Unpacking liblinear3:amd64 (2.1.0+dfsg-1) ... Selecting previously unselected package liblua5.2-0:amd64. Preparing to unpack .../liblua5.2-0_5.2.4-1ubuntu1_amd64.deb ... Unpacking liblua5.2-0:amd64 (5.2.4-1ubuntu1) ... Selecting previously unselected package lua-lpeg:amd64. Preparing to unpack .../lua-lpeg_0.12.2-1_amd64.deb ... Unpacking lua-lpeg:amd64 (0.12.2-1) ... Selecting previously unselected package nmap. Preparing to unpack .../nmap_7.01-2ubuntu2_amd64.deb ... Unpacking nmap (7.01-2ubuntu2) ... Processing triggers for man-db (2.7.5-1) ... Processing triggers for ureadahead (0.100.0-19) ... Processing triggers for systemd (229-4ubuntu21.8) ... Processing triggers for libc-bin (2.23-0ubuntu10) ... Setting up docker.io (17.03.2-0ubuntu2~16.04.1) ... Adding group `docker' (GID 117) ... Done. Setting up libonig2:amd64 (5.9.6-1ubuntu0.1) ... Setting up jq (1.5+dfsg-1ubuntu0.1) ... Setting up libblas-common (3.6.0-2ubuntu2) ... Setting up libblas3 (3.6.0-2ubuntu2) ... update-alternatives: using /usr/lib/libblas/libblas.so.3 to provide /usr/lib/libblas.so.3 (libblas.so.3) in auto mode Setting up liblinear3:amd64 (2.1.0+dfsg-1) ... Setting up liblua5.2-0:amd64 (5.2.4-1ubuntu1) ... Setting up lua-lpeg:amd64 (0.12.2-1) ... Setting up nmap (7.01-2ubuntu2) ... Processing triggers for systemd (229-4ubuntu21.8) ... Processing triggers for ureadahead (0.100.0-19) ... Processing triggers for libc-bin (2.23-0ubuntu10) ... + configure_docker + [[ ! -z '' ]] + [[ ! 
-z '' ]] + [[ 40 -ge 10 ]] + echo 'This is a good time to grab a coffee :)' This is a good time to grab a coffee :) + run_pegleg_collect + IMAGE=quay.io/airshipit/pegleg:ac6297eae6c51ab2f13a96978abaaa10cb46e3d6 + TERM_OPTS=-i + /root/deploy/airship-pegleg/tools/pegleg.sh site -p /workspace/airship-in-a-bottle/deployment_files collect demo -s /workspace/collected == NOTE: Workspace /root/deploy is the execution directory in the container == Unable to find image 'quay.io/airshipit/pegleg:ac6297eae6c51ab2f13a96978abaaa10cb46e3d6' locally ac6297eae6c51ab2f13a96978abaaa10cb46e3d6: Pulling from airshipit/pegleg 05d1a5232b46: Pull complete 5cee356eda6b: Pull complete 89d3385f0fd3: Pull complete 80ae6b477848: Pull complete 28bdf9e584cc: Pull complete dec1a1f0462b: Pull complete a4670d125615: Pull complete 547b45a875f5: Pull complete 102a0247b454: Pull complete 991f46bceacf: Pull complete 4b997ac83a2c: Pull complete 4585a45fbf43: Pull complete 983ae3f172a6: Pull complete 6e585acbc3f6: Pull complete Digest: sha256:f1eb51608d496a91839d0b87bbad3ad0c82319ddae951de5e5ce004bf466913f Status: Downloaded newer image for quay.io/airshipit/pegleg:ac6297eae6c51ab2f13a96978abaaa10cb46e3d6 + [[ 40 -ge 20 ]] + generate_certs + set +x === Generating updated certificates === + cp /root/deploy/collected/deployment_files.yaml /root/deploy/genesis ++ ls /root/deploy/genesis + docker run --rm -t -e http_proxy= -e https_proxy= -e no_proxy= -w /target -e PROMENADE_DEBUG=false -v /root/deploy/genesis:/target quay.io/airshipit/promenade:master promenade generate-certs -o /target deployment_files.yaml Unable to find image 'quay.io/airshipit/promenade:master' locally master: Pulling from airshipit/promenade bc9ab73e5b14: Pull complete 193a6306c92a: Pull complete e5c3f8c317dc: Pull complete a587a86c9dcb: Pull complete 72744d0a318b: Pull complete 6598fc9d11d1: Pull complete 770079cf7a7e: Pull complete 03c4d24b3523: Pull complete aaa91e2585ce: Pull complete 03391a73abe1: Pull complete e899f028e0e3: Pull complete 24fd23e628a1: Pull complete 4e6580d0581c: Pull complete 69a90ea0c25e: Pull complete 922b02db26b0: Pull complete Digest: sha256:4b33776de7b8e8952713794bf83f69572e0d092c4ab9f88c2e69f00ffac15937 Status: Downloaded newer image for quay.io/airshipit/promenade:master + PORT=9000 + UWSGI_TIMEOUT=300 + PROMENADE_THREADS=1 + PROMENADE_WORKERS=4 + '[' promenade = server ']' + exec promenade generate-certs -o /target deployment_files.yaml /usr/local/lib/python3.6/site-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.23) or chardet (3.0.4) doesn't match a supported version! RequestsDependencyWarning) 2018-11-12 20:55:58,890 INFO - - - promenade.config:from_streams [ 53] Loading documents from deployment_files.yaml 2018-11-12 20:56:00,320 INFO - - - promenade.config:from_streams [ 57] Successfully loaded 167 documents from deployment_files.yaml 2018-11-12 20:56:00,320 INFO - - - promenade.config:__init__ [ 24] Parsing document schemas. 2018-11-12 20:56:00,320 INFO - - - promenade.config:__init__ [ 25] Building config from 167 documents. 2018-11-12 20:56:00,320 INFO - - - promenade.config:__init__ [ 27] Rendering documents via Deckhand engine. 2018-11-12 20:56:00,343 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. 
Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:00,344 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/Manifest/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:00,347 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/Manifest/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:00,350 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/Chart/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:00,354 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/Chart/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:00,357 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/ChartGroup/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:00,360 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/Chart/v1 is not registered. 
Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1']
2018-11-12 20:56:00,363 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/Chart/v1 is not registered.

[snip -- between 20:56:00,363 and 20:56:00,763 the same INFO message from deckhand.engine.document_validation:_validate_one [510] repeats once per document, each time followed by the identical "Registered schemas include: [...]" list shown above; the document schemas reported as not registered are:]

armada/Chart/v1
armada/ChartGroup/v1
pegleg/EndpointCatalogue/v1
pegleg/AccountCatalogue/v1
promenade/Kubelet/v1
promenade/Docker/v1
pegleg/SoftwareVersions/v1
promenade/Genesis/v1
promenade/HostSystem/v1
pegleg/SeccompProfile/v1
deckhand/DataSchema/v1
drydock/BootAction/v1
promenade/KubernetesNetwork/v1
pegleg/SiteDefinition/v1
promenade/PKICatalog/v1
pegleg/CommonAddresses/v1
dev/Configurables/v1
shipyard/DeploymentConfiguration/v1

2018-11-12 20:56:01,280 WARNING - - - deckhand.engine.secrets_manager:substitute_all [294] Could not find substitution source document [deckhand/CertificateAuthority/v1] calico-etcd among the provided substitution sources.
2018-11-12 20:56:01,280 WARNING - - - deckhand.engine.secrets_manager:substitute_all [294] Could not find substitution source document [deckhand/Certificate/v1] calico-node among the provided substitution sources.
2018-11-12 20:56:01,280 WARNING - - - deckhand.engine.secrets_manager:substitute_all [294] Could not find substitution source document [deckhand/CertificateKey/v1] calico-node among the provided substitution sources.
[... analogous WARNING lines follow (20:56:01,324 - 20:56:03,207) for the kubernetes CA, the scheduler, controller-manager, apiserver and apiserver-etcd certificates and keys, the service-account private/public keys, and the calico-etcd and kubernetes-etcd CA, peer, anchor, genesis and genesis-peer certificate/key documents ...]
2018-11-12 20:56:03,297 INFO - - - promenade.config:__init__ [ 39] Deckhand engine returned 165 documents.
+ cp /root/deploy/genesis/certificates.yaml /root/deploy/airship-in-a-bottle/deployment_files/site/demo/secrets
+ generate_genesis
++ ls /root/deploy/genesis
+ docker run --rm -t -e http_proxy= -e https_proxy= -e no_proxy= -w /target -e PROMENADE_DEBUG=false -v /root/deploy/genesis:/target quay.io/airshipit/promenade:master promenade build-all -o /target --validators certificates.yaml deployment_files.yaml
+ PORT=9000
+ UWSGI_TIMEOUT=300
+ PROMENADE_THREADS=1
+ PROMENADE_WORKERS=4
+ '[' promenade = server ']'
+ exec promenade build-all -o /target --validators certificates.yaml deployment_files.yaml
/usr/local/lib/python3.6/site-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.23) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
2018-11-12 20:56:10,135 INFO - - - promenade.config:from_streams [ 53] Loading documents from certificates.yaml
2018-11-12 20:56:10,245 INFO - - - promenade.config:from_streams [ 57] Successfully loaded 42 documents from certificates.yaml
2018-11-12 20:56:10,245 INFO - - - promenade.config:from_streams [ 53] Loading documents from deployment_files.yaml
2018-11-12 20:56:11,639 INFO - - - promenade.config:from_streams [ 57] Successfully loaded 167 documents from deployment_files.yaml
2018-11-12 20:56:11,639 INFO - - - promenade.config:__init__ [ 24] Parsing document schemas.
2018-11-12 20:56:11,639 INFO - - - promenade.config:__init__ [ 25] Building config from 209 documents.
2018-11-12 20:56:11,639 INFO - - - promenade.config:__init__ [ 27] Rendering documents via Deckhand engine.
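The substitution WARNINGs above indicate that the rendered documents reference certificate documents (e.g. [deckhand/CertificateAuthority/v1] calico-etcd) that were not among the substitution sources for that first render, apparently because certificates.yaml had not yet been supplied to it. Deckhand matches sources by the exact schema/name pair, so each warning corresponds to one missing document. As a rough sketch (assuming the usual Deckhand document layout; the layer and the PEM payload here are placeholders), a matching source document in certificates.yaml would look like:

schema: deckhand/CertificateAuthority/v1
metadata:
  schema: metadata/Document/v1
  name: calico-etcd            # must match the name in the WARNING exactly
  layeringDefinition:
    abstract: false
    layer: site                # assumed; whatever layer the site documents use
  storagePolicy: cleartext
data: |
  -----BEGIN CERTIFICATE-----
  ...placeholder PEM body...
  -----END CERTIFICATE-----

Consistent with that, the second render below, which loads 42 documents from certificates.yaml, shows no such warnings in this excerpt.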
2018-11-12 20:56:11,780 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1']
[... as in the first pass, the same INFO line repeats (20:56:11,781 - 20:56:12,066) for each armada/Manifest/v1, armada/Chart/v1 and armada/ChartGroup/v1 document in the bundle, always followed by the identical "Registered schemas include" list ...]
Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,069 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/Chart/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,074 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/Chart/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,077 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/Chart/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,081 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/Chart/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,084 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/Chart/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,087 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/Chart/v1 is not registered. 
Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,090 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/ChartGroup/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,093 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/Chart/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,096 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/Chart/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,099 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema pegleg/EndpointCatalogue/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,102 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema pegleg/AccountCatalogue/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,106 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema promenade/Kubelet/v1 is not registered. 
Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,109 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema promenade/Docker/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,112 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema pegleg/SoftwareVersions/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,114 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema promenade/Genesis/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,118 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema promenade/HostSystem/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,122 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema pegleg/SeccompProfile/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,125 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. 
Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,126 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,128 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,130 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,132 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,133 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,135 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. 
Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,137 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,138 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,140 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,142 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,143 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,145 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. 
Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,146 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,148 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,150 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,151 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,153 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,155 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. 
Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,157 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,158 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,160 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,162 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,163 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,165 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. 
Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,167 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,168 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema deckhand/DataSchema/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,170 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema drydock/BootAction/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,173 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema promenade/KubernetesNetwork/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,177 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema pegleg/SiteDefinition/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,180 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/Chart/v1 is not registered. 
Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,184 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema armada/Chart/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,188 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema promenade/PKICatalog/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,191 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema pegleg/CommonAddresses/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,195 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema dev/Configurables/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:12,198 INFO - - - deckhand.engine.document_validation:_validate_one [510] The provided document schema shipyard/DeploymentConfiguration/v1 is not registered. Registered schemas include: ['deckhand/Base/v1', 'deckhand/LayeringPolicy/v1', 'deckhand/PublicKey/v1', 'metadata/Control/v1', 'deckhand/CertificateAuthority/v1', 'deckhand/Certificate/v1', 'metadata/Document/v1', 'deckhand/CertificateAuthorityKey/v1', 'deckhand/CertificateKey/v1', 'deckhand/Passphrase/v1', 'deckhand/ValidationPolicy/v1', 'deckhand/PrivateKey/v1'] 2018-11-12 20:56:14,911 INFO - - - promenade.config:__init__ [ 39] Deckhand engine returned 207 documents. 2018-11-12 20:56:14,972 INFO - - - promenade.builder:build_genesis_script [ 83] Building genesis script 2018-11-12 20:56:14,972 INFO - - - promenade.config:__init__ [ 24] Parsing document schemas. 2018-11-12 20:56:14,972 INFO - - - promenade.config:__init__ [ 25] Building config from 207 documents. 2018-11-12 20:56:38,287 INFO - - - promenade.config:__init__ [ 24] Parsing document schemas. 
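The repeated validation messages are easier to scan once collapsed to the distinct schema names. A minimal shell sketch, assuming the console output above was captured to promenade.log (an illustrative filename, not one produced by the tooling):

    # Collapse the repeated Deckhand validation messages to the distinct
    # unregistered schemas, with a count of how often each one appears.
    grep -o 'document schema [^ ]* is not registered' promenade.log \
      | sort | uniq -c | sort -rn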
+ run_genesis
+ /root/deploy/genesis/genesis.sh
+ export KUBECONFIG=/etc/kubernetes/admin/kubeconfig.yaml
+ KUBECONFIG=/etc/kubernetes/admin/kubeconfig.yaml
++ id -u
+ '[' 0 '!=' 0 ']'
++ hostname
+ '[' asus '!=' asus ']'
+ resolvconf --disable-updates
+ CURATED_DIRS=(/etc/kubernetes /var/lib/etcd)
+ for DIR in '"${CURATED_DIRS[@]}"'
+ mkdir -p /etc/kubernetes
+ chmod 700 /etc/kubernetes
+ for DIR in '"${CURATED_DIRS[@]}"'
+ mkdir -p /var/lib/etcd
+ chmod 700 /var/lib/etcd
+ set +x
Mon Nov 12 14:56:38 CST 2018
Mon Nov 12 14:56:38 CST 2018
=== Extracting prepared files ===
usr/local/bin/promenade-teardown usr/local/bin/kubectl usr/local/bin/debug-report.sh etc/resolv.conf etc/hosts etc/apt/sources.list.d/promenade-sources.list etc/kubernetes/kubeconfig etc/kubernetes/pki/kubelet-key.pem etc/kubernetes/pki/kubelet-client-ca.pem etc/kubernetes/pki/kubelet.pem etc/kubernetes/pki/cluster-ca.pem etc/kubernetes/manifests/haproxy.yaml etc/kubernetes/admin/kubeconfig.yaml etc/kubernetes/admin/pki/admin-key.pem etc/kubernetes/admin/pki/admin.pem etc/kubernetes/admin/pki/cluster-ca.pem etc/systemd/system/kubelet.service etc/systemd/system/docker.service.d/http-proxy.conf etc/promenade/haproxy/haproxy.cfg etc/docker/daemon.json usr/local/bin/helm usr/local/bin/armada etc/genesis/apiserver/eventconfig.yaml etc/genesis/apiserver/acconfig.yaml etc/genesis/apiserver/pki/kubelet-client-ca.pem etc/genesis/apiserver/pki/apiserver-key.pem etc/genesis/apiserver/pki/kubelet-client.pem etc/genesis/apiserver/pki/cluster-ca.pem etc/genesis/apiserver/pki/etcd-client-ca.pem etc/genesis/apiserver/pki/etcd-client.pem etc/genesis/apiserver/pki/service-account.pub etc/genesis/apiserver/pki/etcd-client-key.pem etc/genesis/apiserver/pki/kubelet-client-key.pem etc/genesis/apiserver/pki/apiserver.pem etc/genesis/controller-manager/kubeconfig.yaml etc/genesis/controller-manager/pki/controller-manager-key.pem etc/genesis/controller-manager/pki/controller-manager.pem etc/genesis/controller-manager/pki/cluster-ca.pem etc/genesis/controller-manager/pki/service-account.key etc/genesis/scheduler/kubeconfig.yaml etc/genesis/scheduler/pki/scheduler-key.pem etc/genesis/scheduler/pki/cluster-ca.pem etc/genesis/scheduler/pki/scheduler.pem etc/genesis/etcd/pki/etcd-peer-key.pem etc/genesis/etcd/pki/etcd-peer.pem etc/genesis/etcd/pki/peer-ca.pem etc/genesis/etcd/pki/etcd-client.pem etc/genesis/etcd/pki/etcd-client-key.pem etc/genesis/etcd/pki/client-ca.pem etc/genesis/armada-cli/auth/config etc/genesis/armada-cli/auth/pki/cluster-ca.pem etc/genesis/armada-cli/auth/pki/armada-key.pem etc/genesis/armada-cli/auth/pki/armada.pem etc/genesis/armada/auth/config etc/genesis/armada/auth/pki/cluster-ca.pem etc/genesis/armada/auth/pki/armada-key.pem etc/genesis/armada/auth/pki/armada.pem etc/genesis/armada/assets/manifest.yaml etc/kubernetes/manifests/kubernetes-apiserver.yaml etc/kubernetes/manifests/auxiliary-kubernetes-etcd.yaml etc/kubernetes/manifests/kubernetes-scheduler.yaml etc/kubernetes/manifests/bootstrap-armada.yaml etc/kubernetes/manifests/kubernetes-controller-manager.yaml etc/kubernetes/manifests/kubernetes-etcd.yaml opt/kubernetes/bin/kubelet etc/logrotate.d/json-logrotate var/lib/anchor/calico-etcd-bootstrap etc/genesis/armada/assets/charts/.gitignore etc/genesis/armada/assets/charts/proxy/values.yaml etc/genesis/armada/assets/charts/proxy/requirements.yaml etc/genesis/armada/assets/charts/proxy/Chart.yaml
etc/genesis/armada/assets/charts/proxy/templates/configmap-bin.yaml etc/genesis/armada/assets/charts/proxy/templates/rbac.yaml etc/genesis/armada/assets/charts/proxy/templates/daemonset.yaml etc/genesis/armada/assets/charts/proxy/templates/bin/_liveness-probe.sh.tpl etc/genesis/armada/assets/charts/proxy/templates/bin/_readiness-probe.sh.tpl etc/genesis/armada/assets/charts/coredns/values.yaml etc/genesis/armada/assets/charts/coredns/requirements.yaml etc/genesis/armada/assets/charts/coredns/Chart.yaml etc/genesis/armada/assets/charts/coredns/templates/deployment.yaml etc/genesis/armada/assets/charts/coredns/templates/configmap-etc.yaml etc/genesis/armada/assets/charts/coredns/templates/configmap-bin.yaml etc/genesis/armada/assets/charts/coredns/templates/rbac.yaml etc/genesis/armada/assets/charts/coredns/templates/service.yaml etc/genesis/armada/assets/charts/coredns/templates/pod-test.yaml etc/genesis/armada/assets/charts/coredns/templates/bin/_probe.sh.tpl etc/genesis/armada/assets/charts/apiserver/values.yaml etc/genesis/armada/assets/charts/apiserver/requirements.yaml etc/genesis/armada/assets/charts/apiserver/Chart.yaml etc/genesis/armada/assets/charts/apiserver/templates/secret-ingress-tls.yaml etc/genesis/armada/assets/charts/apiserver/templates/ingress-api.yaml etc/genesis/armada/assets/charts/apiserver/templates/configmap-etc.yaml etc/genesis/armada/assets/charts/apiserver/templates/configmap-bin.yaml etc/genesis/armada/assets/charts/apiserver/templates/rbac.yaml etc/genesis/armada/assets/charts/apiserver/templates/service.yaml etc/genesis/armada/assets/charts/apiserver/templates/service-apiserver-ingress.yaml etc/genesis/armada/assets/charts/apiserver/templates/daemonset.yaml etc/genesis/armada/assets/charts/apiserver/templates/configmap-certs.yaml etc/genesis/armada/assets/charts/apiserver/templates/secret-apiserver.yaml etc/genesis/armada/assets/charts/apiserver/templates/bin/_pre_stop.tpl etc/genesis/armada/assets/charts/apiserver/templates/bin/_anchor.tpl etc/genesis/armada/assets/charts/apiserver/templates/etc/_kubeconfig.yaml.tpl etc/genesis/armada/assets/charts/apiserver/templates/etc/_kubernetes-apiserver.yaml.tpl etc/genesis/armada/assets/charts/promenade/values.yaml etc/genesis/armada/assets/charts/promenade/requirements.yaml etc/genesis/armada/assets/charts/promenade/Chart.yaml etc/genesis/armada/assets/charts/promenade/templates/deployment-api.yaml etc/genesis/armada/assets/charts/promenade/templates/job-ks-service.yaml etc/genesis/armada/assets/charts/promenade/templates/configmap-etc.yaml etc/genesis/armada/assets/charts/promenade/templates/job-ks-endpoints.yaml etc/genesis/armada/assets/charts/promenade/templates/configmap-bin.yaml etc/genesis/armada/assets/charts/promenade/templates/rbac.yaml etc/genesis/armada/assets/charts/promenade/templates/service-api.yaml etc/genesis/armada/assets/charts/promenade/templates/secret-keystone-env.yaml etc/genesis/armada/assets/charts/promenade/templates/job-ks-user.yaml etc/genesis/armada/assets/charts/promenade/templates/tests/test-promenade-api.yaml etc/genesis/armada/assets/charts/haproxy/values.yaml etc/genesis/armada/assets/charts/haproxy/requirements.yaml etc/genesis/armada/assets/charts/haproxy/Chart.yaml etc/genesis/armada/assets/charts/haproxy/templates/configmap-etc.yaml etc/genesis/armada/assets/charts/haproxy/templates/configmap-bin.yaml etc/genesis/armada/assets/charts/haproxy/templates/rbac.yaml etc/genesis/armada/assets/charts/haproxy/templates/daemonset.yaml 
etc/genesis/armada/assets/charts/haproxy/templates/bin/_pre_stop.tpl etc/genesis/armada/assets/charts/haproxy/templates/bin/_anchor.tpl etc/genesis/armada/assets/charts/haproxy/templates/etc/_haproxy.yaml.tpl etc/genesis/armada/assets/charts/haproxy/templates/tests/test-haproxy-health.yaml etc/genesis/armada/assets/charts/scheduler/values.yaml etc/genesis/armada/assets/charts/scheduler/requirements.yaml etc/genesis/armada/assets/charts/scheduler/Chart.yaml etc/genesis/armada/assets/charts/scheduler/templates/configmap-etc.yaml etc/genesis/armada/assets/charts/scheduler/templates/configmap-bin.yaml etc/genesis/armada/assets/charts/scheduler/templates/sched-anchor.yaml etc/genesis/armada/assets/charts/scheduler/templates/secret.yaml etc/genesis/armada/assets/charts/scheduler/templates/bin/_pre_stop.tpl etc/genesis/armada/assets/charts/scheduler/templates/bin/_anchor.tpl etc/genesis/armada/assets/charts/scheduler/templates/etc/_kubeconfig.yaml.tpl etc/genesis/armada/assets/charts/scheduler/templates/etc/_kubernetes-scheduler.yaml.tpl etc/genesis/armada/assets/charts/apiserver-webhook/values.yaml etc/genesis/armada/assets/charts/apiserver-webhook/requirements.yaml etc/genesis/armada/assets/charts/apiserver-webhook/Chart.yaml etc/genesis/armada/assets/charts/apiserver-webhook/templates/secret-ingress-tls.yaml etc/genesis/armada/assets/charts/apiserver-webhook/templates/deployment.yaml etc/genesis/armada/assets/charts/apiserver-webhook/templates/ingress-api.yaml etc/genesis/armada/assets/charts/apiserver-webhook/templates/configmap-etc.yaml etc/genesis/armada/assets/charts/apiserver-webhook/templates/configmap-bin.yaml etc/genesis/armada/assets/charts/apiserver-webhook/templates/secret-keystone.yaml etc/genesis/armada/assets/charts/apiserver-webhook/templates/service.yaml etc/genesis/armada/assets/charts/apiserver-webhook/templates/service-apiserver-ingress.yaml etc/genesis/armada/assets/charts/apiserver-webhook/templates/configmap-certs.yaml etc/genesis/armada/assets/charts/apiserver-webhook/templates/secret-apiserver.yaml etc/genesis/armada/assets/charts/apiserver-webhook/templates/secret-webhook.yaml etc/genesis/armada/assets/charts/apiserver-webhook/templates/bin/_webhook_start.sh.tpl etc/genesis/armada/assets/charts/apiserver-webhook/templates/etc/_kubeconfig.yaml.tpl etc/genesis/armada/assets/charts/apiserver-webhook/templates/etc/_webhook.kubeconfig.tpl etc/genesis/armada/assets/charts/etcd/values.yaml etc/genesis/armada/assets/charts/etcd/requirements.yaml etc/genesis/armada/assets/charts/etcd/Chart.yaml etc/genesis/armada/assets/charts/etcd/templates/configmap-etc.yaml etc/genesis/armada/assets/charts/etcd/templates/configmap-bin.yaml etc/genesis/armada/assets/charts/etcd/templates/daemonset-anchor.yaml etc/genesis/armada/assets/charts/etcd/templates/service.yaml etc/genesis/armada/assets/charts/etcd/templates/secret-keys.yaml etc/genesis/armada/assets/charts/etcd/templates/configmap-certs.yaml etc/genesis/armada/assets/charts/etcd/templates/bin/_etcdctl_anchor.tpl etc/genesis/armada/assets/charts/etcd/templates/bin/_readiness.tpl etc/genesis/armada/assets/charts/etcd/templates/bin/_pre_stop.tpl etc/genesis/armada/assets/charts/etcd/templates/etc/_kubernetes-etcd.yaml.tpl etc/genesis/armada/assets/charts/etcd/templates/tests/test-etcd-health.yaml etc/genesis/armada/assets/charts/controller_manager/values.yaml etc/genesis/armada/assets/charts/controller_manager/requirements.yaml etc/genesis/armada/assets/charts/controller_manager/Chart.yaml 
etc/genesis/armada/assets/charts/controller_manager/templates/configmap-etc.yaml etc/genesis/armada/assets/charts/controller_manager/templates/configmap-bin.yaml etc/genesis/armada/assets/charts/controller_manager/templates/secret.yaml etc/genesis/armada/assets/charts/controller_manager/templates/daemonset.yaml etc/genesis/armada/assets/charts/controller_manager/templates/bin/_pre_stop.tpl etc/genesis/armada/assets/charts/controller_manager/templates/bin/_anchor.tpl etc/genesis/armada/assets/charts/controller_manager/templates/etc/_kubeconfig.yaml.tpl etc/genesis/armada/assets/charts/controller_manager/templates/etc/_kubernetes-controller-manager.yaml.tpl
+ for DIR in '"${CURATED_DIRS[@]}"'
+ chmod -R go-rwx /etc/kubernetes
+ for DIR in '"${CURATED_DIRS[@]}"'
+ chmod -R go-rwx /var/lib/etcd
+ set +x
Mon Nov 12 14:56:43 CST 2018
Mon Nov 12 14:56:43 CST 2018
=== Adding APT Keys===
+ apt-key add -
OK
+ set +x
Mon Nov 12 14:56:44 CST 2018
Mon Nov 12 14:56:44 CST 2018
=== Disabling swap ===
+ swapoff -a
+ sed --in-place '/\bswap\b/d' /etc/fstab
+ set +x
Mon Nov 12 14:56:44 CST 2018
Mon Nov 12 14:56:44 CST 2018
=== Setting proxy variables ===
+ export http_proxy=
+ http_proxy=
+ export https_proxy=
+ https_proxy=
+ export no_proxy=
+ no_proxy=
+ set +x
Mon Nov 12 14:56:44 CST 2018
Mon Nov 12 14:56:44 CST 2018
=== Installing system packages ===
++ date +%s
+ end=1542056804
+ true
+ apt-get update
Get:1 http://apt.dockerproject.org/repo ubuntu-xenial InRelease [48.7 kB]
Hit:2 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Get:3 http://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages [4,177 B]
Hit:4 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease
Hit:5 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease
Hit:6 http://security.ubuntu.com/ubuntu xenial-security InRelease
Fetched 52.9 kB in 2s (22.6 kB/s)
Reading package lists... Done
+ break
++ date +%s
+ end=1542056807
+ true
+ DEBIAN_FRONTEND=noninteractive
+ apt-get install -y --no-install-recommends nfs-common ceph-common docker.io socat=1.7.3.1-1
Reading package lists... Done
Building dependency tree
Reading state information... Done
docker.io is already the newest version (17.03.2-0ubuntu2~16.04.1).
The following additional packages will be installed: keyutils libbabeltrace-ctf1 libbabeltrace1 libboost-iostreams1.58.0 libboost-program-options1.58.0 libboost-random1.58.0 libboost-regex1.58.0 libboost-system1.58.0 libboost-thread1.58.0 libcephfs1 libfcgi0ldbl libnfsidmap2 libnspr4 libnss3 libnss3-nssdb libpython-stdlib libpython2.7-minimal libpython2.7-stdlib librados2 libradosstriper1 librbd1 librgw2 libtirpc1 python python-cephfs python-chardet python-minimal python-pkg-resources python-rados python-rbd python-requests python-six python-urllib3 python2.7 python2.7-minimal rpcbind
Suggested packages: ceph ceph-mds watchdog python-doc python-tk python-setuptools python-ndg-httpsclient python-openssl python-pyasn1 python-ntlm python2.7-doc binutils binfmt-support
Recommended packages: python-ndg-httpsclient python-openssl python-pyasn1
The following NEW packages will be installed: ceph-common keyutils libbabeltrace-ctf1 libbabeltrace1 libboost-iostreams1.58.0 libboost-program-options1.58.0 libboost-random1.58.0 libboost-regex1.58.0 libboost-system1.58.0 libboost-thread1.58.0 libcephfs1 libfcgi0ldbl libnfsidmap2 libnspr4 libnss3 libnss3-nssdb libpython-stdlib libpython2.7-minimal libpython2.7-stdlib librados2 libradosstriper1 librbd1 librgw2 libtirpc1 nfs-common python python-cephfs python-chardet python-minimal python-pkg-resources python-rados python-rbd python-requests python-six python-urllib3 python2.7 python2.7-minimal rpcbind socat
0 upgraded, 39 newly installed, 0 to remove and 0 not upgraded.
Need to get 25.4 MB of archives. After this operation, 105 MB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 libnfsidmap2 amd64 0.25-5 [32.2 kB] Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython2.7-minimal amd64 2.7.12-1ubuntu0~16.04.3 [340 kB] Get:3 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python2.7-minimal amd64 2.7.12-1ubuntu0~16.04.3 [1,261 kB] Get:4 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-minimal amd64 2.7.12-1~16.04 [28.1 kB] Get:5 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython2.7-stdlib amd64 2.7.12-1ubuntu0~16.04.3 [1,880 kB] Get:6 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python2.7 amd64 2.7.12-1ubuntu0~16.04.3 [224 kB] Get:7 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython-stdlib amd64 2.7.12-1~16.04 [7,768 B] Get:8 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python amd64 2.7.12-1~16.04 [137 kB] Get:9 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libboost-iostreams1.58.0 amd64 1.58.0+dfsg-5ubuntu3.1 [29.0 kB] Get:10 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libboost-system1.58.0 amd64 1.58.0+dfsg-5ubuntu3.1 [9,146 B] Get:11 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libboost-random1.58.0 amd64 1.58.0+dfsg-5ubuntu3.1 [11.7 kB] Get:12 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libboost-thread1.58.0 amd64 1.58.0+dfsg-5ubuntu3.1 [47.0 kB] Get:13 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libnspr4 amd64 2:4.13.1-0ubuntu0.16.04.1 [112 kB] Get:14 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libnss3-nssdb all 2:3.28.4-0ubuntu0.16.04.3 [10.6 kB] Get:15 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libnss3 amd64 2:3.28.4-0ubuntu0.16.04.3 [1,148 kB] Get:16 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 librados2 amd64 10.2.10-0ubuntu0.16.04.1
[1,646 kB]
Get:17 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 librbd1 amd64 10.2.10-0ubuntu0.16.04.1 [2,193 kB]
Get:18 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcephfs1 amd64 10.2.10-0ubuntu0.16.04.1 [1,846 kB]
Get:19 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-cephfs amd64 10.2.10-0ubuntu0.16.04.1 [235 kB]
Get:20 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-rados amd64 10.2.10-0ubuntu0.16.04.1 [648 kB]
Get:21 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-rbd amd64 10.2.10-0ubuntu0.16.04.1 [320 kB]
Get:22 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 python-six all 1.10.0-3 [10.9 kB]
Get:23 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-urllib3 all 1.13.1-2ubuntu0.16.04.2 [57.9 kB]
Get:24 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 python-pkg-resources all 20.7.0-1 [108 kB]
Get:25 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 python-chardet all 2.3.0-2 [96.3 kB]
Get:26 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-requests all 2.9.1-3ubuntu0.1 [55.9 kB]
Get:27 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 libbabeltrace1 amd64 1.3.2-1 [34.7 kB]
Get:28 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 libbabeltrace-ctf1 amd64 1.3.2-1 [88.3 kB]
Get:29 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libboost-program-options1.58.0 amd64 1.58.0+dfsg-5ubuntu3.1 [138 kB]
Get:30 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libboost-regex1.58.0 amd64 1.58.0+dfsg-5ubuntu3.1 [261 kB]
Get:31 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libradosstriper1 amd64 10.2.10-0ubuntu0.16.04.1 [1,878 kB]
Get:32 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 libfcgi0ldbl amd64 2.4.0-8.3 [161 kB]
Get:33 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 librgw2 amd64 10.2.10-0ubuntu0.16.04.1 [2,912 kB]
Get:34 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 ceph-common amd64 10.2.10-0ubuntu0.16.04.1 [6,723 kB]
Get:35 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 keyutils amd64 1.5.9-8ubuntu1 [47.1 kB]
Get:36 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libtirpc1 amd64 0.2.5-1ubuntu0.1 [75.4 kB]
Get:37 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 rpcbind amd64 0.2.3-0.2 [40.3 kB]
Get:38 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 nfs-common amd64 1:1.2.8-9ubuntu12.1 [184 kB]
Get:39 http://us.archive.ubuntu.com/ubuntu xenial/universe amd64 socat amd64 1.7.3.1-1 [321 kB]
Fetched 25.4 MB in 3s (8,020 kB/s)
Extracting templates from packages: 100%
(Reading database ... 93699 files and directories currently installed.)
[... per-package Selecting/Unpacking/Setting up output trimmed: all of the packages fetched above, ceph-common and nfs-common included, unpacked and configured without errors ...]
Setting up nfs-common (1:1.2.8-9ubuntu12.1) ...
Creating config file /etc/idmapd.conf with new version
Creating config file /etc/default/nfs-common with new version
Adding system user `statd' (UID 111) ...
Adding new user `statd' (UID 111) with group `nogroup' ...
Not creating home directory `/var/lib/nfs'.
nfs-utils.service is a disabled or a static unit, not starting it.
Setting up ceph-common (10.2.10-0ubuntu0.16.04.1) ...
[/usr/lib/tmpfiles.d/var.conf:14] Duplicate line for path "/var/log", ignoring.
Processing triggers for libc-bin (2.23-0ubuntu10) ...
Processing triggers for systemd (229-4ubuntu21.8) ...
Processing triggers for ureadahead (0.100.0-19) ...
+ break
+ set +x
Mon Nov 12 14:57:02 CST 2018
Mon Nov 12 14:57:02 CST 2018 === Starting Docker and Kubelet ===
+ systemctl daemon-reload
+ systemctl restart docker
+ systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
+ systemctl restart kubelet
+ mkdir -p /var/log/armada
+ touch /var/log/armada/bootstrap-armada.log
+ chmod 777 /var/log/armada/bootstrap-armada.log
+ set +x
Mon Nov 12 14:57:05 CST 2018
Mon Nov 12 14:57:05 CST 2018 === Waiting for Kubernetes API availability ===
+ wait_for_kubernetes_api 3600
+ set +x
Mon Nov 12 14:57:05 CST 2018 Waiting 3600 seconds for API response.
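(The wait helper named just above is essentially a bounded retry loop around a kubectl call; a minimal sketch of the pattern, as an assumed reconstruction for readability rather than the actual Promenade implementation:)

    wait_for_kubernetes_api() {
        # Poll the apiserver until `get nodes` succeeds or the deadline passes.
        local timeout="$1"
        local deadline=$(( $(date +%s) + timeout ))
        while [ "$(date +%s)" -lt "$deadline" ]; do
            if kubectl get nodes >/dev/null 2>&1; then
                return 0
            fi
            echo -n "."    # the dots interleaved with the errors below
            sleep 15
        done
        echo "API not returning node list before timeout."
        return 1
    }

(The hyperkube image pulled next is apparently what supplies the kubectl client doing this polling.)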
Unable to find image 'gcr.io/google_containers/hyperkube-amd64:v1.10.2' locally
v1.10.2: Pulling from google_containers/hyperkube-amd64
d093dc0d702b: Pulling fs layer
0dc0248c9360: Pulling fs layer
2b6ac55109de: Pulling fs layer
5f2ca80930ac: Pulling fs layer
c99d41457802: Pulling fs layer
7460c29cea53: Pulling fs layer
d5d8d09ad268: Pulling fs layer
82ba55e5bd57: Pulling fs layer
5f2ca80930ac: Verifying Checksum
5f2ca80930ac: Download complete
2b6ac55109de: Verifying Checksum
2b6ac55109de: Download complete
d5d8d09ad268: Verifying Checksum
d5d8d09ad268: Download complete
0dc0248c9360: Verifying Checksum
0dc0248c9360: Download complete
7460c29cea53: Verifying Checksum
7460c29cea53: Download complete
c99d41457802: Verifying Checksum
c99d41457802: Download complete
82ba55e5bd57: Verifying Checksum
82ba55e5bd57: Download complete
d093dc0d702b: Download complete
d093dc0d702b: Pull complete
0dc0248c9360: Pull complete
2b6ac55109de: Pull complete
5f2ca80930ac: Pull complete
c99d41457802: Pull complete
7460c29cea53: Pull complete
d5d8d09ad268: Pull complete
82ba55e5bd57: Pull complete
Digest: sha256:badd2b1da29d4d530b10b920f64bf66a1b41150db46c3c99b49d56f3f18a82db
Status: Downloaded newer image for gcr.io/google_containers/hyperkube-amd64:v1.10.2
.The connection to the server 127.0.0.1:6553 was refused - did you specify the right host or port?
[... the same message, one per polling attempt, repeated for the full 3600-second wait ...]
.The connection to the server 127.0.0.1:6553 was refused - did you specify the right host or port?
Mon Nov 12 15:57:09 CST 2018 API not returning node list before timeout.
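(An hour of refusals on 127.0.0.1:6553 means the apiserver never came up; the state report below confirms it with zero running containers. The usual first checks on the genesis host at this point, sketched with standard Docker/systemd tooling rather than anything Airship-specific:)

    docker ps -a | grep -i apiserver               # did the apiserver container start and crash, or never start?
    ss -tlnp | grep 6553                           # is anything listening on the local API port?
    journalctl -u kubelet --no-pager | tail -n 50  # kubelet errors while starting the static pods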
Mon Nov 12 15:57:09 CST 2018 General docker state report
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 17.03.2-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-131-generic
Operating System: Ubuntu 16.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 31.31 GiB
Name: asus
ID: PBS7:FC7Q:Y3JR:CUBT:PLLF:Q4NC:ITTS:LPCT:DXTX:VLF4:ETX3:SP4B
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: true
WARNING: No swap limit support
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
Mon Nov 12 15:57:09 CST 2018 General cluster state report
The connection to the server 127.0.0.1:6553 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:6553 was refused - did you specify the right host or port?
+ error 'running genesis'
+ set +x
Error when running genesis.
+ exit 1
+ clean
+ set +x
To remove files generated during this script's execution, delete /root/deploy.
This VM is disposable. Re-deployment in this same VM will lead to unpredictable results.
root at asus:~/deploy/airship-in-a-bottle/manifests/dev_single_node#

From qiaolin.tu at nokia-sbell.com Tue Nov 13 08:45:03 2018
From: qiaolin.tu at nokia-sbell.com (Tu, Qiaolin (NSB - CN/Hangzhou))
Date: Tue, 13 Nov 2018 08:45:03 +0000
Subject: [Airship-discuss] Airship installation Questions
In-Reply-To:
References:
Message-ID:

Hi,

We have just deployed the genesis node; the master node has not been deployed yet.

root at cab23-r720-11:~# cat /etc/resolv.conf
options timeout:1 attempts:1
domain local
nameserver 10.96.0.10
nameserver 10.56.126.31
nameserver 10.96.0.10
nameserver 8.8.8.8

root at cab23-r720-11:~# vi /etc/hosts
# This file is controlled by Promenade. Do not modify.
#
127.0.0.1 cab23-r720-11.local cab23-r720-11
127.0.0.1 localhost

root at cab23-r720-11:~# kubectl get pod --all-namespaces
NAMESPACE     NAME                                                        READY   STATUS      RESTARTS   AGE
ceph          airship-ucp-ceph-provisioners-ceph-ns-key-generator-npzgl  0/1     Completed   0          23h
ceph          ceph-bootstrap-mtlgj                                       0/1     Completed   0          23h
ceph          ceph-cephfs-client-key-generator-zbp65                     0/1     Completed   0          23h
ceph          ceph-cephfs-provisioner-676684f6bd-n5hjm                   1/1     Running     0          23h
ceph          ceph-mds-5f547b6fd7-sg49g                                  1/1     Running     0          23h
ceph          ceph-mds-keyring-generator-pzvs2                           0/1     Completed   0          23h
ceph          ceph-mgr-69d599864b-dqkzv                                  1/1     Running     0          23h
ceph          ceph-mgr-keyring-generator-4b6ww                           0/1     Completed   0          23h
ceph          ceph-mon-check-6db6b569b6-w5kjk                            1/1     Running     0          23h
ceph          ceph-mon-keyring-generator-bpm8q                           0/1     Completed   0          23h
ceph          ceph-mon-qqjzz                                             1/1     Running     0          23h
ceph          ceph-osd-default-83945928-qqz4c                            1/1     Running     0          23h
ceph          ceph-osd-keyring-generator-wc4rg                           0/1     Completed   0          23h
ceph          ceph-rbd-pool-9gqx4                                        0/1     Completed   0          23h
ceph          ceph-rbd-provisioner-84bc5c88c7-jstt8                      1/1     Running     0          23h
ceph          ceph-rgw-5b6645c456-tpqsq                                  1/1     Running     0          23h
ceph          ceph-rgw-storage-init-45zkv                                0/1     Completed   0          23h
ceph          ceph-storage-keys-generator-9kxd9                          0/1     Completed   0          23h
ceph          ingress-65dc849968-96k57                                   1/1     Running     0          23h
ceph          ingress-error-pages-796b76c856-dfk5w                       1/1     Running     0          23h
kube-system   auxiliary-etcd-cab23-r720-11                               3/3     Running     0          1d
kube-system   bootstrap-armada-cab23-r720-11                             4/4     Running     0          1d
kube-system   calico-etcd-anchor-tnqrq                                   1/1     Running     0          1d
kube-system   calico-etcd-cab23-r720-11                                  1/1     Running     0          1d
kube-system   calico-kube-controllers-68f5b99d47-zh84k                   1/1     Running     0          1d
kube-system   calico-node-kbpns                                          2/2     Running     0          1d
kube-system   calico-settings-p76tq                                      0/1     Completed   0          1d
kube-system   coredns-69bc679c6f-8qxr2                                   1/1     Running     0          1d
kube-system   coredns-69bc679c6f-klts5                                   1/1     Running     0          1d
kube-system   coredns-69bc679c6f-wk27v                                   1/1     Running     0          1d
kube-system   haproxy-anchor-2bdjd                                       1/1     Running     0          1d
kube-system   haproxy-cab23-r720-11                                      1/1     Running     1          1d
kube-system   ingress-error-pages-5ccf96bf7d-42lq9                       1/1     Running     0          1d
kube-system   ingress-lrjr4                                              1/1     Running     0          1d
kube-system   kubernetes-apiserver-anchor-vbghn                          1/1     Running     0          1d
kube-system   kubernetes-apiserver-cab23-r720-11                         1/1     Running     0          23h
kube-system   kubernetes-controller-manager-anchor-kbckh                 1/1     Running     0          1d
kube-system   kubernetes-controller-manager-cab23-r720-11                1/1     Running     0          23h
kube-system   kubernetes-etcd-anchor-x78wh                               1/1     Running     0          1d
kube-system   kubernetes-etcd-cab23-r720-11                              1/1     Running     0          1d
kube-system   kubernetes-proxy-w7tc6                                     1/1     Running     0          1d
kube-system   kubernetes-scheduler-anchor-wdqn8                          1/1     Running     0          1d
kube-system   kubernetes-scheduler-cab23-r720-11                         1/1     Running     0          23h
ucp           airship-ucp-ceph-config-ceph-ns-key-generator-xh6rj        0/1     Completed   0          23h
ucp           airship-ucp-rabbitmq-rabbitmq-0                            0/1     Init:0/2    0          5h
ucp           ingress-6cd5b89d5d-6r6q9                                   1/1     Running     0          23h
ucp           ingress-error-pages-5c97bb46bb-lnxzp                       1/1     Running     0          23h
ucp           mariadb-ingress-85b8556fbc-7hg9b                           0/1     Running     0          5h
ucp           mariadb-ingress-85b8556fbc-mrv6k                           0/1     Running     0          5h
ucp           mariadb-ingress-error-pages-64f89dc697-p47gg               1/1     Running     0          5h
ucp           mariadb-server-0                                           0/1     Init:0/2    0          5h
ucp           postgresql-0                                               0/1     Init:0/1    0          5h

Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com

From: Matthew H
Sent: Tuesday, November 13, 2018 2:33 AM
To: Tu, Qiaolin (NSB - CN/Hangzhou); airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: Re: [Airship-discuss] Airship installation Questions

Greetings,

From your master k8s node can you resolve ceph-mon.ceph.svc.cluster.local?
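(A minimal way to run that check, assuming kubectl access on the node; the pod name below is arbitrary:)

    # Resolve the service name directly against the cluster DNS ClusterIP:
    nslookup ceph-mon.ceph.svc.cluster.local 10.96.0.10

    # Or from inside the cluster, which also exercises the pod resolv.conf path:
    kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 \
      -- nslookup ceph-mon.ceph.svc.cluster.local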
Please also send the output of 'cat /etc/resolv.conf' from your k8s nodes (genesis and master node).

Thxs

________________________________
From: Tu, Qiaolin (NSB - CN/Hangzhou)
Sent: Monday, November 12, 2018 4:39 AM
To: Matthew H; airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: RE: [Airship-discuss] Airship installation Questions

Hi,

Adding the ceph rbd image-related logs.

root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd ls
kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd
kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd
kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd
rbd image 'kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd':
        size 5120 MB in 1280 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.113b74b0dc51
        format: 2
        features: layering
        flags:
        create_timestamp: Mon Nov 12 09:06:39 2018
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd
rbd image 'kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd':
        size 256 MB in 64 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.113c74b0dc51
        format: 2
        features: layering
        flags:
        create_timestamp: Mon Nov 12 09:06:40 2018
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd
rbd image 'kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd':
        size 5120 MB in 1280 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.113d74b0dc51
        format: 2
        features: layering
        flags:
        create_timestamp: Mon Nov 12 09:06:40 2018
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd
Watchers: none
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd
Watchers: none
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd
Watchers: none

Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com

From: Tu, Qiaolin (NSB - CN/Hangzhou)
Sent: Monday, November 12, 2018 5:27 PM
To: Matthew H; airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: RE: [Airship-discuss] Airship installation Questions

Hi,

Adding the ceph-mon logs and yaml files.
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph -s
  cluster:
    id:     7b7576f4-3358-4668-9112-100440079807
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum cab23-r720-11
    mgr: cab23-r720-11(active)
    mds: cephfs-1/1/1 up {0=mds-ceph-mds-5f547b6fd7-sg49g=up:active}
    osd: 1 osds: 1 up, 1 in
    rgw: 1 daemon active

  data:
    pools:   18 pools, 93 pgs
    objects: 1164 objects, 3407 bytes
    usage:   374 MB used, 1023 GB / 1023 GB avail
    pgs:     93 active+clean

root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph osd tree
ID CLASS WEIGHT  TYPE NAME               STATUS REWEIGHT PRI-AFF
-1       1.00000 root default
-2       1.00000     host cab23-r720-11
 0   hdd 1.00000         osd.0               up  1.00000 1.00000
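(A quick way to confirm the per-pool replica settings in one pass, before reading them out of the full dump below; standard ceph CLI, with the mon pod name taken from the commands above:)

    for p in $(kubectl exec ceph-mon-qqjzz -n ceph -- ceph osd pool ls); do
        echo "$p: $(kubectl exec ceph-mon-qqjzz -n ceph -- ceph osd pool get "$p" size)"
    done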
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph osd dump
epoch 219
fsid 7b7576f4-3358-4668-9112-100440079807
created 2018-11-12 08:53:17.281208
modified 2018-11-12 09:06:40.314892
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 6
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client hammer
require_osd_release luminous
pool 1 'rbd' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 last_change 219 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rbd
        removed_snaps [1~5]
pool 2 'cephfs_metadata' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 4 pgp_num 4 last_change 216 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs
pool 3 'cephfs_data' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 216 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs
pool 4 '.rgw.root' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 56 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 5 'default.rgw.control' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 68 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 6 'default.rgw.data.root' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 79 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 7 'default.rgw.gc' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 89 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 8 'default.rgw.log' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 102 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 9 'default.rgw.intent-log' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 113 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 10 'default.rgw.meta' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 124 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 11 'default.rgw.usage' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 134 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 12 'default.rgw.users.keys' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 145 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 13 'default.rgw.users.email' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 155 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 14 'default.rgw.users.swift' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 168 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 15 'default.rgw.users.uid' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 179 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 16 'default.rgw.buckets.extra' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 191 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 17 'default.rgw.buckets.index' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 4 pgp_num 4 last_change 202 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 18 'default.rgw.buckets.data' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 last_change 214 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
max_osd 1
osd.0 up in weight 1 up_from 5 up_thru 209 down_at 0 last_clean_interval [0,0) 10.23.23.11:6800/6766 10.23.23.11:6801/6766 10.23.23.11:6802/6766 10.23.23.11:6803/6766 exists,up 02d8f692-709a-45ea-9f2c-75486e16e82b

Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com

From: Tu, Qiaolin (NSB - CN/Hangzhou)
Sent: Monday, November 12, 2018 4:25 PM
To: 'Matthew H'; airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: RE: [Airship-discuss] Airship installation Questions

Hi,

Thanks very much for your help. After modifying the ceph replication parameters, the ceph pods deployed successfully. The deployment then moved on to the ucp-related pods, which hit the errors below. Please check the attached log for details, thanks very much!
ucp   airship-ucp-rabbitmq-rabbitmq-0                0/1   Init:0/2   0   1m
ucp   ingress-6cd5b89d5d-nmwpt                       1/1   Running    0   18m
ucp   ingress-6cd5b89d5d-nr65b                       1/1   Running    0   18m
ucp   ingress-error-pages-5c97bb46bb-2mvgm           1/1   Running    0   18m
ucp   ingress-error-pages-5c97bb46bb-wzzdz           1/1   Running    0   18m
ucp   mariadb-ingress-85b8556fbc-xpvwc               0/1   Running    0   1m
ucp   mariadb-ingress-85b8556fbc-zv72k               0/1   Running    0   1m
ucp   mariadb-ingress-error-pages-64f89dc697-2trh9   1/1   Running    0   1m
ucp   mariadb-server-0                               0/1   Init:0/2   0   1m
ucp   postgresql-0                                   0/1   Init:0/1   0   1m

root at cab23-r720-11:~# kubectl describe pod mariadb-ingress-85b8556fbc-xpvwc -n ucp
Name:        mariadb-ingress-85b8556fbc-xpvwc
Namespace:   ucp
Node:        cab23-r720-11/10.23.22.11
Start Time:  Mon, 12 Nov 2018 08:05:00 +0000
Labels:      application=mariadb
             component=ingress
             pod-template-hash=4164112967
             release_group=airship-ucp-mariadb
Annotations: configmap-bin-hash=eb36d47d8f7d7097cf6d488a61145f76dbfe5e558edf5b802153a00fc3389f0b
             configmap-etc-hash=3f45f1d8d3ddf5a09fbcd3036cb23bffb939cfa1225f8f1a0d79b390877710c1
Status:      Running
IP:          10.97.38.125
Events:
  Type     Reason                 Age                From                    Message
  ----     ------                 ----               ----                    -------
  Normal   Scheduled              3m                 default-scheduler       Successfully assigned mariadb-ingress-85b8556fbc-xpvwc to cab23-r720-11
  Normal   SuccessfulMountVolume  3m                 kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "airship-ucp-mariadb-ingress-token-htf82"
  Normal   SuccessfulMountVolume  3m                 kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "mariadb-etc"
  Normal   SuccessfulMountVolume  3m                 kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "mariadb-bin"
  Normal   Pulled                 3m                 kubelet, cab23-r720-11  Container image "quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine
  Normal   Created                3m                 kubelet, cab23-r720-11  Created container
  Normal   Started                3m                 kubelet, cab23-r720-11  Started container
  Normal   Pulled                 3m                 kubelet, cab23-r720-11  Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0" already present on machine
  Normal   Created                3m                 kubelet, cab23-r720-11  Created container
  Normal   Started                3m                 kubelet, cab23-r720-11  Started container
  Warning  Unhealthy              26s (x16 over 2m)  kubelet, cab23-r720-11  Readiness probe failed: dial tcp 10.97.38.125:3306: getsockopt: connection refused

root at cab23-r720-11:~# kubectl describe pod postgresql-0 -n ucp
Name:        postgresql-0
Namespace:   ucp
Node:        cab23-r720-11/10.23.22.11
Start Time:  Mon, 12 Nov 2018 08:04:56 +0000
Labels:      application=postgresql
             component=server
             controller-revision-hash=postgresql-566fd45fd7
             release_group=airship-ucp-postgresql
             statefulset.kubernetes.io/pod-name=postgresql-0
Events:
  Type     Reason                  Age              From                     Message
  ----     ------                  ----             ----                     -------
  Normal   SuccessfulAttachVolume  4m               attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-c66d1b6f-e64a-11e8-bb43-080027f45d2a"
  Normal   Scheduled               4m               default-scheduler        Successfully assigned postgresql-0 to cab23-r720-11
  Normal   SuccessfulMountVolume   4m               kubelet, cab23-r720-11   MountVolume.SetUp succeeded for volume "postgresql-bin"
  Normal   SuccessfulMountVolume   4m               kubelet, cab23-r720-11   MountVolume.SetUp succeeded for volume "postgresql-token-rmkq9"
  Warning  FailedMount             2m               kubelet, cab23-r720-11   MountVolume.WaitForAttach failed for volume "pvc-c66d1b6f-e64a-11e8-bb43-080027f45d2a" : fail to check rbd image status with: (exit status 22), rbd output: (2018-11-12 16:07:01.400015 7fcc31018100 -1 did not load config file, using default settings.
server name not found: ceph-mon.ceph.svc.cluster.local (Name or service not known)
unable to parse addrs in 'ceph-mon.ceph.svc.cluster.local:6789'
rbd: couldn't connect to the cluster!
)
  Warning  FailedMount             19s (x2 over 2m)  kubelet, cab23-r720-11  Unable to mount volumes for pod "postgresql-0_ucp(a46bc160-e651-11e8-bb43-080027f45d2a)": timeout expired waiting for volumes to attach or mount for pod "ucp"/"postgresql-0". list of unmounted volumes=[postgresql-data]. list of unattached volumes=[postgresql-data postgresql-bin postgresql-token-rmkq9]
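(The failing step above is worth reading closely: the rbd attach is executed by kubelet on the host itself, so it is the node's own resolver, not the pod's, that must resolve ceph-mon.ceph.svc.cluster.local; that is exactly what the DNS question earlier in the thread is probing. A quick host-side check, with the 10.96.0.10 cluster DNS address taken from the resolv.conf shown above:)

    # Run on the k8s node that reported the FailedMount, not inside a pod:
    grep nameserver /etc/resolv.conf            # should include the cluster DNS, e.g. 10.96.0.10
    nslookup ceph-mon.ceph.svc.cluster.local    # should return the ceph-mon service ClusterIP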
Best Regards!

Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com

From: Matthew H
Sent: Friday, November 09, 2018 10:43 PM
To: Tu, Qiaolin (NSB - CN/Hangzhou); airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: Re: [Airship-discuss] Airship installation Questions

Thanks,

From what I can see you need additional overrides set to run Ceph on a single node. The overrides you need are here [1]. Let me know if this helps get you in the right direction.

[1] https://github.com/openstack/airship-in-a-bottle/blob/master/deployment_files/site/gate-multinode/software/charts/ucp/storage_provisioner/ceph.yaml#L173-L250

________________________________
From: Tu, Qiaolin (NSB - CN/Hangzhou)
Sent: Friday, November 9, 2018 4:51 AM
To: Matthew H; airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: RE: [Airship-discuss] Airship installation Questions

Hi,

I deployed only 1 master node (1 genesis node + 1 master node); the attachments are my yaml files. Thanks very much!

root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd tree
ID CLASS WEIGHT  TYPE NAME               STATUS REWEIGHT PRI-AFF
-1       8.00000 root default
-2       8.00000     host cab23-r720-11
 0   hdd 1.00000         osd.0               up  1.00000 1.00000
 1   hdd 1.00000         osd.1               up  1.00000 1.00000
 2   hdd 1.00000         osd.2               up  1.00000 1.00000
 3   hdd 1.00000         osd.3               up  1.00000 1.00000
 4   hdd 1.00000         osd.4               up  1.00000 1.00000
 5   hdd 1.00000         osd.5               up  1.00000 1.00000
 6   hdd 1.00000         osd.6               up  1.00000 1.00000
 7   hdd 1.00000         osd.7               up  1.00000 1.00000
root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd dump
epoch 231
fsid 7b7576f4-3358-4668-9112-100440079807
created 2018-11-07 09:08:39.208517
modified 2018-11-09 09:40:10.639284
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 21
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client hammer
require_osd_release luminous
pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 40 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rbd
pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 223 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs
pool 3 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 223 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs
pool 4 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 72 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 5 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 83 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 6 'default.rgw.data.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 93 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 7 'default.rgw.gc' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 104 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 8 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 114 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 9 'default.rgw.intent-log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 124 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 10 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 135 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 11 'default.rgw.usage' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 146 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 12 'default.rgw.users.keys' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 156 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 13 'default.rgw.users.email' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 167 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 14 'default.rgw.users.swift' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 177 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 15 'default.rgw.users.uid' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 188 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 16 'default.rgw.buckets.extra' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 199 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 17 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 211 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 18 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 221 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
max_osd 8
osd.0 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [8,228) 10.23.23.11:6800/15964 10.23.23.11:6814/2015964 10.23.23.11:6822/2015964 10.23.23.11:6823/2015964 exists,up fea47975-0810-47c9-ad43-e76ce81764a1
osd.1 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [9,228) 10.23.23.11:6808/16162 10.23.23.11:6807/2016162 10.23.23.11:6819/2016162 10.23.23.11:6801/2016162 exists,up cec98e14-83d5-4785-b8a7-a6f201170ac4
osd.2 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [9,228) 10.23.23.11:6804/16160 10.23.23.11:6806/2016160 10.23.23.11:6811/2016160 10.23.23.11:6834/2016160 exists,up 97315996-1cb9-4942-9786-8edc5a3862e3
osd.3 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [10,228) 10.23.23.11:6812/16588 10.23.23.11:6815/2016588 10.23.23.11:6805/2016588 10.23.23.11:6817/2016588 exists,up 49082e4c-7827-4c4c-85c9-16ea134289b4
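(The health output further below is the predictable result of this dump: every pool is replicated size 3, but there is only one host bucket under the CRUSH root, and the default replicated rule places each replica on a distinct host, so only one copy of each pg can be mapped and the pgs sit undersized+peered. A way to confirm the symptom, plus a hand-applied rough equivalent of the single-node overrides Matthew links above as [1] — standard ceph CLI; note the pools carry the nosizechange flag, which has to be cleared first, and resizing pools by hand is only sensible on a disposable lab:)

    # Confirm which pgs are stuck and why:
    kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph pg dump_stuck undersized

    # Drop every pool to a single replica (clear nosizechange first):
    for p in $(kubectl exec ceph-mon-h2gsm -n ceph -- ceph osd pool ls); do
        kubectl exec ceph-mon-h2gsm -n ceph -- ceph osd pool set "$p" nosizechange false
        kubectl exec ceph-mon-h2gsm -n ceph -- ceph osd pool set "$p" size 1
        kubectl exec ceph-mon-h2gsm -n ceph -- ceph osd pool set "$p" min_size 1
    done

(The newer dump earlier in the thread — epoch 219, every pool at size 1 min_size 1 — shows the cluster state after such single-replica settings are in effect.)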
osd.4 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [13,228) 10.23.23.11:6816/17053 10.23.23.11:6803/2017053 10.23.23.11:6813/2017053 10.23.23.11:6821/2017053 exists,up 8f9a5a7d-c97d-40c6-912e-33b6ab68d9e7
osd.5 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [16,228) 10.23.23.11:6820/17600 10.23.23.11:6810/2017600 10.23.23.11:6809/2017600 10.23.23.11:6818/2017600 exists,up b4602bfb-075f-4303-9f76-946576c4ef43
osd.6 up in weight 1 up_from 16 up_thru 212 down_at 0 last_clean_interval [0,0) 10.23.23.11:6824/17601 10.23.23.11:6825/17601 10.23.23.11:6826/17601 10.23.23.11:6827/17601 exists,up 2a853bad-7d97-43de-85f3-96e0f9e16c0d
osd.7 up in weight 1 up_from 20 up_thru 212 down_at 0 last_clean_interval [0,0) 10.23.23.11:6828/18682 10.23.23.11:6829/18682 10.23.23.11:6830/18682 10.23.23.11:6831/18682 exists,up dfee9a9c-7587-421b-a0dc-eda2314174d9

root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph health detail
HEALTH_WARN Reduced data availability: 338 pgs inactive; Degraded data redundancy: 338 pgs undersized
PG_AVAILABILITY Reduced data availability: 338 pgs inactive
    pg 1.47 is stuck inactive for 174532.928425, current state undersized+peered, last acting [0]
    pg 1.48 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5]
    pg 1.49 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3]
    pg 1.4a is stuck inactive for 174532.928425, current state undersized+peered, last acting [4]
    [... identical "stuck inactive ... undersized+peered" lines for the remaining listed pgs (1.4b-1.5f and 18.40-18.5f) trimmed ...]
for 174532.928425, current state undersized+peered, last acting [6] pg 18.40 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.41 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.42 is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.43 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.44 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.45 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.46 is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.47 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.48 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.49 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.4a is stuck inactive for 174337.349457, current state undersized+peered, last acting [0] pg 18.4b is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.4c is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.4d is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.4e is stuck inactive for 174337.349457, current state undersized+peered, last acting [0] pg 18.4f is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.54 is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.55 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.58 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.59 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.5a is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.5b is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.5c is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.5d is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.5e is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.5f is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] PG_DEGRADED Degraded data redundancy: 338 pgs undersized pg 1.47 is stuck undersized for 175.198010, current state undersized+peered, last acting [0] pg 1.48 is stuck undersized for 175.208624, current state undersized+peered, last acting [5] pg 1.49 is stuck undersized for 175.220652, current state undersized+peered, last acting [3] pg 1.4a is stuck undersized for 175.187294, current state undersized+peered, last acting [4] pg 1.4b is stuck undersized for 175.208051, current state undersized+peered, last acting [5] pg 1.4c is stuck undersized for 174531.317358, current state undersized+peered, last acting [7] pg 1.4d is stuck undersized for 174531.318742, current state undersized+peered, last acting [7] pg 1.4e is stuck undersized for 175.202431, current state undersized+peered, last acting [4] pg 1.4f is stuck undersized for 174531.331123, current state undersized+peered, last acting [6] pg 1.50 is stuck undersized for 175.207213, current state undersized+peered, last acting [0] pg 1.51 is stuck 
undersized for 175.215944, current state undersized+peered, last acting [2] pg 1.52 is stuck undersized for 175.209013, current state undersized+peered, last acting [5] pg 1.53 is stuck undersized for 174531.331587, current state undersized+peered, last acting [6] pg 1.54 is stuck undersized for 175.202884, current state undersized+peered, last acting [4] pg 1.55 is stuck undersized for 175.222033, current state undersized+peered, last acting [3] pg 1.56 is stuck undersized for 175.215670, current state undersized+peered, last acting [2] pg 1.57 is stuck undersized for 175.196356, current state undersized+peered, last acting [5] pg 1.58 is stuck undersized for 175.218886, current state undersized+peered, last acting [3] pg 1.59 is stuck undersized for 174531.316843, current state undersized+peered, last acting [7] pg 1.5a is stuck undersized for 175.209613, current state undersized+peered, last acting [5] pg 1.5b is stuck undersized for 175.219138, current state undersized+peered, last acting [3] pg 1.5c is stuck undersized for 174531.319395, current state undersized+peered, last acting [7] pg 1.5d is stuck undersized for 175.219426, current state undersized+peered, last acting [3] pg 1.5e is stuck undersized for 175.219873, current state undersized+peered, last acting [0] pg 1.5f is stuck undersized for 174531.331739, current state undersized+peered, last acting [6] pg 18.40 is stuck undersized for 175.211281, current state undersized+peered, last acting [1] pg 18.41 is stuck undersized for 174335.530906, current state undersized+peered, last acting [6] pg 18.42 is stuck undersized for 175.189316, current state undersized+peered, last acting [5] pg 18.43 is stuck undersized for 174335.529402, current state undersized+peered, last acting [6] pg 18.44 is stuck undersized for 174335.520749, current state undersized+peered, last acting [7] pg 18.45 is stuck undersized for 174335.520646, current state undersized+peered, last acting [7] pg 18.46 is stuck undersized for 175.204679, current state undersized+peered, last acting [4] pg 18.47 is stuck undersized for 175.211629, current state undersized+peered, last acting [1] pg 18.48 is stuck undersized for 175.213907, current state undersized+peered, last acting [1] pg 18.49 is stuck undersized for 174335.521334, current state undersized+peered, last acting [7] pg 18.4a is stuck undersized for 175.218699, current state undersized+peered, last acting [0] pg 18.4b is stuck undersized for 174335.527174, current state undersized+peered, last acting [7] pg 18.4c is stuck undersized for 174335.528996, current state undersized+peered, last acting [6] pg 18.4d is stuck undersized for 175.204937, current state undersized+peered, last acting [4] pg 18.4e is stuck undersized for 175.219027, current state undersized+peered, last acting [0] pg 18.4f is stuck undersized for 174335.531066, current state undersized+peered, last acting [6] pg 18.54 is stuck undersized for 175.189185, current state undersized+peered, last acting [5] pg 18.55 is stuck undersized for 174335.531222, current state undersized+peered, last acting [6] pg 18.58 is stuck undersized for 174335.530357, current state undersized+peered, last acting [6] pg 18.59 is stuck undersized for 175.204978, current state undersized+peered, last acting [1] pg 18.5a is stuck undersized for 175.192362, current state undersized+peered, last acting [5] pg 18.5b is stuck undersized for 174335.531432, current state undersized+peered, last acting [6] pg 18.5c is stuck undersized for 174335.530149, current state 
undersized+peered, last acting [6] pg 18.5d is stuck undersized for 175.191412, current state undersized+peered, last acting [5] pg 18.5e is stuck undersized for 175.205141, current state undersized+peered, last acting [4] pg 18.5f is stuck undersized for 174335.527472, current state undersized+peered, last acting [7] root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph -s cluster: id: 7b7576f4-3358-4668-9112-100440079807 health: HEALTH_WARN Reduced data availability: 338 pgs inactive Degraded data redundancy: 338 pgs undersized services: mon: 1 daemons, quorum cab23-r720-11 mgr: cab23-r720-11(active) mds: cephfs-1/1/1 up {0=mds-ceph-mds-6bfb74d9c7-gqgtl=up:creating}, 1 up:standby osd: 8 osds: 8 up, 8 in data: pools: 18 pools, 338 pgs objects: 0 objects, 0 bytes usage: 3229 MB used, 8184 GB / 8187 GB avail pgs: 100.000% pgs not active 338 undersized+peered Best Regards! Qiaolin Tu NSB MN 5G ECE HZ CN2 SG04 Mobile: +86 138 057 59684 E-Mail: qiaolin.tu at nokia-sbell.com From: Matthew H > Sent: Friday, November 09, 2018 6:18 AM To: airship-discuss at lists.airshipit.org Cc: Tu, Qiaolin (NSB - CN/Hangzhou) > Subject: Re: [Airship-discuss] Airship installation Questions Greetings, Could you run the following commands from a MON pod: ceph osd tree ceph osd dump Also how many nodes did you deploy on? one or one or more nodes? Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From qiaolin.tu at nokia-sbell.com Tue Nov 13 09:15:09 2018 From: qiaolin.tu at nokia-sbell.com (Tu, Qiaolin (NSB - CN/Hangzhou)) Date: Tue, 13 Nov 2018 09:15:09 +0000 Subject: [Airship-discuss] Airship installation Questions In-Reply-To: References: , , Message-ID: Hi, I checked ceph-mon and ucp mariadb-ingress resolv.conf. It seems ceph namespaces related pod use ceph.svc.cluster.local svc.cluster.local cluster.local but ucp namespaces related pod only use ucp.svc.cluster.local svc.cluster.local cluster.local. Thanks very much! root at cab23-r720-11:~# kubectl exec -it ceph-mon-qqjzz -n ceph -- /bin/sh # cat resolv.conf nameserver 10.96.0.10 search ceph.svc.cluster.local svc.cluster.local cluster.local options ndots:5 # cat hosts # This file is controlled by Promenade. Do not modify. # 127.0.0.1 cab23-r720-11.local cab23-r720-11 127.0.0.1 localhost root at cab23-r720-11:~# kubectl exec -it mariadb-ingress-85b8556fbc-7hg9b -n ucp -- /bin/sh # cat resolv.conf nameserver 10.96.0.10 search ucp.svc.cluster.local svc.cluster.local cluster.local options ndots:5 # cat hosts # Kubernetes-managed hosts file. 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet fe00::0 ip6-mcastprefix fe00::1 ip6-allnodes fe00::2 ip6-allrouters 10.97.38.118 mariadb-ingress-85b8556fbc-7hg9b Best Regards! Qiaolin Tu NSB MN 5G ECE HZ CN2 SG04 Mobile: +86 138 057 59684 E-Mail: qiaolin.tu at nokia-sbell.com From: Tu, Qiaolin (NSB - CN/Hangzhou) Sent: Tuesday, November 13, 2018 4:45 PM To: 'Matthew H' ; airship-discuss at lists.airshipit.org Cc: Li, Maxwell (NSB - CN/Hangzhou) Subject: RE: [Airship-discuss] Airship installation Questions Hi, We just deploy genesis node and haven't deploy master node yet. root at cab23-r720-11:~# cat /etc/resolv.conf options timeout:1 attempts:1 domain local nameserver 10.96.0.10 nameserver 10.56.126.31 nameserver 10.96.0.10 nameserver 8.8.8.8 root at cab23-r720-11:~# vi /etc/hosts # This file is controlled by Promenade. Do not modify. 
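The health output above is the classic single-host failure-domain symptom: every pool is replicated size 3 under crush_rule 0, whose failure domain is the host, but all 8 OSDs sit on the one host cab23-r720-11, so each PG can only ever place one replica and stays undersized+peered. The proper fix is the site-level override discussed later in this thread; purely as a live-cluster illustration, a CRUSH rule with an OSD-level failure domain would also let these PGs activate. A minimal sketch (the rule name single-host is made up for the example; the pod name is taken from the output above):

    # show the existing CRUSH rules and their failure domains (crush_rule 0 above)
    kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd crush rule dump
    # create a replicated rule that chooses OSDs instead of hosts
    kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd crush rule create-replicated single-host default osd
    # repoint a pool at it; the nopgchange/nosizechange flags do not block crush_rule changes
    kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd pool set rbd crush_rule single-host
    kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph -s

Even then all three replicas would live on one box, so this is lab-only; the single-node override files keep the deployment manifests, rather than hand-run ceph commands, as the source of truth.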
From qiaolin.tu at nokia-sbell.com Tue Nov 13 09:15:09 2018
From: qiaolin.tu at nokia-sbell.com (Tu, Qiaolin (NSB - CN/Hangzhou))
Date: Tue, 13 Nov 2018 09:15:09 +0000
Subject: [Airship-discuss] Airship installation Questions
In-Reply-To: 
References: 
Message-ID: 

Hi,
I checked resolv.conf in the ceph-mon and ucp mariadb-ingress pods. It seems pods in the ceph namespace search ceph.svc.cluster.local svc.cluster.local cluster.local, while pods in the ucp namespace search only ucp.svc.cluster.local svc.cluster.local cluster.local. Thanks very much!
root at cab23-r720-11:~# kubectl exec -it ceph-mon-qqjzz -n ceph -- /bin/sh
# cat resolv.conf
nameserver 10.96.0.10
search ceph.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
# cat hosts
# This file is controlled by Promenade. Do not modify.
#
127.0.0.1 cab23-r720-11.local cab23-r720-11
127.0.0.1 localhost
root at cab23-r720-11:~# kubectl exec -it mariadb-ingress-85b8556fbc-7hg9b -n ucp -- /bin/sh
# cat resolv.conf
nameserver 10.96.0.10
search ucp.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
# cat hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.97.38.118 mariadb-ingress-85b8556fbc-7hg9b
Best Regards!
Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com
From: Tu, Qiaolin (NSB - CN/Hangzhou)
Sent: Tuesday, November 13, 2018 4:45 PM
To: 'Matthew H'; airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: RE: [Airship-discuss] Airship installation Questions
Hi,
We have just deployed the genesis node and haven't deployed the master node yet.
root at cab23-r720-11:~# cat /etc/resolv.conf
options timeout:1 attempts:1
domain local
nameserver 10.96.0.10
nameserver 10.56.126.31
nameserver 10.96.0.10
nameserver 8.8.8.8
root at cab23-r720-11:~# vi /etc/hosts
# This file is controlled by Promenade. Do not modify.
#
127.0.0.1 cab23-r720-11.local cab23-r720-11
127.0.0.1 localhost
root at cab23-r720-11:~# kubectl get pod --all-namespaces
NAMESPACE     NAME                                                         READY     STATUS      RESTARTS   AGE
ceph          airship-ucp-ceph-provisioners-ceph-ns-key-generator-npzgl   0/1       Completed   0          23h
ceph          ceph-bootstrap-mtlgj                                        0/1       Completed   0          23h
ceph          ceph-cephfs-client-key-generator-zbp65                      0/1       Completed   0          23h
ceph          ceph-cephfs-provisioner-676684f6bd-n5hjm                    1/1       Running     0          23h
ceph          ceph-mds-5f547b6fd7-sg49g                                   1/1       Running     0          23h
ceph          ceph-mds-keyring-generator-pzvs2                            0/1       Completed   0          23h
ceph          ceph-mgr-69d599864b-dqkzv                                   1/1       Running     0          23h
ceph          ceph-mgr-keyring-generator-4b6ww                            0/1       Completed   0          23h
ceph          ceph-mon-check-6db6b569b6-w5kjk                             1/1       Running     0          23h
ceph          ceph-mon-keyring-generator-bpm8q                            0/1       Completed   0          23h
ceph          ceph-mon-qqjzz                                              1/1       Running     0          23h
ceph          ceph-osd-default-83945928-qqz4c                             1/1       Running     0          23h
ceph          ceph-osd-keyring-generator-wc4rg                            0/1       Completed   0          23h
ceph          ceph-rbd-pool-9gqx4                                         0/1       Completed   0          23h
ceph          ceph-rbd-provisioner-84bc5c88c7-jstt8                       1/1       Running     0          23h
ceph          ceph-rgw-5b6645c456-tpqsq                                   1/1       Running     0          23h
ceph          ceph-rgw-storage-init-45zkv                                 0/1       Completed   0          23h
ceph          ceph-storage-keys-generator-9kxd9                           0/1       Completed   0          23h
ceph          ingress-65dc849968-96k57                                    1/1       Running     0          23h
ceph          ingress-error-pages-796b76c856-dfk5w                        1/1       Running     0          23h
kube-system   auxiliary-etcd-cab23-r720-11                                3/3       Running     0          1d
kube-system   bootstrap-armada-cab23-r720-11                              4/4       Running     0          1d
kube-system   calico-etcd-anchor-tnqrq                                    1/1       Running     0          1d
kube-system   calico-etcd-cab23-r720-11                                   1/1       Running     0          1d
kube-system   calico-kube-controllers-68f5b99d47-zh84k                    1/1       Running     0          1d
kube-system   calico-node-kbpns                                           2/2       Running     0          1d
kube-system   calico-settings-p76tq                                       0/1       Completed   0          1d
kube-system   coredns-69bc679c6f-8qxr2                                    1/1       Running     0          1d
kube-system   coredns-69bc679c6f-klts5                                    1/1       Running     0          1d
kube-system   coredns-69bc679c6f-wk27v                                    1/1       Running     0          1d
kube-system   haproxy-anchor-2bdjd                                        1/1       Running     0          1d
kube-system   haproxy-cab23-r720-11                                       1/1       Running     1          1d
kube-system   ingress-error-pages-5ccf96bf7d-42lq9                        1/1       Running     0          1d
kube-system   ingress-lrjr4                                               1/1       Running     0          1d
kube-system   kubernetes-apiserver-anchor-vbghn                           1/1       Running     0          1d
kube-system   kubernetes-apiserver-cab23-r720-11                          1/1       Running     0          23h
kube-system   kubernetes-controller-manager-anchor-kbckh                  1/1       Running     0          1d
kube-system   kubernetes-controller-manager-cab23-r720-11                 1/1       Running     0          23h
kube-system   kubernetes-etcd-anchor-x78wh                                1/1       Running     0          1d
kube-system   kubernetes-etcd-cab23-r720-11                               1/1       Running     0          1d
kube-system   kubernetes-proxy-w7tc6                                      1/1       Running     0          1d
kube-system   kubernetes-scheduler-anchor-wdqn8                           1/1       Running     0          1d
kube-system   kubernetes-scheduler-cab23-r720-11                          1/1       Running     0          23h
ucp           airship-ucp-ceph-config-ceph-ns-key-generator-xh6rj         0/1       Completed   0          23h
ucp           airship-ucp-rabbitmq-rabbitmq-0                             0/1       Init:0/2    0          5h
ucp           ingress-6cd5b89d5d-6r6q9                                    1/1       Running     0          23h
ucp           ingress-error-pages-5c97bb46bb-lnxzp                        1/1       Running     0          23h
ucp           mariadb-ingress-85b8556fbc-7hg9b                            0/1       Running     0          5h
ucp           mariadb-ingress-85b8556fbc-mrv6k                            0/1       Running     0          5h
ucp           mariadb-ingress-error-pages-64f89dc697-p47gg                1/1       Running     0          5h
ucp           mariadb-server-0                                            0/1       Init:0/2    0          5h
ucp           postgresql-0                                                0/1       Init:0/1    0          5h
Best Regards!
Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com
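The per-namespace search lists shown above are standard Kubernetes behavior rather than a misconfiguration: each pod's resolv.conf lists its own namespace first, and a fully qualified service name such as ceph-mon.ceph.svc.cluster.local should still resolve from any namespace; the search list only shortens lookups for unqualified names. A quick two-sided check (a sketch; it assumes nslookup and getent exist in the respective images, which is not guaranteed):

    # fully qualified, cross-namespace lookup from a ucp pod
    kubectl exec -it mariadb-ingress-85b8556fbc-7hg9b -n ucp -- nslookup ceph-mon.ceph.svc.cluster.local
    # the short name is only expected to work from inside the ceph namespace
    kubectl exec -it ceph-mon-qqjzz -n ceph -- getent hosts ceph-mon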
From: Matthew H
Sent: Tuesday, November 13, 2018 2:33 AM
To: Tu, Qiaolin (NSB - CN/Hangzhou); airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: Re: [Airship-discuss] Airship installation Questions
Greetings,
From your master k8s node can you resolve ceph-mon.ceph.svc.cluster.local?
Please also send the output of 'cat /etc/resolv.conf' from your k8s nodes (genesis and master node).
Thxs
________________________________
From: Tu, Qiaolin (NSB - CN/Hangzhou)
Sent: Monday, November 12, 2018 4:39 AM
To: Matthew H; airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: RE: [Airship-discuss] Airship installation Questions
Hi,
Adding the ceph rbd image related logs.
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd ls
kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd
kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd
kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd
rbd image 'kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd':
        size 5120 MB in 1280 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.113b74b0dc51
        format: 2
        features: layering
        flags:
        create_timestamp: Mon Nov 12 09:06:39 2018
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd
rbd image 'kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd':
        size 256 MB in 64 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.113c74b0dc51
        format: 2
        features: layering
        flags:
        create_timestamp: Mon Nov 12 09:06:40 2018
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd
rbd image 'kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd':
        size 5120 MB in 1280 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.113d74b0dc51
        format: 2
        features: layering
        flags:
        create_timestamp: Mon Nov 12 09:06:40 2018
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd
Watchers: none
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd
Watchers: none
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd
Watchers: none
Best Regards!
Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com
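"Watchers: none" on all three images above is the telling detail: the PVC images were provisioned in the pool, but no client (that is, no kubelet) currently has them mapped, which matches the ucp pods being stuck in Init. Two host-side checks that would confirm this, assuming the rbd CLI and the systemd journal are available on the node (a sketch, not commands from the thread):

    # kernel-mapped rbd devices on this node; empty output matches "Watchers: none"
    rbd showmapped
    # kubelet performs the map for rbd PVCs, so its log carries the real error
    journalctl -u kubelet --no-pager | grep -i rbd | tail -n 20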
From: Tu, Qiaolin (NSB - CN/Hangzhou)
Sent: Monday, November 12, 2018 5:27 PM
To: Matthew H; airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: RE: [Airship-discuss] Airship installation Questions
Hi,
Adding the ceph-mon logs and yaml files.
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph -s
  cluster:
    id:     7b7576f4-3358-4668-9112-100440079807
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum cab23-r720-11
    mgr: cab23-r720-11(active)
    mds: cephfs-1/1/1 up {0=mds-ceph-mds-5f547b6fd7-sg49g=up:active}
    osd: 1 osds: 1 up, 1 in
    rgw: 1 daemon active
  data:
    pools:   18 pools, 93 pgs
    objects: 1164 objects, 3407 bytes
    usage:   374 MB used, 1023 GB / 1023 GB avail
    pgs:     93 active+clean
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph osd tree
ID CLASS WEIGHT  TYPE NAME               STATUS REWEIGHT PRI-AFF
-1       1.00000 root default
-2       1.00000     host cab23-r720-11
 0   hdd 1.00000         osd.0               up  1.00000 1.00000
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph osd dump
epoch 219
fsid 7b7576f4-3358-4668-9112-100440079807
created 2018-11-12 08:53:17.281208
modified 2018-11-12 09:06:40.314892
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 6
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client hammer
require_osd_release luminous
pool 1 'rbd' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 last_change 219 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rbd
        removed_snaps [1~5]
pool 2 'cephfs_metadata' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 4 pgp_num 4 last_change 216 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs
pool 3 'cephfs_data' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 216 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs
pool 4 '.rgw.root' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 56 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 5 'default.rgw.control' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 68 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 6 'default.rgw.data.root' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 79 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 7 'default.rgw.gc' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 89 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 8 'default.rgw.log' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 102 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 9 'default.rgw.intent-log' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 113 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 10 'default.rgw.meta' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 124 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 11 'default.rgw.usage' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 134 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 12 'default.rgw.users.keys' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 145 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 13 'default.rgw.users.email' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 155 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 14 'default.rgw.users.swift' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 168 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 15 'default.rgw.users.uid' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 179 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 16 'default.rgw.buckets.extra' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 191 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 17 'default.rgw.buckets.index' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 4 pgp_num 4 last_change 202 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 18 'default.rgw.buckets.data' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 last_change 214 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
max_osd 1
osd.0 up in weight 1 up_from 5 up_thru 209 down_at 0 last_clean_interval [0,0) 10.23.23.11:6800/6766 10.23.23.11:6801/6766 10.23.23.11:6802/6766 10.23.23.11:6803/6766 exists,up 02d8f692-709a-45ea-9f2c-75486e16e82b
Best Regards!
Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com
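With the single-node overrides in place, every pool above is now replicated size 1 / min_size 1 on crush_rule 1, which is why a single OSD is enough for HEALTH_OK. A one-liner to confirm the override landed on all 18 pools (a sketch, run against the mon pod named above):

    # every pool should report "size: 1" on this single-OSD lab deployment
    kubectl exec -it ceph-mon-qqjzz -n ceph -- bash -c 'for p in $(ceph osd pool ls); do echo -n "$p "; ceph osd pool get $p size; done'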
From: Tu, Qiaolin (NSB - CN/Hangzhou)
Sent: Monday, November 12, 2018 4:25 PM
To: 'Matthew H'; airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: RE: [Airship-discuss] Airship installation Questions
Hi,
Thanks very much for your help. After modifying the ceph replication parameters, the ceph pods deployed successfully. The deployment then moved on to the ucp pods, which fail with the errors below. Please check the attached log for details, thanks very much!
ucp   airship-ucp-rabbitmq-rabbitmq-0                0/1   Init:0/2   0   1m
ucp   ingress-6cd5b89d5d-nmwpt                       1/1   Running    0   18m
ucp   ingress-6cd5b89d5d-nr65b                       1/1   Running    0   18m
ucp   ingress-error-pages-5c97bb46bb-2mvgm           1/1   Running    0   18m
ucp   ingress-error-pages-5c97bb46bb-wzzdz           1/1   Running    0   18m
ucp   mariadb-ingress-85b8556fbc-xpvwc               0/1   Running    0   1m
ucp   mariadb-ingress-85b8556fbc-zv72k               0/1   Running    0   1m
ucp   mariadb-ingress-error-pages-64f89dc697-2trh9   1/1   Running    0   1m
ucp   mariadb-server-0                               0/1   Init:0/2   0   1m
ucp   postgresql-0                                   0/1   Init:0/1   0   1m
root at cab23-r720-11:~# kubectl describe pod mariadb-ingress-85b8556fbc-xpvwc -n ucp
Name:         mariadb-ingress-85b8556fbc-xpvwc
Namespace:    ucp
Node:         cab23-r720-11/10.23.22.11
Start Time:   Mon, 12 Nov 2018 08:05:00 +0000
Labels:       application=mariadb
              component=ingress
              pod-template-hash=4164112967
              release_group=airship-ucp-mariadb
Annotations:  configmap-bin-hash=eb36d47d8f7d7097cf6d488a61145f76dbfe5e558edf5b802153a00fc3389f0b
              configmap-etc-hash=3f45f1d8d3ddf5a09fbcd3036cb23bffb939cfa1225f8f1a0d79b390877710c1
Status:       Running
IP:           10.97.38.125
Events:
  Type     Reason                 Age                From                    Message
  ----     ------                 ---                ----                    -------
  Normal   Scheduled              3m                 default-scheduler       Successfully assigned mariadb-ingress-85b8556fbc-xpvwc to cab23-r720-11
  Normal   SuccessfulMountVolume  3m                 kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "airship-ucp-mariadb-ingress-token-htf82"
  Normal   SuccessfulMountVolume  3m                 kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "mariadb-etc"
  Normal   SuccessfulMountVolume  3m                 kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "mariadb-bin"
  Normal   Pulled                 3m                 kubelet, cab23-r720-11  Container image "quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine
  Normal   Created                3m                 kubelet, cab23-r720-11  Created container
  Normal   Started                3m                 kubelet, cab23-r720-11  Started container
  Normal   Pulled                 3m                 kubelet, cab23-r720-11  Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0" already present on machine
  Normal   Created                3m                 kubelet, cab23-r720-11  Created container
  Normal   Started                3m                 kubelet, cab23-r720-11  Started container
  Warning  Unhealthy              26s (x16 over 2m)  kubelet, cab23-r720-11  Readiness probe failed: dial tcp 10.97.38.125:3306: getsockopt: connection refused
root at cab23-r720-11:~# kubectl describe pod postgresql-0 -n ucp
Name:         postgresql-0
Namespace:    ucp
Node:         cab23-r720-11/10.23.22.11
Start Time:   Mon, 12 Nov 2018 08:04:56 +0000
Labels:       application=postgresql
              component=server
              controller-revision-hash=postgresql-566fd45fd7
              release_group=airship-ucp-postgresql
              statefulset.kubernetes.io/pod-name=postgresql-0
Events:
  Type     Reason                  Age               From                     Message
  ----     ------                  ---               ----                     -------
  Normal   SuccessfulAttachVolume  4m                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-c66d1b6f-e64a-11e8-bb43-080027f45d2a"
  Normal   Scheduled               4m                default-scheduler        Successfully assigned postgresql-0 to cab23-r720-11
  Normal   SuccessfulMountVolume   4m                kubelet, cab23-r720-11   MountVolume.SetUp succeeded for volume "postgresql-bin"
  Normal   SuccessfulMountVolume   4m                kubelet, cab23-r720-11   MountVolume.SetUp succeeded for volume "postgresql-token-rmkq9"
  Warning  FailedMount             2m                kubelet, cab23-r720-11   MountVolume.WaitForAttach failed for volume "pvc-c66d1b6f-e64a-11e8-bb43-080027f45d2a" : fail to check rbd image status with: (exit status 22), rbd output: (2018-11-12 16:07:01.400015 7fcc31018100 -1 did not load config file, using default settings.
server name not found: ceph-mon.ceph.svc.cluster.local (Name or service not known)
unable to parse addrs in 'ceph-mon.ceph.svc.cluster.local:6789'
rbd: couldn't connect to the cluster!
)
  Warning  FailedMount             19s (x2 over 2m)  kubelet, cab23-r720-11   Unable to mount volumes for pod "postgresql-0_ucp(a46bc160-e651-11e8-bb43-080027f45d2a)": timeout expired waiting for volumes to attach or mount for pod "ucp"/"postgresql-0". list of unmounted volumes=[postgresql-data]. list of unattached volumes=[postgresql-data postgresql-bin postgresql-token-rmkq9]
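Note where the FailedMount error originates: the rbd CLI here is run by kubelet on the host, not inside any pod, so it is the host's resolver chain, not a pod's resolv.conf, that must turn ceph-mon.ceph.svc.cluster.local into a mon address. Two host-side checks (a sketch; the keyring path below is an assumption about where the admin key is provisioned on the node):

    # can the cluster DNS service answer the host directly for this name?
    nslookup ceph-mon.ceph.svc.cluster.local 10.96.0.10
    # reproduce kubelet's rbd call from the host
    rbd ls -m ceph-mon.ceph.svc.cluster.local:6789 --id admin --keyring /etc/ceph/ceph.client.admin.keyring

If the first command answers but the mount still fails, the host is likely not consulting 10.96.0.10 first for cluster.local names.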
Best Regards!
Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com
From: Matthew H
Sent: Friday, November 09, 2018 10:43 PM
To: Tu, Qiaolin (NSB - CN/Hangzhou); airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: Re: [Airship-discuss] Airship installation Questions
Thanks,
From what I can see you need additional overrides set to run Ceph on a single node. The overrides you need are here [1]. Let me know if this helps get you in the right direction.
[1] https://github.com/openstack/airship-in-a-bottle/blob/master/deployment_files/site/gate-multinode/software/charts/ucp/storage_provisioner/ceph.yaml#L173-L250
________________________________
From: Tu, Qiaolin (NSB - CN/Hangzhou)
Sent: Friday, November 9, 2018 4:51 AM
To: Matthew H; airship-discuss at lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: RE: [Airship-discuss] Airship installation Questions
Hi,
I deployed only 1 master node (1 genesis node + 1 master node); my yaml files are attached. Thanks very much!
root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd tree
ID CLASS WEIGHT  TYPE NAME               STATUS REWEIGHT PRI-AFF
-1       8.00000 root default
-2       8.00000     host cab23-r720-11
 0   hdd 1.00000         osd.0               up  1.00000 1.00000
 1   hdd 1.00000         osd.1               up  1.00000 1.00000
 2   hdd 1.00000         osd.2               up  1.00000 1.00000
 3   hdd 1.00000         osd.3               up  1.00000 1.00000
 4   hdd 1.00000         osd.4               up  1.00000 1.00000
 5   hdd 1.00000         osd.5               up  1.00000 1.00000
 6   hdd 1.00000         osd.6               up  1.00000 1.00000
 7   hdd 1.00000         osd.7               up  1.00000 1.00000
root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd dump
[...]
From matthew.heler at hotmail.com Tue Nov 13 15:58:54 2018
From: matthew.heler at hotmail.com (Matthew H)
Date: Tue, 13 Nov 2018 15:58:54 +0000
Subject: [Airship-discuss] Airship installation Questions
In-Reply-To: 
References: 
Message-ID: 

Greetings,
Can you resolve ceph-mon.ceph.svc.cluster.local from your genesis node?
dig ceph-mon.ceph.svc.cluster.local @10.96.0.10
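A slightly fuller version of that check, comparing a direct query against the cluster DNS service with whatever the host's configured resolver chain actually does (a sketch; dig requires the dnsutils package on Ubuntu):

    # direct query against the in-cluster DNS service
    dig +short ceph-mon.ceph.svc.cluster.local @10.96.0.10
    # query via the host's own /etc/resolv.conf resolvers
    getent hosts ceph-mon.ceph.svc.cluster.local || echo "host-side resolution failed"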
root at cab23-r720-11:~# cat /etc/resolv.conf options timeout:1 attempts:1 domain local nameserver 10.96.0.10 nameserver 10.56.126.31 nameserver 10.96.0.10 nameserver 8.8.8.8 root at cab23-r720-11:~# vi /etc/hosts # This file is controlled by Promenade. Do not modify. # 127.0.0.1 cab23-r720-11.local cab23-r720-11 127.0.0.1 localhost root at cab23-r720-11:~# kubectl get pod --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE ceph airship-ucp-ceph-provisioners-ceph-ns-key-generator-npzgl 0/1 Completed 0 23h ceph ceph-bootstrap-mtlgj 0/1 Completed 0 23h ceph ceph-cephfs-client-key-generator-zbp65 0/1 Completed 0 23h ceph ceph-cephfs-provisioner-676684f6bd-n5hjm 1/1 Running 0 23h ceph ceph-mds-5f547b6fd7-sg49g 1/1 Running 0 23h ceph ceph-mds-keyring-generator-pzvs2 0/1 Completed 0 23h ceph ceph-mgr-69d599864b-dqkzv 1/1 Running 0 23h ceph ceph-mgr-keyring-generator-4b6ww 0/1 Completed 0 23h ceph ceph-mon-check-6db6b569b6-w5kjk 1/1 Running 0 23h ceph ceph-mon-keyring-generator-bpm8q 0/1 Completed 0 23h ceph ceph-mon-qqjzz 1/1 Running 0 23h ceph ceph-osd-default-83945928-qqz4c 1/1 Running 0 23h ceph ceph-osd-keyring-generator-wc4rg 0/1 Completed 0 23h ceph ceph-rbd-pool-9gqx4 0/1 Completed 0 23h ceph ceph-rbd-provisioner-84bc5c88c7-jstt8 1/1 Running 0 23h ceph ceph-rgw-5b6645c456-tpqsq 1/1 Running 0 23h ceph ceph-rgw-storage-init-45zkv 0/1 Completed 0 23h ceph ceph-storage-keys-generator-9kxd9 0/1 Completed 0 23h ceph ingress-65dc849968-96k57 1/1 Running 0 23h ceph ingress-error-pages-796b76c856-dfk5w 1/1 Running 0 23h kube-system auxiliary-etcd-cab23-r720-11 3/3 Running 0 1d kube-system bootstrap-armada-cab23-r720-11 4/4 Running 0 1d kube-system calico-etcd-anchor-tnqrq 1/1 Running 0 1d kube-system calico-etcd-cab23-r720-11 1/1 Running 0 1d kube-system calico-kube-controllers-68f5b99d47-zh84k 1/1 Running 0 1d kube-system calico-node-kbpns 2/2 Running 0 1d kube-system calico-settings-p76tq 0/1 Completed 0 1d kube-system coredns-69bc679c6f-8qxr2 1/1 Running 0 1d kube-system coredns-69bc679c6f-klts5 1/1 Running 0 1d kube-system coredns-69bc679c6f-wk27v 1/1 Running 0 1d kube-system haproxy-anchor-2bdjd 1/1 Running 0 1d kube-system haproxy-cab23-r720-11 1/1 Running 1 1d kube-system ingress-error-pages-5ccf96bf7d-42lq9 1/1 Running 0 1d kube-system ingress-lrjr4 1/1 Running 0 1d kube-system kubernetes-apiserver-anchor-vbghn 1/1 Running 0 1d kube-system kubernetes-apiserver-cab23-r720-11 1/1 Running 0 23h kube-system kubernetes-controller-manager-anchor-kbckh 1/1 Running 0 1d kube-system kubernetes-controller-manager-cab23-r720-11 1/1 Running 0 23h kube-system kubernetes-etcd-anchor-x78wh 1/1 Running 0 1d kube-system kubernetes-etcd-cab23-r720-11 1/1 Running 0 1d kube-system kubernetes-proxy-w7tc6 1/1 Running 0 1d kube-system kubernetes-scheduler-anchor-wdqn8 1/1 Running 0 1d kube-system kubernetes-scheduler-cab23-r720-11 1/1 Running 0 23h ucp airship-ucp-ceph-config-ceph-ns-key-generator-xh6rj 0/1 Completed 0 23h ucp airship-ucp-rabbitmq-rabbitmq-0 0/1 Init:0/2 0 5h ucp ingress-6cd5b89d5d-6r6q9 1/1 Running 0 23h ucp ingress-error-pages-5c97bb46bb-lnxzp 1/1 Running 0 23h ucp mariadb-ingress-85b8556fbc-7hg9b 0/1 Running 0 5h ucp mariadb-ingress-85b8556fbc-mrv6k 0/1 Running 0 5h ucp mariadb-ingress-error-pages-64f89dc697-p47gg 1/1 Running 0 5h ucp mariadb-server-0 0/1 Init:0/2 0 5h ucp postgresql-0 0/1 Init:0/1 0 5h Best Regards! 
Qiaolin Tu NSB MN 5G ECE HZ CN2 SG04 Mobile: +86 138 057 59684 E-Mail: qiaolin.tu at nokia-sbell.com From: Matthew H > Sent: Tuesday, November 13, 2018 2:33 AM To: Tu, Qiaolin (NSB - CN/Hangzhou) >; airship-discuss at lists.airshipit.org Cc: Li, Maxwell (NSB - CN/Hangzhou) > Subject: Re: [Airship-discuss] Airship installation Questions Greetings, >From your master k8s node can you resolve ceph-mon.ceph.svc.cluster.local? Please also send the output of 'cat /etc/resolv.conf' from your k8s nodes (genesis and master node). Thxs ________________________________ From: Tu, Qiaolin (NSB - CN/Hangzhou) > Sent: Monday, November 12, 2018 4:39 AM To: Matthew H; airship-discuss at lists.airshipit.org Cc: Li, Maxwell (NSB - CN/Hangzhou) Subject: RE: [Airship-discuss] Airship installation Questions Hi, Add ceph rbd image related logs. root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd ls kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd rbd image 'kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd': size 5120 MB in 1280 objects order 22 (4096 kB objects) block_name_prefix: rbd_data.113b74b0dc51 format: 2 features: layering flags: create_timestamp: Mon Nov 12 09:06:39 2018 root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd rbd image 'kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd': size 256 MB in 64 objects order 22 (4096 kB objects) block_name_prefix: rbd_data.113c74b0dc51 format: 2 features: layering flags: create_timestamp: Mon Nov 12 09:06:40 2018 root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd rbd image 'kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd': size 5120 MB in 1280 objects order 22 (4096 kB objects) block_name_prefix: rbd_data.113d74b0dc51 format: 2 features: layering flags: create_timestamp: Mon Nov 12 09:06:40 2018 root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd Watchers: none root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd Watchers: none root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd Watchers: none Best Regards! Qiaolin Tu NSB MN 5G ECE HZ CN2 SG04 Mobile: +86 138 057 59684 E-Mail: qiaolin.tu at nokia-sbell.com From: Tu, Qiaolin (NSB - CN/Hangzhou) Sent: Monday, November 12, 2018 5:27 PM To: Matthew H >; airship-discuss at lists.airshipit.org Cc: Li, Maxwell (NSB - CN/Hangzhou) > Subject: RE: [Airship-discuss] Airship installation Questions Hi, Add ceph-mod logs and yaml files. 
root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph -s cluster: id: 7b7576f4-3358-4668-9112-100440079807 health: HEALTH_OK services: mon: 1 daemons, quorum cab23-r720-11 mgr: cab23-r720-11(active) mds: cephfs-1/1/1 up {0=mds-ceph-mds-5f547b6fd7-sg49g=up:active} osd: 1 osds: 1 up, 1 in rgw: 1 daemon active data: pools: 18 pools, 93 pgs objects: 1164 objects, 3407 bytes usage: 374 MB used, 1023 GB / 1023 GB avail pgs: 93 active+clean root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 1.00000 root default -2 1.00000 host cab23-r720-11 0 hdd 1.00000 osd.0 up 1.00000 1.00000 root at cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph osd dump epoch 219 fsid 7b7576f4-3358-4668-9112-100440079807 created 2018-11-12 08:53:17.281208 modified 2018-11-12 09:06:40.314892 flags sortbitwise,recovery_deletes,purged_snapdirs crush_version 6 full_ratio 0.95 backfillfull_ratio 0.9 nearfull_ratio 0.85 require_min_compat_client jewel min_compat_client hammer require_osd_release luminous pool 1 'rbd' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 last_change 219 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rbd removed_snaps [1~5] pool 2 'cephfs_metadata' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 4 pgp_num 4 last_change 216 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs pool 3 'cephfs_data' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 216 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs pool 4 '.rgw.root' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 56 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 5 'default.rgw.control' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 68 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 6 'default.rgw.data.root' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 79 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 7 'default.rgw.gc' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 89 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 8 'default.rgw.log' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 102 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 9 'default.rgw.intent-log' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 113 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 10 'default.rgw.meta' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 124 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 11 'default.rgw.usage' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 134 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 12 'default.rgw.users.keys' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 145 flags 
hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 13 'default.rgw.users.email' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 155 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 14 'default.rgw.users.swift' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 168 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 15 'default.rgw.users.uid' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 179 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 16 'default.rgw.buckets.extra' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 191 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 17 'default.rgw.buckets.index' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 4 pgp_num 4 last_change 202 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 18 'default.rgw.buckets.data' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 last_change 214 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw max_osd 1 osd.0 up in weight 1 up_from 5 up_thru 209 down_at 0 last_clean_interval [0,0) 10.23.23.11:6800/6766 10.23.23.11:6801/6766 10.23.23.11:6802/6766 10.23.23.11:6803/6766 exists,up 02d8f692-709a-45ea-9f2c-75486e16e82b Best Regards! Qiaolin Tu NSB MN 5G ECE HZ CN2 SG04 Mobile: +86 138 057 59684 E-Mail: qiaolin.tu at nokia-sbell.com From: Tu, Qiaolin (NSB - CN/Hangzhou) Sent: Monday, November 12, 2018 4:25 PM To: 'Matthew H'; airship-discuss at lists.airshipit.org Cc: Li, Maxwell (NSB - CN/Hangzhou) Subject: RE: [Airship-discuss] Airship installation Questions Hi, Thanks very much for your help. After modifying the ceph replication parameters, the ceph pods deployed successfully. It then deployed the ucp related pods, which hit the errors below. Please check the attached log for details, thanks very much! 
ucp airship-ucp-rabbitmq-rabbitmq-0 0/1 Init:0/2 0 1m ucp ingress-6cd5b89d5d-nmwpt 1/1 Running 0 18m ucp ingress-6cd5b89d5d-nr65b 1/1 Running 0 18m ucp ingress-error-pages-5c97bb46bb-2mvgm 1/1 Running 0 18m ucp ingress-error-pages-5c97bb46bb-wzzdz 1/1 Running 0 18m ucp mariadb-ingress-85b8556fbc-xpvwc 0/1 Running 0 1m ucp mariadb-ingress-85b8556fbc-zv72k 0/1 Running 0 1m ucp mariadb-ingress-error-pages-64f89dc697-2trh9 1/1 Running 0 1m ucp mariadb-server-0 0/1 Init:0/2 0 1m ucp postgresql-0 0/1 Init:0/1 0 1m root at cab23-r720-11:~# kubectl describe pod mariadb-ingress-85b8556fbc-xpvwc -n ucp Name: mariadb-ingress-85b8556fbc-xpvwc Namespace: ucp Node: cab23-r720-11/10.23.22.11 Start Time: Mon, 12 Nov 2018 08:05:00 +0000 Labels: application=mariadb component=ingress pod-template-hash=4164112967 release_group=airship-ucp-mariadb Annotations: configmap-bin-hash=eb36d47d8f7d7097cf6d488a61145f76dbfe5e558edf5b802153a00fc3389f0b configmap-etc-hash=3f45f1d8d3ddf5a09fbcd3036cb23bffb939cfa1225f8f1a0d79b390877710c1 Status: Running IP: 10.97.38.125 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 3m default-scheduler Successfully assigned mariadb-ingress-85b8556fbc-xpvwc to cab23-r720-11 Normal SuccessfulMountVolume 3m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "airship-ucp-mariadb-ingress-token-htf82" Normal SuccessfulMountVolume 3m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "mariadb-etc" Normal SuccessfulMountVolume 3m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "mariadb-bin" Normal Pulled 3m kubelet, cab23-r720-11 Container image "quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine Normal Created 3m kubelet, cab23-r720-11 Created container Normal Started 3m kubelet, cab23-r720-11 Started container Normal Pulled 3m kubelet, cab23-r720-11 Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0" already present on machine Normal Created 3m kubelet, cab23-r720-11 Created container Normal Started 3m kubelet, cab23-r720-11 Started container Warning Unhealthy 26s (x16 over 2m) kubelet, cab23-r720-11 Readiness probe failed: dial tcp 10.97.38.125:3306: getsockopt: connection refused root at cab23-r720-11:~# kubectl describe pod postgresql-0 -n ucp Name: postgresql-0 Namespace: ucp Node: cab23-r720-11/10.23.22.11 Start Time: Mon, 12 Nov 2018 08:04:56 +0000 Labels: application=postgresql component=server controller-revision-hash=postgresql-566fd45fd7 release_group=airship-ucp-postgresql statefulset.kubernetes.io/pod-name=postgresql-0 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulAttachVolume 4m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-c66d1b6f-e64a-11e8-bb43-080027f45d2a" Normal Scheduled 4m default-scheduler Successfully assigned postgresql-0 to cab23-r720-11 Normal SuccessfulMountVolume 4m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "postgresql-bin" Normal SuccessfulMountVolume 4m kubelet, cab23-r720-11 MountVolume.SetUp succeeded for volume "postgresql-token-rmkq9" Warning FailedMount 2m kubelet, cab23-r720-11 MountVolume.WaitForAttach failed for volume "pvc-c66d1b6f-e64a-11e8-bb43-080027f45d2a" : fail to check rbd image status with: (exit status 22), rbd output: (2018-11-12 16:07:01.400015 7fcc31018100 -1 did not load config file, using default settings. 
server name not found: ceph-mon.ceph.svc.cluster.local (Name or service not known) unable to parse addrs in 'ceph-mon.ceph.svc.cluster.local:6789' rbd: couldn't connect to the cluster! ) Warning FailedMount 19s (x2 over 2m) kubelet, cab23-r720-11 Unable to mount volumes for pod "postgresql-0_ucp(a46bc160-e651-11e8-bb43-080027f45d2a)": timeout expired waiting for volumes to attach or mount for pod "ucp"/"postgresql-0". list of unmounted volumes=[postgresql-data]. list of unattached volumes=[postgresql-data postgresql-bin postgresql-token-rmkq9] Best Regards! Qiaolin Tu NSB MN 5G ECE HZ CN2 SG04 Mobile: +86 138 057 59684 E-Mail: qiaolin.tu at nokia-sbell.com From: Matthew H Sent: Friday, November 09, 2018 10:43 PM To: Tu, Qiaolin (NSB - CN/Hangzhou); airship-discuss at lists.airshipit.org Cc: Li, Maxwell (NSB - CN/Hangzhou) Subject: Re: [Airship-discuss] Airship installation Questions Thanks, From what I can see, you need additional overrides set to run Ceph on a single node. The overrides you need are here [1]. Let me know if this helps get you in the right direction. [1] https://github.com/openstack/airship-in-a-bottle/blob/master/deployment_files/site/gate-multinode/software/charts/ucp/storage_provisioner/ceph.yaml#L173-L250 ________________________________ From: Tu, Qiaolin (NSB - CN/Hangzhou) Sent: Friday, November 9, 2018 4:51 AM To: Matthew H; airship-discuss at lists.airshipit.org Cc: Li, Maxwell (NSB - CN/Hangzhou) Subject: RE: [Airship-discuss] Airship installation Questions Hi, I deployed only 1 master node (1 genesis node + 1 master node); attached are my yaml files. Thanks very much! root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 8.00000 root default -2 8.00000 host cab23-r720-11 0 hdd 1.00000 osd.0 up 1.00000 1.00000 1 hdd 1.00000 osd.1 up 1.00000 1.00000 2 hdd 1.00000 osd.2 up 1.00000 1.00000 3 hdd 1.00000 osd.3 up 1.00000 1.00000 4 hdd 1.00000 osd.4 up 1.00000 1.00000 5 hdd 1.00000 osd.5 up 1.00000 1.00000 6 hdd 1.00000 osd.6 up 1.00000 1.00000 7 hdd 1.00000 osd.7 up 1.00000 1.00000 root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd dump epoch 231 fsid 7b7576f4-3358-4668-9112-100440079807 created 2018-11-07 09:08:39.208517 modified 2018-11-09 09:40:10.639284 flags sortbitwise,recovery_deletes,purged_snapdirs crush_version 21 full_ratio 0.95 backfillfull_ratio 0.9 nearfull_ratio 0.85 require_min_compat_client jewel min_compat_client hammer require_osd_release luminous pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 40 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rbd pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 223 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs pool 3 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 223 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs pool 4 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 72 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 5 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 83 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application 
rgw pool 6 'default.rgw.data.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 93 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 7 'default.rgw.gc' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 104 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 8 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 114 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 9 'default.rgw.intent-log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 124 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 10 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 135 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 11 'default.rgw.usage' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 146 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 12 'default.rgw.users.keys' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 156 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 13 'default.rgw.users.email' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 167 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 14 'default.rgw.users.swift' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 177 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 15 'default.rgw.users.uid' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 188 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 16 'default.rgw.buckets.extra' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 199 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 17 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 211 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw pool 18 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 221 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw max_osd 8 osd.0 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [8,228) 10.23.23.11:6800/15964 10.23.23.11:6814/2015964 10.23.23.11:6822/2015964 10.23.23.11:6823/2015964 exists,up fea47975-0810-47c9-ad43-e76ce81764a1 osd.1 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [9,228) 10.23.23.11:6808/16162 10.23.23.11:6807/2016162 10.23.23.11:6819/2016162 10.23.23.11:6801/2016162 exists,up cec98e14-83d5-4785-b8a7-a6f201170ac4 osd.2 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [9,228) 10.23.23.11:6804/16160 10.23.23.11:6806/2016160 10.23.23.11:6811/2016160 10.23.23.11:6834/2016160 exists,up 97315996-1cb9-4942-9786-8edc5a3862e3 osd.3 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [10,228) 10.23.23.11:6812/16588 
10.23.23.11:6815/2016588 10.23.23.11:6805/2016588 10.23.23.11:6817/2016588 exists,up 49082e4c-7827-4c4c-85c9-16ea134289b4 osd.4 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [13,228) 10.23.23.11:6816/17053 10.23.23.11:6803/2017053 10.23.23.11:6813/2017053 10.23.23.11:6821/2017053 exists,up 8f9a5a7d-c97d-40c6-912e-33b6ab68d9e7 osd.5 up in weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [16,228) 10.23.23.11:6820/17600 10.23.23.11:6810/2017600 10.23.23.11:6809/2017600 10.23.23.11:6818/2017600 exists,up b4602bfb-075f-4303-9f76-946576c4ef43 osd.6 up in weight 1 up_from 16 up_thru 212 down_at 0 last_clean_interval [0,0) 10.23.23.11:6824/17601 10.23.23.11:6825/17601 10.23.23.11:6826/17601 10.23.23.11:6827/17601 exists,up 2a853bad-7d97-43de-85f3-96e0f9e16c0d osd.7 up in weight 1 up_from 20 up_thru 212 down_at 0 last_clean_interval [0,0) 10.23.23.11:6828/18682 10.23.23.11:6829/18682 10.23.23.11:6830/18682 10.23.23.11:6831/18682 exists,up dfee9a9c-7587-421b-a0dc-eda2314174d9 root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph health detail HEALTH_WARN Reduced data availability: 338 pgs inactive; Degraded data redundancy: 338 pgs undersized PG_AVAILABILITY Reduced data availability: 338 pgs inactive pg 1.47 is stuck inactive for 174532.928425, current state undersized+peered, last acting [0] pg 1.48 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.49 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.4a is stuck inactive for 174532.928425, current state undersized+peered, last acting [4] pg 1.4b is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.4c is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.4d is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.4e is stuck inactive for 174532.928425, current state undersized+peered, last acting [4] pg 1.4f is stuck inactive for 174532.928425, current state undersized+peered, last acting [6] pg 1.50 is stuck inactive for 174532.928425, current state undersized+peered, last acting [0] pg 1.51 is stuck inactive for 174532.928425, current state undersized+peered, last acting [2] pg 1.52 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.53 is stuck inactive for 174532.928425, current state undersized+peered, last acting [6] pg 1.54 is stuck inactive for 174532.928425, current state undersized+peered, last acting [4] pg 1.55 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.56 is stuck inactive for 174532.928425, current state undersized+peered, last acting [2] pg 1.57 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.58 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.59 is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.5a is stuck inactive for 174532.928425, current state undersized+peered, last acting [5] pg 1.5b is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.5c is stuck inactive for 174532.928425, current state undersized+peered, last acting [7] pg 1.5d is stuck inactive for 174532.928425, current state undersized+peered, last acting [3] pg 1.5e is stuck inactive for 174532.928425, current state undersized+peered, last acting [0] pg 1.5f is stuck inactive 
for 174532.928425, current state undersized+peered, last acting [6] pg 18.40 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.41 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.42 is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.43 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.44 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.45 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.46 is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.47 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.48 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.49 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.4a is stuck inactive for 174337.349457, current state undersized+peered, last acting [0] pg 18.4b is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] pg 18.4c is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.4d is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.4e is stuck inactive for 174337.349457, current state undersized+peered, last acting [0] pg 18.4f is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.54 is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.55 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.58 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.59 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1] pg 18.5a is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.5b is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.5c is stuck inactive for 174337.349457, current state undersized+peered, last acting [6] pg 18.5d is stuck inactive for 174337.349457, current state undersized+peered, last acting [5] pg 18.5e is stuck inactive for 174337.349457, current state undersized+peered, last acting [4] pg 18.5f is stuck inactive for 174337.349457, current state undersized+peered, last acting [7] PG_DEGRADED Degraded data redundancy: 338 pgs undersized pg 1.47 is stuck undersized for 175.198010, current state undersized+peered, last acting [0] pg 1.48 is stuck undersized for 175.208624, current state undersized+peered, last acting [5] pg 1.49 is stuck undersized for 175.220652, current state undersized+peered, last acting [3] pg 1.4a is stuck undersized for 175.187294, current state undersized+peered, last acting [4] pg 1.4b is stuck undersized for 175.208051, current state undersized+peered, last acting [5] pg 1.4c is stuck undersized for 174531.317358, current state undersized+peered, last acting [7] pg 1.4d is stuck undersized for 174531.318742, current state undersized+peered, last acting [7] pg 1.4e is stuck undersized for 175.202431, current state undersized+peered, last acting [4] pg 1.4f is stuck undersized for 174531.331123, current state undersized+peered, last acting [6] pg 1.50 is stuck undersized for 175.207213, current state undersized+peered, last acting [0] pg 1.51 is stuck 
undersized for 175.215944, current state undersized+peered, last acting [2] pg 1.52 is stuck undersized for 175.209013, current state undersized+peered, last acting [5] pg 1.53 is stuck undersized for 174531.331587, current state undersized+peered, last acting [6] pg 1.54 is stuck undersized for 175.202884, current state undersized+peered, last acting [4] pg 1.55 is stuck undersized for 175.222033, current state undersized+peered, last acting [3] pg 1.56 is stuck undersized for 175.215670, current state undersized+peered, last acting [2] pg 1.57 is stuck undersized for 175.196356, current state undersized+peered, last acting [5] pg 1.58 is stuck undersized for 175.218886, current state undersized+peered, last acting [3] pg 1.59 is stuck undersized for 174531.316843, current state undersized+peered, last acting [7] pg 1.5a is stuck undersized for 175.209613, current state undersized+peered, last acting [5] pg 1.5b is stuck undersized for 175.219138, current state undersized+peered, last acting [3] pg 1.5c is stuck undersized for 174531.319395, current state undersized+peered, last acting [7] pg 1.5d is stuck undersized for 175.219426, current state undersized+peered, last acting [3] pg 1.5e is stuck undersized for 175.219873, current state undersized+peered, last acting [0] pg 1.5f is stuck undersized for 174531.331739, current state undersized+peered, last acting [6] pg 18.40 is stuck undersized for 175.211281, current state undersized+peered, last acting [1] pg 18.41 is stuck undersized for 174335.530906, current state undersized+peered, last acting [6] pg 18.42 is stuck undersized for 175.189316, current state undersized+peered, last acting [5] pg 18.43 is stuck undersized for 174335.529402, current state undersized+peered, last acting [6] pg 18.44 is stuck undersized for 174335.520749, current state undersized+peered, last acting [7] pg 18.45 is stuck undersized for 174335.520646, current state undersized+peered, last acting [7] pg 18.46 is stuck undersized for 175.204679, current state undersized+peered, last acting [4] pg 18.47 is stuck undersized for 175.211629, current state undersized+peered, last acting [1] pg 18.48 is stuck undersized for 175.213907, current state undersized+peered, last acting [1] pg 18.49 is stuck undersized for 174335.521334, current state undersized+peered, last acting [7] pg 18.4a is stuck undersized for 175.218699, current state undersized+peered, last acting [0] pg 18.4b is stuck undersized for 174335.527174, current state undersized+peered, last acting [7] pg 18.4c is stuck undersized for 174335.528996, current state undersized+peered, last acting [6] pg 18.4d is stuck undersized for 175.204937, current state undersized+peered, last acting [4] pg 18.4e is stuck undersized for 175.219027, current state undersized+peered, last acting [0] pg 18.4f is stuck undersized for 174335.531066, current state undersized+peered, last acting [6] pg 18.54 is stuck undersized for 175.189185, current state undersized+peered, last acting [5] pg 18.55 is stuck undersized for 174335.531222, current state undersized+peered, last acting [6] pg 18.58 is stuck undersized for 174335.530357, current state undersized+peered, last acting [6] pg 18.59 is stuck undersized for 175.204978, current state undersized+peered, last acting [1] pg 18.5a is stuck undersized for 175.192362, current state undersized+peered, last acting [5] pg 18.5b is stuck undersized for 174335.531432, current state undersized+peered, last acting [6] pg 18.5c is stuck undersized for 174335.530149, current state 
undersized+peered, last acting [6] pg 18.5d is stuck undersized for 175.191412, current state undersized+peered, last acting [5] pg 18.5e is stuck undersized for 175.205141, current state undersized+peered, last acting [4] pg 18.5f is stuck undersized for 174335.527472, current state undersized+peered, last acting [7] root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph -s cluster: id: 7b7576f4-3358-4668-9112-100440079807 health: HEALTH_WARN Reduced data availability: 338 pgs inactive Degraded data redundancy: 338 pgs undersized services: mon: 1 daemons, quorum cab23-r720-11 mgr: cab23-r720-11(active) mds: cephfs-1/1/1 up {0=mds-ceph-mds-6bfb74d9c7-gqgtl=up:creating}, 1 up:standby osd: 8 osds: 8 up, 8 in data: pools: 18 pools, 338 pgs objects: 0 objects, 0 bytes usage: 3229 MB used, 8184 GB / 8187 GB avail pgs: 100.000% pgs not active 338 undersized+peered Best Regards! Qiaolin Tu NSB MN 5G ECE HZ CN2 SG04 Mobile: +86 138 057 59684 E-Mail: qiaolin.tu at nokia-sbell.com From: Matthew H Sent: Friday, November 09, 2018 6:18 AM To: airship-discuss at lists.airshipit.org Cc: Tu, Qiaolin (NSB - CN/Hangzhou) Subject: Re: [Airship-discuss] Airship installation Questions Greetings, Could you run the following commands from a MON pod: ceph osd tree ceph osd dump Also, how many nodes did you deploy on? One node, or more than one? Thanks, From maxwell.li at nokia-sbell.com Wed Nov 14 05:49:06 2018 From: maxwell.li at nokia-sbell.com (Li, Maxwell (NSB - CN/Hangzhou)) Date: Wed, 14 Nov 2018 05:49:06 +0000 Subject: [Airship-discuss] Airship installation Questions In-Reply-To: References: Message-ID: Hi, I can resolve ceph-mon on the genesis node: root at cab23-r720-11:~# dig ceph-mon.ceph.svc.cluster.local @10.96.0.10 ; <<>> DiG 9.10.3-P4-Ubuntu <<>> ceph-mon.ceph.svc.cluster.local @10.96.0.10 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7595 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;ceph-mon.ceph.svc.cluster.local. IN A ;; ANSWER SECTION: ceph-mon.ceph.svc.cluster.local. 5 IN A 10.23.22.11 ;; Query time: 1 msec ;; SERVER: 10.96.0.10#53(10.96.0.10) ;; WHEN: Sun Nov 11 05:03:22 CST 2018 ;; MSG SIZE rcvd: 76 By the way, I found that the mariadb-ingress pods are running, but NOT READY. 
root at cab23-r720-11:~# kubectl get pods -n ucp NAME READY STATUS RESTARTS AGE airship-ucp-ceph-config-ceph-ns-key-generator-qxj6r 0/1 Completed 0 1d airship-ucp-rabbitmq-rabbitmq-0 0/1 Init:0/2 0 1d ingress-6cd5b89d5d-98kbj 1/1 Running 0 1d ingress-6cd5b89d5d-s7qsz 1/1 Running 0 1d ingress-error-pages-5c97bb46bb-2bdlx 1/1 Running 0 1d ingress-error-pages-5c97bb46bb-g62b8 1/1 Running 0 1d mariadb-ingress-85b8556fbc-8v47h 0/1 Running 0 1d mariadb-ingress-85b8556fbc-hk9qr 0/1 Running 0 1d mariadb-ingress-error-pages-64f89dc697-g2l4g 1/1 Running 0 1d mariadb-server-0 0/1 Init:0/2 0 1d mariadb-server-1 0/1 Init:0/2 0 1d mariadb-server-2 0/1 Init:0/2 0 1d postgresql-0 0/1 Init:0/1 0 1d root at cab23-r720-11:~# kubectl logs -f mariadb-ingress-85b8556fbc-hk9qr -n ucp + COMMAND=start + start + exec /usr/bin/dumb-init /nginx-ingress-controller --force-namespace-isolation --watch-namespace ucp --election-id=airship-ucp-mariadb --ingress-class=airship-ucp-mariadb-mariadb-ingress --default-backend-service=ucp/mariadb-ingress-error-pages --tcp-services-configmap=ucp/mariadb-services-tcp ------------------------------------------------------------------------------- NGINX Ingress controller Release: 0.9.0 Build: git-6816630 Repository: https://github.com/kubernetes/ingress-nginx ------------------------------------------------------------------------------- I1109 00:05:30.619177 7 flags.go:151] Watching for ingress class: airship-ucp-mariadb-mariadb-ingress W1109 00:05:30.619260 7 flags.go:154] only Ingress with class "airship-ucp-mariadb-mariadb-ingress" will be processed by this ingress controller I1109 00:05:30.620028 7 main.go:227] Creating API client for https://10.96.0.1:443 I1109 00:05:30.641766 7 main.go:239] Running in Kubernetes Cluster version v1.10 (v1.10.2) - git (clean) commit 81753b10df112992bf51bbc2c2f85208aad78335 - platform linux/amd64 I1109 00:05:30.646318 7 main.go:83] validated ucp/mariadb-ingress-error-pages as the default backend I1109 00:05:30.989789 7 stat_collector.go:77] starting new nginx stats collector for Ingress controller running in namespace ucp (class airship-ucp-mariadb-mariadb-ingress) I1109 00:05:30.989841 7 stat_collector.go:78] collector extracting information from port 18080 I1109 00:05:31.006073 7 nginx.go:250] starting Ingress controller I1109 00:05:31.009821 7 listers.go:69] ignoring add for ingress airship-ucp--mgr-8e72c0 based on annotation kubernetes.io/ingress.class with value nginx I1109 00:05:31.009861 7 listers.go:69] ignoring add for ingress ucp-airship-ingress based on annotation kubernetes.io/ingress.class with value nginx-cluster I1109 00:05:31.106538 7 nginx.go:255] running initial sync of secrets I1109 00:05:31.106894 7 nginx.go:261] ignoring add for ingress airship-ucp--mgr-8e72c0 based on annotation kubernetes.io/ingress.class with value nginx I1109 00:05:31.106989 7 nginx.go:261] ignoring add for ingress ucp-airship-ingress based on annotation kubernetes.io/ingress.class with value nginx-cluster I1109 00:05:31.107145 7 nginx.go:288] starting NGINX process... I1109 00:05:31.107192 7 leaderelection.go:174] attempting to acquire leader lease... 
I1109 00:05:31.120826 7 status.go:196] new leader elected: mariadb-ingress-85b8556fbc-4r7vb W1109 00:05:31.137271 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP I1109 00:05:31.137361 7 controller.go:211] backend reload required I1109 00:05:31.137591 7 stat_collector.go:34] changing prometheus collector from to default I1109 00:05:31.232578 7 controller.go:220] ingress backend successfully reloaded... W1109 00:05:40.746197 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP W1109 00:05:55.991850 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP I1109 00:06:01.847400 7 leaderelection.go:184] successfully acquired lease ucp/airship-ucp-mariadb-airship-ucp-mariadb-mariadb-ingress I1109 00:06:01.847433 7 status.go:196] new leader elected: mariadb-ingress-85b8556fbc-hk9qr W1109 00:15:31.010319 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP W1109 00:22:36.016006 7 reflector.go:334] k8s.io/ingress-nginx/internal/ingress/controller/listers.go:46: watch of *v1.Endpoints ended with: too old resource version: 25309 (25334) W1109 00:25:31.010617 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP W1109 00:35:31.011230 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP W1109 00:44:28.051272 7 reflector.go:334] k8s.io/ingress-nginx/internal/ingress/controller/listers.go:46: watch of *v1.Endpoints ended with: too old resource version: 27637 (28265) W1109 00:45:31.011379 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP W1109 00:55:31.012008 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP W1109 00:55:34.323748 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP W1109 01:02:04.084532 7 reflector.go:334] k8s.io/ingress-nginx/internal/ingress/controller/listers.go:46: watch of *v1.Endpoints ended with: too old resource version: 30554 (30646) W1109 01:05:31.012255 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP W1109 01:15:31.012528 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP W1109 01:15:34.324431 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP W1109 01:20:54.121121 7 reflector.go:334] k8s.io/ingress-nginx/internal/ingress/controller/listers.go:46: watch of *v1.Endpoints ended with: too old resource version: 32819 (33059) W1109 01:25:31.012743 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP W1109 01:35:31.013183 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP W1109 01:39:48.142735 7 reflector.go:334] k8s.io/ingress-nginx/internal/ingress/controller/listers.go:46: watch of *v1.Endpoints ended with: too old resource version: 35215 (35473) W1109 01:45:31.013443 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP W1109 01:55:31.013670 7 controller.go:342] 
service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP W1109 02:05:29.154286 7 reflector.go:334] k8s.io/ingress-nginx/internal/ingress/controller/listers.go:46: watch of *v1.Endpoints ended with: too old resource version: 37485 (38558) W1109 02:05:31.014193 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP W1109 02:15:31.014413 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP W1109 02:25:31.015156 7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP Is it healthy? Best Regards! Maxwell Li From: Matthew H Sent: Tuesday, November 13, 2018 11:59 PM To: Tu, Qiaolin (NSB - CN/Hangzhou); airship-discuss at lists.airshipit.org Cc: Li, Maxwell (NSB - CN/Hangzhou) Subject: Re: [Airship-discuss] Airship installation Questions Greetings, Can you resolve ceph-mon.ceph.svc.cluster.local from your genesis node? dig ceph-mon.ceph.svc.cluster.local @10.96.0.10 ________________________________ From: Tu, Qiaolin (NSB - CN/Hangzhou) Sent: Tuesday, November 13, 2018 4:15 AM To: Matthew H; airship-discuss at lists.airshipit.org Cc: Li, Maxwell (NSB - CN/Hangzhou) Subject: RE: [Airship-discuss] Airship installation Questions Hi, I checked the resolv.conf of the ceph-mon pod and the ucp mariadb-ingress pod. It seems pods in the ceph namespace use the search domains ceph.svc.cluster.local svc.cluster.local cluster.local, while pods in the ucp namespace only use ucp.svc.cluster.local svc.cluster.local cluster.local. Thanks very much! root at cab23-r720-11:~# kubectl exec -it ceph-mon-qqjzz -n ceph -- /bin/sh # cat resolv.conf nameserver 10.96.0.10 search ceph.svc.cluster.local svc.cluster.local cluster.local options ndots:5 # cat hosts # This file is controlled by Promenade. Do not modify. # 127.0.0.1 cab23-r720-11.local cab23-r720-11 127.0.0.1 localhost root at cab23-r720-11:~# kubectl exec -it mariadb-ingress-85b8556fbc-7hg9b -n ucp -- /bin/sh # cat resolv.conf nameserver 10.96.0.10 search ucp.svc.cluster.local svc.cluster.local cluster.local options ndots:5 # cat hosts # Kubernetes-managed hosts file. 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet fe00::0 ip6-mcastprefix fe00::1 ip6-allnodes fe00::2 ip6-allrouters 10.97.38.118 mariadb-ingress-85b8556fbc-7hg9b Best Regards! Qiaolin Tu NSB MN 5G ECE HZ CN2 SG04 Mobile: +86 138 057 59684 E-Mail: qiaolin.tu at nokia-sbell.com From: Tu, Qiaolin (NSB - CN/Hangzhou) Sent: Tuesday, November 13, 2018 4:45 PM To: 'Matthew H'; airship-discuss at lists.airshipit.org Cc: Li, Maxwell (NSB - CN/Hangzhou) Subject: RE: [Airship-discuss] Airship installation Questions Hi, We have just deployed the genesis node and haven't deployed the master node yet. root at cab23-r720-11:~# cat /etc/resolv.conf options timeout:1 attempts:1 domain local nameserver 10.96.0.10 nameserver 10.56.126.31 nameserver 10.96.0.10 nameserver 8.8.8.8 root at cab23-r720-11:~# vi /etc/hosts # This file is controlled by Promenade. Do not modify. 
# 127.0.0.1 cab23-r720-11.local cab23-r720-11 127.0.0.1 localhost root at cab23-r720-11:~# kubectl get pod --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE ceph airship-ucp-ceph-provisioners-ceph-ns-key-generator-npzgl 0/1 Completed 0 23h ceph ceph-bootstrap-mtlgj 0/1 Completed 0 23h ceph ceph-cephfs-client-key-generator-zbp65 0/1 Completed 0 23h ceph ceph-cephfs-provisioner-676684f6bd-n5hjm 1/1 Running 0 23h ceph ceph-mds-5f547b6fd7-sg49g 1/1 Running 0 23h ceph ceph-mds-keyring-generator-pzvs2 0/1 Completed 0 23h ceph ceph-mgr-69d599864b-dqkzv 1/1 Running 0 23h ceph ceph-mgr-keyring-generator-4b6ww 0/1 Completed 0 23h ceph ceph-mon-check-6db6b569b6-w5kjk 1/1 Running 0 23h ceph ceph-mon-keyring-generator-bpm8q 0/1 Completed 0 23h ceph ceph-mon-qqjzz 1/1 Running 0 23h ceph ceph-osd-default-83945928-qqz4c 1/1 Running 0 23h ceph ceph-osd-keyring-generator-wc4rg 0/1 Completed 0 23h ceph ceph-rbd-pool-9gqx4 0/1 Completed 0 23h ceph ceph-rbd-provisioner-84bc5c88c7-jstt8 1/1 Running 0 23h ceph ceph-rgw-5b6645c456-tpqsq 1/1 Running 0 23h ceph ceph-rgw-storage-init-45zkv 0/1 Completed 0 23h ceph ceph-storage-keys-generator-9kxd9 0/1 Completed 0 23h ceph ingress-65dc849968-96k57 1/1 Running 0 23h ceph ingress-error-pages-796b76c856-dfk5w 1/1 Running 0 23h kube-system auxiliary-etcd-cab23-r720-11 3/3 Running 0 1d kube-system bootstrap-armada-cab23-r720-11 4/4 Running 0 1d kube-system calico-etcd-anchor-tnqrq 1/1 Running 0 1d kube-system calico-etcd-cab23-r720-11 1/1 Running 0 1d kube-system calico-kube-controllers-68f5b99d47-zh84k 1/1 Running 0 1d kube-system calico-node-kbpns 2/2 Running 0 1d kube-system calico-settings-p76tq 0/1 Completed 0 1d kube-system coredns-69bc679c6f-8qxr2 1/1 Running 0 1d kube-system coredns-69bc679c6f-klts5 1/1 Running 0 1d kube-system coredns-69bc679c6f-wk27v 1/1 Running 0 1d kube-system haproxy-anchor-2bdjd 1/1 Running 0 1d kube-system haproxy-cab23-r720-11 1/1 Running 1 1d kube-system ingress-error-pages-5ccf96bf7d-42lq9 1/1 Running 0 1d kube-system ingress-lrjr4 1/1 Running 0 1d kube-system kubernetes-apiserver-anchor-vbghn 1/1 Running 0 1d kube-system kubernetes-apiserver-cab23-r720-11 1/1 Running 0 23h kube-system kubernetes-controller-manager-anchor-kbckh 1/1 Running 0 1d kube-system kubernetes-controller-manager-cab23-r720-11 1/1 Running 0 23h kube-system kubernetes-etcd-anchor-x78wh 1/1 Running 0 1d kube-system kubernetes-etcd-cab23-r720-11 1/1 Running 0 1d kube-system kubernetes-proxy-w7tc6 1/1 Running 0 1d kube-system kubernetes-scheduler-anchor-wdqn8 1/1 Running 0 1d kube-system kubernetes-scheduler-cab23-r720-11 1/1 Running 0 23h ucp airship-ucp-ceph-config-ceph-ns-key-generator-xh6rj 0/1 Completed 0 23h ucp airship-ucp-rabbitmq-rabbitmq-0 0/1 Init:0/2 0 5h ucp ingress-6cd5b89d5d-6r6q9 1/1 Running 0 23h ucp ingress-error-pages-5c97bb46bb-lnxzp 1/1 Running 0 23h ucp mariadb-ingress-85b8556fbc-7hg9b 0/1 Running 0 5h ucp mariadb-ingress-85b8556fbc-mrv6k 0/1 Running 0 5h ucp mariadb-ingress-error-pages-64f89dc697-p47gg 1/1 Running 0 5h ucp mariadb-server-0 0/1 Init:0/2 0 5h ucp postgresql-0 0/1 Init:0/1 0 5h Best Regards! Qiaolin Tu NSB MN 5G ECE HZ CN2 SG04 Mobile: +86 138 057 59684 E-Mail: qiaolin.tu at nokia-sbell.com 
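The failing component in the postgresql-0 FailedMount events earlier in this thread is kubelet, which runs on the host and therefore resolves ceph-mon.ceph.svc.cluster.local through the host's /etc/resolv.conf rather than through a pod's. A quick way to separate the two resolver paths from the genesis host is sketched below; the 10.96.0.10 cluster DNS address and the service name are taken from the outputs above, and getent is only a stand-in for whatever lookup kubelet performs.

# Query the cluster DNS directly; this is the same lookup as the dig
# above and is expected to answer:
dig +short ceph-mon.ceph.svc.cluster.local @10.96.0.10

# Resolve the same name through the host's normal resolver path, which
# is roughly what kubelet/rbd do during MountVolume.WaitForAttach:
getent hosts ceph-mon.ceph.svc.cluster.local

If the direct query answers while the host-path lookup fails, the host resolver configuration is the likely culprit behind the "server name not found: ceph-mon.ceph.svc.cluster.local" mount error, even though in-cluster pods (which get the cluster.local search domains shown above) resolve the name fine.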
undersized for 175.215944, current state undersized+peered, last acting [2] pg 1.52 is stuck undersized for 175.209013, current state undersized+peered, last acting [5] pg 1.53 is stuck undersized for 174531.331587, current state undersized+peered, last acting [6] pg 1.54 is stuck undersized for 175.202884, current state undersized+peered, last acting [4] pg 1.55 is stuck undersized for 175.222033, current state undersized+peered, last acting [3] pg 1.56 is stuck undersized for 175.215670, current state undersized+peered, last acting [2] pg 1.57 is stuck undersized for 175.196356, current state undersized+peered, last acting [5] pg 1.58 is stuck undersized for 175.218886, current state undersized+peered, last acting [3] pg 1.59 is stuck undersized for 174531.316843, current state undersized+peered, last acting [7] pg 1.5a is stuck undersized for 175.209613, current state undersized+peered, last acting [5] pg 1.5b is stuck undersized for 175.219138, current state undersized+peered, last acting [3] pg 1.5c is stuck undersized for 174531.319395, current state undersized+peered, last acting [7] pg 1.5d is stuck undersized for 175.219426, current state undersized+peered, last acting [3] pg 1.5e is stuck undersized for 175.219873, current state undersized+peered, last acting [0] pg 1.5f is stuck undersized for 174531.331739, current state undersized+peered, last acting [6] pg 18.40 is stuck undersized for 175.211281, current state undersized+peered, last acting [1] pg 18.41 is stuck undersized for 174335.530906, current state undersized+peered, last acting [6] pg 18.42 is stuck undersized for 175.189316, current state undersized+peered, last acting [5] pg 18.43 is stuck undersized for 174335.529402, current state undersized+peered, last acting [6] pg 18.44 is stuck undersized for 174335.520749, current state undersized+peered, last acting [7] pg 18.45 is stuck undersized for 174335.520646, current state undersized+peered, last acting [7] pg 18.46 is stuck undersized for 175.204679, current state undersized+peered, last acting [4] pg 18.47 is stuck undersized for 175.211629, current state undersized+peered, last acting [1] pg 18.48 is stuck undersized for 175.213907, current state undersized+peered, last acting [1] pg 18.49 is stuck undersized for 174335.521334, current state undersized+peered, last acting [7] pg 18.4a is stuck undersized for 175.218699, current state undersized+peered, last acting [0] pg 18.4b is stuck undersized for 174335.527174, current state undersized+peered, last acting [7] pg 18.4c is stuck undersized for 174335.528996, current state undersized+peered, last acting [6] pg 18.4d is stuck undersized for 175.204937, current state undersized+peered, last acting [4] pg 18.4e is stuck undersized for 175.219027, current state undersized+peered, last acting [0] pg 18.4f is stuck undersized for 174335.531066, current state undersized+peered, last acting [6] pg 18.54 is stuck undersized for 175.189185, current state undersized+peered, last acting [5] pg 18.55 is stuck undersized for 174335.531222, current state undersized+peered, last acting [6] pg 18.58 is stuck undersized for 174335.530357, current state undersized+peered, last acting [6] pg 18.59 is stuck undersized for 175.204978, current state undersized+peered, last acting [1] pg 18.5a is stuck undersized for 175.192362, current state undersized+peered, last acting [5] pg 18.5b is stuck undersized for 174335.531432, current state undersized+peered, last acting [6] pg 18.5c is stuck undersized for 174335.530149, current state 
undersized+peered, last acting [6] pg 18.5d is stuck undersized for 175.191412, current state undersized+peered, last acting [5] pg 18.5e is stuck undersized for 175.205141, current state undersized+peered, last acting [4] pg 18.5f is stuck undersized for 174335.527472, current state undersized+peered, last acting [7]

root at cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph -s
cluster:
id: 7b7576f4-3358-4668-9112-100440079807
health: HEALTH_WARN
Reduced data availability: 338 pgs inactive
Degraded data redundancy: 338 pgs undersized
services:
mon: 1 daemons, quorum cab23-r720-11
mgr: cab23-r720-11(active)
mds: cephfs-1/1/1 up {0=mds-ceph-mds-6bfb74d9c7-gqgtl=up:creating}, 1 up:standby
osd: 8 osds: 8 up, 8 in
data:
pools: 18 pools, 338 pgs
objects: 0 objects, 0 bytes
usage: 3229 MB used, 8184 GB / 8187 GB avail
pgs: 100.000% pgs not active
338 undersized+peered

Best Regards!
Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile: +86 138 057 59684
E-Mail: qiaolin.tu at nokia-sbell.com

From: Matthew H
Sent: Friday, November 09, 2018 6:18 AM
To: airship-discuss at lists.airshipit.org
Cc: Tu, Qiaolin (NSB - CN/Hangzhou)
Subject: Re: [Airship-discuss] Airship installation Questions

Greetings,

Could you run the following commands from a MON pod:

ceph osd tree
ceph osd dump

Also, how many nodes did you deploy on? One node, or more than one?

Thanks,

From rp2723 at att.com Wed Nov 14 18:39:08 2018
From: rp2723 at att.com (PACHECO, RODOLFO J)
Date: Wed, 14 Nov 2018 18:39:08 +0000
Subject: [Airship-discuss] Reminder - Airship - Open Design Call - Next call Nov 29, 2018
Message-ID:

REMINDER – CANCELLED CALLS:
Nov 15, 2018 – Cancelled; many attendees are at the OpenStack Summit.
Nov 22, 2018 – Cancelled; it is the Thanksgiving holiday in the US.

From santosh.thapamager at as.ntt-at.co.jp Fri Nov 16 04:30:44 2018
From: santosh.thapamager at as.ntt-at.co.jp (santosh.thapamager)
Date: Fri, 16 Nov 2018 13:30:44 +0900
Subject: [Airship-discuss] Deployment Behind Proxy: dev_minimal, dev_single_node
Message-ID: <000d01d47d65$23d68fe0$6b83afa0$@as.ntt-at.co.jp>

Hi There! A newbie here.

I have some questions regarding deploying Airship-in-a-Bottle behind a proxy. Correct me if my understanding is wrong. I am trying to install dev_minimal and dev_single_node behind a corporate proxy.

For dev_minimal: I have configured the proxy settings for dev_minimal as described here:
https://git.airshipit.org/cgit/airship-in-a-bottle/tree/manifests/dev_minimal/README.txt
However, I am getting the following issue.

Mon Nov 12 17:10:02 JST 2018 === Waiting for Kubernetes API availablity ===
+ wait_for_kubernetes_api 3600
+ set +x
Mon Nov 12 17:10:02 JST 2018 Waiting 3600 seconds for API response.
Unable to connect to the server: EOF
.Unable to connect to the server: EOF
.Unable to connect to the server: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
.Unable to connect to the server: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

I would be very grateful for any guidance regarding this issue.

For dev_single_node: I could not find any instructions for setting it up behind a proxy, so I followed the same instructions as for dev_minimal. With this I am getting the docker error below.
Mon Nov 15 17:10:02 JST 2018 === Waiting for Kubernetes API availablity ===
+ wait_for_kubernetes_api 3600
+ set +x
Mon Nov 15 17:10:02 JST 2018 Waiting 3600 seconds for API response.
Unable to find image 'gcr.io/google_containers/hyperkube-amd64:v1.10.2' locally
docker: 'Error response from daemon: Get https://gcr.io/v1/_ping : dial tcp: lookup gcr.io on 8.8.4.4:53: read udp xx.xx.xx.xx:xxxx -> 8.8.4.4:53" i/o timeout'
See docker run --help

As per the error, I think docker is not configured to run behind a proxy. Can we currently install dev_single_node behind a proxy? I would be very grateful if you can help me solve the above-mentioned issues.

Thanking you,
Santosh Thapa Magar
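For reference, the docker error above is the classic symptom of the Docker daemon (which performs image pulls, not the calling shell) lacking proxy settings. A common sketch for Ubuntu 16.04 with systemd follows; the proxy URL is a placeholder and the NO_PROXY entries are assumptions that should cover any cluster-local addresses:

# Hypothetical systemd drop-in configuring the Docker daemon for a proxy:
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,.svc.cluster.local"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

The earlier kubectl "Unable to connect to the server: EOF" symptom in the dev_minimal run is often the mirror image: the Kubernetes API address missing from no_proxy in the calling shell.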
From rp2723 at att.com Mon Nov 26 16:10:18 2018
From: rp2723 at att.com (PACHECO, RODOLFO J)
Date: Mon, 26 Nov 2018 16:10:18 +0000
Subject: [Airship-discuss] Airship - Open Design Call - PM EST Call
Message-ID: <99088997CCAD0C4BA20008FD0943449632655150@MISOUT7MSGUSRDI.ITServices.sbc.com>

When: Occurs on Thursday every other week from 5:00 PM to 6:00 PM effective 11/29/2018 until 12/20/2018. (UTC-05:00) Eastern Time (US & Canada)
Where: https://bluejeans.com/421706689

*~*~*~*~*~*~*~*~*~*
Reminder: we will resume Airship Design Calls this week.
* Etherpad for the Airship Open Design discussion: https://etherpad.openstack.org/p/Airship_OpenDesignDiscussions
* Storyboard in-flight specs: https://storyboard.openstack.org/#!/project/openstack/airship-specs
* Github Airship specs: https://github.com/openstack/airship-specs/tree/master/specs
* In-flight/reviewing specs: https://review.openstack.org/#/q/status:open+airship-specs
__________________________________________
To join the Meeting: https://bluejeans.com/421706689
To join via Room System: Video Conferencing System: bjn.vc -or- 199.48.152.152, Meeting ID: 421706689
To join via phone: 1) Dial: +1.408.317.9254 (BlueJeans U.S. Toll) or +1.866.226.4650 (US Toll Free) (see all numbers - http://bluejeans.com/numbers) 2) Enter Conference ID: 421706689

From dd7022 at att.com Tue Nov 27 22:35:19 2018
From: dd7022 at att.com (KATARIA, DEEPAK)
Date: Tue, 27 Nov 2018 22:35:19 +0000
Subject: [Airship-discuss] keystone-rabbit-init failing forever
Message-ID: <90BF8249EF30DB4A83C0F4D241F3D72C27A774E6@MISOUT7MSGUSRCC.ITServices.sbc.com>

Dear Airship Team,

We are using Airship v18.11.01. prom-gen.sh succeeds, but genesis.sh fails; here is the error. Please see the additional information below.

2018-11-17 00:47:44.809 5176 ERROR armada.handlers.tiller [-] [chart=ucp-keystone]: Error while installing release airship-ucp-keystone: grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, release airship-ucp-keystone failed: timed out waiting for the condition)>
2018-11-17 00:47:44.809 5176 ERROR armada.handlers.tiller Traceback (most recent call last):
2018-11-17 00:47:44.809 5176 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/site-packages/armada/handlers/tiller.py", line 455, in install_release
2018-11-17 00:47:44.809 5176 ERROR armada.handlers.tiller metadata=self.metadata)
2018-11-17 00:47:44.809 5176 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/site-packages/grpc/_channel.py", line 487, in __call__
2018-11-17 00:47:44.809 5176 ERROR armada.handlers.tiller return _end_unary_response_blocking(state, call, False, deadline)
2018-11-17 00:47:44.809 5176 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/site-packages/grpc/_channel.py", line 437, in _end_unary_response_blocking
2018-11-17 00:47:44.809 5176 ERROR armada.handlers.tiller raise _Rendezvous(state, None, None, deadline)
2018-11-17 00:47:44.809 5176 ERROR armada.handlers.tiller grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, release airship-ucp-keystone failed: timed out waiting for the condition)>
2018-11-17 00:47:44.809 5176 ERROR armada.handlers.tiller ^[[00m
2018-11-17 00:47:45.104 5176 ERROR armada.handlers.armada [-] Chart deploy [ucp-keystone] failed: Failed to Install release: airship-ucp-keystone - Tiller Message: b'Release "airship-ucp-keystone" failed: timed out waiting for the condition': armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: airship-ucp-keystone - Tiller Message: b'Release "airship-ucp-keystone" failed: timed out waiting for the condition'^[[00m
2018-11-17 00:47:45.105 5176 ERROR armada.handlers.armada [-] Chart deploy(s) failed: ['ucp-keystone']^[[00m
2018-11-17 00:47:45.105 5176 ERROR armada.cli [-] Caught internal exception: armada.exceptions.armada_exceptions.ChartDeployException: Exception deploying charts: ['ucp-keystone']
2018-11-17 00:47:45.105 5176 ERROR armada.cli Traceback (most recent call last):
2018-11-17 00:47:45.105 5176 ERROR armada.cli File "/usr/local/lib/python3.6/site-packages/armada/cli/__init__.py", line 39, in safe_invoke
2018-11-17 00:47:45.105 5176 ERROR armada.cli self.invoke()
2018-11-17 00:47:45.105 5176 ERROR armada.cli File "/usr/local/lib/python3.6/site-packages/armada/cli/apply.py", line 216, in invoke
2018-11-17 00:47:45.105 5176 ERROR armada.cli resp = armada.sync()
2018-11-17 00:47:45.105 5176 ERROR armada.cli File "/usr/local/lib/python3.6/site-packages/armada/handlers/armada.py", line 270, in sync
2018-11-17 00:47:45.105 5176 ERROR armada.cli raise armada_exceptions.ChartDeployException(failures)
2018-11-17 00:47:45.105 5176 ERROR armada.cli armada.exceptions.armada_exceptions.ChartDeployException: Exception deploying charts: ['ucp-keystone']
2018-11-17 00:47:45.105 5176 ERROR armada.cli ^[[00m

Here is some more information that may help:

root at aknode30:~# kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE ceph airship-ucp-ceph-provisioners-ceph-ns-key-generator-xv9sc 0/1 Completed 0 1h ceph ceph-bootstrap-82qx2 0/1 Completed 0 1h ceph ceph-cephfs-client-key-generator-26qhs 0/1 Completed 0 1h ceph ceph-cephfs-provisioner-676684f6bd-kfs9m 1/1 Running 0 1h ceph ceph-cephfs-provisioner-676684f6bd-wbwl9
1/1 Running 0 1h ceph ceph-mds-869b96cd5f-wtwn2 1/1 Running 0 1h ceph ceph-mds-keyring-generator-fwl2l 0/1 Completed 0 1h ceph ceph-mgr-6dd7fcc86b-q4j9l 1/1 Running 0 1h ceph ceph-mgr-keyring-generator-72p4q 0/1 Completed 0 1h ceph ceph-mon-check-6db6b569b6-z4gst 1/1 Running 0 1h ceph ceph-mon-keyring-generator-jzg6l 0/1 Completed 0 1h ceph ceph-mon-pw7mt 1/1 Running 0 1h ceph ceph-osd-default-64779b8c-s8x2s 1/1 Running 0 1h ceph ceph-osd-default-6ea9de2c-r5c2g 1/1 Running 0 1h ceph ceph-osd-default-70a54190-lmrj2 1/1 Running 0 1h ceph ceph-osd-default-7544b6da-86z5c 1/1 Running 0 1h ceph ceph-osd-default-7cfc44c1-47sxn 1/1 Running 0 1h ceph ceph-osd-default-83945928-fv9gb 1/1 Running 0 1h ceph ceph-osd-default-be8e8cc4-2x5fd 1/1 Running 0 1h ceph ceph-osd-default-f9249fa9-2mszd 1/1 Running 0 1h ceph ceph-osd-keyring-generator-wf97s 0/1 Completed 0 1h ceph ceph-rbd-pool-tt69r 0/1 Completed 0 1h ceph ceph-rbd-provisioner-84bc5c88c7-hmxhb 1/1 Running 0 1h ceph ceph-rbd-provisioner-84bc5c88c7-t5665 1/1 Running 0 1h ceph ceph-rgw-d6878bcc7-btm5v 1/1 Running 0 1h ceph ceph-rgw-d6878bcc7-v2smd 1/1 Running 0 1h ceph ceph-rgw-storage-init-xpbml 0/1 Completed 0 1h ceph ceph-storage-keys-generator-p6qnc 0/1 Completed 0 1h ceph ingress-65dc849968-h562n 1/1 Running 0 1h ceph ingress-65dc849968-rhjz2 1/1 Running 0 1h ceph ingress-error-pages-796b76c856-9775r 1/1 Running 0 1h ceph ingress-error-pages-796b76c856-jwpnx 1/1 Running 0 1h kube-system airship-kubernetes-calico-etcd-etcd-test 0/1 Completed 0 1h kube-system auxiliary-etcd-aknode30 3/3 Running 0 1h kube-system bootstrap-armada-aknode30 4/4 Running 0 1h kube-system calico-etcd-aknode30 1/1 Running 0 1h kube-system calico-etcd-anchor-g8l5s 1/1 Running 0 1h kube-system calico-kube-controllers-6ddd8598f-mk8md 1/1 Running 0 1h kube-system calico-node-4lv6w 2/2 Running 0 1h kube-system calico-settings-d62l6 0/1 CrashLoopBackOff 21 1h kube-system coredns-7d69b6b56c-dbbxw 1/1 Running 0 1h kube-system coredns-7d69b6b56c-x9f6j 1/1 Running 0 1h kube-system coredns-7d69b6b56c-xrkxc 1/1 Running 0 1h kube-system haproxy-aknode30 1/1 Running 1 1h kube-system haproxy-anchor-t8zjg 1/1 Running 0 1h kube-system ingress-error-pages-5ccf96bf7d-dbcxq 1/1 Running 0 1h kube-system ingress-error-pages-5ccf96bf7d-zh8fq 1/1 Running 0 1h kube-system ingress-hfzfs 1/1 Running 0 1h kube-system kubernetes-apiserver-aknode30 1/1 Running 0 1h kube-system kubernetes-apiserver-anchor-g97js 1/1 Running 0 1h kube-system kubernetes-controller-manager-aknode30 1/1 Running 0 1h kube-system kubernetes-controller-manager-anchor-7kb5h 1/1 Running 0 1h kube-system kubernetes-etcd-aknode30 1/1 Running 0 1h kube-system kubernetes-etcd-anchor-nhcwf 1/1 Running 0 1h kube-system kubernetes-proxy-mppl4 1/1 Running 0 1h kube-system kubernetes-scheduler-aknode30 1/1 Running 0 1h kube-system kubernetes-scheduler-anchor-f7v8k 1/1 Running 0 1h ucp airship-ucp-ceph-config-ceph-ns-key-generator-7h28c 0/1 Completed 0 1h ucp airship-ucp-keystone-memcached-memcached-74d79d8896-j6m46 1/1 Running 0 58m ucp airship-ucp-rabbitmq-rabbitmq-0 1/1 Running 0 1h ucp airship-ucp-rabbitmq-rabbitmq-1 1/1 Running 0 59m ucp airship-ucp-rabbitmq-rabbitmq-2 1/1 Running 0 58m ucp ingress-6cd5b89d5d-gl9kl 1/1 Running 0 1h ucp ingress-6cd5b89d5d-jcjjk 1/1 Running 0 1h ucp ingress-error-pages-5c97bb46bb-djgzj 1/1 Running 0 1h ucp ingress-error-pages-5c97bb46bb-s2wr5 1/1 Running 0 1h ucp keystone-api-56c8985585-cmv8l 0/1 Init:0/1 0 4m ucp keystone-api-56c8985585-dhdrk 0/1 Init:0/1 0 4m ucp keystone-bootstrap-mqbxh 0/1 
Init:0/1 0 4m ucp keystone-credential-setup-78c7s 0/1 Completed 0 4m ucp keystone-db-init-pv5ns 0/1 Completed 0 4m ucp keystone-db-sync-s9x2l 0/1 Init:0/1 0 4m ucp keystone-domain-manage-wtxsr 0/1 Init:0/2 0 4m ucp keystone-fernet-rotate-1542412800-zs299 0/1 Completed 0 52m ucp keystone-fernet-setup-z5bq5 0/1 Completed 0 4m ucp keystone-rabbit-init-29q6j 0/1 CrashLoopBackOff 5 4m ucp mariadb-ingress-56df696f99-hltbl 1/1 Running 0 1h ucp mariadb-ingress-56df696f99-tntgp 1/1 Running 0 1h ucp mariadb-ingress-error-pages-67db6bf8df-qsn8m 1/1 Running 0 1h ucp mariadb-server-0 1/1 Running 0 1h ucp mariadb-server-1 1/1 Running 0 1h ucp mariadb-server-2 1/1 Running 0 1h ucp postgresql-0 1/1 Running 0 1h root at aknode30:~# kubectl logs -n ucp keystone-rabbit-init-29q6j Managing: User: keystone user declared Managing: vHost: openstack vhost declared Managing: Permissions: keystone on openstack permission declared Applying additional configuration *** Please create virtual host "keystone" prior to importing definitions. root at aknode30:~# kubectl logs -n ucp keystone-rabbit-init-29q6j Managing: User: keystone user declared Managing: vHost: openstack vhost declared Managing: Permissions: keystone on openstack permission declared Applying additional configuration *** Please create virtual host "keystone" prior to importing definitions. root at aknode30:~# kubectl describe pod -n ucp keystone-rabbit-init-29q6j Name: keystone-rabbit-init-29q6j Namespace: ucp Node: aknode30/172.29.1.30 Start Time: Sat, 17 Nov 2018 00:48:28 +0000 Labels: application=keystone component=rabbit-init controller-uid=7f2dea33-ea02-11e8-9d4f-3cfdfeaa90b1 job-name=keystone-rabbit-init release_group=airship-ucp-keystone Annotations: Status: Running IP: 10.99.161.96 Controlled By: Job/keystone-rabbit-init Init Containers: init: Container ID: docker://43242c89f546337e1dcb2a1fca14077f5d435ee7cbe2712737d78a0277912868 Image: quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 Image ID: docker-pullable://quay.io/stackanetes/kubernetes-entrypoint at sha256:32b1b657ee4bcc9cc7a1529e31d8e1a06376172373ee020f97f3e78168fde4b6 Port: Host Port: Command: kubernetes-entrypoint State: Terminated Reason: Completed Exit Code: 0 Started: Sat, 17 Nov 2018 00:48:42 +0000 Finished: Sat, 17 Nov 2018 00:48:44 +0000 Ready: True Restart Count: 0 Environment: POD_NAME: keystone-rabbit-init-29q6j (v1:metadata.name) NAMESPACE: ucp (v1:metadata.namespace) INTERFACE_NAME: eth0 PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/ DEPENDENCY_SERVICE: ucp:rabbitmq DEPENDENCY_DAEMONSET: DEPENDENCY_CONTAINER: DEPENDENCY_POD_JSON: COMMAND: echo done Mounts: /var/run/secrets/kubernetes.io/serviceaccount from keystone-rabbit-init-token-gftxq (ro) Containers: rabbit-init: Container ID: docker://58e8b01ce85aac3876c1f40c01cf6a032cdeae7fcbcbcb516ab556dcce58e07b Image: docker.io/rabbitmq:3.7-management Image ID: docker-pullable://rabbitmq at sha256:3eb2fa0f83914999846f831f14b900c0c85cea8e5d2db48ff73cf7defa12fe96 Port: Host Port: Command: /tmp/rabbit-init.sh State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Sat, 17 Nov 2018 00:51:55 +0000 Finished: Sat, 17 Nov 2018 00:51:56 +0000 Ready: False Restart Count: 5 Environment: RABBITMQ_ADMIN_CONNECTION: Optional: false RABBITMQ_USER_CONNECTION: Optional: false RABBITMQ_AUXILIARY_CONFIGURATION: 
{"policies":[{"apply-to":"all","definition":{"ha-mode":"all","ha-sync-mode":"automatic","message-ttl":70000},"name":"ha_ttl_keystone","pattern":"(notifications)\\.","priority":0,"vhost":"keystone"}]} Mounts: /tmp/rabbit-init.sh from rabbit-init-sh (ro) /var/run/secrets/kubernetes.io/serviceaccount from keystone-rabbit-init-token-gftxq (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: rabbit-init-sh: Type: ConfigMap (a volume populated by a ConfigMap) Name: keystone-bin Optional: false keystone-rabbit-init-token-gftxq: Type: Secret (a volume populated by a Secret) SecretName: keystone-rabbit-init-token-gftxq Optional: false QoS Class: BestEffort Node-Selectors: ucp-control-plane=enabled Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 5m default-scheduler Successfully assigned keystone-rabbit-init-29q6j to aknode30 Normal SuccessfulMountVolume 5m kubelet, aknode30 MountVolume.SetUp succeeded for volume "keystone-rabbit-init-token-gftxq" Normal SuccessfulMountVolume 5m kubelet, aknode30 MountVolume.SetUp succeeded for volume "rabbit-init-sh" Normal Pulled 5m kubelet, aknode30 Container image "quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine Normal Created 5m kubelet, aknode30 Created container Normal Started 5m kubelet, aknode30 Started container Normal Started 4m (x4 over 4m) kubelet, aknode30 Started container Normal Pulled 3m (x5 over 4m) kubelet, aknode30 Container image "docker.io/rabbitmq:3.7-management" already present on machine Normal Created 3m (x5 over 4m) kubelet, aknode30 Created container Warning BackOff 2s (x22 over 4m) kubelet, aknode30 Back-off restarting failed container Best Regards, Deepak Kataria -------------- next part -------------- An HTML attachment was scrubbed... URL: From nick at intracom-telecom.com Wed Nov 28 14:46:46 2018 From: nick at intracom-telecom.com (Nikos Karandreas) Date: Wed, 28 Nov 2018 16:46:46 +0200 Subject: [Airship-discuss] airship-in-a-bottle: deployment fails with exited airflow containers Message-ID: <84485c2aaba55443913ad7b4a846d33b@iris> Hi airship people, I'm trying to deploy airship-in-a-bottle in a fresh Ubuntu 16.04 VM (8 vCPU/24GB RAM/48GB disk) following the instructions in [1] but I'm getting the failure below (until the end in [2]) + sudo docker run -t --rm --net=host -e http_proxy= -e https_proxy= -e no_proxy= -e OS_AUTH_URL=http://keystone.ucp.svc.cluster.local:80/v3 -e OS_USERNAME=shipyard -e OS_USER_DOMAIN_NAME=default -e OS_PASSWORD=password18 -e OS_PROJECT_DOMAIN_NAME=default -e OS_PROJECT_NAME=service quay.io/airshipit/shipyard:master create action deploy_site Error: Unable to complete request to Airflow Reason: Airflow could not be contacted properly by Shipyard. - Error: and giving, after that, docker ps I get: root at airhost2:~/deploy/airship-in-a-bottle/manifests/dev_single_node# docker ps --all | grep "quay.io/airshipit/shipyard" CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES f4c0747b25ec quay.io/airshipit/shipyard at sha256:da847565f42af6b3286c95ad474db6acd2ad8dac83b6185dd4388ad530441fed "/tmp/airflow-ship..." 47 minutes ago Exited (0) 47 minutes ago k8s_airflow-shipyard-init_airflow-web-75d98b59b5-tbtsg_ucp_677730b5-f2ff-11e8-af0b-52540085805e_0 04241489b515 quay.io/airshipit/shipyard at sha256:da847565f42af6b3286c95ad474db6acd2ad8dac83b6185dd4388ad530441fed "/tmp/airflow-ship..." 
From nick at intracom-telecom.com Wed Nov 28 14:46:46 2018
From: nick at intracom-telecom.com (Nikos Karandreas)
Date: Wed, 28 Nov 2018 16:46:46 +0200
Subject: [Airship-discuss] airship-in-a-bottle: deployment fails with exited airflow containers
Message-ID: <84485c2aaba55443913ad7b4a846d33b at iris>

Hi airship people,

I'm trying to deploy airship-in-a-bottle in a fresh Ubuntu 16.04 VM (8 vCPU/24GB RAM/48GB disk) following the instructions in [1], but I'm getting the failure below (shown in full at the end, in [2]):

+ sudo docker run -t --rm --net=host -e http_proxy= -e https_proxy= -e no_proxy= -e OS_AUTH_URL=http://keystone.ucp.svc.cluster.local:80/v3 -e OS_USERNAME=shipyard -e OS_USER_DOMAIN_NAME=default -e OS_PASSWORD=password18 -e OS_PROJECT_DOMAIN_NAME=default -e OS_PROJECT_NAME=service quay.io/airshipit/shipyard:master create action deploy_site
Error: Unable to complete request to Airflow
Reason: Airflow could not be contacted properly by Shipyard.
- Error:

And after that, running docker ps I get:

root at airhost2:~/deploy/airship-in-a-bottle/manifests/dev_single_node# docker ps --all | grep "quay.io/airshipit/shipyard"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f4c0747b25ec quay.io/airshipit/shipyard at sha256:da847565f42af6b3286c95ad474db6acd2ad8dac83b6185dd4388ad530441fed "/tmp/airflow-ship..." 47 minutes ago Exited (0) 47 minutes ago k8s_airflow-shipyard-init_airflow-web-75d98b59b5-tbtsg_ucp_677730b5-f2ff-11e8-af0b-52540085805e_0
04241489b515 quay.io/airshipit/shipyard at sha256:da847565f42af6b3286c95ad474db6acd2ad8dac83b6185dd4388ad530441fed "/tmp/airflow-ship..." 47 minutes ago Exited (0) 47 minutes ago k8s_airflow-shipyard-init_airflow-scheduler-5588bd948f-dlq2n_ucp_677727c7-f2ff-11e8-af0b-52540085805e_0
d433934eec1d quay.io/airshipit/shipyard at sha256:da847565f42af6b3286c95ad474db6acd2ad8dac83b6185dd4388ad530441fed "/tmp/airflow-ship..." 47 minutes ago Exited (0) 47 minutes ago k8s_airflow-shipyard-init_airflow-worker-0_ucp_68133c76-f2ff-11e8-af0b-52540085805e_0

Can you please give some insight on what might be wrong or what to look for?

Cheers,
Nikos

Nikos Karandreas
Developer, OPNFV Software Development Center
_____________________________________
Intracom Telecom
Marinou Antipa 41, Pilea-Thessaloniki, GR 57001
t: +30 2310497353, f: +30 2310497330
nick at intracom-telecom.com
www.intracom-telecom.com

[1] https://github.com/openstack/airship-in-a-bottle
[2] Execute deploy_site Dag...
+ [[ -n '' ]]
+ [[ -n deploy_site ]]
+ sudo docker run -t --rm --net=host -e http_proxy= -e https_proxy= -e no_proxy= -e OS_AUTH_URL=http://keystone.ucp.svc.cluster.local:80/v3 -e OS_USERNAME=shipyard -e OS_USER_DOMAIN_NAME=default -e OS_PASSWORD=password18 -e OS_PROJECT_DOMAIN_NAME=default -e OS_PROJECT_NAME=service quay.io/airshipit/shipyard:master create action deploy_site
Error: Unable to complete request to Airflow
Reason: Airflow could not be contacted properly by Shipyard.
- Error:
#### Errors: 1, Warnings: 0, Infos: 0, Other: 0 ####
+ sudo docker run -t --rm --net=host -e http_proxy= -e https_proxy= -e no_proxy= -e OS_AUTH_URL=http://keystone.ucp.svc.cluster.local:80/v3 -e OS_USERNAME=shipyard -e OS_USER_DOMAIN_NAME=default -e OS_PASSWORD=password18 -e OS_PROJECT_DOMAIN_NAME=default -e OS_PROJECT_NAME=service quay.io/airshipit/shipyard:master get actions
Name Action Lifecycle Execution Time Step Succ/Fail/Oth Footnotes
None
+ echo -e 'Retrieving Action ID...\n'
Retrieving Action ID...
++ sudo docker run -t --rm --net=host -e http_proxy= -e https_proxy= -e no_proxy= -e OS_AUTH_URL=http://keystone.ucp.svc.cluster.local:80/v3 -e OS_USERNAME=shipyard -e OS_USER_DOMAIN_NAME=default -e OS_PASSWORD=password18 -e OS_PROJECT_DOMAIN_NAME=default -e OS_PROJECT_NAME=service quay.io/airshipit/shipyard:master get actions
++ grep -i Processing
++ awk '{print $2}'
++ grep deploy_site
+ action_id=
+ retrieve_shipyard_action_counter=0
+ retrieve_shipyard_action_limit=2
+ [[ 0 -le 2 ]]
+ [[ -n '' ]]
+ [[ 0 == 2 ]]
+ echo -e 'Unable to Retrieve Action ID!'
Unable to Retrieve Action ID!
+ echo -e 'Retrying in 30 seconds...'
Retrying in 30 seconds...
+ sleep 30
++ sudo docker run -t --rm --net=host -e http_proxy= -e https_proxy= -e no_proxy= -e OS_AUTH_URL=http://keystone.ucp.svc.cluster.local:80/v3 -e OS_USERNAME=shipyard -e OS_USER_DOMAIN_NAME=default -e OS_PASSWORD=password18 -e OS_PROJECT_DOMAIN_NAME=default -e OS_PROJECT_NAME=service quay.io/airshipit/shipyard:master get actions
++ grep deploy_site
++ grep -i Processing
++ awk '{print $2}'
+ action_id=
+ (( retrieve_shipyard_action_counter ++ ))
+ error 'executing deploy_site from the /site directory'
+ set +x
Error when executing deploy_site from the /site directory.
+ exit 1
+ clean
+ set +x
To remove files generated during this script's execution, delete /root/deploy.
This VM is disposable. Re-deployment in this same VM will lead to unpredictable results.
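Since the airflow-shipyard-init containers exited with status 0, their logs may show how far initialization got before Shipyard lost contact with Airflow; the identifiers below are taken from the docker ps listing above:

# Inspect an exited init container directly by its ID:
sudo docker logs f4c0747b25ec
# Or through Kubernetes (the pod name is embedded in the k8s_ container name):
kubectl -n ucp logs airflow-web-75d98b59b5-tbtsg -c airflow-shipyard-init
kubectl -n ucp get pods | grep airflow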
From santosh.thapamager at as.ntt-at.co.jp Thu Nov 29 06:29:10 2018
From: santosh.thapamager at as.ntt-at.co.jp (santosh.thapamager)
Date: Thu, 29 Nov 2018 15:29:10 +0900
Subject: [Airship-discuss] Airship-in-a-bottle: dev_minimal, dev_single_node behind proxy
Message-ID: <001e01d487ac$d6963580$83c2a080$@as.ntt-at.co.jp>

Hi all!!

I am trying to install dev_minimal and dev_single_node behind the proxy. I could find the installation instructions for dev_minimal behind a proxy; however, I could not find instructions for installing dev_single_node behind a proxy. I would be thankful if you could tell me whether we can deploy dev_single_node behind a proxy.

Currently I have been trying to install dev_minimal behind the proxy. During installation I encountered the following issues.

Issue 1: During installation I found that the helm test for airship-maas failed. This is the snippet of the installation log.

****************************************************************************************************
2018-11-26 04:41:37.536 8 INFO armada.handlers.armada [-] Install completed with results from Tiller: {'namespace': 'ucp', 'status': 'DEPLOYED', 'description': 'Install complete', 'version': 1, 'release': 'airship-maas'}
2018-11-26 04:41:37.537 8 INFO armada.handlers.armada [-] Running sequenced test, timeout remaining: 385s.
2018-11-26 04:41:37.537 8 INFO armada.handlers.tiller [-] Running Helm test: release=airship-maas, timeout=385
2018-11-26 04:41:37.556 8 INFO armada.handlers.tiller [-] RUNNING: airship-maas-api-test
2018-11-26 04:41:42.078 8 INFO armada.handlers.tiller [-] FAILED: airship-maas-api-test, run `kubectl logs airship-maas-api-test --namespace ucp` for more info
2018-11-26 04:41:42.136 8 INFO armada.handlers.tiller [-] 1 test(s) failed
2018-11-26 04:41:42.275 8 INFO armada.handlers.armada [-] Test failed for release: airship-maas
2018-11-26 04:41:42.275 8 ERROR armada.cli [-] Caught internal exception: armada.exceptions.tiller_exceptions.TestFailedException: Test failed for release: airship-maas
2018-11-26 04:41:42.275 8 ERROR armada.cli Traceback (most recent call last):
2018-11-26 04:41:42.275 8 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/cli/__init__.py", line 39, in safe_invoke
2018-11-26 04:41:42.275 8 ERROR armada.cli self.invoke()
2018-11-26 04:41:42.275 8 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/cli/apply.py", line 217, in invoke
2018-11-26 04:41:42.275 8 ERROR armada.cli resp = armada.sync()
2018-11-26 04:41:42.275 8 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/handlers/armada.py", line 494, in sync
2018-11-26 04:41:42.275 8 ERROR armada.cli self._test_chart(*test_chart_args)
2018-11-26 04:41:42.275 8 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/handlers/armada.py", line 591, in _test_chart
2018-11-26 04:41:42.275 8 ERROR armada.cli raise tiller_exceptions.TestFailedException(release_name)
2018-11-26 04:41:42.275 8 ERROR armada.cli armada.exceptions.tiller_exceptions.TestFailedException: Test failed for release: airship-maas
2018-11-26 04:41:42.275 8 ERROR armada.cli
****************************************************************************************************

Issue 2: While deploying the site using bash execute_shipyard_action.sh 'deploy_site', the output of the installation log said the site deployment had completed; however, some steps have a failed status.
Here is the snippet of the installation log. ************************************************************** ************************************** **** + describe_action='Name: deploy_site Action: action/01CX75ZXFH9RRREF3AQSS8NVXS Lifecycle: Complete Parameters: {} Datetime: 2018-11-26 04:56:10.993587+00:00 Dag Status: success Context Marker: e552d7bb-a345-43bf-8875-139ec02ba715 User: shipyard Steps Index State Footnotes step/01CX75ZXFH9RRREF3AQSS8NVXS/action_xcom 1 success step/01CX75ZXFH9RRREF3AQSS8NVXS/dag_concurrency_check 2 success step/01CX75ZXFH9RRREF3AQSS8NVXS/preflight 3 success step/01CX75ZXFH9RRREF3AQSS8NVXS/get_rendered_doc 4 success step/01CX75ZXFH9RRREF3AQSS8NVXS/deployment_configuration 5 success step/01CX75ZXFH9RRREF3AQSS8NVXS/validate_site_design 6 success step/01CX75ZXFH9RRREF3AQSS8NVXS/drydock_build 7 success step/01CX75ZXFH9RRREF3AQSS8NVXS/verify_site 8 success step/01CX75ZXFH9RRREF3AQSS8NVXS/armada_get_status 9 success step/01CX75ZXFH9RRREF3AQSS8NVXS/prepare_site 10 success step/01CX75ZXFH9RRREF3AQSS8NVXS/armada_build 11 failed step/01CX75ZXFH9RRREF3AQSS8NVXS/armada_post_apply 12 failed step/01CX75ZXFH9RRREF3AQSS8NVXS/ucp_preflight_check 13 success step/01CX75ZXFH9RRREF3AQSS8NVXS/deckhand_validate_site_design 14 success step/01CX75ZXFH9RRREF3AQSS8NVXS/armada_validate_site_design 15 success step/01CX75ZXFH9RRREF3AQSS8NVXS/promenade_validate_site_design 16 success step/01CX75ZXFH9RRREF3AQSS8NVXS/drydock_validate_site_design 17 success step/01CX75ZXFH9RRREF3AQSS8NVXS/prepare_and_deploy_nodes 18 success step/01CX75ZXFH9RRREF3AQSS8NVXS/armada_get_releases 19 upstream_failed step/01CX75ZXFH9RRREF3AQSS8NVXS/create_action_tag 20 success Commands User Datetime invoke shipyard 2018-11-26 04:56:12.256963+00:00 Validations: None ' ++ awk '{print $8}' ++ echo Name: deploy_site $'\r' Action: action/01CX75ZXFH9RRREF3AQSS8NVXS $'\r' Lifecycle: Complete $'\r' Parameters: '{}' $'\r' Datetime: 2018-11-26 04:56:10.993587+00:00 $'\r' Dag Status: success $'\r' Context Marker: e552d7bb-a345-43bf-8875- 139ec02ba715 $'\r' User: shipyard $'\r' $'\r' Steps Index State Footnotes $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/action_xcom 1 success $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/dag_concurrency_check 2 success $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/preflight 3 success $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/get_rendered_doc 4 success $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/deployment_configuration 5 success $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/validate_site_design 6 success $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/drydock_build 7 success $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/verify_site 8 success $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/armada_get_status 9 success $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/prepare_site 10 success $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/armada_build 11 failed $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/armada_post_apply 12 failed $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/ucp_preflight_check 13 success $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/deckhand_validate_site_design 14 success $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/armada_validate_site_design 15 success $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/promenade_validate_site_design 16 success $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/drydock_validate_site_design 17 success $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/prepare_and_deploy_nodes 18 success $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/armada_get_releases 19 upstream_failed $'\r' step/01CX75ZXFH9RRREF3AQSS8NVXS/create_action_tag 20 success $'\r' $'\r' $'\r' $'\r' Commands User Datetime $'\r' invoke shipyard 
2018-11-26 04:56:12.256963+00:00 $'\r' $'\r' Validations: $'None\r' $'\r' $'\r' $'\r'
+ action_lifecycle=Complete
+ [[ Complete == \C\o\m\p\l\e\t\e ]]
+ echo -e '\nSuccessfully performed' deploy_site
Successfully performed deploy_site
+ echo -e '\n'
+ break
+ [[ Complete == \C\o\m\p\l\e\t\e ]]
+ exit 0
+ [[ 30 -ge 40 ]]
+ clean
+ set +x
To remove files generated during this script's execution, delete /root/deploy.
This VM is disposable. Re-deployment in this same VM will lead to unpredictable results.
****************************************************************************************************

I have been stuck on these issues. I would be very grateful if I can get answers regarding them.

Best Regards,
Santosh Thapa Magar
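For readers debugging the same failed steps, the action ID in the output above can be fed back into the Shipyard CLI, run the same way as the other shipyard commands in this thread:

# Pull detail on the step that failed (armada_build, per the table above):
shipyard describe step/01CX75ZXFH9RRREF3AQSS8NVXS/armada_build
# The steps execute on the Airflow worker, so its pod log is another place
# to look (pod name assumed to be the usual airflow-worker-0):
kubectl -n ucp logs airflow-worker-0 | grep -i armada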
From MM9745 at att.com Thu Nov 29 13:59:32 2018
From: MM9745 at att.com (MCEUEN, MATT)
Date: Thu, 29 Nov 2018 13:59:32 +0000
Subject: [Airship-discuss] Airship-in-a-bottle: dev_minimal, dev_single_node behind proxy
In-Reply-To: <001e01d487ac$d6963580$83c2a080$@as.ntt-at.co.jp>
References: <001e01d487ac$d6963580$83c2a080$@as.ntt-at.co.jp>
Message-ID: <7C64A75C21BB8D43BD75BB18635E4D896CE31033@MOSTLS1MSGUSRFF.ITServices.sbc.com>

Hi Santosh!

I heard yesterday that there may be an issue with the Airship-in-a-Bottle multinode gate behind the proxy as well. I suspect it's the same issue - I'll try to replicate it today. Will let you know.

Thanks,
Matt

-----Original Message-----
From: santosh.thapamager
Sent: Thursday, November 29, 2018 12:29 AM
To: airship-discuss at lists.airshipit.org
Subject: [Airship-discuss] Airship-in-a-bottle: dev_minimal, dev_single_node behind proxy

[snip]

_______________________________________________
Airship-discuss mailing list
Airship-discuss at lists.airshipit.org
http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss

From dd7022 at att.com Fri Nov 30 18:04:15 2018
From: dd7022 at att.com (KATARIA, DEEPAK)
Date: Fri, 30 Nov 2018 18:04:15 +0000
Subject: [Airship-discuss] Airship/Promenade Issues when deploying Treasuremap behind a proxy
Message-ID: <90BF8249EF30DB4A83C0F4D241F3D72C27A78D3C@MISOUT7MSGUSRCC.ITServices.sbc.com>

Dear Airship Team,

This is our first time deploying Airship/Treasuremap behind a proxy server. The first "genesis" host comes up. The next step is to deploy the rest of the cluster. The deployment is multi-step:

Step One: Shipyard/Deckhand creates the configdocs to be used for each node.
Step Two: Shipyard/Promenade/Deckhand commits the configdocs and validates them.

Step Two is failing with a 401 error, but it is not an authorization problem; we can successfully log on with the username/password.

We logged into each component container. The armada, deckhand, and shipyard containers do not try to go through the proxy server; however, the promenade containers do try to go through the proxy server and fail, regardless of all the recommended changes we have tested.

This is the error we are getting. Again, we can log on as the shipyard user and perform commands. The credentials and images referenced are correct.

+ commit_configdocs
++ pwd
+ sudo docker run -v /root/akraino:/target -e OS_AUTH_URL=http://keystone-api.ucp.svc.cluster.local:80/v3 -e OS_PASSWORD=86db58e20de93ef55477 -e OS_PROJECT_DOMAIN_NAME=default -e OS_PROJECT_NAME=service -e OS_USERNAME=shipyard -e OS_USER_DOMAIN_NAME=default -e OS_IDENTITY_API_VERSION=3 --rm --net=host quay.io/airshipit/shipyard:165c845e3e7459d2a4892ed4ca910b00675e7561 commit configdocs
Error: Validations failed
Reason: Validation
- Error: None
Message: Promenade unable to validate configdocs or an invalid response has been returned
Diagnostic: HTTPError: 401 Client Error: Unauthorized for url: http://promenade-api.ucp.svc.cluster.local:80/api/v1.0/validatedesign

Best Regards,
Deepak Kataria
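A quick way to test whether proxy interception is the culprit, from inside one of the affected containers, is to force a direct connection to the cluster-local endpoint named in the error and compare the results. This is a sketch; the endpoint is taken from the diagnostic above:

# Force a proxy-free request to the cluster-local Promenade service:
curl --noproxy '*' -v http://promenade-api.ucp.svc.cluster.local:80/api/v1.0/validatedesign
# If this connects (any HTTP response at all) while a plain curl does not,
# extend no_proxy rather than reconfiguring the proxy itself, e.g.:
export no_proxy="${no_proxy},.svc.cluster.local,.cluster.local"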
From paye600 at gmail.com Fri Nov 30 19:32:05 2018
From: paye600 at gmail.com (Roman Gorshunov)
Date: Fri, 30 Nov 2018 20:32:05 +0100
Subject: [Airship-discuss] Airship/Promenade Issues when deploying Treasuremap behind a proxy
In-Reply-To: <90BF8249EF30DB4A83C0F4D241F3D72C27A78D3C@MISOUT7MSGUSRCC.ITServices.sbc.com>
References: <90BF8249EF30DB4A83C0F4D241F3D72C27A78D3C@MISOUT7MSGUSRCC.ITServices.sbc.com>
Message-ID:

Deepak,

This commit of the configdocs should not go through the proxy. Please verify from inside the container that it goes directly to promenade-api.ucp.svc.cluster.local:80 (this address is local to the cluster).

-- Roman Gorshunov

On Fri, Nov 30, 2018 at 7:04 PM KATARIA, DEEPAK wrote:
> [snip]

From paye600 at gmail.com Fri Nov 30 20:58:29 2018
From: paye600 at gmail.com (Roman Gorshunov)
Date: Fri, 30 Nov 2018 21:58:29 +0100
Subject: [Airship-discuss] Airship/Promenade Issues when deploying Treasuremap behind a proxy
In-Reply-To: <90BF8249EF30DB4A83C0F4D241F3D72C27A78D3C@MISOUT7MSGUSRCC.ITServices.sbc.com>
References: <90BF8249EF30DB4A83C0F4D241F3D72C27A78D3C@MISOUT7MSGUSRCC.ITServices.sbc.com>
Message-ID: <1C4A6CCD-F9DA-4665-8C8B-A7D2584D471A@gmail.com>

Deepak,

I've also noticed that the Shipyard image version you use, quay.io/airshipit/shipyard:165c845e3e7459d2a4892ed4ca910b00675e7561, is quite outdated (4 months old). Try to use more recent images, especially those referenced in versions.yaml in the openstack/airship-treasuremap repository.

Best regards,
— Roman Gorshunov
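To act on that suggestion, something along these lines works; the path to versions.yaml inside the treasuremap repository is assumed here, so adjust it if the layout differs:

# Fetch treasuremap and look up the currently pinned Shipyard image reference:
git clone https://git.openstack.org/openstack/airship-treasuremap
grep -n 'shipyard' airship-treasuremap/global/software/config/versions.yaml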