From prabhjot.lists at gmail.com Thu Jun 7 09:46:22 2018
From: prabhjot.lists at gmail.com (Prabhjot Singh Sethi)
Date: Thu, 7 Jun 2018 15:16:22 +0530
Subject: [Airship-discuss] Trying out Airship
Message-ID: 

Hi,
I am trying the single-node "airship in a bottle" example from http://www.airshipit.org/, but the installation fails with the message below. My observation is that /etc/resolv.conf is changed by the Airship script to the following:

cat /etc/resolv.conf
options timeout:1 attempts:1
nameserver 10.96.0.10
nameserver 8.8.8.8
nameserver 8.8.4.4

The IP "10.96.0.10" is not reachable. I will try to change this for my setup, but I need help understanding whether this is a bug.

+ DEBIAN_FRONTEND=noninteractive
+ apt-get install -y --no-install-recommends nfs-common docker.io socat=1.7.3.1-1
Reading package lists... Done
Building dependency tree
Reading state information... Done
docker.io is already the newest version (1.13.1-0ubuntu1~16.04.2).
The following additional packages will be installed:
  keyutils libnfsidmap2 libtirpc1 rpcbind
Suggested packages:
  watchdog
Recommended packages:
  python
The following NEW packages will be installed:
  keyutils libnfsidmap2 libtirpc1 nfs-common rpcbind socat
0 upgraded, 6 newly installed, 0 to remove and 100 not upgraded.
Need to get 700 kB of archives.
After this operation, 2328 kB of additional disk space will be used.
Err:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 libnfsidmap2 amd64 0.25-5
  Temporary failure resolving 'nova.clouds.archive.ubuntu.com'
Err:2 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 libtirpc1 amd64 0.2.5-1
  Temporary failure resolving 'nova.clouds.archive.ubuntu.com'
Err:3 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 keyutils amd64 1.5.9-8ubuntu1
  Temporary failure resolving 'nova.clouds.archive.ubuntu.com'
Err:4 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 rpcbind amd64 0.2.3-0.2
  Temporary failure resolving 'nova.clouds.archive.ubuntu.com'
Err:5 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 nfs-common amd64 1:1.2.8-9ubuntu12.1
  Temporary failure resolving 'nova.clouds.archive.ubuntu.com'
Err:6 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/universe amd64 socat amd64 1.7.3.1-1
  Temporary failure resolving 'nova.clouds.archive.ubuntu.com'
E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/libn/libnfsidmap/libnfsidmap2_0.25-5_amd64.deb  Temporary failure resolving 'nova.clouds.archive.ubuntu.com'
E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/libt/libtirpc/libtirpc1_0.2.5-1_amd64.deb  Temporary failure resolving 'nova.clouds.archive.ubuntu.com'
E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/keyutils/keyutils_1.5.9-8ubuntu1_amd64.deb  Temporary failure resolving 'nova.clouds.archive.ubuntu.com'
E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/r/rpcbind/rpcbind_0.2.3-0.2_amd64.deb  Temporary failure resolving 'nova.clouds.archive.ubuntu.com'
E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/n/nfs-utils/nfs-common_1.2.8-9ubuntu12.1_amd64.deb  Temporary failure resolving 'nova.clouds.archive.ubuntu.com'
E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/universe/s/socat/socat_1.7.3.1-1_amd64.deb  Temporary failure resolving 'nova.clouds.archive.ubuntu.com'
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
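A note on the "Temporary failure resolving" errors above: 10.96.0.10 is typically the in-cluster Kubernetes DNS service address, which the deployment script writes into /etc/resolv.conf and which only answers once the cluster DNS is actually running, so host-level resolution depends on the remaining upstream entries being reachable. A minimal way to see which resolver is failing, assuming a stock Ubuntu 16.04 host (nslookup comes from the dnsutils package and may need to be installed first):

cat /etc/resolv.conf
getent hosts nova.clouds.archive.ubuntu.com          # resolves via the nameservers above (glibc only)
nslookup nova.clouds.archive.ubuntu.com 8.8.8.8      # query one upstream server directly
nslookup nova.clouds.archive.ubuntu.com 10.96.0.10   # expected to fail before the cluster DNS is up

If the public resolvers are blocked in your network, pointing resolv.conf at a reachable local resolver (192.0.2.53 below is a hypothetical placeholder) is the kind of site-specific change Prabhjot describes:

echo "nameserver 192.0.2.53" | sudo tee -a /etc/resolv.conf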
++ date +%s + now=1528264267 + [[ 1528264267 -gt 1528264232 ]] + log Failed to install apt packages. ++ date + echo Wed Jun 6 05:51:07 UTC 2018 Failed to install apt packages. Wed Jun 6 05:51:07 UTC 2018 Failed to install apt packages. + exit 1 + error 'running genesis' + set +x Error when running genesis. + exit 1 + clean + set +x To remove files generated during this script's execution, delete /root/deploy. -------------- next part -------------- An HTML attachment was scrubbed... URL: From prabhjot.lists at gmail.com Thu Jun 7 11:41:57 2018 From: prabhjot.lists at gmail.com (Prabhjot Singh Sethi) Date: Thu, 7 Jun 2018 17:11:57 +0530 Subject: [Airship-discuss] Trying out Airship Message-ID: After resolving the DNS issue, i still see artifacts-aic.atlantafoundry.com as unreachable and failing the installation. Regards, Prabhjot Setting up nmap (7.01-2ubuntu2) ... Processing triggers for libc-bin (2.23-0ubuntu10) ... Processing triggers for systemd (229-4ubuntu21.1) ... Processing triggers for ureadahead (0.100.0-19) ... + [[ 40 -ge 10 ]] + run_pegleg_collect + IMAGE= artifacts-aic.atlantafoundry.com/att-comdev/pegleg:ef47933903047339bd63fcfa265dfe4296e8a322 + /root/deploy/airship-pegleg/tools/pegleg.sh site -p /workspace/airship-in-a-bottle/deployment_files collect demo -s /workspace/collected == NOTE: Workspace /root/deploy is available as /workspace in container context == Unable to find image ' artifacts-aic.atlantafoundry.com/att-comdev/pegleg:ef47933903047339bd63fcfa265dfe4296e8a322' locally docker: Error response from daemon: Get https://artifacts-aic.atlantafoundry.com/v1/_ping: dial tcp 12.37.173.37:443: i/o timeout. See 'docker run --help'. + error 'running pegleg collect' + set +x Error when running pegleg collect. + exit 1 + clean + set +x To remove files generated during this script's execution, delete /root/deploy. On 7 June 2018 at 15:16, Prabhjot Singh Sethi wrote: > Hi, > i am trying to use airship single node "airship in a bottle" example > on http://www.airshipit.org/. but installation fails with following > message. > i observation is that /etc/resolv.conf is changed by airship script to the > following > cat /etc/resolv.conf > options timeout:1 attempts:1 > > nameserver 10.96.0.10 > > nameserver 8.8.8.8 > nameserver 8.8.4.4 > > where ip "10.96.0.10" is not reachable. i will try to change this for my > setup, however need help in understanding if this is a bug. > > + DEBIAN_FRONTEND=noninteractive > + apt-get install -y --no-install-recommends nfs-common docker.io > socat=1.7.3.1-1 > Reading package lists... Done > Building dependency tree > Reading state information... Done > docker.io is already the newest version (1.13.1-0ubuntu1~16.04.2). > The following additional packages will be installed: > keyutils libnfsidmap2 libtirpc1 rpcbind > Suggested packages: > watchdog > Recommended packages: > python > The following NEW packages will be installed: > keyutils libnfsidmap2 libtirpc1 nfs-common rpcbind socat > 0 upgraded, 6 newly installed, 0 to remove and 100 not upgraded. > Need to get 700 kB of archives. > After this operation, 2328 kB of additional disk space will be used. 
> Err:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 > libnfsidmap2 amd64 0.25-5 > Temporary failure resolving 'nova.clouds.archive.ubuntu.com' > Err:2 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 > libtirpc1 amd64 0.2.5-1 > Temporary failure resolving 'nova.clouds.archive.ubuntu.com' > Err:3 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 > keyutils amd64 1.5.9-8ubuntu1 > Temporary failure resolving 'nova.clouds.archive.ubuntu.com' > Err:4 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 > rpcbind amd64 0.2.3-0.2 > Temporary failure resolving 'nova.clouds.archive.ubuntu.com' > Err:5 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main > amd64 nfs-common amd64 1:1.2.8-9ubuntu12.1 > Temporary failure resolving 'nova.clouds.archive.ubuntu.com' > Err:6 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/universe amd64 > socat amd64 1.7.3.1-1 > Temporary failure resolving 'nova.clouds.archive.ubuntu.com' > E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/ > libn/libnfsidmap/libnfsidmap2_0.25-5_amd64.deb Temporary failure > resolving 'nova.clouds.archive.ubuntu.com' > > E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/ > libt/libtirpc/libtirpc1_0.2.5-1_amd64.deb Temporary failure resolving ' > nova.clouds.archive.ubuntu.com' > > E: Failed to fetch http://nova.clouds.archive. > ubuntu.com/ubuntu/pool/main/k/keyutils/keyutils_1.5.9-8ubuntu1_amd64.deb > Temporary failure resolving 'nova.clouds.archive.ubuntu.com' > > E: Failed to fetch http://nova.clouds.archive. > ubuntu.com/ubuntu/pool/main/r/rpcbind/rpcbind_0.2.3-0.2_amd64.deb > Temporary failure resolving 'nova.clouds.archive.ubuntu.com' > > E: Failed to fetch http://nova.clouds.archive. > ubuntu.com/ubuntu/pool/main/n/nfs-utils/nfs-common_1.2.8- > 9ubuntu12.1_amd64.deb Temporary failure resolving ' > nova.clouds.archive.ubuntu.com' > E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/ > universe/s/socat/socat_1.7.3.1-1_amd64.deb Temporary failure resolving ' > nova.clouds.archive.ubuntu.com' > > E: Unable to fetch some archives, maybe run apt-get update or try with > --fix-missing? > ++ date +%s > + now=1528264267 > + [[ 1528264267 -gt 1528264232 ]] > + log Failed to install apt packages. > ++ date > + echo Wed Jun 6 05:51:07 UTC 2018 Failed to install apt packages. > Wed Jun 6 05:51:07 UTC 2018 Failed to install apt packages. > + exit 1 > + error 'running genesis' > + set +x > Error when running genesis. > + exit 1 > + clean > + set +x > To remove files generated during this script's execution, delete > /root/deploy. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paye600 at gmail.com Thu Jun 7 13:36:58 2018 From: paye600 at gmail.com (Roman Gorshunov) Date: Thu, 7 Jun 2018 15:36:58 +0200 Subject: [Airship-discuss] Trying out Airship Message-ID: Hello Prabhjot, Apologize for the inconvenience. There is a network maintenance going on, and that is the reason artifacts-aic.atlantafoundry.com is currently inaccessible. Engineers are working to get everything running soon, and in addition to that we are working on moving our docker images to publicly accessible repositories. Thank you very much for trying Airship! I will let you know once artifacts-aic.atlantafoundry.com is up again, so that you could re-try deployment. 
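Before re-trying a full deployment after an outage like this, a quick reachability check against the registry can save an hour-long run; a sketch using the exact endpoint and image reference quoted earlier in this thread, with no Airship-specific tooling assumed:

curl -v https://artifacts-aic.atlantafoundry.com/v1/_ping
docker pull artifacts-aic.atlantafoundry.com/att-comdev/pegleg:ef47933903047339bd63fcfa265dfe4296e8a322

If both succeed, the pegleg collect step that failed above should get past the image download.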
Best regards,
-- Roman Gorshunov

On 7 June 2018 at 11:41, Prabhjot Singh Sethi wrote:
> After resolving the DNS issue, i still see artifacts-aic.atlantafoundry.com
> as unreachable and failing the installation.
>
> Regards,
> Prabhjot
> ...
> docker: Error response from daemon: Get
> https://artifacts-aic.atlantafoundry.com/v1/_ping: dial tcp 12.37.173.37:443:
> i/o timeout.

From prabhjot.lists at gmail.com Thu Jun 7 16:19:18 2018
From: prabhjot.lists at gmail.com (Prabhjot Singh Sethi)
Date: Thu, 7 Jun 2018 21:49:18 +0530
Subject: [Airship-discuss] Trying out Airship
In-Reply-To: References: Message-ID: 

Thanks for the update; sure, I will wait for it to become available.

Regards,
Prabhjot

On 7 June 2018 at 19:06, Roman Gorshunov wrote:
> Hello Prabhjot,
>
> Apologize for the inconvenience. There is a network maintenance going
> on, and that is the reason artifacts-aic.atlantafoundry.com is
> currently inaccessible. Engineers are working to get everything
> running soon, and in addition to that we are working on moving our
> docker images to publicly accessible repositories.
> Thank you very much for trying Airship!
> I will let you know once artifacts-aic.atlantafoundry.com is up again,
> so that you could re-try deployment.
>
> Best regards,
> --
> Roman Gorshunov
>
> On 7 June 2018 at 11:41, Prabhjot Singh Sethi wrote:
> > After resolving the DNS issue, i still see artifacts-aic.atlantafoundry.com
> > as unreachable and failing the installation.
> >
> > Regards,
> > Prabhjot
>
> _______________________________________________
> Airship-discuss mailing list
> Airship-discuss at lists.airshipit.org
> http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jr8586335 at gmail.com Fri Jun 8 19:18:51 2018
From: jr8586335 at gmail.com (James Devon)
Date: Fri, 8 Jun 2018 21:18:51 +0200
Subject: [Airship-discuss] osh-infra ldap error with promenade + armada
Message-ID: 

Hi,

After having tried airship-in-a-bottle, I'm trying to set up a Kubernetes cluster and install OpenStack on it using Promenade and Armada.

I'm using the basic example (https://github.com/openstack/airship-promenade/tree/master/examples/basic) and have changed the IP addresses to match my environment of 4 nodes (n0 for genesis, then n1, n2, n3).

At this point I'm facing only one problem: promenade-api.ucp.svc.cluster.local is resolvable, but it resolves to the wrong address (192.168.150.165). So instead of promenade-api.ucp.svc.cluster.local I used the pod address with the correct port, and that works fine.

Then I am able to make n1, n2 and n3 join the cluster, and I use the scripts from https://github.com/openstack/openstack-helm/tree/master/tools/deployment/armada to apply the Armada manifests.

osh-infra is not able to start the ldap pod.
Events:
  Type     Reason                  Age                From                     Message
  ----     ------                  ----               ----                     -------
  Warning  FailedScheduling        6m (x4 over 6m)    default-scheduler        pod has unbound PersistentVolumeClaims (repeated 3 times)
  Normal   Scheduled               6m                 default-scheduler        Successfully assigned ldap-0 to n3
  Normal   SuccessfulAttachVolume  6m                 attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-85d6770b-6b4c-11e8-b6e5-080027ee1df7"
  Normal   SuccessfulAttachVolume  6m                 attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-85dc46a5-6b4c-11e8-b6e5-080027ee1df7"
  Normal   SuccessfulMountVolume   6m                 kubelet, n3              MountVolume.SetUp succeeded for volume "ldap-token-txdqp"
  Warning  FailedMount             13s (x11 over 6m)  kubelet, n3              MountVolume.WaitForAttach failed for volume "pvc-85dc46a5-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image status with: (executable file not found in $PATH), rbd output: ()
  Warning  FailedMount             12s (x11 over 6m)  kubelet, n3              MountVolume.WaitForAttach failed for volume "pvc-85d6770b-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image status with: (executable file not found in $PATH), rbd output: ()
  Warning  FailedMount             2s (x3 over 4m)    kubelet, n3              Unable to mount volumes for pod "ldap-0_osh-infra(85e73018-6b4c-11e8-b6e5-080027ee1df7)": timeout expired waiting for volumes to attach or mount for pod "osh-infra"/"ldap-0". list of unmounted volumes=[ldap-config ldap-data]. list of unattached volumes=[ldap-config ldap-data ldap-token-txdqp]

I've checked the logs of ceph-osd and there is also something there:

ceph-2.1 | starting osd.2 at - osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/journal/journal.2
ceph-2.1 | 2018-06-08 18:41:03.854234 7f7cd7d58e00 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
ceph-2.1 | 2018-06-08 18:41:03.855499 7f7cd7d58e00 -1 journal do_read_entry(8192): bad header magic
ceph-2.1 | 2018-06-08 18:41:03.855567 7f7cd7d58e00 -1 journal do_read_entry(8192): bad header magic
ceph-2.1 | 2018-06-08 18:41:03.901804 7f7cd7d58e00 -1 osd.2 0 log_to_monitors {default=true}
ceph-2.1 | 2018-06-08 18:41:04.529480 7f7cbb2c7700 -1 osd.2 0 waiting for initial osdmap

Has anybody encountered this error, and how can it be fixed?

Best,
James

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From paye600 at gmail.com Sat Jun 9 11:51:29 2018
From: paye600 at gmail.com (Roman Gorshunov)
Date: Sat, 9 Jun 2018 13:51:29 +0200
Subject: [Airship-discuss] osh-infra ldap error with promenade + armada
In-Reply-To: References: Message-ID: <25FADD30-4DA7-45A9-BFDE-ED03D3FA2705@gmail.com>

Hello James,

I haven't seen this before. Could you provide the output of the following commands, if you still have this environment running? It would help us understand what has been deployed and what has not:

kubectl get pods --all-namespaces -o wide   # on n0
docker ps                                   # on all nodes
docker ps -a                                # on all nodes
docker images                               # on all nodes

Also note that the artifacts-aic.atlantafoundry.com host, where some of the Docker images are hosted, is inaccessible at the moment, so it could be that some images were not downloaded successfully; the team has started mirroring images to Docker Hub and quay.io. Right before the summit, development was focused on making airship-in-a-bottle work, and the migration from GitHub/GerritHub to the OpenStack Foundation infrastructure is still ongoing.

Apologies for the inconvenience, James.
Thank you.
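One detail worth checking alongside the outputs Roman asks for: the "(executable file not found in $PATH)" part of the FailedMount events above means the kubelet on the worker node cannot find an rbd client. On Ubuntu hosts the rbd binary is shipped in the ceph-common package, so a quick check on each node might look like this (a sketch only; later messages in this thread confirm ceph-common was the missing piece):

command -v rbd || echo "rbd client not installed on this node"
sudo apt-get install -y ceph-common          # provides /usr/bin/rbd on Ubuntu
kubectl -n osh-infra describe pod ldap-0     # FailedMount events should stop recurring once rbd is available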
> On 8 Jun 2018, at 21:18, James Devon wrote: > > Hi, > > After having tried airship-in-a-bottle, I'm trying to setup a Kubernetes cluster and install openstack on it using promenade and armada. > > I'm using the basic example : https://github.com/openstack/airship-promenade/tree/master/examples/basic, I've changed the ip addresses to match my environment of 4 nodes (n0 for genesis and then n1, n2, n3). > > At this point I'm facing only one problem: promenade-api.ucp.svc.cluster.local is resolvable but the resolution is bad (192.168.150.165). > So instead of promenade-api.ucp.svc.cluster.local, I used the pod address with the good port and it is working fine. > > Then, I am able to make n1,n2 and n3 join the cluster and I use the scripts from https://github.com/openstack/openstack-helm/tree/master/tools/deployment/armada to apply armada manifests. > > osh-infra is not able to start the ldap pod. > > Events: > Type Reason Age From Message > ---- ------ ---- ---- ------- > Warning FailedScheduling 6m (x4 over 6m) default-scheduler pod has unbound PersistentVolumeClaims (repeated 3 times) > Normal Scheduled 6m default-scheduler Successfully assigned ldap-0 to n3 > Normal SuccessfulAttachVolume 6m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-85d6770b-6b4c-11e8-b6e5-080027ee1df7" > Normal SuccessfulAttachVolume 6m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-85dc46a5-6b4c-11e8-b6e5-080027ee1df7" > Normal SuccessfulMountVolume 6m kubelet, n3 MountVolume.SetUp succeeded for volume "ldap-token-txdqp" > Warning FailedMount 13s (x11 over 6m) kubelet, n3 MountVolume.WaitForAttach failed for volume "pvc-85dc46a5-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image status with: (executable file not found in $PATH), rbd output: () > Warning FailedMount 12s (x11 over 6m) kubelet, n3 MountVolume.WaitForAttach failed for volume "pvc-85d6770b-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image status with: (executable file not found in $PATH), rbd output: () > Warning FailedMount 2s (x3 over 4m) kubelet, n3 Unable to mount volumes for pod "ldap-0_osh-infra(85e73018-6b4c-11e8-b6e5-080027ee1df7)": timeout expired waiting for volumes to attach or mount for pod "osh-infra"/"ldap-0". list of unmounted volumes=[ldap-config ldap-data]. list of unattached volumes=[ldap-config ldap-data ldap-token-txdqp] > > I've checked the logs of ceph-osd and there's also something: > > ceph-2.1 | starting osd.2 at - osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/journal/journal.2 > ceph-2.1 | 2018-06-08 18:41:03.854234 7f7cd7d58e00 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway > ceph-2.1 | 2018-06-08 18:41:03.855499 7f7cd7d58e00 -1 journal do_read_entry(8192): bad header magic > ceph-2.1 | 2018-06-08 18:41:03.855567 7f7cd7d58e00 -1 journal do_read_entry(8192): bad header magic > ceph-2.1 | 2018-06-08 18:41:03.901804 7f7cd7d58e00 -1 osd.2 0 log_to_monitors {default=true} > ceph-2.1 | 2018-06-08 18:41:04.529480 7f7cbb2c7700 -1 osd.2 0 waiting for initial osdmap > > Did anybody encountered this error and how to fix it? 
> > Best, > James > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss From jr8586335 at gmail.com Sat Jun 9 16:50:32 2018 From: jr8586335 at gmail.com (James Devon) Date: Sat, 9 Jun 2018 18:50:32 +0200 Subject: [Airship-discuss] osh-infra ldap error with promenade + armada In-Reply-To: <25FADD30-4DA7-45A9-BFDE-ED03D3FA2705@gmail.com> References: <25FADD30-4DA7-45A9-BFDE-ED03D3FA2705@gmail.com> Message-ID: Hello Roman, Thanks for your answer. What is the state of promenade + armada right now? We are pretty excited about it. Here is the output you required: 1) kubectl get pods --all-namespaces -o wide https://pastebin.com/b4kWGpBs The only thing I see is the pod ldap-0 from the namespace osh-infra Here is the output of kubectl describe pod -n osh-infra ldap-0 https://pastebin.com/sSBVZnch I believe the interesting part is here: 1. Events: 2. Type Reason Age From Message 3. ---- ------ ---- ---- ------- 4. Warning FailedMount 12m (x576 over 21h) kubelet, n3 Unable to mount volumes for pod "ldap-0_osh-infra(85e73018-6b4c-11e8-b6e5-080027ee1df7)": timeout expired waiting for volumes to attach or mount for pod "osh-infra"/"ldap-0". list of unmounted volumes=[ldap-config ldap-data]. list of unattached volumes=[ldap-config ldap-data ldap-token-txdqp] 5. Warning FailedMount 6m (x649 over 21h) kubelet, n3 MountVolume.WaitForAttach failed for volume "pvc-85dc46a5-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image status with: (executable file not found in $PATH), rbd output: () 6. Warning FailedMount 2m (x651 over 21h) kubelet, n3 MountVolume.WaitForAttach failed for volume "pvc-85d6770b-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image status with: (executable file not found in $PATH), rbd output: () 2) docker ps on all nodes https://pastebin.com/CaqLEv70 3) docker ps -a on all nodes https://pastebin.com/wvB0qkbh 4) docker images on all nodes https://pastebin.com/H07kEwqe Best, James On Sat, Jun 9, 2018 at 1:51 PM, Roman Gorshunov wrote: > Hello James, > > I didn’t see it before. Could you provide output of the following > commands, if you still have this environment running. It would allow to get > understanding of what has been deployed and what has not: > kubectl get pods --all-namespaces -o wide # on n0 > docker ps # on all > nodes > docker ps -a # on all > nodes > docker images # on all > nodes > > Also note that artifacts-aic.atlantafoundry.com host where some of docker > images are hosted is inaccessible at the moment, it could be that some > images have not been downloaded successfully, team started mirroring images > to dockerhub & quay.io. Right before the summit development was focused > on making airship-in-a-bottle work, and migration from github/gerrithub to > openstack foundation infrastructure is still ongoing. > > Apologize for the inconvenience, James. > > Thank you. > > > On 8 Jun 2018, at 21:18, James Devon wrote: > > > > Hi, > > > > After having tried airship-in-a-bottle, I'm trying to setup a Kubernetes > cluster and install openstack on it using promenade and armada. > > > > I'm using the basic example : https://github.com/openstack/ > airship-promenade/tree/master/examples/basic, I've changed the ip > addresses to match my environment of 4 nodes (n0 for genesis and then n1, > n2, n3). 
> > > > At this point I'm facing only one problem: > promenade-api.ucp.svc.cluster.local is resolvable but the resolution is > bad (192.168.150.165). > > So instead of promenade-api.ucp.svc.cluster.local, I used the pod > address with the good port and it is working fine. > > > > Then, I am able to make n1,n2 and n3 join the cluster and I use the > scripts from https://github.com/openstack/openstack-helm/tree/master/ > tools/deployment/armada to apply armada manifests. > > > > osh-infra is not able to start the ldap pod. > > > > Events: > > Type Reason Age From > Message > > ---- ------ ---- ---- > ------- > > Warning FailedScheduling 6m (x4 over 6m) default-scheduler > pod has unbound PersistentVolumeClaims (repeated 3 times) > > Normal Scheduled 6m default-scheduler > Successfully assigned ldap-0 to n3 > > Normal SuccessfulAttachVolume 6m > attachdetach-controller AttachVolume.Attach succeeded for volume > "pvc-85d6770b-6b4c-11e8-b6e5-080027ee1df7" > > Normal SuccessfulAttachVolume 6m > attachdetach-controller AttachVolume.Attach succeeded for volume > "pvc-85dc46a5-6b4c-11e8-b6e5-080027ee1df7" > > Normal SuccessfulMountVolume 6m kubelet, n3 > MountVolume.SetUp succeeded for volume "ldap-token-txdqp" > > Warning FailedMount 13s (x11 over 6m) kubelet, n3 > MountVolume.WaitForAttach failed for volume > "pvc-85dc46a5-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image > status with: (executable file not found in $PATH), rbd output: () > > Warning FailedMount 12s (x11 over 6m) kubelet, n3 > MountVolume.WaitForAttach failed for volume > "pvc-85d6770b-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image > status with: (executable file not found in $PATH), rbd output: () > > Warning FailedMount 2s (x3 over 4m) kubelet, n3 > Unable to mount volumes for pod "ldap-0_osh-infra(85e73018-6b4c-11e8-b6e5-080027ee1df7)": > timeout expired waiting for volumes to attach or mount for pod > "osh-infra"/"ldap-0". list of unmounted volumes=[ldap-config ldap-data]. > list of unattached volumes=[ldap-config ldap-data ldap-token-txdqp] > > > > I've checked the logs of ceph-osd and there's also something: > > > > ceph-2.1 | starting osd.2 at - osd_data /var/lib/ceph/osd/ceph-2 > /var/lib/ceph/journal/journal.2 > > ceph-2.1 | 2018-06-08 18:41:03.854234 7f7cd7d58e00 -1 journal > FileJournal::_open: disabling aio for non-block journal. Use > journal_force_aio to force use of aio anyway > > ceph-2.1 | 2018-06-08 18:41:03.855499 7f7cd7d58e00 -1 journal > do_read_entry(8192): bad header magic > > ceph-2.1 | 2018-06-08 18:41:03.855567 7f7cd7d58e00 -1 journal > do_read_entry(8192): bad header magic > > ceph-2.1 | 2018-06-08 18:41:03.901804 7f7cd7d58e00 -1 osd.2 0 > log_to_monitors {default=true} > > ceph-2.1 | 2018-06-08 18:41:04.529480 7f7cbb2c7700 -1 osd.2 0 waiting > for initial osdmap > > > > Did anybody encountered this error and how to fix it? > > > > Best, > > James > > > > _______________________________________________ > > Airship-discuss mailing list > > Airship-discuss at lists.airshipit.org > > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From syed.shah at 12techllc.com Mon Jun 11 20:34:17 2018 From: syed.shah at 12techllc.com (syed.shah at 12techllc.com) Date: Mon, 11 Jun 2018 13:34:17 -0700 Subject: [Airship-discuss] "Unable to find image error" from airship-in-a-bottle script Message-ID: <20180611133417.b0b17c89765a8fc7873cf1d6ff5e7bf6.826aaae2f9.mailapi@email02.godaddy.com> Hi, We are trying to deploy Airship in a bottle using the following commands, but getting a timeout error on some docker image download: mkdir -p /root/deploy && cd "$_" git clone https://git.openstack.org/openstack/airship-in-a-bottle cd /root/deploy/airship-in-a-bottle/manifests/dev_single_node ./airship-in-a-bottle.sh We are seeing the following error: Unable to find image 'artifacts-aic.atlantafoundry.com/att-comdev/pegleg:ef47933903047339bd63fcfa265dfe4296e8a322' locally docker: Error response from daemon: Get https://artifacts-aic.atlantafoundry.com/v1/_ping: dial tcp 12.37.173.37:443: i/o timeout. Last Thursday this url was working but not anymore. Thanks for the help! -------------- next part -------------- An HTML attachment was scrubbed... URL: From paye600 at gmail.com Tue Jun 12 13:44:01 2018 From: paye600 at gmail.com (Roman Gorshunov) Date: Tue, 12 Jun 2018 15:44:01 +0200 Subject: [Airship-discuss] Trying out Airship In-Reply-To: References: Message-ID: Hello Prabhjot, Please, retry deployment. Artifactory repository is up and running again, and I have successfully ran and completed airship-in-a-bottle deployment just now. It took 1h. 20min. on 4vCPU/20GB RAM VM running on a laptop with SSD disk. Here are the logs of this successful deployment: https://paste.ubuntu.com/p/MRmGrf8K3W/ The work to move images to publicly accessible docker containers repository is ongoing, but you shouldn't notice it. One more time, apologize for the inconvenience, and thank you very much for trying Airship! Best regards, -- Roman Gorshunov On Thu, Jun 7, 2018 at 6:19 PM, Prabhjot Singh Sethi wrote: > Thanks for update, sure will wait for it to be available. > > Regards, > Prabhjot > From paye600 at gmail.com Tue Jun 12 13:44:38 2018 From: paye600 at gmail.com (Roman Gorshunov) Date: Tue, 12 Jun 2018 15:44:38 +0200 Subject: [Airship-discuss] "Unable to find image error" from airship-in-a-bottle script In-Reply-To: <20180611133417.b0b17c89765a8fc7873cf1d6ff5e7bf6.826aaae2f9.mailapi@email02.godaddy.com> References: <20180611133417.b0b17c89765a8fc7873cf1d6ff5e7bf6.826aaae2f9.mailapi@email02.godaddy.com> Message-ID: Hello Prabhjot, Please, retry deployment. Artifactory repository is up and running after the downtime, and I have successfully ran and completed airship-in-a-bottle deployment just now. It took 1h. 20min. on 4vCPU/20GB RAM VM running on a laptop with SSD disk. Here are the logs of this successful deployment: https://paste.ubuntu.com/p/MRmGrf8K3W/ The work to move images to publicly accessible docker containers repository is ongoing, but you shouldn't notice it. One more time, apologize for the inconvenience, and thank you very much for trying Airship! 
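For anyone hitting the same registry timeout, once artifacts-aic.atlantafoundry.com is reachable again the simplest path is usually a clean retry. A sketch based on the commands quoted in this thread and on the script's own cleanup hint ("delete /root/deploy"), assuming a disposable Ubuntu 16.04 VM in the 4 vCPU / 20 GB RAM range mentioned above:

sudo rm -rf /root/deploy                      # per the script's cleanup message
sudo mkdir -p /root/deploy && cd /root/deploy
git clone https://git.openstack.org/openstack/airship-in-a-bottle
cd /root/deploy/airship-in-a-bottle/manifests/dev_single_node
./airship-in-a-bottle.sh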
Best regards, -- Roman Gorshunov On Mon, Jun 11, 2018 at 10:34 PM, wrote: > Hi, > > We are trying to deploy Airship in a bottle using the following commands, > but getting a timeout error on some docker image download: > > mkdir -p /root/deploy && cd "$_" > git clone https://git.openstack.org/openstack/airship-in-a-bottle > cd /root/deploy/airship-in-a-bottle/manifests/dev_single_node > ./airship-in-a-bottle.sh > > We are seeing the following error: > > Unable to find image > 'artifacts-aic.atlantafoundry.com/att-comdev/pegleg:ef47933903047339bd63fcfa265dfe4296e8a322' > locally > docker: Error response from daemon: Get > https://artifacts-aic.atlantafoundry.com/v1/_ping: dial tcp > 12.37.173.37:443: i/o timeout. > > Last Thursday this url was working but not anymore. > > Thanks for the help! > > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss > From paye600 at gmail.com Tue Jun 12 13:46:00 2018 From: paye600 at gmail.com (Roman Gorshunov) Date: Tue, 12 Jun 2018 15:46:00 +0200 Subject: [Airship-discuss] "Unable to find image error" from airship-in-a-bottle script In-Reply-To: <20180611133417.b0b17c89765a8fc7873cf1d6ff5e7bf6.826aaae2f9.mailapi@email02.godaddy.com> References: <20180611133417.b0b17c89765a8fc7873cf1d6ff5e7bf6.826aaae2f9.mailapi@email02.godaddy.com> Message-ID: Hello Syed, Sorry for sending e-mail with wrong name too fast. Please, retry deployment. Artifactory repository is up and running after the downtime, and I have successfully ran and completed airship-in-a-bottle deployment just now. It took 1h. 20min. on 4vCPU/20GB RAM VM running on a laptop with SSD disk. Here are the logs of this successful deployment: https://paste.ubuntu.com/p/MRmGrf8K3W/ The work to move images to publicly accessible docker containers repository is ongoing, but you shouldn't notice it. One more time, apologize for the inconvenience, and thank you very much for trying Airship! Best regards, -- Roman Gorshunov On Mon, Jun 11, 2018 at 10:34 PM, wrote: > Hi, > > We are trying to deploy Airship in a bottle using the following commands, > but getting a timeout error on some docker image download: > > mkdir -p /root/deploy && cd "$_" > git clone https://git.openstack.org/openstack/airship-in-a-bottle > cd /root/deploy/airship-in-a-bottle/manifests/dev_single_node > ./airship-in-a-bottle.sh > > We are seeing the following error: > > Unable to find image > 'artifacts-aic.atlantafoundry.com/att-comdev/pegleg:ef47933903047339bd63fcfa265dfe4296e8a322' > locally > docker: Error response from daemon: Get > https://artifacts-aic.atlantafoundry.com/v1/_ping: dial tcp > 12.37.173.37:443: i/o timeout. > > Last Thursday this url was working but not anymore. > > Thanks for the help! > > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss > From paye600 at gmail.com Tue Jun 12 14:46:52 2018 From: paye600 at gmail.com (Roman Gorshunov) Date: Tue, 12 Jun 2018 16:46:52 +0200 Subject: [Airship-discuss] osh-infra ldap error with promenade + armada In-Reply-To: References: <25FADD30-4DA7-45A9-BFDE-ED03D3FA2705@gmail.com> Message-ID: Hello James, Sorry, it took a little bit too long for me to reply. 
I have talked to devs, and Scott Hussey suggested that ceph-common package may not have been installed: https://github.com/openstack/airship-promenade/blob/master/examples/complete/HostSystem.yaml#L74 Also Pete Birley suggested that Armada multinode suggests to use 5 nodes in ceph: https://github.com/openstack/openstack-helm/blob/master/tools/deployment/armada/multinode/armada-ceph.yaml#L101 Give it a try, and let me know how it goes. Thank you! Best regards, -- Roman Gorshunov On Sat, Jun 9, 2018 at 6:50 PM, James Devon wrote: > Hello Roman, > > Thanks for your answer. > > What is the state of promenade + armada right now? > We are pretty excited about it. > > Here is the output you required: > > 1) kubectl get pods --all-namespaces -o wide > https://pastebin.com/b4kWGpBs > > The only thing I see is the pod ldap-0 from the namespace osh-infra > Here is the output of kubectl describe pod -n osh-infra ldap-0 > https://pastebin.com/sSBVZnch > > I believe the interesting part is here: > > Events: > Type Reason Age From Message > ---- ------ ---- ---- ------- > Warning FailedMount 12m (x576 over 21h) kubelet, n3 Unable to mount > volumes for pod "ldap-0_osh-infra(85e73018-6b4c-11e8-b6e5-080027ee1df7)": > timeout expired waiting for volumes to attach or mount for pod > "osh-infra"/"ldap-0". list of unmounted volumes=[ldap-config ldap-data]. > list of unattached volumes=[ldap-config ldap-data ldap-token-txdqp] > Warning FailedMount 6m (x649 over 21h) kubelet, n3 > MountVolume.WaitForAttach failed for volume > "pvc-85dc46a5-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image status > with: (executable file not found in $PATH), rbd output: () > Warning FailedMount 2m (x651 over 21h) kubelet, n3 > MountVolume.WaitForAttach failed for volume > "pvc-85d6770b-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image status > with: (executable file not found in $PATH), rbd output: () > > > > 2) docker ps on all nodes > https://pastebin.com/CaqLEv70 > > 3) docker ps -a on all nodes > https://pastebin.com/wvB0qkbh > > 4) docker images on all nodes > https://pastebin.com/H07kEwqe > > Best, > James > > On Sat, Jun 9, 2018 at 1:51 PM, Roman Gorshunov wrote: >> >> Hello James, >> >> I didn’t see it before. Could you provide output of the following >> commands, if you still have this environment running. It would allow to get >> understanding of what has been deployed and what has not: >> kubectl get pods --all-namespaces -o wide # on n0 >> docker ps # on all >> nodes >> docker ps -a # on all >> nodes >> docker images # on all >> nodes >> >> Also note that artifacts-aic.atlantafoundry.com host where some of docker >> images are hosted is inaccessible at the moment, it could be that some >> images have not been downloaded successfully, team started mirroring images >> to dockerhub & quay.io. Right before the summit development was focused on >> making airship-in-a-bottle work, and migration from github/gerrithub to >> openstack foundation infrastructure is still ongoing. >> >> Apologize for the inconvenience, James. >> >> Thank you. >> >> > On 8 Jun 2018, at 21:18, James Devon wrote: >> > >> > Hi, >> > >> > After having tried airship-in-a-bottle, I'm trying to setup a Kubernetes >> > cluster and install openstack on it using promenade and armada. >> > >> > I'm using the basic example : >> > https://github.com/openstack/airship-promenade/tree/master/examples/basic, >> > I've changed the ip addresses to match my environment of 4 nodes (n0 for >> > genesis and then n1, n2, n3). 
>> > >> > At this point I'm facing only one problem: >> > promenade-api.ucp.svc.cluster.local is resolvable but the resolution is bad >> > (192.168.150.165). >> > So instead of promenade-api.ucp.svc.cluster.local, I used the pod >> > address with the good port and it is working fine. >> > >> > Then, I am able to make n1,n2 and n3 join the cluster and I use the >> > scripts from >> > https://github.com/openstack/openstack-helm/tree/master/tools/deployment/armada >> > to apply armada manifests. >> > >> > osh-infra is not able to start the ldap pod. >> > >> > Events: >> > Type Reason Age From >> > Message >> > ---- ------ ---- ---- >> > ------- >> > Warning FailedScheduling 6m (x4 over 6m) default-scheduler >> > pod has unbound PersistentVolumeClaims (repeated 3 times) >> > Normal Scheduled 6m default-scheduler >> > Successfully assigned ldap-0 to n3 >> > Normal SuccessfulAttachVolume 6m >> > attachdetach-controller AttachVolume.Attach succeeded for volume >> > "pvc-85d6770b-6b4c-11e8-b6e5-080027ee1df7" >> > Normal SuccessfulAttachVolume 6m >> > attachdetach-controller AttachVolume.Attach succeeded for volume >> > "pvc-85dc46a5-6b4c-11e8-b6e5-080027ee1df7" >> > Normal SuccessfulMountVolume 6m kubelet, n3 >> > MountVolume.SetUp succeeded for volume "ldap-token-txdqp" >> > Warning FailedMount 13s (x11 over 6m) kubelet, n3 >> > MountVolume.WaitForAttach failed for volume >> > "pvc-85dc46a5-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image status >> > with: (executable file not found in $PATH), rbd output: () >> > Warning FailedMount 12s (x11 over 6m) kubelet, n3 >> > MountVolume.WaitForAttach failed for volume >> > "pvc-85d6770b-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image status >> > with: (executable file not found in $PATH), rbd output: () >> > Warning FailedMount 2s (x3 over 4m) kubelet, n3 >> > Unable to mount volumes for pod >> > "ldap-0_osh-infra(85e73018-6b4c-11e8-b6e5-080027ee1df7)": timeout expired >> > waiting for volumes to attach or mount for pod "osh-infra"/"ldap-0". list of >> > unmounted volumes=[ldap-config ldap-data]. list of unattached >> > volumes=[ldap-config ldap-data ldap-token-txdqp] >> > >> > I've checked the logs of ceph-osd and there's also something: >> > >> > ceph-2.1 | starting osd.2 at - osd_data /var/lib/ceph/osd/ceph-2 >> > /var/lib/ceph/journal/journal.2 >> > ceph-2.1 | 2018-06-08 18:41:03.854234 7f7cd7d58e00 -1 journal >> > FileJournal::_open: disabling aio for non-block journal. Use >> > journal_force_aio to force use of aio anyway >> > ceph-2.1 | 2018-06-08 18:41:03.855499 7f7cd7d58e00 -1 journal >> > do_read_entry(8192): bad header magic >> > ceph-2.1 | 2018-06-08 18:41:03.855567 7f7cd7d58e00 -1 journal >> > do_read_entry(8192): bad header magic >> > ceph-2.1 | 2018-06-08 18:41:03.901804 7f7cd7d58e00 -1 osd.2 0 >> > log_to_monitors {default=true} >> > ceph-2.1 | 2018-06-08 18:41:04.529480 7f7cbb2c7700 -1 osd.2 0 waiting >> > for initial osdmap >> > >> > Did anybody encountered this error and how to fix it? 
>> > >> > Best, >> > James >> > >> > _______________________________________________ >> > Airship-discuss mailing list >> > Airship-discuss at lists.airshipit.org >> > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss >> > From prabhjot.lists at gmail.com Wed Jun 13 05:43:30 2018 From: prabhjot.lists at gmail.com (Prabhjot Singh Sethi) Date: Wed, 13 Jun 2018 11:13:30 +0530 Subject: [Airship-discuss] Trying out Airship In-Reply-To: References: Message-ID: It works, thanks for the support Regards Prabhjot On Tue, Jun 12, 2018, 19:14 Roman Gorshunov wrote: > Hello Prabhjot, > > Please, retry deployment. > > Artifactory repository is up and running again, and I have > successfully ran and completed airship-in-a-bottle deployment just > now. > It took 1h. 20min. on 4vCPU/20GB RAM VM running on a laptop with SSD disk. > Here are the logs of this successful deployment: > https://paste.ubuntu.com/p/MRmGrf8K3W/ > > The work to move images to publicly accessible docker containers > repository is ongoing, but you shouldn't notice it. > One more time, apologize for the inconvenience, and thank you very > much for trying Airship! > > Best regards, > -- > Roman Gorshunov > > > On Thu, Jun 7, 2018 at 6:19 PM, Prabhjot Singh Sethi > wrote: > > Thanks for update, sure will wait for it to be available. > > > > Regards, > > Prabhjot > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jgu at suse.com Wed Jun 13 06:50:42 2018 From: jgu at suse.com (James Gu) Date: Wed, 13 Jun 2018 00:50:42 -0600 Subject: [Airship-discuss] Trying out Airship In-Reply-To: References: Message-ID: <5B20BEC20200006C0002F803@prv-mh.provo.novell.com> It also worked for me the first time after a few retries across about a week! Thank you so much, Roman. Cheers James Gu >>> Prabhjot Singh Sethi 6/12/2018 10:43 PM >>> It works, thanks for the support Regards Prabhjot On Tue, Jun 12, 2018, 19:14 Roman Gorshunov wrote: Hello Prabhjot, Please, retry deployment. Artifactory repository is up and running again, and I have successfully ran and completed airship-in-a-bottle deployment just now. It took 1h. 20min. on 4vCPU/20GB RAM VM running on a laptop with SSD disk. Here are the logs of this successful deployment: https://paste.ubuntu.com/p/MRmGrf8K3W/ The work to move images to publicly accessible docker containers repository is ongoing, but you shouldn't notice it. One more time, apologize for the inconvenience, and thank you very much for trying Airship! Best regards, -- Roman Gorshunov On Thu, Jun 7, 2018 at 6:19 PM, Prabhjot Singh Sethi wrote: > Thanks for update, sure will wait for it to be available. > > Regards, > Prabhjot > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paye600 at gmail.com Wed Jun 13 09:42:58 2018 From: paye600 at gmail.com (Roman Gorshunov) Date: Wed, 13 Jun 2018 11:42:58 +0200 Subject: [Airship-discuss] Trying out Airship In-Reply-To: <5B20BEC20200006C0002F803@prv-mh.provo.novell.com> References: <5B20BEC20200006C0002F803@prv-mh.provo.novell.com> Message-ID: James, > It also worked for me the first time after a few retries across about a > week! Thank you so much, Roman. Great to hear that it worked well! Thanks to engineering and development teams. Feel free to test, explore, and ask questions. 
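For readers reaching this point, the diagnostics Roman requests elsewhere in this thread double as a quick post-deployment sanity pass; a minimal check from the genesis node, assuming kubectl has been configured there by the deployment:

kubectl get nodes
kubectl get pods --all-namespaces -o wide     # everything should be Running or Completed
sudo docker ps --format '{{.Names}}' | wc -l  # rough count of containers actually running on the node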
-- Roman Gorshunov From syed.shah at 12techllc.com Thu Jun 14 01:50:16 2018 From: syed.shah at 12techllc.com (syed.shah at 12techllc.com) Date: Wed, 13 Jun 2018 18:50:16 -0700 Subject: [Airship-discuss] "Unable to find image error" from airship-in-a-bottle script In-Reply-To: Message-ID: <20180613185016.b0b17c89765a8fc7873cf1d6ff5e7bf6.ce803dfcbb.mailapi@email02.godaddy.com> Thanks for the reply Roman. Looks like we have moved on to the next error :-) Seems to be failing on airship-ucp-postgresql now 2018-06-13 10:09:16.004 749 INFO armada.handlers.armada [-] Checking Pre/Post Actions 2018-06-13 10:09:16.004 749 INFO armada.handlers.armada [-] Checking upgrade chart diffs. 2018-06-13 10:09:16.036 749 INFO armada.handlers.armada [-] There are no updates found in this chart 2018-06-13 10:09:16.036 749 INFO armada.handlers.chartbuilder [-] Building dependency chart postgres-htk for release ucp-postgresql. 2018-06-13 10:09:16.058 749 INFO armada.handlers.armada [-] Installing release airship-ucp-postgresql in namespace ucp 2018-06-13 10:09:16.059 749 INFO armada.handlers.armada [-] Beginning Install, wait=True, timeout=600s 2018-06-13 10:19:16.844 749 ERROR armada.handlers.tiller [-] Error while installing release airship-ucp-postgresql: grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, release airship-ucp-postgresql failed: timed out waiting for the condition)> 2018-06-13 10:19:16.844 749 ERROR armada.handlers.tiller Traceback (most recent call last): 2018-06-13 10:19:16.844 749 ERROR armada.handlers.tiller File "/usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py", line 404, in install_release 2018-06-13 10:19:16.844 749 ERROR armada.handlers.tiller metadata=self.metadata) 2018-06-13 10:19:16.844 749 ERROR armada.handlers.tiller File "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 487, in __call__ 2018-06-13 10:19:16.844 749 ERROR armada.handlers.tiller return _end_unary_response_blocking(state, call, False, deadline) 2018-06-13 10:19:16.844 749 ERROR armada.handlers.tiller File "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 437, in _end_unary_response_blocking 2018-06-13 10:19:16.844 749 ERROR armada.handlers.tiller raise _Rendezvous(state, None, None, deadline) 2018-06-13 10:19:16.844 749 ERROR armada.handlers.tiller grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, release airship-ucp-postgresql failed: timed out waiting for the condition)> 2018-06-13 10:19:16.844 749 ERROR armada.handlers.tiller 2018-06-13 10:19:17.237 749 ERROR armada.cli [-] Caught internal exception: armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: airship-ucp-postgresql - Tiller Message: b'Release "airship-ucp-postgresql" failed: timed out waiting for the condition' 2018-06-13 10:19:17.237 749 ERROR armada.cli Traceback (most recent call last): 2018-06-13 10:19:17.237 749 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py", line 404, in install_release 2018-06-13 10:19:17.237 749 ERROR armada.cli metadata=self.metadata) 2018-06-13 10:19:17.237 749 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 487, in __call__ 2018-06-13 10:19:17.237 749 ERROR armada.cli return _end_unary_response_blocking(state, call, False, deadline) 2018-06-13 10:19:17.237 749 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 437, in _end_unary_response_blocking 2018-06-13 
10:19:17.237 749 ERROR armada.cli raise _Rendezvous(state, None, None, deadline) 2018-06-13 10:19:17.237 749 ERROR armada.cli grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, release airship-ucp-postgresql failed: timed out waiting for the condition)> 2018-06-13 10:19:17.237 749 ERROR armada.cli 2018-06-13 10:19:17.237 749 ERROR armada.cli During handling of the above exception, another exception occurred: 2018-06-13 10:19:17.237 749 ERROR armada.cli 2018-06-13 10:19:17.237 749 ERROR armada.cli Traceback (most recent call last): 2018-06-13 10:19:17.237 749 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/cli/__init__.py", line 40, in safe_invoke 2018-06-13 10:19:17.237 749 ERROR armada.cli self.invoke() 2018-06-13 10:19:17.237 749 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/cli/apply.py", line 221, in invoke 2018-06-13 10:19:17.237 749 ERROR armada.cli resp = armada.sync() 2018-06-13 10:19:17.237 749 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/handlers/armada.py", line 407, in sync 2018-06-13 10:19:17.237 749 ERROR armada.cli timeout=timer) 2018-06-13 10:19:17.237 749 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py", line 418, in install_release 2018-06-13 10:19:17.237 749 ERROR armada.cli raise ex.ReleaseException(release, status, 'Install') 2018-06-13 10:19:17.237 749 ERROR armada.cli armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: airship-ucp-postgresql - Tiller Message: b'Release "airship-ucp-postgresql" failed: timed out waiting for the condition' 2018-06-13 10:19:17.237 749 ERROR armada.cli 2018-06-13 10:19:30.996 1306 INFO armada.handlers.armada [-] Performing pre-flight operations. 2018-06-13 10:19:31.229 1306 INFO armada.handlers.armada [-] Purging failed release airship-ucp-postgresql before deployment 2018-06-13 10:19:31.230 1306 INFO armada.handlers.tiller [-] Uninstall airship-ucp-postgresql release with disable_hooks=False, purge=True flags 2018-06-13 10:20:15.684 1306 INFO armada.handlers.armada [-] Cloning repo: https://github.com/openstack/airship-promenade from branch: 6e87b82f514cc4160b09cf394646410a9d9fe2a7 2018-06-13 10:20:32.940 1306 INFO armada.handlers.armada [-] Cloning repo: https://git.openstack.org/openstack/openstack-helm from branch: f902cd14fac7de4c4c9f7d019191268a6b4e9601 2018-06-13 10:21:20.744 1306 INFO armada.handlers.armada [-] Cloning repo: https://git.openstack.org/openstack/openstack-helm-infra from branch: f402171e42356bc1e805782f1d7f090ce1f6ab17 2018-06-13 10:21:35.151 1306 INFO armada.handlers.armada [-] Cloning repo: https://git.openstack.org/openstack/openstack-helm from branch: refs/changes/80/569480/2 --------- Original Message --------- Subject: Re: [Airship-discuss] "Unable to find image error" from airship-in-a-bottle script From: "Roman Gorshunov" Date: 6/12/18 9:46 am To: syed.shah at 12techllc.com Cc: airship-discuss at lists.airshipit.org Hello Syed, Sorry for sending e-mail with wrong name too fast. Please, retry deployment. Artifactory repository is up and running after the downtime, and I have successfully ran and completed airship-in-a-bottle deployment just now. It took 1h. 20min. on 4vCPU/20GB RAM VM running on a laptop with SSD disk. Here are the logs of this successful deployment: https://paste.ubuntu.com/p/MRmGrf8K3W/ The work to move images to publicly accessible docker containers repository is ongoing, but you shouldn't notice it. 
One more time, apologize for the inconvenience, and thank you very much for trying Airship! Best regards, -- Roman Gorshunov On Mon, Jun 11, 2018 at 10:34 PM, wrote: > Hi, > > We are trying to deploy Airship in a bottle using the following commands, > but getting a timeout error on some docker image download: > > mkdir -p /root/deploy && cd "$_" > git clone https://git.openstack.org/openstack/airship-in-a-bottle > cd /root/deploy/airship-in-a-bottle/manifests/dev_single_node > ./airship-in-a-bottle.sh > > We are seeing the following error: > > Unable to find image > 'artifacts-aic.atlantafoundry.com/att-comdev/pegleg:ef47933903047339bd63fcfa265dfe4296e8a322' > locally > docker: Error response from daemon: Get > https://artifacts-aic.atlantafoundry.com/v1/_ping: dial tcp > 12.37.173.37:443: i/o timeout. > > Last Thursday this url was working but not anymore. > > Thanks for the help! > > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jr8586335 at gmail.com Thu Jun 14 08:26:13 2018 From: jr8586335 at gmail.com (James Devon) Date: Thu, 14 Jun 2018 10:26:13 +0200 Subject: [Airship-discuss] osh-infra ldap error with promenade + armada In-Reply-To: References: <25FADD30-4DA7-45A9-BFDE-ED03D3FA2705@gmail.com> Message-ID: Hi Roman, Thanks for your help. It seems that the ceph-common package solved everything. I have a working 7 nodes cluster now! Best, James On Tue, Jun 12, 2018 at 4:46 PM, Roman Gorshunov wrote: > Hello James, > > Sorry, it took a little bit too long for me to reply. I have talked to > devs, and Scott Hussey suggested that ceph-common package may not have > been installed: > https://github.com/openstack/airship-promenade/blob/master/ > examples/complete/HostSystem.yaml#L74 > > Also Pete Birley suggested that Armada multinode suggests to use 5 > nodes in ceph: > https://github.com/openstack/openstack-helm/blob/master/ > tools/deployment/armada/multinode/armada-ceph.yaml#L101 > > Give it a try, and let me know how it goes. > > Thank you! > > Best regards, > -- > Roman Gorshunov > > On Sat, Jun 9, 2018 at 6:50 PM, James Devon wrote: > > Hello Roman, > > > > Thanks for your answer. > > > > What is the state of promenade + armada right now? > > We are pretty excited about it. > > > > Here is the output you required: > > > > 1) kubectl get pods --all-namespaces -o wide > > https://pastebin.com/b4kWGpBs > > > > The only thing I see is the pod ldap-0 from the namespace osh-infra > > Here is the output of kubectl describe pod -n osh-infra ldap-0 > > https://pastebin.com/sSBVZnch > > > > I believe the interesting part is here: > > > > Events: > > Type Reason Age From Message > > ---- ------ ---- ---- ------- > > Warning FailedMount 12m (x576 over 21h) kubelet, n3 Unable to mount > > volumes for pod "ldap-0_osh-infra(85e73018- > 6b4c-11e8-b6e5-080027ee1df7)": > > timeout expired waiting for volumes to attach or mount for pod > > "osh-infra"/"ldap-0". list of unmounted volumes=[ldap-config ldap-data]. 
> > list of unattached volumes=[ldap-config ldap-data ldap-token-txdqp] > > Warning FailedMount 6m (x649 over 21h) kubelet, n3 > > MountVolume.WaitForAttach failed for volume > > "pvc-85dc46a5-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image > status > > with: (executable file not found in $PATH), rbd output: () > > Warning FailedMount 2m (x651 over 21h) kubelet, n3 > > MountVolume.WaitForAttach failed for volume > > "pvc-85d6770b-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image > status > > with: (executable file not found in $PATH), rbd output: () > > > > > > > > 2) docker ps on all nodes > > https://pastebin.com/CaqLEv70 > > > > 3) docker ps -a on all nodes > > https://pastebin.com/wvB0qkbh > > > > 4) docker images on all nodes > > https://pastebin.com/H07kEwqe > > > > Best, > > James > > > > On Sat, Jun 9, 2018 at 1:51 PM, Roman Gorshunov > wrote: > >> > >> Hello James, > >> > >> I didn’t see it before. Could you provide output of the following > >> commands, if you still have this environment running. It would allow to > get > >> understanding of what has been deployed and what has not: > >> kubectl get pods --all-namespaces -o wide # on n0 > >> docker ps # on all > >> nodes > >> docker ps -a # on all > >> nodes > >> docker images # on all > >> nodes > >> > >> Also note that artifacts-aic.atlantafoundry.com host where some of > docker > >> images are hosted is inaccessible at the moment, it could be that some > >> images have not been downloaded successfully, team started mirroring > images > >> to dockerhub & quay.io. Right before the summit development was > focused on > >> making airship-in-a-bottle work, and migration from github/gerrithub to > >> openstack foundation infrastructure is still ongoing. > >> > >> Apologize for the inconvenience, James. > >> > >> Thank you. > >> > >> > On 8 Jun 2018, at 21:18, James Devon wrote: > >> > > >> > Hi, > >> > > >> > After having tried airship-in-a-bottle, I'm trying to setup a > Kubernetes > >> > cluster and install openstack on it using promenade and armada. > >> > > >> > I'm using the basic example : > >> > https://github.com/openstack/airship-promenade/tree/master/ > examples/basic, > >> > I've changed the ip addresses to match my environment of 4 nodes (n0 > for > >> > genesis and then n1, n2, n3). > >> > > >> > At this point I'm facing only one problem: > >> > promenade-api.ucp.svc.cluster.local is resolvable but the resolution > is bad > >> > (192.168.150.165). > >> > So instead of promenade-api.ucp.svc.cluster.local, I used the pod > >> > address with the good port and it is working fine. > >> > > >> > Then, I am able to make n1,n2 and n3 join the cluster and I use the > >> > scripts from > >> > https://github.com/openstack/openstack-helm/tree/master/ > tools/deployment/armada > >> > to apply armada manifests. > >> > > >> > osh-infra is not able to start the ldap pod. 
> >> > > >> > Events: > >> > Type Reason Age From > >> > Message > >> > ---- ------ ---- ---- > >> > ------- > >> > Warning FailedScheduling 6m (x4 over 6m) > default-scheduler > >> > pod has unbound PersistentVolumeClaims (repeated 3 times) > >> > Normal Scheduled 6m > default-scheduler > >> > Successfully assigned ldap-0 to n3 > >> > Normal SuccessfulAttachVolume 6m > >> > attachdetach-controller AttachVolume.Attach succeeded for volume > >> > "pvc-85d6770b-6b4c-11e8-b6e5-080027ee1df7" > >> > Normal SuccessfulAttachVolume 6m > >> > attachdetach-controller AttachVolume.Attach succeeded for volume > >> > "pvc-85dc46a5-6b4c-11e8-b6e5-080027ee1df7" > >> > Normal SuccessfulMountVolume 6m kubelet, n3 > >> > MountVolume.SetUp succeeded for volume "ldap-token-txdqp" > >> > Warning FailedMount 13s (x11 over 6m) kubelet, n3 > >> > MountVolume.WaitForAttach failed for volume > >> > "pvc-85dc46a5-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image > status > >> > with: (executable file not found in $PATH), rbd output: () > >> > Warning FailedMount 12s (x11 over 6m) kubelet, n3 > >> > MountVolume.WaitForAttach failed for volume > >> > "pvc-85d6770b-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image > status > >> > with: (executable file not found in $PATH), rbd output: () > >> > Warning FailedMount 2s (x3 over 4m) kubelet, n3 > >> > Unable to mount volumes for pod > >> > "ldap-0_osh-infra(85e73018-6b4c-11e8-b6e5-080027ee1df7)": timeout > expired > >> > waiting for volumes to attach or mount for pod "osh-infra"/"ldap-0". > list of > >> > unmounted volumes=[ldap-config ldap-data]. list of unattached > >> > volumes=[ldap-config ldap-data ldap-token-txdqp] > >> > > >> > I've checked the logs of ceph-osd and there's also something: > >> > > >> > ceph-2.1 | starting osd.2 at - osd_data /var/lib/ceph/osd/ceph-2 > >> > /var/lib/ceph/journal/journal.2 > >> > ceph-2.1 | 2018-06-08 18:41:03.854234 7f7cd7d58e00 -1 journal > >> > FileJournal::_open: disabling aio for non-block journal. Use > >> > journal_force_aio to force use of aio anyway > >> > ceph-2.1 | 2018-06-08 18:41:03.855499 7f7cd7d58e00 -1 journal > >> > do_read_entry(8192): bad header magic > >> > ceph-2.1 | 2018-06-08 18:41:03.855567 7f7cd7d58e00 -1 journal > >> > do_read_entry(8192): bad header magic > >> > ceph-2.1 | 2018-06-08 18:41:03.901804 7f7cd7d58e00 -1 osd.2 0 > >> > log_to_monitors {default=true} > >> > ceph-2.1 | 2018-06-08 18:41:04.529480 7f7cbb2c7700 -1 osd.2 0 waiting > >> > for initial osdmap > >> > > >> > Did anybody encountered this error and how to fix it? > >> > > >> > Best, > >> > James > >> > > >> > _______________________________________________ > >> > Airship-discuss mailing list > >> > Airship-discuss at lists.airshipit.org > >> > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jr8586335 at gmail.com Thu Jun 14 08:36:45 2018 From: jr8586335 at gmail.com (James Devon) Date: Thu, 14 Jun 2018 10:36:45 +0200 Subject: [Airship-discuss] Deployment documents Message-ID: Hello, 1) Is there any existing example of deployment documents (site, global? There is some documentation here : https://treasuremap.readthedocs.io/en/latest/deployment.html#building-site-documents and https://pegleg.readthedocs.io/en/latest/artifacts.html but a concrete example would really help people outside of ATT to get started and understand it. 
I see that there are Jenkins pipelines downloading some files here : https://github.com/att-comdev/cicd/blob/master/integration/genesis-full/Jenkinsfile I have a 7 nodes cluster that I deployed using the default values here : https://github.com/openstack/openstack-helm/tree/master/tools/deployment/armada and I'm really happy with Airship but I would like to go further :) 2) Also, how to add another node in an existing cluster after genesis and openstack deployed on it? Best, James -------------- next part -------------- An HTML attachment was scrubbed... URL: From paye600 at gmail.com Thu Jun 14 08:52:23 2018 From: paye600 at gmail.com (Roman Gorshunov) Date: Thu, 14 Jun 2018 10:52:23 +0200 Subject: [Airship-discuss] osh-infra ldap error with promenade + armada In-Reply-To: References: <25FADD30-4DA7-45A9-BFDE-ED03D3FA2705@gmail.com> Message-ID: James, On Thu, Jun 14, 2018 at 10:26 AM, James Devon wrote: > It seems that the ceph-common package solved everything. > I have a working 7 nodes cluster now! Great to hear that! Best regards, -- Roman Gorshunov From paye600 at gmail.com Thu Jun 14 09:34:12 2018 From: paye600 at gmail.com (Roman Gorshunov) Date: Thu, 14 Jun 2018 11:34:12 +0200 Subject: [Airship-discuss] "Unable to find image error" from airship-in-a-bottle script In-Reply-To: <20180613185016.b0b17c89765a8fc7873cf1d6ff5e7bf6.ce803dfcbb.mailapi@email02.godaddy.com> References: <20180613185016.b0b17c89765a8fc7873cf1d6ff5e7bf6.ce803dfcbb.mailapi@email02.godaddy.com> Message-ID: Hello Syed, 1. Have you been running on a completely clean server/VM? 2. Please, post full airship-in-a-bottle installation log to https://paste.ubuntu.com/ 3. How much CPU/RAM capacity does this server/VM has? Best regards, -- Roman Gorshunov On Thu, Jun 14, 2018 at 3:50 AM, wrote: > ... > 2018-06-13 10:09:16.059 749 INFO armada.handlers.armada [-] Beginning > Install, wait=True, timeout=600s > 2018-06-13 10:19:16.844 749 ERROR armada.handlers.tiller [-] Error while > installing release airship-ucp-postgresql: grpc._channel._Rendezvous: > <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, release > airship-ucp-postgresql failed: timed out waiting for the condition)> ... > 2018-06-13 10:19:16.844 749 ERROR armada.handlers.tiller File > "/usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py", line > 404, in install_release > 2018-06-13 10:19:16.844 749 ERROR armada.handlers.tiller > metadata=self.metadata) > 2018-06-13 10:19:16.844 749 ERROR armada.handlers.tiller File > "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 487, in > __call__ From mark.m.burnett at gmail.com Thu Jun 14 13:28:22 2018 From: mark.m.burnett at gmail.com (Mark Burnett) Date: Thu, 14 Jun 2018 08:28:22 -0500 Subject: [Airship-discuss] Deployment documents In-Reply-To: References: Message-ID: Hey James, Thanks for giving it a try :) You should be able to find a decent single node example here: https://github.com/openstack/airship-in-a-bottle/tree/master/deployment_files There is also a slightly dated, but still functional multi-node example here: https://github.com/att-comdev/treasuremap/tree/master/deployment_files Also, please feel free to drop by #airshipit on irc for help too. Best, Mark On Thu, Jun 14, 2018 at 3:36 AM James Devon wrote: > Hello, > > 1) Is there any existing example of deployment documents (site, global? 
> There is some documentation here : > https://treasuremap.readthedocs.io/en/latest/deployment.html#building-site-documents > and https://pegleg.readthedocs.io/en/latest/artifacts.html but a concrete > example would really help people outside of ATT to get started and > understand it. > > I see that there are Jenkins pipelines downloading some files here : > https://github.com/att-comdev/cicd/blob/master/integration/genesis-full/Jenkinsfile > > I have a 7 nodes cluster that I deployed using the default values here : > https://github.com/openstack/openstack-helm/tree/master/tools/deployment/armada > and I'm really happy with Airship but I would like to go further :) > > 2) Also, how to add another node in an existing cluster after genesis and > openstack deployed on it? > > Best, > James > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From manoj.meka at imaginea.com Mon Jun 4 12:43:56 2018 From: manoj.meka at imaginea.com (Manoj Meka) Date: Mon, 04 Jun 2018 12:43:56 -0000 Subject: [Airship-discuss] Looking to integrate drydock with AWS Message-ID: Hi, I'm trying to integrate drydock with AWS. From Drydock i would like to provision resources in AWS. Any suggestions as to where to begin within drydock, would be helpful. Regards, Manoj. -- Disclaimer: The contents of this email and any attachments are confidential. They are intended for the named recipient(s) only. If you have received this email by mistake, please notify the sender immediately and do not disclose the contents to anyone or make copies thereof.  -------------- next part -------------- An HTML attachment was scrubbed... URL: From james at openstack.org Tue Jun 19 18:33:36 2018 From: james at openstack.org (James Cole) Date: Tue, 19 Jun 2018 11:33:36 -0700 Subject: [Airship-discuss] Airship Logo Message-ID: <195AB6DB-62B4-4C4E-B90A-AECF743F2B2A@openstack.org> Hi Airship Crew, I’m James, a graphic designer with the OpenStack Foundation, and I’ve been working with the Airship leadership team to design a new logo. The concept is pretty accessible, with an icon of a stylized airship silhouette flying across a capital “A” shape. It works well both stacked and horizontally and in single or multiple colors. The color palette incorporates the teal and sky tones from the current landing page. I’ve attached a PDF showing the logo in a few different formats, as well as a couple of mockups to show you some practical applications. In case the attached document does not make it do you, you can view it on Dropbox here.  We’d love to hear your thoughts on this design or if you have any other feedback you think might be important for the Airship brand. Thank you! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Airship_Logo.pdf Type: application/pdf Size: 523313 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From paye600 at gmail.com Tue Jun 19 19:31:19 2018 From: paye600 at gmail.com (Roman Gorshunov) Date: Tue, 19 Jun 2018 21:31:19 +0200 Subject: [Airship-discuss] Looking to integrate drydock with AWS In-Reply-To: References: Message-ID: Hello Manoj, Thank you for your interest in Drydock. 
This is currently not supported. Drydock uses a topology definition to configure downstream drivers (at the moment – MAAS, developed by Canonical) to provision baremetal nodes: network attachment, network addressing, local storage, kernel selection and configuration and metadata. Here is a link to the Drydock MAAS driver: https://github.com/openstack/airship-drydock/tree/master/drydock_provisioner/drivers/node/maasdriver . You may check the code and think of implementing similar functionality for AWS. I hope it would help you. Infrastructure built on AWS significantly differs from infrastructure built on bare metal servers. AWS mostly uses pre-built OS images, and networking concepts also differ from data center networking. Best regards, -- Roman Gorshunov On Mon, Jun 4, 2018 at 2:43 PM, Manoj Meka wrote: > Hi, > > I'm trying to integrate drydock with AWS. From Drydock i would like to > provision resources in AWS. Any suggestions as to where to begin within > drydock, would be helpful. > > Regards, > Manoj. > > > Disclaimer: > The contents of this email and any attachments are confidential. They are > intended for the named recipient(s) only. If you have received this email by > mistake, please notify the sender immediately and do not disclose the > contents to anyone or make copies thereof. > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss From fzdarsky at redhat.com Tue Jun 19 19:32:13 2018 From: fzdarsky at redhat.com (Frank Zdarsky) Date: Tue, 19 Jun 2018 21:32:13 +0200 Subject: [Airship-discuss] Airship Logo In-Reply-To: <195AB6DB-62B4-4C4E-B90A-AECF743F2B2A@openstack.org> References: <195AB6DB-62B4-4C4E-B90A-AECF743F2B2A@openstack.org> Message-ID: Hi James, I like the idea of an airship flying across the capital "A". TBH, though, if you hadn't mentioned the airship or added the "airship" wordmark, I wouldn't have recognized it. The logo is also a bit busy for my taste. Would it make sense to fill the whole airship in the foreground color (of the "A") and just do its outline in the background color? Maybe also leaving out the "jet"? Cheers -- Frank On Tue, Jun 19, 2018 at 8:48 PM James Cole wrote: > Hi Airship Crew, > > I’m James, a graphic designer with the OpenStack Foundation, and I’ve been > working with the Airship leadership team to design a new logo. > > The concept is pretty accessible, with an icon of a stylized airship > silhouette flying across a capital “A” shape. It works well both stacked > and horizontally and in single or multiple colors. The color palette > incorporates the teal and sky tones from the current landing page. > > I’ve attached a PDF showing the logo in a few different formats, as well > as a couple of mockups to show you some practical applications. In case the > attached document does not make it do you, you can view it on Dropbox > here. > > We’d love to hear your thoughts on this design or if you have any other > feedback you think might be important for the Airship brand. > > Thank you! 
> > *James Cole* > Graphic Designer > OpenStack Foundation > > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss -- Frank Zdarsky | NFV&SDN Technology Strategy, Office of the CTO | Red Hat e: fzdarsky at redhat.com | irc: fzdarsky at freenode | m: +49 175 82 11 64 4 -------------- next part -------------- An HTML attachment was scrubbed... URL: From james at openstack.org Tue Jun 19 21:16:07 2018 From: james at openstack.org (James Cole) Date: Tue, 19 Jun 2018 14:16:07 -0700 Subject: [Airship-discuss] Airship Logo In-Reply-To: References: <195AB6DB-62B4-4C4E-B90A-AECF743F2B2A@openstack.org> Message-ID: Thanks for the feedback, Frank! I understand your point about not recognizing the shape without the word mark—that was something we struggled with when designing since “airships” or blimps don’t tend to be very recognizable as shapes on their own without being more illustrative than something that works well as a logo. They could easily be mistaken for a torpedo or an unusual American football. Encapsulating the airship in an “A” shape helps give it some visual interest without leaning too hard on the airship shape alone. Also, we wanted to avoid using the airship shape on its own as to avoid brand confusion with Urban Airship. In terms of simplifying the mark, that is certainly something we can explore. Although, the limited complexity in the mark as it stands gives the airship some dimension, both to suggest the airship shape is round and to reference the negative space you might find in the cutout of a capital “A” letter. The jet was to help give the airship shape some perspective, but we can certainly show it without the jet if thats something people would like to see. -James > On Jun 19, 2018, at 12:32 PM, Frank Zdarsky wrote: > > Hi James, > > I like the idea of an airship flying across the capital "A". TBH, though, if you hadn't mentioned the airship or added the "airship" wordmark, I wouldn't have recognized it. The logo is also a bit busy for my taste. > Would it make sense to fill the whole airship in the foreground color (of the "A") and just do its outline in the background color? Maybe also leaving out the "jet"? > > Cheers -- Frank > > On Tue, Jun 19, 2018 at 8:48 PM James Cole > wrote: > Hi Airship Crew, > > I’m James, a graphic designer with the OpenStack Foundation, and I’ve been working with the Airship leadership team to design a new logo. > > The concept is pretty accessible, with an icon of a stylized airship silhouette flying across a capital “A” shape. It works well both stacked and horizontally and in single or multiple colors. The color palette incorporates the teal and sky tones from the current landing page. > > I’ve attached a PDF showing the logo in a few different formats, as well as a couple of mockups to show you some practical applications. In case the attached document does not make it do you, you can view it on Dropbox here.  > > We’d love to hear your thoughts on this design or if you have any other feedback you think might be important for the Airship brand. > > Thank you! 
> > James Cole > Graphic Designer > OpenStack Foundation > > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss > > -- > Frank Zdarsky | NFV&SDN Technology Strategy, Office of the CTO | Red Hat > e: fzdarsky at redhat.com | irc: fzdarsky at freenode | m: +49 175 82 11 64 4 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dd7022 at att.com Wed Jun 20 17:01:51 2018 From: dd7022 at att.com (KATARIA, DEEPAK) Date: Wed, 20 Jun 2018 17:01:51 +0000 Subject: [Airship-discuss] Libvirt Issue with AirShip Message-ID: <90BF8249EF30DB4A83C0F4D241F3D72C279D3718@MISOUT7MSGUSRCC.ITServices.sbc.com> Hello, I had good run of the airship code today. Unfortunately libvirt container failed to come up and makes nova resources unavailable. So my test-stack deployment failed to create a VM. I am sharing the output from my deployment. I need some to fix this issue. Here are the status of cluster Successfully performed deploy_site + echo -e '\n' + break + [[ Complete == \C\o\m\p\l\e\t\e ]] + exit 0 + [[ 40 -ge 40 ]] + execute_create_heat_stack + set +x Performing basic sanity checks by creating heat stacks + cd /root/deploy/airship-in-a-bottle/manifests/dev_single_node + bash test_create_heat_stack.sh Creating KeyPair Downloading heat-public-net-deployment.yaml % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 895 100 895 0 0 3670 0 --:--:-- --:--:-- --:--:-- 3683 Creating public-net Heat Stack 2018-06-20 00:59:09Z [public-net]: CREATE_IN_PROGRESS Stack CREATE started 2018-06-20 00:59:09Z [public-net.public_net]: CREATE_IN_PROGRESS state changed 2018-06-20 00:59:09Z [public-net.public_net]: CREATE_COMPLETE state changed 2018-06-20 00:59:09Z [public-net.private_subnet]: CREATE_IN_PROGRESS state changed 2018-06-20 00:59:09Z [public-net.private_subnet]: CREATE_COMPLETE state changed 2018-06-20 00:59:09Z [public-net]: CREATE_COMPLETE Stack CREATE completed successfully +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | id | a2c34493-1f93-46bf-8483-a6ec8ca097db | | stack_name | public-net | | description | No description | | creation_time | 2018-06-20T00:59:08Z | | updated_time | None | | stack_status | CREATE_COMPLETE | | stack_status_reason | Stack CREATE completed successfully | +---------------------+--------------------------------------+ Downloading heat-basic-vm-deployment.yaml % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 2361 100 2361 0 0 8147 0 --:--:-- --:--:-- --:--:-- 8141 Errors: 2018-06-20 01:00:46Z [test-stack-01.server]: CREATE_FAILED ResourceInError: resources.server: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" 2018-06-20 01:00:46Z [test-stack-01]: CREATE_FAILED Resource CREATE failed: ResourceInError: resources.server: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" Stack test-stack-01 CREATE_FAILED + error 'creating heat stack' + set +x Error when creating heat stack. libvirt-nf45j 0/1 CrashLoopBackOff 39 2h nova-compute-default-59zkh 0/1 CrashLoopBackOff 25 2h -------------- next part -------------- An HTML attachment was scrubbed... 
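The heat-stack failure above is a downstream symptom: "No valid host was found" simply means nova-scheduler has no healthy hypervisor to place the VM on, which is expected while the libvirt and nova-compute pods sit in CrashLoopBackOff. A short triage sketch, assuming the single-node 'openstack' namespace and pod names shown in this thread, and an admin OpenStack client environment for the last two commands:

  # Confirm which compute-related pods are unhealthy
  kubectl -n openstack get pods -o wide | grep -E 'libvirt|nova-compute'

  # Logs from the current and the previously crashed container instances
  kubectl -n openstack logs libvirt-nf45j
  kubectl -n openstack logs libvirt-nf45j --previous
  kubectl -n openstack describe pod libvirt-nf45j

  # On the host: if this runs inside a VM, verify virtualization is actually available,
  # otherwise libvirt/qemu-kvm has nothing to run on
  ls -l /dev/kvm
  egrep -c '(vmx|svm)' /proc/cpuinfo

  # From an OpenStack client: check whether any hypervisor registered at all
  openstack hypervisor list
  openstack compute service list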
URL: From paye600 at gmail.com Wed Jun 20 17:30:39 2018 From: paye600 at gmail.com (Roman Gorshunov) Date: Wed, 20 Jun 2018 19:30:39 +0200 Subject: [Airship-discuss] Libvirt Issue with AirShip In-Reply-To: <90BF8249EF30DB4A83C0F4D241F3D72C279D3718@MISOUT7MSGUSRCC.ITServices.sbc.com> References: <90BF8249EF30DB4A83C0F4D241F3D72C279D3718@MISOUT7MSGUSRCC.ITServices.sbc.com> Message-ID: Hello Kataria, Please, provide 'kubectl logs' and 'kubectl describe' from pods failing with CrashLoopBackOff. Thank you. Best regards, -- Roman Gorshunov On Wed, Jun 20, 2018 at 7:01 PM, KATARIA, DEEPAK wrote: > > > Hello, > > I had good run of the airship code today. Unfortunately libvirt container > failed to come up and makes nova resources unavailable. So my test-stack > deployment failed to create a VM. > > I am sharing the output from my deployment. I need some to fix this issue. > Here are the status of cluster > > > > Successfully performed deploy_site > > + echo -e '\n' > > > > > > + break > > + [[ Complete == \C\o\m\p\l\e\t\e ]] > > + exit 0 > > + [[ 40 -ge 40 ]] > > + execute_create_heat_stack > > + set +x > > > > Performing basic sanity checks by creating heat stacks > > > > + cd /root/deploy/airship-in-a-bottle/manifests/dev_single_node > > + bash test_create_heat_stack.sh > > > > Creating KeyPair > > Downloading heat-public-net-deployment.yaml > > % Total % Received % Xferd Average Speed Time Time Time > Current > > Dload Upload Total Spent Left > Speed > > 100 895 100 895 0 0 3670 0 --:--:-- --:--:-- --:--:-- > 3683 > > Creating public-net Heat Stack > > 2018-06-20 00:59:09Z [public-net]: CREATE_IN_PROGRESS Stack CREATE started > > 2018-06-20 00:59:09Z [public-net.public_net]: CREATE_IN_PROGRESS state > changed > > 2018-06-20 00:59:09Z [public-net.public_net]: CREATE_COMPLETE state changed > > 2018-06-20 00:59:09Z [public-net.private_subnet]: CREATE_IN_PROGRESS state > changed > > 2018-06-20 00:59:09Z [public-net.private_subnet]: CREATE_COMPLETE state > changed > > 2018-06-20 00:59:09Z [public-net]: CREATE_COMPLETE Stack CREATE completed > successfully > > +---------------------+--------------------------------------+ > > | Field | Value | > > +---------------------+--------------------------------------+ > > | id | a2c34493-1f93-46bf-8483-a6ec8ca097db | > > | stack_name | public-net | > > | description | No description | > > | creation_time | 2018-06-20T00:59:08Z | > > | updated_time | None | > > | stack_status | CREATE_COMPLETE | > > | stack_status_reason | Stack CREATE completed successfully | > > +---------------------+--------------------------------------+ > > Downloading heat-basic-vm-deployment.yaml > > % Total % Received % Xferd Average Speed Time Time Time > Current > > Dload Upload Total Spent Left > Speed > > 100 2361 100 2361 0 0 8147 0 --:--:-- --:--:-- --:--:-- > 8141 > > > > Errors: > > > > 2018-06-20 01:00:46Z [test-stack-01.server]: CREATE_FAILED ResourceInError: > resources.server: Went to status ERROR due to "Message: No valid host was > found. There are not enough hosts available., Code: 500" > 2018-06-20 01:00:46Z [test-stack-01]: CREATE_FAILED Resource CREATE failed: > ResourceInError: resources.server: Went to status ERROR due to "Message: No > valid host was found. There are not enough hosts available., Code: 500" > > Stack test-stack-01 CREATE_FAILED > > + error 'creating heat stack' > + set +x > Error when creating heat stack. 
> > > > libvirt-nf45j 0/1 > CrashLoopBackOff 39 2h > > nova-compute-default-59zkh 0/1 > CrashLoopBackOff 25 2h > > > > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss From dd7022 at att.com Wed Jun 20 20:21:52 2018 From: dd7022 at att.com (KATARIA, DEEPAK) Date: Wed, 20 Jun 2018 20:21:52 +0000 Subject: [Airship-discuss] Libvirt Issue with AirShip In-Reply-To: References: <90BF8249EF30DB4A83C0F4D241F3D72C279D3718@MISOUT7MSGUSRCC.ITServices.sbc.com> Message-ID: <90BF8249EF30DB4A83C0F4D241F3D72C279D3814@MISOUT7MSGUSRCC.ITServices.sbc.com> Hi Roman: Please see requested logs below. Regards, Deepak 'kubectl logs' root at csonjrsv45:~/deploy# kubectl logs -n openstack libvirt-nf45j ++ grep libvirtd ++ cat /proc/1/comm /proc/10/comm /proc/100/comm /proc/10002/comm /proc/10003/comm /proc/10025/comm /proc/10042/comm /proc/10057/comm /proc/10066/comm /proc/10085/comm /proc/101/comm /proc/10109/comm /proc/10172/comm /proc/10188/comm /proc/102/comm /proc/10205/comm /proc/10217/comm /proc/103/comm /proc/10303/comm /proc/10323/comm /proc/10363/comm /proc/104/comm /proc/1043/comm /proc/1044/comm /proc/10447/comm /proc/10466/comm /proc/10476/comm /proc/10495/comm /proc/105/comm /proc/106/comm /proc/10632/comm /proc/10635/comm /proc/10637/comm /proc/10640/comm /proc/10642/comm /proc/10644/comm /proc/10646/comm /proc/10648/comm /proc/10650/comm /proc/10652/comm /proc/10654/comm /proc/10656/comm /proc/10664/comm /proc/10682/comm /proc/10694/comm /proc/10696/comm /proc/107/comm /proc/10711/comm /proc/10726/comm /proc/10734/comm /proc/10736/comm /proc/10738/comm /proc/10740/comm /proc/10743/comm /proc/10745/comm /proc/10748/comm /proc/10750/comm /proc/10752/comm /proc/10754/comm /proc/10756/comm /proc/10758/comm /proc/10760/comm /proc/10763/comm /proc/10765/comm /proc/10767/comm /proc/10769/comm /proc/10771/comm /proc/10774/comm /proc/108/comm /proc/10806/comm /proc/10821/comm /proc/10824/comm /proc/10826/comm /proc/10828/comm /proc/10830/comm /proc/10832/comm /proc/10834/comm /proc/10836/comm /proc/10838/comm /proc/10849/comm /proc/10875/comm /proc/10878/comm /proc/10881/comm /proc/10883/comm /proc/10886/comm /proc/10888/comm /proc/109/comm /proc/10956/comm /proc/10981/comm /proc/10996/comm /proc/11/comm /proc/110/comm /proc/11021/comm /proc/11028/comm /proc/11030/comm /proc/11033/comm /proc/11075/comm /proc/11083/comm /proc/111/comm /proc/1110/comm /proc/11127/comm /proc/11147/comm /proc/11175/comm /proc/112/comm /proc/11209/comm /proc/11221/comm /proc/11257/comm /proc/1126/comm /proc/113/comm /proc/11312/comm /proc/11324/comm /proc/11351/comm /proc/11376/comm /proc/11378/comm /proc/1138/comm /proc/11380/comm /proc/114/comm /proc/1140/comm /proc/1142/comm /proc/11421/comm /proc/1146/comm /proc/1147/comm /proc/115/comm /proc/11504/comm /proc/1152/comm /proc/1154/comm /proc/1157/comm /proc/11572/comm /proc/1158/comm /proc/116/comm /proc/11612/comm /proc/11671/comm /proc/1169/comm /proc/117/comm /proc/11734/comm /proc/11771/comm /proc/1178/comm /proc/11789/comm /proc/118/comm /proc/1181/comm /proc/11830/comm /proc/1187/comm /proc/119/comm /proc/1190/comm /proc/1192/comm /proc/11923/comm /proc/11936/comm /proc/1195/comm /proc/11952/comm /proc/11970/comm /proc/12/comm /proc/120/comm /proc/1201/comm /proc/1202/comm /proc/1203/comm /proc/1205/comm /proc/1206/comm /proc/121/comm /proc/1216/comm /proc/1217/comm /proc/122/comm /proc/1221/comm /proc/1222/comm /proc/1223/comm 
/proc/123/comm /proc/1232/comm /proc/1233/comm /proc/1234/comm /proc/1235/comm /proc/1239/comm /proc/124/comm /proc/1241/comm /proc/1244/comm /proc/1246/comm /proc/1247/comm /proc/125/comm /proc/126/comm /proc/1261/comm /proc/1262/comm /proc/1263/comm /proc/1264/comm /proc/1265/comm /proc/1266/comm /proc/1267/comm /proc/1268/comm /proc/1269/comm /proc/127/comm /proc/1275/comm /proc/128/comm /proc/1280/comm /proc/1281/comm /proc/1282/comm /proc/1283/comm /proc/1286/comm /proc/1288/comm /proc/129/comm /proc/1290/comm /proc/1291/comm /proc/1295/comm /proc/1297/comm /proc/1299/comm /proc/13/comm /proc/130/comm /proc/1300/comm /proc/1302/comm /proc/1303/comm /proc/1304/comm /proc/1305/comm /proc/1308/comm /proc/131/comm /proc/1310/comm /proc/1311/comm /proc/1312/comm /proc/1313/comm /proc/1314/comm /proc/1315/comm /proc/1317/comm /proc/1319/comm /proc/132/comm /proc/1321/comm /proc/1323/comm /proc/1324/comm /proc/1325/comm /proc/1326/comm /proc/1327/comm /proc/1328/comm /proc/1329/comm /proc/133/comm /proc/1330/comm /proc/1339/comm /proc/134/comm /proc/135/comm /proc/136/comm /proc/137/comm /proc/138/comm /proc/139/comm /proc/1391/comm /proc/14/comm /proc/140/comm /proc/141/comm /proc/1419/comm /proc/142/comm /proc/1420/comm /proc/1421/comm /proc/143/comm /proc/14322/comm /proc/1435/comm /proc/144/comm /proc/1447/comm /proc/1449/comm /proc/145/comm /proc/146/comm /proc/147/comm /proc/148/comm /proc/14823/comm /proc/149/comm /proc/15/comm /proc/150/comm /proc/151/comm /proc/15145/comm /proc/1516/comm /proc/152/comm /proc/153/comm /proc/154/comm /proc/155/comm /proc/156/comm /proc/157/comm /proc/1577/comm /proc/158/comm /proc/1582/comm /proc/159/comm /proc/1591/comm /proc/1595/comm /proc/16/comm /proc/160/comm /proc/161/comm /proc/162/comm /proc/163/comm /proc/1633/comm /proc/1634/comm /proc/1637/comm /proc/164/comm /proc/165/comm /proc/1653/comm /proc/16534/comm /proc/16542/comm /proc/16560/comm /proc/16575/comm /proc/166/comm /proc/16600/comm /proc/16608/comm /proc/16663/comm /proc/16683/comm /proc/167/comm /proc/16720/comm /proc/16735/comm /proc/16742/comm /proc/16764/comm /proc/16767/comm /proc/168/comm /proc/16815/comm /proc/169/comm /proc/17/comm /proc/170/comm /proc/17068/comm /proc/17085/comm /proc/171/comm /proc/17188/comm /proc/17189/comm /proc/172/comm /proc/173/comm /proc/17323/comm /proc/17394/comm /proc/174/comm /proc/17412/comm /proc/175/comm /proc/176/comm /proc/17619/comm /proc/17657/comm /proc/177/comm /proc/17704/comm /proc/17721/comm /proc/17782/comm /proc/178/comm /proc/17810/comm /proc/179/comm /proc/18/comm /proc/180/comm /proc/181/comm /proc/1811/comm /proc/1815/comm /proc/182/comm /proc/1820/comm /proc/18201/comm /proc/1822/comm /proc/1824/comm /proc/1825/comm /proc/1827/comm /proc/18270/comm /proc/18287/comm /proc/183/comm /proc/18301/comm /proc/18376/comm /proc/18397/comm /proc/184/comm /proc/185/comm /proc/18548/comm /proc/1855/comm /proc/18550/comm /proc/18552/comm /proc/18553/comm /proc/18554/comm /proc/18559/comm /proc/186/comm /proc/18609/comm /proc/18611/comm /proc/18612/comm /proc/18613/comm /proc/18615/comm /proc/18616/comm /proc/18617/comm /proc/18647/comm /proc/18651/comm /proc/18669/comm /proc/18687/comm /proc/18688/comm /proc/187/comm /proc/18705/comm /proc/18724/comm /proc/18757/comm /proc/18794/comm /proc/188/comm /proc/18823/comm /proc/18863/comm /proc/18893/comm /proc/189/comm /proc/18906/comm /proc/18939/comm /proc/18963/comm /proc/18964/comm /proc/18974/comm /proc/19/comm /proc/190/comm /proc/19006/comm /proc/19077/comm /proc/191/comm /proc/19111/comm 
/proc/19167/comm /proc/19195/comm /proc/192/comm /proc/19241/comm /proc/19279/comm /proc/193/comm /proc/19356/comm /proc/194/comm /proc/19480/comm /proc/195/comm /proc/19510/comm /proc/19551/comm /proc/196/comm /proc/19620/comm /proc/19648/comm /proc/19679/comm /proc/19691/comm /proc/19692/comm /proc/19693/comm /proc/19694/comm /proc/19695/comm /proc/19696/comm /proc/197/comm /proc/19710/comm /proc/19766/comm /proc/19777/comm /proc/198/comm /proc/19800/comm /proc/19835/comm /proc/19886/comm /proc/199/comm /proc/19957/comm /proc/1997/comm /proc/2/comm /proc/20/comm /proc/200/comm /proc/20034/comm /proc/20082/comm /proc/201/comm /proc/20150/comm /proc/20190/comm /proc/202/comm /proc/20206/comm /proc/20217/comm /proc/20247/comm /proc/20248/comm /proc/20281/comm /proc/203/comm /proc/20309/comm /proc/20310/comm /proc/20322/comm /proc/204/comm /proc/20406/comm /proc/20481/comm /proc/205/comm /proc/20566/comm /proc/206/comm /proc/20601/comm /proc/20635/comm /proc/20648/comm /proc/207/comm /proc/20716/comm /proc/20745/comm /proc/208/comm /proc/20845/comm /proc/209/comm /proc/20929/comm /proc/20964/comm /proc/21/comm /proc/210/comm /proc/21007/comm /proc/21028/comm /proc/21053/comm /proc/21084/comm /proc/211/comm /proc/21127/comm /proc/21162/comm /proc/21163/comm /proc/21164/comm /proc/21165/comm /proc/212/comm /proc/21277/comm /proc/213/comm /proc/21310/comm /proc/21339/comm /proc/214/comm /proc/21425/comm /proc/21461/comm /proc/215/comm /proc/216/comm /proc/21616/comm /proc/217/comm /proc/218/comm /proc/21816/comm /proc/21840/comm /proc/219/comm /proc/21908/comm /proc/21926/comm /proc/2199/comm /proc/22/comm /proc/220/comm /proc/221/comm /proc/22110/comm /proc/22111/comm /proc/22112/comm /proc/22132/comm /proc/22133/comm /proc/22156/comm /proc/222/comm /proc/22205/comm /proc/223/comm /proc/2235/comm /proc/224/comm /proc/22458/comm /proc/22480/comm /proc/225/comm /proc/22542/comm /proc/226/comm /proc/22606/comm /proc/227/comm /proc/228/comm /proc/229/comm /proc/23/comm /proc/230/comm /proc/2307/comm /proc/23074/comm /proc/231/comm /proc/2311/comm /proc/232/comm /proc/2320/comm /proc/23269/comm /proc/23276/comm /proc/23292/comm /proc/233/comm /proc/2336/comm /proc/234/comm /proc/23425/comm /proc/23473/comm /proc/235/comm /proc/23520/comm /proc/23538/comm /proc/236/comm /proc/23601/comm /proc/2362/comm /proc/237/comm /proc/238/comm /proc/23824/comm /proc/23843/comm /proc/239/comm /proc/23903/comm /proc/23922/comm /proc/24/comm /proc/240/comm /proc/24037/comm /proc/24038/comm /proc/2407/comm /proc/24072/comm /proc/24077/comm /proc/2408/comm /proc/241/comm /proc/24159/comm /proc/242/comm /proc/24220/comm /proc/24221/comm /proc/24222/comm /proc/24279/comm /proc/24280/comm /proc/24290/comm /proc/243/comm /proc/244/comm /proc/24404/comm /proc/24438/comm /proc/24439/comm /proc/24441/comm /proc/24444/comm /proc/24445/comm /proc/24446/comm /proc/24447/comm /proc/24449/comm /proc/24450/comm /proc/24451/comm /proc/24452/comm /proc/24453/comm /proc/24454/comm /proc/24455/comm /proc/24465/comm /proc/24483/comm /proc/24490/comm /proc/24491/comm /proc/24494/comm /proc/24497/comm /proc/245/comm /proc/24505/comm /proc/24521/comm /proc/24533/comm /proc/24559/comm /proc/2457/comm /proc/24573/comm /proc/24598/comm /proc/246/comm /proc/24622/comm /proc/24637/comm /proc/2465/comm /proc/24654/comm /proc/2467/comm /proc/24671/comm /proc/24680/comm /proc/247/comm /proc/24703/comm /proc/24732/comm /proc/24748/comm /proc/24755/comm /proc/24784/comm /proc/2479/comm /proc/24797/comm /proc/24799/comm /proc/248/comm 
/proc/24800/comm /proc/24805/comm /proc/24828/comm /proc/24871/comm /proc/249/comm /proc/24955/comm /proc/25/comm /proc/250/comm /proc/25049/comm /proc/2508/comm /proc/25082/comm /proc/251/comm /proc/25109/comm /proc/25152/comm /proc/25176/comm /proc/25187/comm /proc/252/comm /proc/25220/comm /proc/25288/comm /proc/253/comm /proc/2530/comm /proc/2534/comm /proc/25341/comm /proc/25353/comm /proc/254/comm /proc/25417/comm /proc/25451/comm /proc/25464/comm /proc/255/comm /proc/25513/comm /proc/256/comm /proc/25603/comm /proc/25648/comm /proc/25688/comm /proc/257/comm /proc/25730/comm /proc/25737/comm /proc/25753/comm /proc/25771/comm /proc/25789/comm /proc/258/comm /proc/25840/comm /proc/25854/comm /proc/25855/comm /proc/25856/comm /proc/25874/comm /proc/25899/comm /proc/259/comm /proc/25967/comm /proc/26/comm /proc/260/comm /proc/26018/comm /proc/26055/comm /proc/261/comm /proc/26101/comm /proc/26138/comm /proc/262/comm /proc/26201/comm /proc/26216/comm /proc/26248/comm /proc/26261/comm /proc/26262/comm /proc/26263/comm /proc/26270/comm /proc/263/comm /proc/26302/comm /proc/26360/comm /proc/26386/comm /proc/264/comm /proc/26440/comm /proc/26469/comm /proc/265/comm /proc/266/comm /proc/2665/comm /proc/267/comm /proc/2674/comm /proc/268/comm /proc/269/comm /proc/27/comm /proc/270/comm /proc/271/comm /proc/272/comm /proc/273/comm /proc/2734/comm /proc/2739/comm /proc/274/comm /proc/2748/comm /proc/275/comm /proc/2753/comm /proc/2757/comm /proc/276/comm /proc/2763/comm /proc/2768/comm /proc/277/comm /proc/2773/comm /proc/27751/comm /proc/2778/comm /proc/278/comm /proc/27803/comm /proc/27818/comm /proc/279/comm /proc/27993/comm /proc/28/comm /proc/280/comm /proc/281/comm /proc/28185/comm /proc/282/comm /proc/28208/comm /proc/28264/comm /proc/283/comm /proc/284/comm /proc/285/comm /proc/28551/comm /proc/28570/comm /proc/286/comm /proc/28687/comm /proc/287/comm /proc/28705/comm /proc/28786/comm /proc/288/comm /proc/28800/comm /proc/28820/comm /proc/28835/comm /proc/28865/comm /proc/28866/comm /proc/28870/comm /proc/28872/comm /proc/28873/comm /proc/28874/comm /proc/28875/comm /proc/28876/comm /proc/28877/comm /proc/28878/comm /proc/28882/comm /proc/28883/comm /proc/28895/comm /proc/28896/comm /proc/289/comm /proc/28975/comm /proc/28978/comm /proc/28995/comm /proc/28999/comm /proc/29/comm /proc/290/comm /proc/29018/comm /proc/29034/comm /proc/29035/comm /proc/29036/comm /proc/29037/comm /proc/29042/comm /proc/29043/comm /proc/29044/comm /proc/29045/comm /proc/29046/comm /proc/29047/comm /proc/29048/comm /proc/29049/comm /proc/29055/comm /proc/29068/comm /proc/291/comm /proc/292/comm /proc/2927/comm /proc/293/comm /proc/2932/comm /proc/294/comm /proc/295/comm /proc/296/comm /proc/29664/comm /proc/297/comm /proc/2971/comm /proc/29728/comm /proc/2977/comm /proc/298/comm /proc/29806/comm /proc/29831/comm /proc/29865/comm /proc/29883/comm /proc/299/comm /proc/3/comm /proc/30/comm /proc/300/comm /proc/301/comm /proc/302/comm /proc/3025/comm /proc/30259/comm /proc/30261/comm /proc/30264/comm /proc/30265/comm /proc/303/comm /proc/3033/comm /proc/3034/comm /proc/304/comm /proc/305/comm /proc/30552/comm /proc/30572/comm /proc/30588/comm /proc/306/comm /proc/30607/comm /proc/30695/comm /proc/307/comm /proc/30713/comm /proc/30787/comm /proc/308/comm /proc/30804/comm /proc/309/comm /proc/30922/comm /proc/30923/comm /proc/30942/comm /proc/31/comm /proc/310/comm /proc/31017/comm /proc/31030/comm /proc/31083/comm /proc/311/comm /proc/31195/comm /proc/31198/comm /proc/312/comm /proc/31200/comm /proc/31271/comm 
/proc/31289/comm /proc/313/comm /proc/31345/comm /proc/31346/comm /proc/31347/comm /proc/31365/comm /proc/31385/comm /proc/314/comm /proc/315/comm /proc/31504/comm /proc/31509/comm /proc/31513/comm /proc/31522/comm /proc/31580/comm /proc/31584/comm /proc/31590/comm /proc/31596/comm /proc/316/comm /proc/317/comm /proc/31716/comm /proc/31749/comm /proc/318/comm /proc/319/comm /proc/32/comm /proc/320/comm /proc/32063/comm /proc/32089/comm /proc/321/comm /proc/32138/comm /proc/322/comm /proc/323/comm /proc/32309/comm /proc/32327/comm /proc/324/comm /proc/32467/comm /proc/32487/comm /proc/32493/comm /proc/325/comm /proc/32554/comm /proc/32572/comm /proc/32591/comm /proc/326/comm /proc/32679/comm /proc/327/comm /proc/32707/comm /proc/328/comm /proc/329/comm /proc/33/comm /proc/330/comm /proc/331/comm /proc/332/comm /proc/33276/comm /proc/333/comm /proc/33300/comm /proc/33313/comm /proc/33337/comm /proc/33375/comm /proc/334/comm /proc/33453/comm /proc/33482/comm /proc/33498/comm /proc/335/comm /proc/33507/comm /proc/33508/comm /proc/33535/comm /proc/33557/comm /proc/33577/comm /proc/33590/comm /proc/33597/comm /proc/33598/comm /proc/336/comm /proc/33617/comm /proc/33646/comm /proc/33650/comm /proc/33651/comm /proc/337/comm /proc/338/comm /proc/339/comm /proc/34/comm /proc/340/comm /proc/341/comm /proc/342/comm /proc/343/comm /proc/344/comm /proc/345/comm /proc/346/comm /proc/347/comm /proc/3471/comm /proc/3472/comm /proc/348/comm /proc/349/comm /proc/35/comm /proc/350/comm /proc/3505/comm /proc/3508/comm /proc/351/comm /proc/3511/comm /proc/352/comm /proc/353/comm /proc/354/comm /proc/355/comm /proc/356/comm /proc/3565/comm /proc/3566/comm /proc/357/comm /proc/3577/comm /proc/358/comm /proc/3589/comm /proc/359/comm /proc/3597/comm /proc/36/comm /proc/360/comm /proc/3600/comm /proc/361/comm /proc/3612/comm /proc/362/comm /proc/3622/comm /proc/363/comm /proc/364/comm /proc/365/comm /proc/3652/comm /proc/366/comm /proc/367/comm /proc/368/comm /proc/369/comm /proc/37/comm /proc/370/comm /proc/371/comm /proc/372/comm /proc/373/comm /proc/374/comm /proc/375/comm /proc/376/comm /proc/3760/comm /proc/377/comm /proc/378/comm /proc/379/comm /proc/38/comm /proc/380/comm /proc/381/comm /proc/382/comm /proc/3822/comm /proc/383/comm /proc/3832/comm /proc/3838/comm /proc/384/comm /proc/3840/comm /proc/3843/comm /proc/3845/comm /proc/385/comm /proc/3854/comm /proc/386/comm /proc/387/comm /proc/3878/comm /proc/388/comm /proc/3881/comm /proc/389/comm /proc/39/comm /proc/390/comm /proc/3908/comm /proc/391/comm /proc/392/comm /proc/393/comm /proc/394/comm /proc/395/comm /proc/396/comm /proc/397/comm /proc/398/comm /proc/399/comm /proc/4/comm /proc/40/comm /proc/400/comm /proc/401/comm /proc/402/comm /proc/403/comm /proc/404/comm /proc/405/comm /proc/406/comm /proc/407/comm /proc/408/comm /proc/409/comm /proc/41/comm /proc/410/comm /proc/411/comm /proc/412/comm /proc/413/comm /proc/414/comm /proc/415/comm /proc/416/comm /proc/417/comm /proc/418/comm /proc/419/comm /proc/42/comm /proc/420/comm /proc/421/comm /proc/422/comm /proc/423/comm /proc/424/comm /proc/425/comm /proc/426/comm /proc/427/comm /proc/428/comm /proc/429/comm /proc/43/comm /proc/430/comm /proc/4308/comm /proc/431/comm /proc/432/comm /proc/433/comm /proc/434/comm /proc/435/comm /proc/436/comm /proc/437/comm /proc/438/comm /proc/439/comm /proc/44/comm /proc/440/comm /proc/441/comm /proc/442/comm /proc/443/comm /proc/444/comm /proc/445/comm /proc/446/comm /proc/447/comm /proc/448/comm /proc/449/comm /proc/45/comm /proc/450/comm /proc/451/comm 
/proc/452/comm /proc/453/comm /proc/454/comm /proc/455/comm /proc/456/comm /proc/457/comm /proc/458/comm /proc/459/comm /proc/46/comm /proc/460/comm /proc/461/comm /proc/462/comm /proc/463/comm /proc/464/comm /proc/465/comm /proc/466/comm /proc/467/comm /proc/468/comm /proc/469/comm /proc/47/comm /proc/470/comm /proc/471/comm /proc/472/comm /proc/473/comm /proc/474/comm /proc/475/comm /proc/476/comm /proc/477/comm /proc/478/comm /proc/479/comm /proc/48/comm /proc/480/comm /proc/481/comm /proc/482/comm /proc/483/comm /proc/484/comm /proc/485/comm /proc/486/comm /proc/487/comm /proc/488/comm /proc/489/comm /proc/49/comm /proc/490/comm /proc/491/comm /proc/492/comm /proc/493/comm /proc/494/comm /proc/495/comm /proc/496/comm /proc/497/comm /proc/498/comm /proc/499/comm /proc/5/comm /proc/50/comm /proc/500/comm /proc/501/comm /proc/502/comm /proc/503/comm /proc/504/comm /proc/505/comm /proc/506/comm /proc/507/comm /proc/508/comm /proc/509/comm /proc/51/comm /proc/510/comm /proc/511/comm /proc/512/comm /proc/513/comm /proc/514/comm /proc/515/comm /proc/516/comm /proc/517/comm /proc/518/comm /proc/519/comm /proc/52/comm /proc/520/comm /proc/521/comm /proc/522/comm /proc/523/comm /proc/524/comm /proc/525/comm /proc/526/comm /proc/527/comm /proc/528/comm /proc/529/comm /proc/53/comm /proc/530/comm /proc/531/comm /proc/532/comm /proc/533/comm /proc/534/comm /proc/535/comm /proc/536/comm /proc/537/comm /proc/538/comm /proc/5380/comm /proc/5381/comm /proc/539/comm /proc/54/comm /proc/540/comm /proc/541/comm /proc/5414/comm /proc/5416/comm /proc/542/comm /proc/543/comm /proc/5433/comm /proc/544/comm /proc/5449/comm /proc/545/comm /proc/5451/comm /proc/546/comm /proc/547/comm /proc/548/comm /proc/5481/comm /proc/549/comm /proc/55/comm /proc/550/comm /proc/551/comm /proc/552/comm /proc/553/comm /proc/554/comm /proc/555/comm /proc/556/comm /proc/5563/comm /proc/557/comm /proc/558/comm /proc/5581/comm /proc/559/comm /proc/5598/comm /proc/56/comm /proc/560/comm /proc/561/comm /proc/5617/comm /proc/562/comm /proc/563/comm /proc/5635/comm /proc/564/comm /proc/565/comm /proc/5653/comm /proc/566/comm /proc/567/comm /proc/5674/comm /proc/568/comm /proc/569/comm /proc/5698/comm /proc/57/comm /proc/570/comm /proc/571/comm /proc/572/comm /proc/574/comm /proc/575/comm /proc/576/comm /proc/5761/comm /proc/5768/comm /proc/577/comm /proc/5787/comm /proc/58/comm /proc/5805/comm /proc/5844/comm /proc/5861/comm /proc/59/comm /proc/5909/comm /proc/5927/comm /proc/6/comm /proc/60/comm /proc/6085/comm /proc/61/comm /proc/6102/comm /proc/619/comm /proc/62/comm /proc/620/comm /proc/6200/comm /proc/621/comm /proc/6218/comm /proc/622/comm /proc/6245/comm /proc/626/comm /proc/6263/comm /proc/63/comm /proc/6327/comm /proc/6346/comm /proc/635/comm /proc/636/comm /proc/6379/comm /proc/6397/comm /proc/64/comm /proc/643/comm /proc/647/comm /proc/649/comm /proc/65/comm /proc/658/comm /proc/6587/comm /proc/66/comm /proc/661/comm /proc/6648/comm /proc/67/comm /proc/6734/comm /proc/675/comm /proc/6752/comm /proc/68/comm /proc/6811/comm /proc/6829/comm /proc/683/comm /proc/6853/comm /proc/6880/comm /proc/69/comm /proc/695/comm /proc/697/comm /proc/6970/comm /proc/698/comm /proc/6987/comm /proc/7/comm /proc/70/comm /proc/701/comm /proc/7027/comm /proc/7044/comm /proc/71/comm /proc/72/comm /proc/723/comm /proc/724/comm /proc/725/comm /proc/726/comm /proc/727/comm /proc/728/comm /proc/729/comm /proc/7296/comm /proc/73/comm /proc/730/comm /proc/731/comm /proc/7315/comm /proc/732/comm /proc/733/comm /proc/734/comm /proc/735/comm /proc/736/comm 
/proc/737/comm /proc/738/comm /proc/739/comm /proc/74/comm /proc/740/comm /proc/741/comm /proc/742/comm /proc/743/comm /proc/744/comm /proc/745/comm /proc/7456/comm /proc/746/comm /proc/747/comm /proc/7474/comm /proc/748/comm /proc/749/comm /proc/75/comm /proc/750/comm /proc/751/comm /proc/7519/comm /proc/752/comm /proc/753/comm /proc/7539/comm /proc/754/comm /proc/755/comm /proc/756/comm /proc/757/comm /proc/7572/comm /proc/758/comm /proc/759/comm /proc/7590/comm /proc/76/comm /proc/760/comm /proc/762/comm /proc/763/comm /proc/764/comm /proc/765/comm /proc/766/comm /proc/767/comm /proc/768/comm /proc/769/comm /proc/77/comm /proc/770/comm /proc/771/comm /proc/772/comm /proc/773/comm /proc/774/comm /proc/775/comm /proc/776/comm /proc/777/comm /proc/778/comm /proc/779/comm /proc/78/comm /proc/780/comm /proc/781/comm /proc/7818/comm /proc/782/comm /proc/783/comm /proc/784/comm /proc/7844/comm /proc/785/comm /proc/786/comm /proc/787/comm /proc/788/comm /proc/79/comm /proc/7968/comm /proc/7987/comm /proc/8/comm /proc/80/comm /proc/81/comm /proc/82/comm /proc/8204/comm /proc/83/comm /proc/8330/comm /proc/8348/comm /proc/8364/comm /proc/8365/comm /proc/84/comm /proc/8420/comm /proc/8433/comm /proc/8434/comm /proc/8452/comm /proc/8496/comm /proc/85/comm /proc/8507/comm /proc/8529/comm /proc/8537/comm /proc/86/comm /proc/8613/comm /proc/8624/comm /proc/8640/comm /proc/8647/comm /proc/8664/comm /proc/8665/comm /proc/8681/comm /proc/87/comm /proc/8707/comm /proc/8779/comm /proc/8780/comm /proc/88/comm /proc/8854/comm /proc/8896/comm /proc/89/comm /proc/893/comm /proc/894/comm /proc/9/comm /proc/90/comm /proc/9038/comm /proc/9060/comm /proc/91/comm /proc/9140/comm /proc/9157/comm /proc/92/comm /proc/923/comm /proc/924/comm /proc/925/comm /proc/926/comm /proc/927/comm /proc/9271/comm /proc/928/comm /proc/929/comm /proc/9291/comm /proc/93/comm /proc/930/comm /proc/931/comm /proc/932/comm /proc/9322/comm /proc/933/comm /proc/934/comm /proc/9342/comm /proc/935/comm /proc/936/comm /proc/9364/comm /proc/937/comm /proc/938/comm /proc/939/comm /proc/94/comm /proc/940/comm /proc/942/comm /proc/943/comm /proc/9433/comm /proc/944/comm /proc/945/comm /proc/9455/comm /proc/946/comm /proc/947/comm /proc/948/comm /proc/949/comm /proc/95/comm /proc/950/comm /proc/9503/comm /proc/9508/comm /proc/951/comm /proc/952/comm /proc/9527/comm /proc/953/comm /proc/954/comm /proc/955/comm /proc/9556/comm /proc/956/comm /proc/957/comm /proc/958/comm /proc/959/comm /proc/96/comm /proc/960/comm /proc/961/comm /proc/962/comm /proc/963/comm /proc/966/comm /proc/967/comm /proc/97/comm /proc/970/comm /proc/972/comm /proc/973/comm /proc/974/comm /proc/975/comm /proc/976/comm /proc/977/comm /proc/98/comm /proc/99/comm /proc/self/comm /proc/thread-self/comm + '[' -n '' ']' + rm -f /var/run/libvirtd.pid + [[ -c /dev/kvm ]] + chmod 660 /dev/kvm + chown root:kvm /dev/kvm + '[' -d /sys/kernel/mm/hugepages ']' ++ grep KVM_HUGEPAGES=0 /etc/default/qemu-kvm + '[' -n KVM_HUGEPAGES=0 ']' + sed -i 's/.*KVM_HUGEPAGES=0.*/KVM_HUGEPAGES=1/g' /etc/default/qemu-kvm + '[' -n '' ']' + exec libvirtd --listen root at csonjrsv45:~/deploy# 'kubectl describe' Name: libvirt-nf45j Namespace: openstack Node: csonjrsv45/192.168.2.45 Start Time: Tue, 19 Jun 2018 23:41:18 +0000 Labels: application=libvirt component=libvirt controller-revision-hash=1513478708 pod-template-generation=1 release_group=airship-openstack-libvirt Annotations: configmap-bin-hash=3e7239502aced35f903fd1f9dbafcdf161ec29da214a8cfaa4cc949e502e9b70 
configmap-etc-hash=b2f6ee1e2d5e131ac3bafaa83b4fa613df58c98ccebc2638332425430ab502de Status: Running IP: 192.168.2.45 Controlled By: DaemonSet/libvirt Init Containers: init: Container ID: docker://17b0d61203eddd72749f79f8ea6acf6f0b1bb2ae862c247978de0c84e0255d1f Image: quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 Image ID: docker-pullable://quay.io/stackanetes/kubernetes-entrypoint at sha256:32b1b657ee4bcc9cc7a1529e31d8e1a06376172373ee020f97f3e78168fde4b6 Port: Host Port: Command: kubernetes-entrypoint State: Terminated Reason: Completed Exit Code: 0 Started: Tue, 19 Jun 2018 23:41:19 +0000 Finished: Tue, 19 Jun 2018 23:41:19 +0000 Ready: True Restart Count: 0 Environment: POD_NAME: libvirt-nf45j (v1:metadata.name) NAMESPACE: openstack (v1:metadata.namespace) INTERFACE_NAME: eth0 PATH: usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin: DEPENDENCY_SERVICE: DEPENDENCY_DAEMONSET: DEPENDENCY_CONTAINER: DEPENDENCY_POD_JSON: COMMAND: echo done Mounts: /var/run/secrets/kubernetes.io/serviceaccount from libvirt-token-vzl8l (ro) Containers: libvirt: Container ID: docker://4732f11f52816376354cd6f7b2fe8b19020ee37992b91c7f291b1be7b26d3412 Image: docker.io/openstackhelm/libvirt:ubuntu-xenial-1.3.1 Image ID: docker-pullable://openstackhelm/libvirt at sha256:cb6a3612e1a7adab6c0ffd62ee8f4f9ef3ac1050e6c9576039709d0c2271118e Port: Host Port: Command: /tmp/libvirt.sh State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Wed, 20 Jun 2018 15:09:49 +0000 Finished: Wed, 20 Jun 2018 15:09:49 +0000 Ready: False Restart Count: 186 Environment: Mounts: /dev from dev (rw) /etc/libvirt/libvirtd.conf from libvirt-etc (ro) /etc/libvirt/qemu from etc-libvirt-qemu (rw) /etc/libvirt/qemu.conf from libvirt-etc (ro) /etc/machine-id from machine-id (ro) /lib/modules from libmodules (ro) /run from run (rw) /sys/fs/cgroup from cgroup (rw) /tmp/libvirt.sh from libvirt-bin (ro) /var/lib/libvirt from var-lib-libvirt (rw) /var/lib/nova from var-lib-nova (rw) /var/run/secrets/kubernetes.io/serviceaccount from libvirt-token-vzl8l (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: libvirt-bin: Type: ConfigMap (a volume populated by a ConfigMap) Name: libvirt-bin Optional: false libvirt-etc: Type: ConfigMap (a volume populated by a ConfigMap) Name: libvirt-etc Optional: false libmodules: Type: HostPath (bare host directory volume) Path: /lib/modules HostPathType: var-lib-libvirt: Type: HostPath (bare host directory volume) Path: /var/lib/libvirt HostPathType: var-lib-nova: Type: HostPath (bare host directory volume) Path: /var/lib/nova HostPathType: run: Type: HostPath (bare host directory volume) Path: /run HostPathType: dev: Type: HostPath (bare host directory volume) Path: /dev HostPathType: cgroup: Type: HostPath (bare host directory volume) Path: /sys/fs/cgroup HostPathType: machine-id: Type: HostPath (bare host directory volume) Path: /etc/machine-id HostPathType: etc-libvirt-qemu: Type: HostPath (bare host directory volume) Path: /etc/libvirt/qemu HostPathType: libvirt-token-vzl8l: Type: Secret (a volume populated by a Secret) SecretName: libvirt-token-vzl8l Optional: false QoS Class: BestEffort Node-Selectors: openstack-compute-node=enabled Tolerations: node.kubernetes.io/disk-pressure:NoSchedule node.kubernetes.io/memory-pressure:NoSchedule node.kubernetes.io/not-ready:NoExecute node.kubernetes.io/unreachable:NoExecute Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning BackOff 4m (x4164 over 15h) kubelet, 
csonjrsv45 Back-off restarting failed container root at csonjrsv45:~# kubectl describe pod -n openstack libvirt-nf45j | moe No command 'moe' found, did you mean: Command 'mog' from package 'mazeofgalious' (universe) Command 'mne' from package 'python-mne' (universe) Command 'mon' from package 'mon' (universe) Command 'more' from package 'util-linux' (main) Command 'joe' from package 'joe-jupp' (universe) Command 'joe' from package 'joe' (universe) Command 'toe' from package 'ncurses-bin' (main) Command 'moc' from package 'qtchooser' (main) Command 'mod' from package 'monodoc-base' (universe) Command 'm2e' from package 'alliance' (universe) moe: command not found [1]+ Stopped kubectl describe pod -n openstack libvirt-nf45j | moe root at csonjrsv45:~# kubectl describe pod -n openstack libvirt-nf45j | more Name: libvirt-nf45j Namespace: openstack Node: csonjrsv45/192.168.2.45 Start Time: Tue, 19 Jun 2018 23:41:18 +0000 Labels: application=libvirt component=libvirt controller-revision-hash=1513478708 pod-template-generation=1 release_group=airship-openstack-libvirt Annotations: configmap-bin-hash=3e7239502aced35f903fd1f9dbafcdf161ec29da214a8cfaa4cc949e502e9b70 configmap-etc-hash=b2f6ee1e2d5e131ac3bafaa83b4fa613df58c98ccebc2638332425430ab502de Status: Running IP: 192.168.2.45 Controlled By: DaemonSet/libvirt Init Containers: init: Container ID: docker://17b0d61203eddd72749f79f8ea6acf6f0b1bb2ae862c247978de0c84e0255d1f Image: quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 Image ID: docker-pullable://quay.io/stackanetes/kubernetes-entrypoint at sha256:32b1b657ee4bcc9cc7a1529e31d8e1a06376172373ee020f97f3e78168fde4b6 Port: Host Port: Command: kubernetes-entrypoint State: Terminated Reason: Completed Exit Code: 0 Started: Tue, 19 Jun 2018 23:41:19 +0000 Finished: Tue, 19 Jun 2018 23:41:19 +0000 Ready: True Restart Count: 0 Environment: POD_NAME: libvirt-nf45j (v1:metadata.name) NAMESPACE: openstack (v1:metadata.namespace) INTERFACE_NAME: eth0 PATH: usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin: DEPENDENCY_SERVICE: DEPENDENCY_DAEMONSET: DEPENDENCY_CONTAINER: DEPENDENCY_POD_JSON: COMMAND: echo done Mounts: /var/run/secrets/kubernetes.io/serviceaccount from libvirt-token-vzl8l (ro) Containers: libvirt: Container ID: docker://4732f11f52816376354cd6f7b2fe8b19020ee37992b91c7f291b1be7b26d3412 Image: docker.io/openstackhelm/libvirt:ubuntu-xenial-1.3.1 Image ID: docker-pullable://openstackhelm/libvirt at sha256:cb6a3612e1a7adab6c0ffd62ee8f4f9ef3ac1050e6c9576039709d0c2271118e Port: Host Port: Command: /tmp/libvirt.sh State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Wed, 20 Jun 2018 15:09:49 +0000 Finished: Wed, 20 Jun 2018 15:09:49 +0000 Ready: False Restart Count: 186 Environment: Mounts: /dev from dev (rw) /etc/libvirt/libvirtd.conf from libvirt-etc (ro) /etc/libvirt/qemu from etc-libvirt-qemu (rw) /etc/libvirt/qemu.conf from libvirt-etc (ro) /etc/machine-id from machine-id (ro) /lib/modules from libmodules (ro) /run from run (rw) /sys/fs/cgroup from cgroup (rw) /tmp/libvirt.sh from libvirt-bin (ro) /var/lib/libvirt from var-lib-libvirt (rw) /var/lib/nova from var-lib-nova (rw) /var/run/secrets/kubernetes.io/serviceaccount from libvirt-token-vzl8l (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: libvirt-bin: Type: ConfigMap (a volume populated by a ConfigMap) Name: libvirt-bin Optional: false -----Original Message----- From: Roman Gorshunov [mailto:paye600 at gmail.com] Sent: Wednesday, June 20, 2018 1:31 PM 
To: KATARIA, DEEPAK Cc: airship-discuss at lists.airshipit.org Subject: Re: [Airship-discuss] Libvirt Issue with AirShip Hello Kataria, Please, provide 'kubectl logs' and 'kubectl describe' from pods failing with CrashLoopBackOff. Thank you. Best regards, -- Roman Gorshunov On Wed, Jun 20, 2018 at 7:01 PM, KATARIA, DEEPAK > wrote: > > > Hello, > > I had good run of the airship code today. Unfortunately libvirt container > failed to come up and makes nova resources unavailable. So my test-stack > deployment failed to create a VM. > > I am sharing the output from my deployment. I need some to fix this issue. > Here are the status of cluster > > > > Successfully performed deploy_site > > + echo -e '\n' > > > > > > + break > > + [[ Complete == \C\o\m\p\l\e\t\e ]] > > + exit 0 > > + [[ 40 -ge 40 ]] > > + execute_create_heat_stack > > + set +x > > > > Performing basic sanity checks by creating heat stacks > > > > + cd /root/deploy/airship-in-a-bottle/manifests/dev_single_node > > + bash test_create_heat_stack.sh > > > > Creating KeyPair > > Downloading heat-public-net-deployment.yaml > > % Total % Received % Xferd Average Speed Time Time Time > Current > > Dload Upload Total Spent Left > Speed > > 100 895 100 895 0 0 3670 0 --:--:-- --:--:-- --:--:-- > 3683 > > Creating public-net Heat Stack > > 2018-06-20 00:59:09Z [public-net]: CREATE_IN_PROGRESS Stack CREATE started > > 2018-06-20 00:59:09Z [public-net.public_net]: CREATE_IN_PROGRESS state > changed > > 2018-06-20 00:59:09Z [public-net.public_net]: CREATE_COMPLETE state changed > > 2018-06-20 00:59:09Z [public-net.private_subnet]: CREATE_IN_PROGRESS state > changed > > 2018-06-20 00:59:09Z [public-net.private_subnet]: CREATE_COMPLETE state > changed > > 2018-06-20 00:59:09Z [public-net]: CREATE_COMPLETE Stack CREATE completed > successfully > > +---------------------+--------------------------------------+ > > | Field | Value | > > +---------------------+--------------------------------------+ > > | id | a2c34493-1f93-46bf-8483-a6ec8ca097db | > > | stack_name | public-net | > > | description | No description | > > | creation_time | 2018-06-20T00:59:08Z | > > | updated_time | None | > > | stack_status | CREATE_COMPLETE | > > | stack_status_reason | Stack CREATE completed successfully | > > +---------------------+--------------------------------------+ > > Downloading heat-basic-vm-deployment.yaml > > % Total % Received % Xferd Average Speed Time Time Time > Current > > Dload Upload Total Spent Left > Speed > > 100 2361 100 2361 0 0 8147 0 --:--:-- --:--:-- --:--:-- > 8141 > > > > Errors: > > > > 2018-06-20 01:00:46Z [test-stack-01.server]: CREATE_FAILED ResourceInError: > resources.server: Went to status ERROR due to "Message: No valid host was > found. There are not enough hosts available., Code: 500" > 2018-06-20 01:00:46Z [test-stack-01]: CREATE_FAILED Resource CREATE failed: > ResourceInError: resources.server: Went to status ERROR due to "Message: No > valid host was found. There are not enough hosts available., Code: 500" > > Stack test-stack-01 CREATE_FAILED > > + error 'creating heat stack' > + set +x > Error when creating heat stack. 
> > > > libvirt-nf45j 0/1 > CrashLoopBackOff 39 2h > > nova-compute-default-59zkh 0/1 > CrashLoopBackOff 25 2h > > > > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.airshipit.org_cgi-2Dbin_mailman_listinfo_airship-2Ddiscuss&d=DwIBaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=6Al5lKW05-i6ZAWfiKlRIQ&m=VVZRKGWnX0VXE9h0isLdKFb6j9MJy5mtsBnTxgyyQTE&s=E4RC6VdDRU_Zqk7HsM2ZxCshuDf6kXzPC8w9m1eawEY&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From claire at openstack.org Thu Jun 21 14:05:40 2018 From: claire at openstack.org (Claire Massey) Date: Thu, 21 Jun 2018 09:05:40 -0500 Subject: [Airship-discuss] Airship F2F Meeting at PTG Denver - September 2018 Message-ID: Hello Airship team, Myself and other members of the OpenStack Foundation will be working closely with you to provide support and guidance over the coming weeks and months as we work together to organize the project's structure and processes and build out the community. Stay tuned for resources to be shared soon. In the meantime I want to make sure you’re aware of and plan to participate in the PTG (https://www.openstack.org/ptg/ ). It's a 5-day long meeting series that provides face to face time for project teams under the OpenStack Foundation umbrella. The next PTG will be held in Denver, Colorado the week of September 10. This is a great opportunity for the Airship team to meet in person to discuss both project specific items and schedule cross-project collaborative sessions with other participating teams. Airship will have a dedicated spaced reserved to hold a two-day meeting at the PTG, the exact schedule will be announced later. Below you'll find some information from my colleague Kendall on participating teams and administrative info on the PTG. Please make sure you register for the event early (as prices will increase) and have your hotel booked so you don’t miss the discounted rate if you’re planning to attend. Add your name to the following etherpad if you plan on attending: https://etherpad.openstack.org/p/AirshipPTG4 and from there we can build out the Airship agenda. Feel free to ping me if you have any questions about the information below or about the PTG itself. Thanks, Claire Massey OpenStack Foundation ----------------------------------------------- Hello Everyone! Wanted to give you some updates on PTG4 planning. We have finalized the list of SIGs/ Groups/WGs/Teams that are attending. They are as follows: * Airship * API SIG * Barbican/Security SIG * Blazar * Chef OpenStack * Cinder * Cyborg * Designate * Documentation * Edge Computing Group * First Contact SIG * Glance * Heat * Horizon * Infrastructure * Interop WG * Ironic * Kata * Keystone * Kolla * LOCI * Manila * Masakari * Mistral * Monasca * Neutron * Nova * Octavia * OpenStack Ansible * OpenStack Charms * OpenStack Helm * OpenStackClient * Operator Meetup * Puppet OpenStack * QA * Oslo * Public Cloud WG * Release Management * Requirements * Sahara * Scientific SIG * Self-Healing SIG * SIG-K8s * StarlingX * Swift * OpenStack TC * TripleO * Upgrades SIG * Watcher * Zuul (pending confirmation) Thierry and I are working on placing them into a strawman schedule to reduce conflicts between related or overlapping groups. We should have more on what that will look like and a draft for you all to review in the next few weeks. We also wanted to remind you all of the Travel Support Program. 
We are again doing a two-phase selection. The first deadline is approaching:
July 1st. At this point we have fewer than a dozen applicants, so if you need
it, or even think you might need it, I urge you to apply here[1].

Also! Reminder that we have a finite number of rooms in the hotel block, so
please book early to make sure you get the discounted rate before they run
out. You can book those rooms here[2] (pardon the ugly URL).

Can't wait to see you all there!

-Kendall Nelson (diablo_rojo)

P.S. Gonna try to do a game night again since you all seemed to enjoy it so
much last time :)

[1] https://openstackfoundation.formstack.com/forms/travelsupportptg_denver_2018
[2] https://www.marriott.com/meeting-event-hotels/group-corporate-travel/groupCorp.mi?resLinkData=Project%20Teams%20Gathering%2C%20Openstack%5Edensa%60opnopna%7Copnopnb%60149.00%60USD%60false%604%609/5/18%609/18/18%608/20/18&app=resvlink&stop_mobi=yes
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fzdarsky at redhat.com  Thu Jun 21 16:52:47 2018
From: fzdarsky at redhat.com (Frank Zdarsky)
Date: Thu, 21 Jun 2018 18:52:47 +0200
Subject: [Airship-discuss] [airship-in-a-bottle] DNS configurable?
Message-ID:

Hi all,

I'm trying to deploy Airship-in-a-Bottle in a VM on libvirt/KVM.

During the "generate genesis" step, the script overwrites /etc/resolv.conf to
use the Google DNS servers (8.8.8.8 and 8.8.4.4), but those servers are
blocked by our firewall, so FQDNs don't resolve and the script fails.

Is there a way to configure the DNS server used, or to simply keep the DNS
server configured via DHCP?

BTW, where are issues/RFEs filed against Airship? In Launchpad?

Thanks,

Frank
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From paye600 at gmail.com  Thu Jun 21 17:15:16 2018
From: paye600 at gmail.com (Roman Gorshunov)
Date: Thu, 21 Jun 2018 19:15:16 +0200
Subject: [Airship-discuss] [airship-in-a-bottle] DNS configurable?
In-Reply-To:
References:
Message-ID:

Hello Frank,

Issues/RFEs should be filed in StoryBoard:
https://storyboard.openstack.org/#!/project/1006 - this one is for
Airship-in-a-bottle.
Airship StoryBoard Group:
https://storyboard.openstack.org/#!/project_group/85

I will check the code with regard to the DNS issue tomorrow. It should be
fairly easy to fix.

Best regards,
--
Roman Gorshunov

On Thu, Jun 21, 2018 at 6:52 PM, Frank Zdarsky wrote:
> Hi all,
>
> I'm trying to deploy Airship-in-a-Bottle in a VM on libvirt/KVM.
>
> During the "generate genesis" step, the script overwrites /etc/resolv.conf
> to use the Google DNS servers (8.8.8.8 and 8.8.4.4), but those servers are
> blocked by our firewall, so FQDNs don't resolve and the script fails.
>
> Is there a way to configure the DNS server used, or to simply keep the DNS
> server configured via DHCP?
>
> BTW, where are issues/RFEs filed against Airship? In Launchpad?
>
> Thanks,
>
> Frank
>
>
> _______________________________________________
> Airship-discuss mailing list
> Airship-discuss at lists.airshipit.org
> http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss

From fzdarsky at redhat.com  Thu Jun 21 17:27:05 2018
From: fzdarsky at redhat.com (Frank Zdarsky)
Date: Thu, 21 Jun 2018 19:27:05 +0200
Subject: [Airship-discuss] [airship-in-a-bottle] DNS configurable?
In-Reply-To:
References:
Message-ID:

Hi Roman,

Thanks a lot for the very fast response! And sorry for having been lazy: I
should have investigated this a bit more myself.
It's possible to configure the DNS server by editing
data.dns.upstream_servers
and
data.dns.upstream_servers_joined
in
deploy/airship-in-a-bottle/deployment_files/site/demo/networks/common-addresses.yaml

Maybe something worth documenting?

Best regards, Frank

On Thu, Jun 21, 2018 at 7:15 PM Roman Gorshunov wrote:

> Hello Frank,
>
> Issues/RFEs should be filed in StoryBoard:
> https://storyboard.openstack.org/#!/project/1006 - this one is for
> Airship-in-a-bottle.
> Airship StoryBoard Group:
> https://storyboard.openstack.org/#!/project_group/85
>
> I will check the code with regard to the DNS issue tomorrow. It should be
> fairly easy to fix.
>
> Best regards,
> --
> Roman Gorshunov
>
> On Thu, Jun 21, 2018 at 6:52 PM, Frank Zdarsky wrote:
> > Hi all,
> >
> > I'm trying to deploy Airship-in-a-Bottle in a VM on libvirt/KVM.
> >
> > During the "generate genesis" step, the script overwrites /etc/resolv.conf
> > to use the Google DNS servers (8.8.8.8 and 8.8.4.4), but those servers are
> > blocked by our firewall, so FQDNs don't resolve and the script fails.
> >
> > Is there a way to configure the DNS server used, or to simply keep the DNS
> > server configured via DHCP?
> >
> > BTW, where are issues/RFEs filed against Airship? In Launchpad?
> >
> > Thanks,
> >
> > Frank
> >
> >
> > _______________________________________________
> > Airship-discuss mailing list
> > Airship-discuss at lists.airshipit.org
> > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss

--
Frank Zdarsky | NFV&SDN Technology Strategy, Office of the CTO | Red Hat
e: fzdarsky at redhat.com | irc: fzdarsky at freenode | m: +49 175 82 11 64 4
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From paye600 at gmail.com  Fri Jun 22 13:28:12 2018
From: paye600 at gmail.com (Roman Gorshunov)
Date: Fri, 22 Jun 2018 15:28:12 +0200
Subject: [Airship-discuss] [airship-in-a-bottle] DNS configurable?
In-Reply-To:
References:
Message-ID:

Hello Frank,

> It's possible to configure the DNS server by editing
> data.dns.upstream_servers
> and
> data.dns.upstream_servers_joined
> in
> deploy/airship-in-a-bottle/deployment_files/site/demo/networks/common-addresses.yaml

Yes, but airship-in-a-bottle uses
deploy/airship-in-a-bottle/deployment_files/site/dev/networks/common-addresses.yaml
- the 'dev' site, instead of 'demo'; see the
https://github.com/openstack/airship-in-a-bottle/blob/master/manifests/common/deploy-airship.sh#L53
script.

> Maybe something worth documenting?

Sure, it is. I'm thinking of how to get it properly done.

Thank you for the patch! I will comment on it later.

Best regards,
--
Roman Gorshunov
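For readers looking for a concrete starting point, the override Frank and
Roman discuss above might look roughly like the fragment below. Only the
data.dns subtree is sketched; the rest of the 'dev' site's
common-addresses.yaml (schema, metadata, other data keys) stays as-is, the
nameserver addresses are placeholders, and the shapes of the two fields (a
list for upstream_servers, a comma-joined string for upstream_servers_joined)
are inferred from the field names rather than confirmed in this thread.

    # Sketch only - adjust to the actual layout of
    # deployment_files/site/dev/networks/common-addresses.yaml in your checkout.
    data:
      dns:
        # Assumption: list of upstream resolvers reachable from your network
        upstream_servers:
          - 192.0.2.53
          - 192.0.2.54
        # Assumption: the same servers as a single comma-separated string
        upstream_servers_joined: 192.0.2.53,192.0.2.54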
From paye600 at gmail.com  Fri Jun 22 13:43:12 2018
From: paye600 at gmail.com (Roman Gorshunov)
Date: Fri, 22 Jun 2018 15:43:12 +0200
Subject: [Airship-discuss] Libvirt Issue with AirShip
In-Reply-To: <90BF8249EF30DB4A83C0F4D241F3D72C279D3814@MISOUT7MSGUSRCC.ITServices.sbc.com>
References: <90BF8249EF30DB4A83C0F4D241F3D72C279D3718@MISOUT7MSGUSRCC.ITServices.sbc.com>
	<90BF8249EF30DB4A83C0F4D241F3D72C279D3814@MISOUT7MSGUSRCC.ITServices.sbc.com>
Message-ID:

Hello Kataria,

I see `kubectl logs` from libvirt-nf45j, but they don't say why libvirtd
stops, and I see `kubectl describe` from libvirt-nf45j twice, but no
nova-compute-default-59zkh logs/describe. Could it be that you have other
pods failing? Can you have a look? If you have long logs, please paste them
to https://paste.ubuntu.com/.

Does the issue persist if you create a new Ubuntu 16.04 VM and try to run
airship-in-a-bottle from scratch?

And thank you for trying Airship!

Best regards,
--
Roman Gorshunov

From fzdarsky at redhat.com  Fri Jun 22 14:50:52 2018
From: fzdarsky at redhat.com (Frank Zdarsky)
Date: Fri, 22 Jun 2018 16:50:52 +0200
Subject: [Airship-discuss] [airship-in-a-bottle] DNS configurable?
In-Reply-To:
References:
Message-ID:

On Fri, Jun 22, 2018 at 3:28 PM Roman Gorshunov wrote:

> Hello Frank,
>
> > It's possible to configure the DNS server by editing
> > data.dns.upstream_servers
> > and
> > data.dns.upstream_servers_joined
> > in
> > deploy/airship-in-a-bottle/deployment_files/site/demo/networks/common-addresses.yaml
>
> Yes, but airship-in-a-bottle uses
> deploy/airship-in-a-bottle/deployment_files/site/dev/networks/common-addresses.yaml
> - the 'dev' site, instead of 'demo'; see the
> https://github.com/openstack/airship-in-a-bottle/blob/master/manifests/common/deploy-airship.sh#L53
> script.
>

Good catch, I changed that in patchset 2.

> > Maybe something worth documenting?
>
> Sure, it is. I'm thinking of how to get it properly done.
>
> Thank you for the patch! I will comment on it later.
>
> Best regards,
> --
> Roman Gorshunov
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
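As a closing note on the DNS thread: since the defaults discussed above are
8.8.8.8 and 8.8.4.4, one workable approach is to substitute them with
reachable resolvers before running the deploy script. This is a sketch only,
assuming the 'dev' site file named above contains those literal addresses;
the replacement addresses are placeholders for your own DNS servers.

    # Assumption: run from the directory containing deploy/, before deployment.
    FILE=deploy/airship-in-a-bottle/deployment_files/site/dev/networks/common-addresses.yaml
    sed -i 's/8\.8\.8\.8/192.0.2.53/g; s/8\.8\.4\.4/192.0.2.54/g' "$FILE"
    grep -n '192.0.2' "$FILE"   # verify the substitution took effect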