Hi Roman,

Thanks for your help. It seems that the ceph-common package solved everything. I have a working 7-node cluster now!

Best,
James

On Tue, Jun 12, 2018 at 4:46 PM, Roman Gorshunov <paye600@gmail.com> wrote:
Hello James,
Sorry, it took a little too long for me to reply. I have talked to the devs, and Scott Hussey suggested that the ceph-common package may not have been installed: https://github.com/openstack/airship-promenade/blob/master/examples/complete/HostSystem.yaml#L74
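A quick way to confirm would be to check for the rbd binary on each node; the kubelet calls it to map RBD volumes, and on Ubuntu it ships with the ceph-common package. A minimal sketch, run on each of your nodes:

    # does the rbd client exist on this node?
    which rbd || echo "rbd not found"
    # if it is missing, installing ceph-common should provide it:
    sudo apt-get update && sudo apt-get install -y ceph-common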
Also, Pete Birley pointed out that the Armada multinode example expects 5 nodes for Ceph: https://github.com/openstack/openstack-helm/blob/master/tools/deployment/armada/multinode/armada-ceph.yaml#L101
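If you want to see how your Ceph cluster currently looks, something along these lines should show the health and OSD count (the ceph namespace is an assumption based on the usual openstack-helm layout, and the mon pod name is a placeholder):

    # find a mon pod, then query cluster status from inside it:
    kubectl -n ceph get pods | grep ceph-mon
    kubectl -n ceph exec <one-of-the-ceph-mon-pods> -- ceph -s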
Give it a try, and let me know how it goes.
Thank you!
Best regards,
--
Roman Gorshunov

On Sat, Jun 9, 2018 at 6:50 PM, James Devon <jr8586335@gmail.com> wrote:
Hello Roman,
Thanks for your answer.
What is the current state of Promenade + Armada? We are pretty excited about it.
Here are the outputs you requested:
1) kubectl get pods --all-namespaces -o wide https://pastebin.com/b4kWGpBs
The only problem I see is the pod ldap-0 in the osh-infra namespace. Here is the output of kubectl describe pod -n osh-infra ldap-0: https://pastebin.com/sSBVZnch
I believe the interesting part is here:
Events:
  Type     Reason       Age                  From         Message
  ----     ------       ----                 ----         -------
  Warning  FailedMount  12m (x576 over 21h)  kubelet, n3  Unable to mount volumes for pod "ldap-0_osh-infra(85e73018-6b4c-11e8-b6e5-080027ee1df7)": timeout expired waiting for volumes to attach or mount for pod "osh-infra"/"ldap-0". list of unmounted volumes=[ldap-config ldap-data]. list of unattached volumes=[ldap-config ldap-data ldap-token-txdqp]
  Warning  FailedMount  6m (x649 over 21h)   kubelet, n3  MountVolume.WaitForAttach failed for volume "pvc-85dc46a5-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image status with: (executable file not found in $PATH), rbd output: ()
  Warning  FailedMount  2m (x651 over 21h)   kubelet, n3  MountVolume.WaitForAttach failed for volume "pvc-85d6770b-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image status with: (executable file not found in $PATH), rbd output: ()
2) docker ps on all nodes https://pastebin.com/CaqLEv70
3) docker ps -a on all nodes https://pastebin.com/wvB0qkbh
4) docker images on all nodes https://pastebin.com/H07kEwqe
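Regarding the "executable file not found in $PATH" messages in (1), a minimal check on n3 would be something like this (a sketch; the kubelet needs the rbd client, shipped by ceph-common, in its PATH):

    # run on n3, where the mounts fail:
    which rbd
    dpkg -l ceph-common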
Best, James
On Sat, Jun 9, 2018 at 1:51 PM, Roman Gorshunov <paye600@gmail.com> wrote:
Hello James,
I didn’t see it before. Could you provide the output of the following commands, if you still have this environment running? It would help us understand what has been deployed and what has not:

    kubectl get pods --all-namespaces -o wide  # on n0
    docker ps                                  # on all nodes
    docker ps -a                               # on all nodes
    docker images                              # on all nodes
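If it is easier, a small loop like this would collect everything in one go (a sketch, assuming ssh access from your workstation to all four nodes; prefix docker with sudo if your user is not in the docker group):

    for n in n0 n1 n2 n3; do
      echo "=== $n ==="
      ssh "$n" 'docker ps; docker ps -a; docker images'
    done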
Also note that the artifacts-aic.atlantafoundry.com host, where some of the docker images are hosted, is inaccessible at the moment; it could be that some images have not been downloaded successfully. The team has started mirroring images to Docker Hub and quay.io. Right before the summit, development was focused on making airship-in-a-bottle work, and the migration from GitHub/GerritHub to the OpenStack Foundation infrastructure is still ongoing.
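To check whether this affects you, you could list which images on your nodes reference that registry, e.g.:

    # on each node: images pulled from the currently unreachable host
    docker images | grep artifacts-aic.atlantafoundry.com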
Apologies for the inconvenience, James.
Thank you.
On 8 Jun 2018, at 21:18, James Devon <jr8586335@gmail.com> wrote:
Hi,
After having tried airship-in-a-bottle, I'm trying to set up a Kubernetes cluster and install OpenStack on it using Promenade and Armada.
I'm using the basic example: https://github.com/openstack/airship-promenade/tree/master/examples/basic. I've changed the IP addresses to match my environment of 4 nodes (n0 for genesis, then n1, n2, n3).
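(For reference, the addresses to adjust can be located with a simple grep over the example; the pattern below just matches any IPv4 literal:)

    # from the airship-promenade checkout:
    grep -rnE '([0-9]{1,3}\.){3}[0-9]{1,3}' examples/basic/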
At this point I'm facing only one problem: promenade-api.ucp.svc.cluster.local is resolvable, but it resolves to the wrong address (192.168.150.165). So instead of promenade-api.ucp.svc.cluster.local, I used the pod address with the correct port, and that works fine.
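(For anyone reproducing this, comparing the service entry with its endpoints should show the mismatch; the service name and namespace below are inferred from the FQDN promenade-api.ucp.svc.cluster.local:)

    kubectl -n ucp get svc promenade-api -o wide
    kubectl -n ucp get endpoints promenade-api
    # compare with what the cluster DNS actually returns:
    nslookup promenade-api.ucp.svc.cluster.local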
Then I am able to make n1, n2 and n3 join the cluster, and I use the scripts from https://github.com/openstack/openstack-helm/tree/master/tools/deployment/armada to apply the Armada manifests.
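(As I understand it, those wrapper scripts essentially drive the armada CLI; by hand the equivalent would be roughly the following, with the manifest path as a placeholder rather than the exact file the scripts use:)

    armada apply /path/to/armada-manifest.yaml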
osh-infra is not able to start the ldap pod.
Events:
  Type     Reason                  Age                From                     Message
  ----     ------                  ----               ----                     -------
  Warning  FailedScheduling        6m (x4 over 6m)    default-scheduler        pod has unbound PersistentVolumeClaims (repeated 3 times)
  Normal   Scheduled               6m                 default-scheduler        Successfully assigned ldap-0 to n3
  Normal   SuccessfulAttachVolume  6m                 attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-85d6770b-6b4c-11e8-b6e5-080027ee1df7"
  Normal   SuccessfulAttachVolume  6m                 attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-85dc46a5-6b4c-11e8-b6e5-080027ee1df7"
  Normal   SuccessfulMountVolume   6m                 kubelet, n3              MountVolume.SetUp succeeded for volume "ldap-token-txdqp"
  Warning  FailedMount             13s (x11 over 6m)  kubelet, n3              MountVolume.WaitForAttach failed for volume "pvc-85dc46a5-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image status with: (executable file not found in $PATH), rbd output: ()
  Warning  FailedMount             12s (x11 over 6m)  kubelet, n3              MountVolume.WaitForAttach failed for volume "pvc-85d6770b-6b4c-11e8-b6e5-080027ee1df7" : fail to check rbd image status with: (executable file not found in $PATH), rbd output: ()
  Warning  FailedMount             2s (x3 over 4m)    kubelet, n3              Unable to mount volumes for pod "ldap-0_osh-infra(85e73018-6b4c-11e8-b6e5-080027ee1df7)": timeout expired waiting for volumes to attach or mount for pod "osh-infra"/"ldap-0". list of unmounted volumes=[ldap-config ldap-data]. list of unattached volumes=[ldap-config ldap-data ldap-token-txdqp]
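(The claims and volumes themselves can be inspected like this, for anyone debugging the same:)

    kubectl -n osh-infra get pvc
    kubectl get pv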
I've checked the ceph-osd logs, and there is also something there:
ceph-2.1 | starting osd.2 at - osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/journal/journal.2
ceph-2.1 | 2018-06-08 18:41:03.854234 7f7cd7d58e00 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
ceph-2.1 | 2018-06-08 18:41:03.855499 7f7cd7d58e00 -1 journal do_read_entry(8192): bad header magic
ceph-2.1 | 2018-06-08 18:41:03.855567 7f7cd7d58e00 -1 journal do_read_entry(8192): bad header magic
ceph-2.1 | 2018-06-08 18:41:03.901804 7f7cd7d58e00 -1 osd.2 0 log_to_monitors {default=true}
ceph-2.1 | 2018-06-08 18:41:04.529480 7f7cbb2c7700 -1 osd.2 0 waiting for initial osdmap
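(If someone wants to poke at Ceph directly, querying from a mon pod should show whether the OSDs ever register; the pod name is a placeholder and the ceph namespace is an assumption:)

    kubectl -n ceph get pods | grep ceph-mon
    kubectl -n ceph exec <a-ceph-mon-pod> -- ceph osd tree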
Has anybody encountered this error, and how can I fix it?
Best, James
_______________________________________________
Airship-discuss mailing list
Airship-discuss@lists.airshipit.org
http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss