Hi all,

 

We are trying to install Airship but have run into some issues I’d like to bring up here. Specifically, we base our configuration on the treasuremap tag v19.02.04 and see the following two problems:

 

i) The ceph-rbd-provisioner pod tries to look up the kube-dns service, although Airship only deploys coredns.

Log:

kubectl logs -n ceph ceph-rbd-provisioner-85f5b5bb8b-8vzc7

I0207 11:21:51.007104       1 leaderelection.go:156] attempting to acquire leader lease...

I0207 11:21:51.014035       1 leaderelection.go:178] successfully acquired lease to provision for pvc ucp/mysql-data-mariadb-server-0

I0207 11:21:51.048705       1 event.go:218] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"ucp", Name:"mysql-data-mariadb-server-0", UID:"6e479917-2ac7-11e9-ab70-246e9657cd40", APIVersion:"v1", ResourceVersion:"10655", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "ucp/mysql-data-mariadb-server-0"

E0207 11:21:51.249161       1 provision.go:199] error getting kube-dns service: services "kube-dns" not found

I0207 11:22:16.379111       1 leaderelection.go:204] stopped trying to renew lease to provision for pvc ucp/mysql-data-mariadb-server-1, timeout reached

Related change: https://github.com/kubernetes-incubator/external-storage/pull/861/commits/99098beac2ff91e788b027f0e27d9e7f5ec5b769
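
In case it helps, here is a sketch of what we checked, plus a possible (untested) workaround we are considering: re-exposing the coredns endpoints under the name kube-dns, since that is the name the provisioner looks up. The label selector in the snippet is an assumption and would have to be copied from whatever the deployed coredns service actually uses:

# kubectl get svc -n kube-system                              # no kube-dns service present
# kubectl get pods -n kube-system --show-labels | grep -i dns
#
# Untested sketch: an additional Service named kube-dns that selects the
# coredns pods, so the provisioner's lookup succeeds.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  selector:
    app: coredns          # assumed label -- verify against the running coredns pods
  ports:
  - {name: dns, port: 53, protocol: UDP, targetPort: 53}
  - {name: dns-tcp, port: 53, protocol: TCP, targetPort: 53}
EOF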

 

We are wondering whether this is a version selection problem (a mismatch between Kubernetes and the DNS service), but we have not touched versions.yaml.
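
For completeness, this is roughly how we checked the versions (the versions.yaml path is from our treasuremap checkout and may differ in other layouts):

# grep -n -i "coredns\|kube-dns" global/software/config/versions.yaml
# grep -n -i "rbd.provisioner" global/software/config/versions.yaml
# kubectl version --short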

 

 

ii) The ceph-rgw pod is not started on the Genesis node (nor on any other node; this is at an early stage, before the other nodes are provisioned). There is no related error in the logs, and all other Ceph pods appear to be running. As a result, PVCs are not bound and the database pods cannot start.

site/airship-seaworthy/profiles/host/cp_r630.yaml:      ceph-rgw: enabled

site/airship-seaworthy/profiles/host/cp_r630.yaml:      tenant-ceph-rgw: enabled
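
These are the checks we ran to see where the rgw workload gets lost (the Armada pod name below is a placeholder; all we know is that the rgw pod never shows up):

# kubectl get pods -n ceph -o wide | grep -i rgw        # returns nothing
# kubectl get deploy,ds,job -n ceph | grep -i rgw
# kubectl get pods -n ucp | grep -i armada
# kubectl logs -n ucp <armada-api-pod> | grep -i rgw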

 

# kubectl exec -n ceph ceph-mon-p7k6d -- ceph -s

  cluster:

    id:     7b7576f4-3358-4668-9112-100440079807

    health: HEALTH_WARN

            1 MDSs report slow metadata IOs

            Reduced data availability: 143 pgs inactive

            Degraded data redundancy: 143 pgs undersized

 

  services:

    mon: 1 daemons, quorum salmon-11

    mgr: salmon-11(active)

    mds: cephfs-1/1/1 up  {0=mds-ceph-mds-6774766c6-2n9tm=up:creating}

    osd: 1 osds: 1 up, 1 in

 

  data:

    pools:   18 pools, 143 pgs

    objects: 0  objects, 0 B

    usage:   111 MiB used, 372 GiB / 372 GiB avail

    pgs:     100.000% pgs not active

             143 undersized+peered
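
Since only a single OSD is up at this point, we assume the undersized/inactive PGs simply mean the pools' replica size (or CRUSH rule) still expects more OSDs, which might also explain why the PVCs stay unbound. These are the commands we use to check that (same mon pod as above):

# kubectl exec -n ceph ceph-mon-p7k6d -- ceph osd tree
# kubectl exec -n ceph ceph-mon-p7k6d -- ceph osd pool ls detail
# kubectl exec -n ceph ceph-mon-p7k6d -- ceph health detail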

 

Has anybody run into this issue before, or could someone point us in the right direction?

 

Thanks a lot!

 

Best regards

Georg