DNS and Ceph issues during deployment
Hi all,

We are trying to install Airship, but ran into some issues I'd like to bring up here. Specifically, we base our config on treasuremap tag v19.02.04 and see the following two problems:

i) The ceph-rbd-provisioner pod tries to use the service kube-dns, although only coredns is deployed by Airship.

Log:

kubectl logs -n ceph ceph-rbd-provisioner-85f5b5bb8b-8vzc7
I0207 11:21:51.007104 1 leaderelection.go:156] attempting to acquire leader lease...
I0207 11:21:51.014035 1 leaderelection.go:178] successfully acquired lease to provision for pvc ucp/mysql-data-mariadb-server-0
I0207 11:21:51.048705 1 event.go:218] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"ucp", Name:"mysql-data-mariadb-server-0", UID:"6e479917-2ac7-11e9-ab70-246e9657cd40", APIVersion:"v1", ResourceVersion:"10655", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "ucp/mysql-data-mariadb-server-0"
E0207 11:21:51.249161 1 provision.go:199] error getting kube-dns service: services "kube-dns" not found
I0207 11:22:16.379111 1 leaderelection.go:204] stopped trying to renew lease to provision for pvc ucp/mysql-data-mariadb-server-1, timeout reached

Related change: https://github.com/kubernetes-incubator/external-storage/pull/861/commits/99098beac2ff91e788b027f0e27d9e7f5ec5b769

We are wondering whether this is a version selection problem (non-matching Kubernetes and DNS service versions), but we haven't touched the versions.yaml.

ii) The ceph-rgw pod is not started on the Genesis node (nor on any other node; this is during an early phase before other nodes are provisioned). There is no related error in the logs, and all other Ceph pods seem to be there. As a result, PVCs are not bound and database pods cannot start.

site/airship-seaworthy/profiles/host/cp_r630.yaml: ceph-rgw: enabled
site/airship-seaworthy/profiles/host/cp_r630.yaml: tenant-ceph-rgw: enabled

# kubectl exec -n ceph ceph-mon-p7k6d -- ceph -s
  cluster:
    id:     7b7576f4-3358-4668-9112-100440079807
    health: HEALTH_WARN
            1 MDSs report slow metadata IOs
            Reduced data availability: 143 pgs inactive
            Degraded data redundancy: 143 pgs undersized

  services:
    mon: 1 daemons, quorum salmon-11
    mgr: salmon-11(active)
    mds: cephfs-1/1/1 up {0=mds-ceph-mds-6774766c6-2n9tm=up:creating}
    osd: 1 osds: 1 up, 1 in

  data:
    pools:   18 pools, 143 pgs
    objects: 0 objects, 0 B
    usage:   111 MiB used, 372 GiB / 372 GiB avail
    pgs:     100.000% pgs not active
             143 undersized+peered

Did anybody run into this issue before or could point us in the right direction?

Thanks a lot!

Best regards
Georg
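[Editorial note, not part of the original message: for anyone debugging the same "services kube-dns not found" symptom, a minimal sketch of how to investigate it is shown below. It first checks which DNS service and pod labels actually exist, and then, as a stop-gap, publishes the running coredns pods under the service name the provisioner looks up. The namespace and the k8s-app: coredns selector are assumptions and must be checked against the real deployment; the durable fix is a provisioner image that no longer hard-codes the kube-dns lookup, as in the linked change.]

# Check which DNS service exists and how the coredns pods are labelled
kubectl get svc -n kube-system
kubectl get pods -n kube-system --show-labels | grep -i dns

# Stop-gap (sketch): expose the existing coredns pods under the name "kube-dns"
# so the provisioner's service lookup succeeds. The selector below is an
# assumption - replace it with the label the coredns pods actually carry.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  selector:
    k8s-app: coredns   # assumption: adjust to the real coredns pod label
  ports:
    - name: dns
      port: 53
      protocol: UDP
      targetPort: 53
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
EOF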
Hi Georg,

the kube-dns error does look like an issue, but I wonder whether it is really what is failing here, since the treasuremap reference site (airship-seaworthy) is tested on a frequent basis, especially the released tags (some info on it here: https://airship-treasuremap.readthedocs.io/en/latest/seaworthy.html).

I believe the current Ceph configuration in globals requires at least 3 OSDs (to fulfill 3x replication). Normally each OSD represents a disk, but you may use 3 folders on the same disk, which could lead to success. There is also the possibility to fiddle with the replication settings and turn them all down to 1: https://github.com/openstack/openstack-helm-infra/blob/master/ceph-client/va...

Still, I would very much recommend using more disks for production-like bare-metal deployments.

FYI - the kube-dns item is likely something that should be fixed.

Kindly,
Kaspars
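[Editorial note, not part of the original reply: to make the "turn replication down to 1" option concrete, the sketch below drops the pool size from the mon pod with the standard ceph CLI, reusing the mon pod name from the output earlier in the thread. This removes all data redundancy and is only suitable for a single-OSD lab setup; in an Airship deployment the persistent route is a values override for the ceph-client chart along the lines of the values.yaml linked above.]

# List the pools and their current replicated size
kubectl exec -n ceph ceph-mon-p7k6d -- ceph osd pool ls detail

# Drop size/min_size to 1 on every pool so the undersized PGs can go active
# with a single OSD (lab use only - this removes all redundancy)
for pool in $(kubectl exec -n ceph ceph-mon-p7k6d -- ceph osd pool ls); do
  kubectl exec -n ceph ceph-mon-p7k6d -- ceph osd pool set "$pool" size 1
  kubectl exec -n ceph ceph-mon-p7k6d -- ceph osd pool set "$pool" min_size 1
done

# The PGs should now move towards active+clean
kubectl exec -n ceph ceph-mon-p7k6d -- ceph -s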
Hi Kaspars,

Thank you for the response and the hint regarding the Ceph issue. We will revisit our Ceph config and either go for 3 directories or a lower replication level.

Best regards
Georg
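[Editorial note, not part of the original reply: a quick checklist for confirming that either change (additional OSDs/directories or replication set to 1) has taken effect. The commands below are only a sketch, again reusing the mon pod name from the earlier output.]

# OSD count and placement after the change
kubectl exec -n ceph ceph-mon-p7k6d -- ceph osd tree

# Cluster health - the 143 PGs should move from undersized+peered to active+clean
kubectl exec -n ceph ceph-mon-p7k6d -- ceph -s

# Check whether the rgw pods get scheduled and the stuck PVCs bind
kubectl get pods -n ceph -o wide | grep -i rgw
kubectl get pvc --all-namespaces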