Hi,
I can resolve ceph-mon on the genesis node:
root@cab23-r720-11:~# dig ceph-mon.ceph.svc.cluster.local @10.96.0.10
; <<>> DiG 9.10.3-P4-Ubuntu <<>> ceph-mon.ceph.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7595
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;ceph-mon.ceph.svc.cluster.local. IN   A
;; ANSWER SECTION:
ceph-mon.ceph.svc.cluster.local. 5 IN A          10.23.22.11
;; Query time: 1 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sun Nov 11 05:03:22 CST 2018
;; MSG SIZE  rcvd: 76
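As an extra check (a suggestion, not part of the original mail): dig sends the query straight to 10.96.0.10, so it only proves the cluster DNS service answers. The rbd mount error quoted further down this thread comes from the kubelet on the host going through the normal libc resolver, which could be exercised from the genesis node with something like:
root@cab23-r720-11:~# getent hosts ceph-mon.ceph.svc.cluster.local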
By the way, I found that the mariadb-ingress pods are Running but never become Ready.
root@cab23-r720-11:~# kubectl get pods -n ucp
NAME                                                  READY     STATUS      RESTARTS   AGE
airship-ucp-ceph-config-ceph-ns-key-generator-qxj6r   0/1       Completed   0          1d
airship-ucp-rabbitmq-rabbitmq-0                       0/1       Init:0/2    0          1d
ingress-6cd5b89d5d-98kbj                              1/1       Running     0          1d
ingress-6cd5b89d5d-s7qsz                              1/1       Running     0          1d
ingress-error-pages-5c97bb46bb-2bdlx                  1/1       Running     0          1d
ingress-error-pages-5c97bb46bb-g62b8                  1/1       Running     0          1d
mariadb-ingress-85b8556fbc-8v47h                      0/1       Running     0          1d
mariadb-ingress-85b8556fbc-hk9qr                      0/1       Running     0          1d
mariadb-ingress-error-pages-64f89dc697-g2l4g          1/1       Running     0          1d
mariadb-server-0                                      0/1       Init:0/2    0          1d
mariadb-server-1                                      0/1       Init:0/2    0          1d
mariadb-server-2                                      0/1       Init:0/2    0          1d
postgresql-0                                          0/1       Init:0/1    0          1d
root@cab23-r720-11:~# kubectl logs -f mariadb-ingress-85b8556fbc-hk9qr -n ucp
+ COMMAND=start
+ start
+ exec /usr/bin/dumb-init /nginx-ingress-controller --force-namespace-isolation --watch-namespace ucp --election-id=airship-ucp-mariadb --ingress-class=airship-ucp-mariadb-mariadb-ingress --default-backend-service=ucp/mariadb-ingress-error-pages --tcp-services-configmap=ucp/mariadb-services-tcp
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.9.0
  Build:      git-6816630
  Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------
I1109 00:05:30.619177       7 flags.go:151] Watching for ingress class: airship-ucp-mariadb-mariadb-ingress
W1109 00:05:30.619260       7 flags.go:154] only Ingress with class "airship-ucp-mariadb-mariadb-ingress" will be processed by this ingress controller
I1109 00:05:30.620028       7 main.go:227] Creating API client for https://10.96.0.1:443
I1109 00:05:30.641766       7 main.go:239] Running in Kubernetes Cluster version v1.10 (v1.10.2) - git (clean) commit 81753b10df112992bf51bbc2c2f85208aad78335 - platform linux/amd64
I1109 00:05:30.646318       7 main.go:83] validated ucp/mariadb-ingress-error-pages as the default backend
I1109 00:05:30.989789       7 stat_collector.go:77] starting new nginx stats collector for Ingress controller running in namespace ucp (class airship-ucp-mariadb-mariadb-ingress)
I1109 00:05:30.989841       7 stat_collector.go:78] collector extracting information from port 18080
I1109 00:05:31.006073       7 nginx.go:250] starting Ingress controller
I1109 00:05:31.009821       7 listers.go:69] ignoring add for ingress airship-ucp--mgr-8e72c0 based on annotation kubernetes.io/ingress.class with value nginx
I1109 00:05:31.009861       7 listers.go:69] ignoring add for ingress ucp-airship-ingress based on annotation kubernetes.io/ingress.class with value nginx-cluster
I1109 00:05:31.106538       7 nginx.go:255] running initial sync of secrets
I1109 00:05:31.106894       7 nginx.go:261] ignoring add for ingress airship-ucp--mgr-8e72c0 based on annotation kubernetes.io/ingress.class with value nginx
I1109 00:05:31.106989       7 nginx.go:261] ignoring add for ingress ucp-airship-ingress based on annotation kubernetes.io/ingress.class with value nginx-cluster
I1109 00:05:31.107145       7 nginx.go:288] starting NGINX process...
I1109 00:05:31.107192       7 leaderelection.go:174] attempting to acquire leader lease...
I1109 00:05:31.120826       7 status.go:196] new leader elected: mariadb-ingress-85b8556fbc-4r7vb
W1109 00:05:31.137271       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
I1109 00:05:31.137361       7 controller.go:211] backend reload required
I1109 00:05:31.137591       7 stat_collector.go:34] changing prometheus collector from  to default
I1109 00:05:31.232578       7 controller.go:220] ingress backend successfully reloaded...
W1109 00:05:40.746197       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
W1109 00:05:55.991850       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
I1109 00:06:01.847400       7 leaderelection.go:184] successfully acquired lease ucp/airship-ucp-mariadb-airship-ucp-mariadb-mariadb-ingress
I1109 00:06:01.847433       7 status.go:196] new leader elected: mariadb-ingress-85b8556fbc-hk9qr
W1109 00:15:31.010319       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
W1109 00:22:36.016006       7 reflector.go:334] k8s.io/ingress-nginx/internal/ingress/controller/listers.go:46: watch of *v1.Endpoints ended with: too old resource version: 25309 (25334)
W1109 00:25:31.010617       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
W1109 00:35:31.011230       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
W1109 00:44:28.051272       7 reflector.go:334] k8s.io/ingress-nginx/internal/ingress/controller/listers.go:46: watch of *v1.Endpoints ended with: too old resource version: 27637 (28265)
W1109 00:45:31.011379       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
W1109 00:55:31.012008       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
W1109 00:55:34.323748       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
W1109 01:02:04.084532       7 reflector.go:334] k8s.io/ingress-nginx/internal/ingress/controller/listers.go:46: watch of *v1.Endpoints ended with: too old resource version: 30554 (30646)
W1109 01:05:31.012255       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
W1109 01:15:31.012528       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
W1109 01:15:34.324431       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
W1109 01:20:54.121121       7 reflector.go:334] k8s.io/ingress-nginx/internal/ingress/controller/listers.go:46: watch of *v1.Endpoints ended with: too old resource version: 32819 (33059)
W1109 01:25:31.012743       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
W1109 01:35:31.013183       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
W1109 01:39:48.142735       7 reflector.go:334] k8s.io/ingress-nginx/internal/ingress/controller/listers.go:46: watch of *v1.Endpoints ended with: too old resource version: 35215 (35473)
W1109 01:45:31.013443       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
W1109 01:55:31.013670       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
W1109 02:05:29.154286       7 reflector.go:334] k8s.io/ingress-nginx/internal/ingress/controller/listers.go:46: watch of *v1.Endpoints ended with: too old resource version: 37485 (38558)
W1109 02:05:31.014193       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
W1109 02:15:31.014413       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
W1109 02:25:31.015156       7 controller.go:342] service ucp/mariadb-server does not have any active endpoints for port 3306 and protocol TCP
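The repeated warning above means the mariadb-server Service has no Ready endpoints yet, so the ingress has nothing to proxy to and its own readiness probe on port 3306 keeps failing. A suggested way to confirm this (not part of the original log) is to look at the endpoints object directly:
root@cab23-r720-11:~# kubectl get endpoints mariadb-server -n ucp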
Does this look healthy?
Best Regards!
Maxwell Li
From: Matthew H <matthew.heler@hotmail.com> 
Sent: Tuesday, November 13, 2018 11:59 PM
To: Tu, Qiaolin (NSB - CN/Hangzhou) <qiaolin.tu@nokia-sbell.com>; airship-discuss@lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou) <maxwell.li@nokia-sbell.com>
Subject: Re: [Airship-discuss] Airship installation Questions
Greetings,
Can you resolve ceph-mon.ceph.svc.cluster.local from your genesis node?
dig ceph-mon.ceph.svc.cluster.local @10.96.0.10
From: Tu, Qiaolin (NSB - CN/Hangzhou) <qiaolin.tu@nokia-sbell.com>
Sent: Tuesday, November 13, 2018 4:15 AM
To: Matthew H; airship-discuss@lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: RE: [Airship-discuss] Airship installation Questions 
 
Hi,
I checked resolv.conf in the ceph-mon and ucp mariadb-ingress pods. It seems pods in the ceph namespace use the search domains ceph.svc.cluster.local svc.cluster.local cluster.local, while pods in the ucp namespace only use ucp.svc.cluster.local svc.cluster.local cluster.local. Thanks very much!
 
root@cab23-r720-11:~# kubectl exec -it  ceph-mon-qqjzz -n ceph -- /bin/sh
# cat resolv.conf
nameserver 10.96.0.10
search ceph.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
# cat hosts
# This file is controlled by Promenade.  Do not modify.
#
127.0.0.1       cab23-r720-11.local cab23-r720-11
127.0.0.1       localhost
 
 
root@cab23-r720-11:~# kubectl exec -it mariadb-ingress-85b8556fbc-7hg9b  -n ucp -- /bin/sh
# cat resolv.conf
nameserver 10.96.0.10
search ucp.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
# cat hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1   localhost ip6-localhost ip6-loopback
fe00::0    ip6-localnet
fe00::0    ip6-mcastprefix
fe00::1    ip6-allnodes
fe00::2    ip6-allrouters
10.97.38.118  mariadb-ingress-85b8556fbc-7hg9b
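The differing search lists are expected: each pod gets its own namespace as the first search suffix, so the short name ceph-mon only resolves from the ceph namespace, while the fully qualified ceph-mon.ceph.svc.cluster.local should resolve from any namespace. A suggested check from the ucp pod (assuming the image ships nslookup or getent) would be:
root@cab23-r720-11:~# kubectl exec -it mariadb-ingress-85b8556fbc-7hg9b -n ucp -- nslookup ceph-mon.ceph.svc.cluster.local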
 
 
Best Regards!
Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile:  +86 138 057 59684
E-Mail:  qiaolin.tu@nokia-sbell.com
 
 
From: Tu, Qiaolin (NSB - CN/Hangzhou) 
Sent: Tuesday, November 13, 2018 4:45 PM
To: 'Matthew H' <matthew.heler@hotmail.com>; airship-discuss@lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou) <maxwell.li@nokia-sbell.com>
Subject: RE: [Airship-discuss] Airship installation Questions
 
Hi,
We have only deployed the genesis node so far and haven't deployed the master node yet.
 
root@cab23-r720-11:~# cat /etc/resolv.conf
options timeout:1 attempts:1
domain local
nameserver 10.96.0.10
nameserver 10.56.126.31
nameserver 10.96.0.10
nameserver 8.8.8.8
 
 
root@cab23-r720-11:~# vi /etc/hosts
# This file is controlled by Promenade.  Do not modify.
#
127.0.0.1       cab23-r720-11.local cab23-r720-11
127.0.0.1       localhost
 
 
root@cab23-r720-11:~# kubectl get pod --all-namespaces
NAMESPACE     NAME                                                         READY     STATUS      RESTARTS   AGE
ceph          airship-ucp-ceph-provisioners-ceph-ns-key-generator-npzgl   0/1       Completed   0          23h
ceph          ceph-bootstrap-mtlgj                                         0/1       Completed   0          23h
ceph          ceph-cephfs-client-key-generator-zbp65                       0/1       Completed   0          23h
ceph          ceph-cephfs-provisioner-676684f6bd-n5hjm                     1/1       Running     0          23h
ceph          ceph-mds-5f547b6fd7-sg49g                                    1/1       Running     0          23h
ceph          ceph-mds-keyring-generator-pzvs2                             0/1       Completed   0          23h
ceph          ceph-mgr-69d599864b-dqkzv                                    1/1       Running     0          23h
ceph          ceph-mgr-keyring-generator-4b6ww                             0/1       Completed   0          23h
ceph          ceph-mon-check-6db6b569b6-w5kjk                              1/1       Running     0          23h
ceph          ceph-mon-keyring-generator-bpm8q                             0/1       Completed   0          23h
ceph          ceph-mon-qqjzz                                               1/1       Running     0          23h
ceph          ceph-osd-default-83945928-qqz4c                              1/1       Running     0          23h
ceph          ceph-osd-keyring-generator-wc4rg                             0/1       Completed   0          23h
ceph          ceph-rbd-pool-9gqx4                                          0/1       Completed   0          23h
ceph          ceph-rbd-provisioner-84bc5c88c7-jstt8                        1/1       Running     0          23h
ceph          ceph-rgw-5b6645c456-tpqsq                                    1/1       Running     0          23h
ceph          ceph-rgw-storage-init-45zkv                                  0/1       Completed   0          23h
ceph          ceph-storage-keys-generator-9kxd9                            0/1       Completed   0          23h
ceph          ingress-65dc849968-96k57                                     1/1       Running     0          23h
ceph          ingress-error-pages-796b76c856-dfk5w                         1/1       Running     0          23h
kube-system   auxiliary-etcd-cab23-r720-11                                 3/3       Running     0          1d
kube-system   bootstrap-armada-cab23-r720-11                               4/4       Running     0          1d
kube-system   calico-etcd-anchor-tnqrq                                     1/1       Running     0          1d
kube-system   calico-etcd-cab23-r720-11                                    1/1       Running     0          1d
kube-system   calico-kube-controllers-68f5b99d47-zh84k                     1/1       Running     0          1d
kube-system   calico-node-kbpns                                            2/2       Running     0          1d
kube-system   calico-settings-p76tq                                        0/1       Completed   0          1d
kube-system   coredns-69bc679c6f-8qxr2                                     1/1       Running     0          1d
kube-system   coredns-69bc679c6f-klts5                                     1/1       Running     0          1d
kube-system   coredns-69bc679c6f-wk27v                                     1/1       Running     0          1d
kube-system   haproxy-anchor-2bdjd                                         1/1       Running     0          1d
kube-system   haproxy-cab23-r720-11                                        1/1       Running     1          1d
kube-system   ingress-error-pages-5ccf96bf7d-42lq9                         1/1       Running     0          1d
kube-system   ingress-lrjr4                                                1/1       Running     0          1d
kube-system   kubernetes-apiserver-anchor-vbghn                            1/1       Running     0          1d
kube-system   kubernetes-apiserver-cab23-r720-11                           1/1       Running     0          23h
kube-system   kubernetes-controller-manager-anchor-kbckh                   1/1       Running     0          1d
kube-system   kubernetes-controller-manager-cab23-r720-11                  1/1       Running     0          23h
kube-system   kubernetes-etcd-anchor-x78wh                                 1/1       Running     0          1d
kube-system   kubernetes-etcd-cab23-r720-11                                1/1       Running     0          1d
kube-system   kubernetes-proxy-w7tc6                                       1/1       Running     0          1d
kube-system   kubernetes-scheduler-anchor-wdqn8                            1/1       Running     0          1d
kube-system   kubernetes-scheduler-cab23-r720-11                           1/1       Running     0          23h
ucp           airship-ucp-ceph-config-ceph-ns-key-generator-xh6rj          0/1       Completed   0          23h
ucp           airship-ucp-rabbitmq-rabbitmq-0                              0/1       Init:0/2    0          5h
ucp           ingress-6cd5b89d5d-6r6q9                                     1/1       Running     0          23h
ucp           ingress-error-pages-5c97bb46bb-lnxzp                         1/1       Running     0          23h
ucp           mariadb-ingress-85b8556fbc-7hg9b                             0/1       Running     0          5h
ucp           mariadb-ingress-85b8556fbc-mrv6k                             0/1       Running     0          5h
ucp           mariadb-ingress-error-pages-64f89dc697-p47gg                 1/1       Running     0          5h
ucp           mariadb-server-0                                             0/1       Init:0/2    0          5h
ucp           postgresql-0                                                 0/1       Init:0/1    0          5h
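mariadb-server-0, postgresql-0 and the rabbitmq pod are all stuck in Init, which usually means either a kubernetes-entrypoint dependency has not come up or their RBD-backed volumes cannot be attached. Suggested checks (not from the original mail) would be:
root@cab23-r720-11:~# kubectl describe pod mariadb-server-0 -n ucp
root@cab23-r720-11:~# kubectl get pvc -n ucp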
 
Best Regards!
Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile:  +86 138 057 59684
E-Mail:  qiaolin.tu@nokia-sbell.com
 
 
From: Matthew H <matthew.heler@hotmail.com>
Sent: Tuesday, November 13, 2018 2:33 AM
To: Tu, Qiaolin (NSB - CN/Hangzhou) <qiaolin.tu@nokia-sbell.com>; airship-discuss@lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou) <maxwell.li@nokia-sbell.com>
Subject: Re: [Airship-discuss] Airship installation Questions
 
Greetings,
 
From your master k8s node can you resolve ceph-mon.ceph.svc.cluster.local?
 
Please also send the output of 'cat /etc/resolv.conf' from your k8s nodes (genesis and master node).
 
Thxs
 
From: Tu, Qiaolin (NSB - CN/Hangzhou) <qiaolin.tu@nokia-sbell.com>
Sent: Monday, November 12, 2018 4:39 AM
To: Matthew H; airship-discuss@lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: RE: [Airship-discuss] Airship installation Questions 
 
Hi,
Adding the Ceph RBD image-related logs below.
 
root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd ls
kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd
kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd
kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd
root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd
rbd image 'kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd':
       size 5120 MB in 1280 objects
       order 22 (4096 kB objects)
       block_name_prefix: rbd_data.113b74b0dc51
       format: 2
       features: layering
       flags:
       create_timestamp: Mon Nov 12 09:06:39 2018
root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd
rbd image 'kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd':
       size 256 MB in 64 objects
       order 22 (4096 kB objects)
       block_name_prefix: rbd_data.113c74b0dc51
       format: 2
       features: layering
       flags:
       create_timestamp: Mon Nov 12 09:06:40 2018
root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd info kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd
rbd image 'kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd':
       size 5120 MB in 1280 objects
       order 22 (4096 kB objects)
       block_name_prefix: rbd_data.113d74b0dc51
       format: 2
       features: layering
       flags:
       create_timestamp: Mon Nov 12 09:06:40 2018
root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-43083980-e65a-11e8-bc31-2aa831f6c6fd
Watchers: none
root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-4396e109-e65a-11e8-bc31-2aa831f6c6fd
Watchers: none
root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- rbd status kubernetes-dynamic-pvc-443ecaac-e65a-11e8-bc31-2aa831f6c6fd
Watchers: none
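"Watchers: none" means no client currently has these images mapped, which is consistent with the volumes never attaching on the node. A suggested way to tie the images back to their claims (not from the original mail) is:
root@cab23-r720-11:/var/lib/kubelet# kubectl get pv -o wide
root@cab23-r720-11:/var/lib/kubelet# kubectl get pvc --all-namespaces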
 
 
Best Regards!
Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile:  +86 138 057 59684
E-Mail:  qiaolin.tu@nokia-sbell.com
 
 
From: Tu, Qiaolin (NSB - CN/Hangzhou) 
Sent: Monday, November 12, 2018 5:27 PM
To: Matthew H <matthew.heler@hotmail.com>; airship-discuss@lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou) <maxwell.li@nokia-sbell.com>
Subject: RE: [Airship-discuss] Airship installation Questions
 
Hi,
Adding the ceph-mon logs and YAML files.
 
root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph -s
  cluster:
    id:     7b7576f4-3358-4668-9112-100440079807
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum cab23-r720-11
    mgr: cab23-r720-11(active)
    mds: cephfs-1/1/1 up  {0=mds-ceph-mds-5f547b6fd7-sg49g=up:active}
    osd: 1 osds: 1 up, 1 in
    rgw: 1 daemon active
 
  data:
    pools:   18 pools, 93 pgs
    objects: 1164 objects, 3407 bytes
    usage:   374 MB used, 1023 GB / 1023 GB avail
    pgs:     93 active+clean
 
root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph osd tree
ID CLASS WEIGHT  TYPE NAME               STATUS REWEIGHT PRI-AFF
-1       1.00000 root default
-2       1.00000     host cab23-r720-11
 0   hdd 1.00000         osd.0              up  1.00000 1.00000
root@cab23-r720-11:/var/lib/kubelet# kubectl exec -it ceph-mon-qqjzz -n ceph -- ceph osd dump
epoch 219
fsid 7b7576f4-3358-4668-9112-100440079807
created 2018-11-12 08:53:17.281208
modified 2018-11-12 09:06:40.314892
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 6
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client hammer
require_osd_release luminous
pool 1 'rbd' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 last_change 219 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rbd
       removed_snaps [1~5]
pool 2 'cephfs_metadata' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 4 pgp_num 4 last_change 216 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs
pool 3 'cephfs_data' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 216 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs
pool 4 '.rgw.root' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 56 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 5 'default.rgw.control' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 68 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 6 'default.rgw.data.root' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 79 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 7 'default.rgw.gc' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 89 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 8 'default.rgw.log' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 102 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 9 'default.rgw.intent-log' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 113 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 10 'default.rgw.meta' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 124 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 11 'default.rgw.usage' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 134 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 12 'default.rgw.users.keys' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 145 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 13 'default.rgw.users.email' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 155 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 14 'default.rgw.users.swift' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 168 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 15 'default.rgw.users.uid' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 179 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 16 'default.rgw.buckets.extra' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 last_change 191 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 17 'default.rgw.buckets.index' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 4 pgp_num 4 last_change 202 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 18 'default.rgw.buckets.data' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 last_change 214 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
max_osd 1
osd.0 up   in  weight 1 up_from 5 up_thru 209 down_at 0 last_clean_interval [0,0) 10.23.23.11:6800/6766 10.23.23.11:6801/6766 10.23.23.11:6802/6766 10.23.23.11:6803/6766 exists,up 02d8f692-709a-45ea-9f2c-75486e16e82b
 
Best Regards!
Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile:  +86 138 057 59684
E-Mail:  qiaolin.tu@nokia-sbell.com
 
 
From: Tu, Qiaolin (NSB - CN/Hangzhou) 
Sent: Monday, November 12, 2018 4:25 PM
To: 'Matthew H' <matthew.heler@hotmail.com>; airship-discuss@lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou) <maxwell.li@nokia-sbell.com>
Subject: RE: [Airship-discuss] Airship installation Questions
 
Hi,
Thanks very much for your help. After modifying the Ceph replication parameters, the Ceph pods deployed successfully. The deployment then moved on to the UCP-related pods, which show the errors below. Please check the attached log for details, thanks very much!
 
ucp           airship-ucp-rabbitmq-rabbitmq-0                0/1       Init:0/2    0          1m
ucp           ingress-6cd5b89d5d-nmwpt                       1/1       Running     0          18m
ucp           ingress-6cd5b89d5d-nr65b                       1/1       Running     0          18m
ucp           ingress-error-pages-5c97bb46bb-2mvgm           1/1       Running     0          18m
ucp           ingress-error-pages-5c97bb46bb-wzzdz           1/1       Running     0          18m
ucp           mariadb-ingress-85b8556fbc-xpvwc               0/1       Running     0          1m
ucp           mariadb-ingress-85b8556fbc-zv72k               0/1       Running     0          1m
ucp           mariadb-ingress-error-pages-64f89dc697-2trh9   1/1       Running     0          1m
ucp           mariadb-server-0                               0/1       Init:0/2    0          1m
ucp           postgresql-0                                   0/1       Init:0/1    0          1m
 
 
root@cab23-r720-11:~# kubectl describe pod mariadb-ingress-85b8556fbc-xpvwc -n ucp
Name:           mariadb-ingress-85b8556fbc-xpvwc
Namespace:      ucp
Node:           cab23-r720-11/10.23.22.11
Start Time:     Mon, 12 Nov 2018 08:05:00 +0000
Labels:         application=mariadb
                component=ingress
                pod-template-hash=4164112967
                release_group=airship-ucp-mariadb
Annotations:    configmap-bin-hash=eb36d47d8f7d7097cf6d488a61145f76dbfe5e558edf5b802153a00fc3389f0b
                configmap-etc-hash=3f45f1d8d3ddf5a09fbcd3036cb23bffb939cfa1225f8f1a0d79b390877710c1
Status:         Running
IP:             10.97.38.125
Events:
  Type     Reason                 Age                From                    Message
  ----     ------                 ----               ----                    -------
  Normal   Scheduled              3m                 default-scheduler       Successfully assigned mariadb-ingress-85b8556fbc-xpvwc to cab23-r720-11
  Normal   SuccessfulMountVolume  3m                 kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "airship-ucp-mariadb-ingress-token-htf82"
  Normal   SuccessfulMountVolume  3m                 kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "mariadb-etc"
  Normal   SuccessfulMountVolume  3m                 kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "mariadb-bin"
  Normal   Pulled                 3m                 kubelet, cab23-r720-11  Container image "quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine
  Normal   Created                3m                 kubelet, cab23-r720-11  Created container
  Normal   Started                3m                 kubelet, cab23-r720-11  Started container
  Normal   Pulled                 3m                 kubelet, cab23-r720-11  Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0" already present on machine
  Normal   Created                3m                 kubelet, cab23-r720-11  Created container
  Normal   Started                3m                 kubelet, cab23-r720-11  Started container
  Warning  Unhealthy              26s (x16 over 2m)  kubelet, cab23-r720-11  Readiness probe failed: dial tcp 10.97.38.125:3306: getsockopt: connection refused
 
 
 
 
root@cab23-r720-11:~# kubectl describe pod postgresql-0 -n ucp
Name:           postgresql-0
Namespace:      ucp
Node:           cab23-r720-11/10.23.22.11
Start Time:     Mon, 12 Nov 2018 08:04:56 +0000
Labels:         application=postgresql
                component=server
                controller-revision-hash=postgresql-566fd45fd7
                release_group=airship-ucp-postgresql
                statefulset.kubernetes.io/pod-name=postgresql-0
Events:
  Type     Reason                  Age               From                     Message
  ----     ------                  ----              ----                     -------
  Normal   SuccessfulAttachVolume  4m                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-c66d1b6f-e64a-11e8-bb43-080027f45d2a"
  Normal   Scheduled               4m                default-scheduler        Successfully assigned postgresql-0 to cab23-r720-11
  Normal   SuccessfulMountVolume   4m                kubelet, cab23-r720-11   MountVolume.SetUp succeeded for volume "postgresql-bin"
  Normal   SuccessfulMountVolume   4m                kubelet, cab23-r720-11   MountVolume.SetUp succeeded for volume "postgresql-token-rmkq9"
  Warning  FailedMount             2m                kubelet, cab23-r720-11   MountVolume.WaitForAttach failed for volume "pvc-c66d1b6f-e64a-11e8-bb43-080027f45d2a" : fail to check rbd image status with: (exit status 22), rbd output: (2018-11-12 16:07:01.400015 7fcc31018100 -1 did not load config file, using default settings.
server name not found: ceph-mon.ceph.svc.cluster.local (Name or service not known)
unable to parse addrs in 'ceph-mon.ceph.svc.cluster.local:6789'
rbd: couldn't connect to the cluster!
)
  Warning  FailedMount             19s (x2 over 2m)  kubelet, cab23-r720-11   Unable to mount volumes for pod "postgresql-0_ucp(a46bc160-e651-11e8-bb43-080027f45d2a)": timeout expired waiting for volumes to attach or mount for pod "ucp"/"postgresql-0". list of unmounted volumes=[postgresql-data]. list of unattached volumes=[postgresql-data postgresql-bin postgresql-token-rmkq9]
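The rbd command in the FailedMount event is executed by the kubelet on the host, so it is the host's /etc/resolv.conf, not the pod's, that must resolve ceph-mon.ceph.svc.cluster.local. Suggested checks from the genesis node (not part of the original mail) would be:
root@cab23-r720-11:~# getent hosts ceph-mon.ceph.svc.cluster.local
root@cab23-r720-11:~# dig ceph-mon.ceph.svc.cluster.local @10.96.0.10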
 
 
Best Regards!
Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile:  +86 138 057 59684
E-Mail:  qiaolin.tu@nokia-sbell.com
 
 
From: Matthew H <matthew.heler@hotmail.com>
Sent: Friday, November 09, 2018 10:43 PM
To: Tu, Qiaolin (NSB - CN/Hangzhou) <qiaolin.tu@nokia-sbell.com>; airship-discuss@lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou) <maxwell.li@nokia-sbell.com>
Subject: Re: [Airship-discuss] Airship installation Questions
 
Thanks,
 
From what I can see you need additional overrides set to run Ceph on a single node. The overrides you need are here [1].
 
Let me know if this helps get you in the right direction.
 
[1]
 
 
From: Tu, Qiaolin (NSB - CN/Hangzhou) <qiaolin.tu@nokia-sbell.com>
Sent: Friday, November 9, 2018 4:51 AM
To: Matthew H; airship-discuss@lists.airshipit.org
Cc: Li, Maxwell (NSB - CN/Hangzhou)
Subject: RE: [Airship-discuss] Airship installation Questions 
 
Hi,
I deployed only one master node (1 genesis node + 1 master node); the attachment contains my YAML files. Thanks very much!
 
root@cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd tree
ID CLASS WEIGHT  TYPE NAME               STATUS REWEIGHT PRI-AFF
-1       8.00000 root default
-2       8.00000     host cab23-r720-11
 0   hdd 1.00000         osd.0              up  1.00000 1.00000
 1   hdd 1.00000         osd.1              up  1.00000 1.00000
 2   hdd 1.00000         osd.2              up  1.00000 1.00000
 3   hdd 1.00000         osd.3              up  1.00000 1.00000
 4   hdd 1.00000         osd.4              up  1.00000 1.00000
 5   hdd 1.00000         osd.5              up  1.00000 1.00000
 6   hdd 1.00000         osd.6              up  1.00000 1.00000
 7   hdd 1.00000         osd.7              up  1.00000 1.00000
 
root@cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd dump
epoch 231
fsid 7b7576f4-3358-4668-9112-100440079807
created 2018-11-07 09:08:39.208517
modified 2018-11-09 09:40:10.639284
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 21
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client hammer
require_osd_release luminous
pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 40 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rbd
pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 223 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs
pool 3 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 223 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs
pool 4 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 72 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 5 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 83 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 6 'default.rgw.data.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 93 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 7 'default.rgw.gc' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 104 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 8 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 114 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 9 'default.rgw.intent-log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 124 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 10 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 135 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 11 'default.rgw.usage' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 146 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 12 'default.rgw.users.keys' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 156 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 13 'default.rgw.users.email' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 167 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 14 'default.rgw.users.swift' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 177 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 15 'default.rgw.users.uid' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 188 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 16 'default.rgw.buckets.extra' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 199 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 17 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 211 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 18 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 221 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
max_osd 8
osd.0 up   in  weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [8,228) 10.23.23.11:6800/15964 10.23.23.11:6814/2015964 10.23.23.11:6822/2015964 10.23.23.11:6823/2015964 exists,up fea47975-0810-47c9-ad43-e76ce81764a1
osd.1 up   in  weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [9,228) 10.23.23.11:6808/16162 10.23.23.11:6807/2016162 10.23.23.11:6819/2016162 10.23.23.11:6801/2016162 exists,up cec98e14-83d5-4785-b8a7-a6f201170ac4
osd.2 up   in  weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [9,228) 10.23.23.11:6804/16160 10.23.23.11:6806/2016160 10.23.23.11:6811/2016160 10.23.23.11:6834/2016160 exists,up 97315996-1cb9-4942-9786-8edc5a3862e3
osd.3 up   in  weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [10,228) 10.23.23.11:6812/16588 10.23.23.11:6815/2016588 10.23.23.11:6805/2016588 10.23.23.11:6817/2016588 exists,up 49082e4c-7827-4c4c-85c9-16ea134289b4
osd.4 up   in  weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [13,228) 10.23.23.11:6816/17053 10.23.23.11:6803/2017053 10.23.23.11:6813/2017053 10.23.23.11:6821/2017053 exists,up 8f9a5a7d-c97d-40c6-912e-33b6ab68d9e7
osd.5 up   in  weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [16,228) 10.23.23.11:6820/17600 10.23.23.11:6810/2017600 10.23.23.11:6809/2017600 10.23.23.11:6818/2017600 exists,up b4602bfb-075f-4303-9f76-946576c4ef43
osd.6 up   in  weight 1 up_from 16 up_thru 212 down_at 0 last_clean_interval [0,0) 10.23.23.11:6824/17601 10.23.23.11:6825/17601 10.23.23.11:6826/17601 10.23.23.11:6827/17601 exists,up 2a853bad-7d97-43de-85f3-96e0f9e16c0d
osd.7 up   in  weight 1 up_from 20 up_thru 212 down_at 0 last_clean_interval [0,0) 10.23.23.11:6828/18682 10.23.23.11:6829/18682 10.23.23.11:6830/18682 10.23.23.11:6831/18682 exists,up dfee9a9c-7587-421b-a0dc-eda2314174d9
 
root@cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph health detail
HEALTH_WARN Reduced data availability: 338 pgs inactive; Degraded data redundancy: 338 pgs undersized
PG_AVAILABILITY Reduced data availability: 338 pgs inactive
    pg 1.47 is stuck inactive for 174532.928425, current state undersized+peered, last acting [0]
    pg 1.48 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5]
    pg 1.49 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3]
    pg 1.4a is stuck inactive for 174532.928425, current state undersized+peered, last acting [4]
    pg 1.4b is stuck inactive for 174532.928425, current state undersized+peered, last acting [5]
    pg 1.4c is stuck inactive for 174532.928425, current state undersized+peered, last acting [7]
    pg 1.4d is stuck inactive for 174532.928425, current state undersized+peered, last acting [7]
    pg 1.4e is stuck inactive for 174532.928425, current state undersized+peered, last acting [4]
    pg 1.4f is stuck inactive for 174532.928425, current state undersized+peered, last acting [6]
    pg 1.50 is stuck inactive for 174532.928425, current state undersized+peered, last acting [0]
    pg 1.51 is stuck inactive for 174532.928425, current state undersized+peered, last acting [2]
    pg 1.52 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5]
    pg 1.53 is stuck inactive for 174532.928425, current state undersized+peered, last acting [6]
    pg 1.54 is stuck inactive for 174532.928425, current state undersized+peered, last acting [4]
    pg 1.55 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3]
    pg 1.56 is stuck inactive for 174532.928425, current state undersized+peered, last acting [2]
    pg 1.57 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5]
    pg 1.58 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3]
    pg 1.59 is stuck inactive for 174532.928425, current state undersized+peered, last acting [7]
    pg 1.5a is stuck inactive for 174532.928425, current state undersized+peered, last acting [5]
    pg 1.5b is stuck inactive for 174532.928425, current state undersized+peered, last acting [3]
    pg 1.5c is stuck inactive for 174532.928425, current state undersized+peered, last acting [7]
    pg 1.5d is stuck inactive for 174532.928425, current state undersized+peered, last acting [3]
    pg 1.5e is stuck inactive for 174532.928425, current state undersized+peered, last acting [0]
    pg 1.5f is stuck inactive for 174532.928425, current state undersized+peered, last acting [6]
    pg 18.40 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1]
    pg 18.41 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6]
    pg 18.42 is stuck inactive for 174337.349457, current state undersized+peered, last acting [5]
    pg 18.43 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6]
    pg 18.44 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7]
    pg 18.45 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7]
    pg 18.46 is stuck inactive for 174337.349457, current state undersized+peered, last acting [4]
    pg 18.47 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1]
    pg 18.48 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1]
    pg 18.49 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7]
    pg 18.4a is stuck inactive for 174337.349457, current state undersized+peered, last acting [0]
    pg 18.4b is stuck inactive for 174337.349457, current state undersized+peered, last acting [7]
    pg 18.4c is stuck inactive for 174337.349457, current state undersized+peered, last acting [6]
    pg 18.4d is stuck inactive for 174337.349457, current state undersized+peered, last acting [4]
    pg 18.4e is stuck inactive for 174337.349457, current state undersized+peered, last acting [0]
    pg 18.4f is stuck inactive for 174337.349457, current state undersized+peered, last acting [6]
    pg 18.54 is stuck inactive for 174337.349457, current state undersized+peered, last acting [5]
    pg 18.55 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6]
    pg 18.58 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6]
    pg 18.59 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1]
    pg 18.5a is stuck inactive for 174337.349457, current state undersized+peered, last acting [5]
    pg 18.5b is stuck inactive for 174337.349457, current state undersized+peered, last acting [6]
    pg 18.5c is stuck inactive for 174337.349457, current state undersized+peered, last acting [6]
    pg 18.5d is stuck inactive for 174337.349457, current state undersized+peered, last acting [5]
    pg 18.5e is stuck inactive for 174337.349457, current state undersized+peered, last acting [4]
    pg 18.5f is stuck inactive for 174337.349457, current state undersized+peered, last acting [7]
PG_DEGRADED Degraded data redundancy: 338 pgs undersized
    pg 1.47 is stuck undersized for 175.198010, current state undersized+peered, last acting [0]
    pg 1.48 is stuck undersized for 175.208624, current state undersized+peered, last acting [5]
    pg 1.49 is stuck undersized for 175.220652, current state undersized+peered, last acting [3]
    pg 1.4a is stuck undersized for 175.187294, current state undersized+peered, last acting [4]
    pg 1.4b is stuck undersized for 175.208051, current state undersized+peered, last acting [5]
    pg 1.4c is stuck undersized for 174531.317358, current state undersized+peered, last acting [7]
    pg 1.4d is stuck undersized for 174531.318742, current state undersized+peered, last acting [7]
    pg 1.4e is stuck undersized for 175.202431, current state undersized+peered, last acting [4]
    pg 1.4f is stuck undersized for 174531.331123, current state undersized+peered, last acting [6]
    pg 1.50 is stuck undersized for 175.207213, current state undersized+peered, last acting [0]
    pg 1.51 is stuck undersized for 175.215944, current state undersized+peered, last acting [2]
    pg 1.52 is stuck undersized for 175.209013, current state undersized+peered, last acting [5]
    pg 1.53 is stuck undersized for 174531.331587, current state undersized+peered, last acting [6]
    pg 1.54 is stuck undersized for 175.202884, current state undersized+peered, last acting [4]
    pg 1.55 is stuck undersized for 175.222033, current state undersized+peered, last acting [3]
    pg 1.56 is stuck undersized for 175.215670, current state undersized+peered, last acting [2]
    pg 1.57 is stuck undersized for 175.196356, current state undersized+peered, last acting [5]
    pg 1.58 is stuck undersized for 175.218886, current state undersized+peered, last acting [3]
    pg 1.59 is stuck undersized for 174531.316843, current state undersized+peered, last acting [7]
    pg 1.5a is stuck undersized for 175.209613, current state undersized+peered, last acting [5]
    pg 1.5b is stuck undersized for 175.219138, current state undersized+peered, last acting [3]
    pg 1.5c is stuck undersized for 174531.319395, current state undersized+peered, last acting [7]
    pg 1.5d is stuck undersized for 175.219426, current state undersized+peered, last acting [3]
    pg 1.5e is stuck undersized for 175.219873, current state undersized+peered, last acting [0]
    pg 1.5f is stuck undersized for 174531.331739, current state undersized+peered, last acting [6]
    pg 18.40 is stuck undersized for 175.211281, current state undersized+peered, last acting [1]
    pg 18.41 is stuck undersized for 174335.530906, current state undersized+peered, last acting [6]
    pg 18.42 is stuck undersized for 175.189316, current state undersized+peered, last acting [5]
    pg 18.43 is stuck undersized for 174335.529402, current state undersized+peered, last acting [6]
    pg 18.44 is stuck undersized for 174335.520749, current state undersized+peered, last acting [7]
    pg 18.45 is stuck undersized for 174335.520646, current state undersized+peered, last acting [7]
    pg 18.46 is stuck undersized for 175.204679, current state undersized+peered, last acting [4]
    pg 18.47 is stuck undersized for 175.211629, current state undersized+peered, last acting [1]
    pg 18.48 is stuck undersized for 175.213907, current state undersized+peered, last acting [1]
    pg 18.49 is stuck undersized for 174335.521334, current state undersized+peered, last acting [7]
    pg 18.4a is stuck undersized for 175.218699, current state undersized+peered, last acting [0]
    pg 18.4b is stuck undersized for 174335.527174, current state undersized+peered, last acting [7]
    pg 18.4c is stuck undersized for 174335.528996, current state undersized+peered, last acting [6]
    pg 18.4d is stuck undersized for 175.204937, current state undersized+peered, last acting [4]
    pg 18.4e is stuck undersized for 175.219027, current state undersized+peered, last acting [0]
    pg 18.4f is stuck undersized for 174335.531066, current state undersized+peered, last acting [6]
    pg 18.54 is stuck undersized for 175.189185, current state undersized+peered, last acting [5]
    pg 18.55 is stuck undersized for 174335.531222, current state undersized+peered, last acting [6]
    pg 18.58 is stuck undersized for 174335.530357, current state undersized+peered, last acting [6]
    pg 18.59 is stuck undersized for 175.204978, current state undersized+peered, last acting [1]
    pg 18.5a is stuck undersized for 175.192362, current state undersized+peered, last acting [5]
    pg 18.5b is stuck undersized for 174335.531432, current state undersized+peered, last acting [6]
    pg 18.5c is stuck undersized for 174335.530149, current state undersized+peered, last acting [6]
    pg 18.5d is stuck undersized for 175.191412, current state undersized+peered, last acting [5]
    pg 18.5e is stuck undersized for 175.205141, current state undersized+peered, last acting [4]
    pg 18.5f is stuck undersized for 174335.527472, current state undersized+peered, last acting [7]
 
root@cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph -s
  cluster:
    id:     7b7576f4-3358-4668-9112-100440079807
    health: HEALTH_WARN
            Reduced data availability: 338 pgs inactive
            Degraded data redundancy: 338 pgs undersized
  services:
    mon: 1 daemons, quorum cab23-r720-11
    mgr: cab23-r720-11(active)
    mds: cephfs-1/1/1 up  {0=mds-ceph-mds-6bfb74d9c7-gqgtl=up:creating}, 1 up:standby
    osd: 8 osds: 8 up, 8 in
  data:
    pools:   18 pools, 338 pgs
    objects: 0 objects, 0 bytes
    usage:   3229 MB used, 8184 GB / 8187 GB avail
    pgs:     100.000% pgs not active
             338 undersized+peered
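With all eight OSDs on one host and the pools at the default replicated size 3 with a host-level failure domain, the PGs cannot find enough distinct hosts and stay undersized+peered. The proper fix is the single-node chart overrides Matthew suggests earlier in this thread; purely as an illustration of the effect, a pool's replica count could also be dropped by hand from the MON pod (pod name taken from this mail):
kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd pool set rbd size 1
kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd pool set rbd min_size 1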
 
 
Best Regards!
Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile:  +86 138 057 59684
E-Mail:  qiaolin.tu@nokia-sbell.com
 
 
From: Matthew H <matthew.heler@hotmail.com>
Sent: Friday, November 09, 2018 6:18 AM
To: airship-discuss@lists.airshipit.org
Cc: Tu, Qiaolin (NSB - CN/Hangzhou) <qiaolin.tu@nokia-sbell.com>
Subject: Re: [Airship-discuss] Airship installation Questions
 
Greetings,
 
Could you run the following commands from a MON pod:
 
ceph osd tree
ceph osd dump
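If the MONs run as pods (as in this deployment), these can be run via kubectl exec; the pod name below is a placeholder:
kubectl exec -it <ceph-mon-pod> -n ceph -- ceph osd tree
kubectl exec -it <ceph-mon-pod> -n ceph -- ceph osd dump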
 
Also, how many nodes did you deploy on? One, or more than one?
 
Thanks,