[Airship-discuss] Airship installation Questions

Tu, Qiaolin (NSB - CN/Hangzhou) qiaolin.tu at nokia-sbell.com
Fri Nov 9 09:51:46 UTC 2018


Hi,
I deployed with only 1 master node (1 genesis node + 1 master node); the attached archive contains my YAML files. Thanks very much!

root@cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd tree
ID CLASS WEIGHT  TYPE NAME              STATUS REWEIGHT PRI-AFF
-1       8.00000 root default
-2       8.00000     host cab23-r720-11
 0   hdd 1.00000         osd.0              up  1.00000 1.00000
 1   hdd 1.00000         osd.1              up  1.00000 1.00000
 2   hdd 1.00000         osd.2              up  1.00000 1.00000
 3   hdd 1.00000         osd.3              up  1.00000 1.00000
 4   hdd 1.00000         osd.4              up  1.00000 1.00000
 5   hdd 1.00000         osd.5              up  1.00000 1.00000
 6   hdd 1.00000         osd.6              up  1.00000 1.00000
 7   hdd 1.00000         osd.7              up  1.00000 1.00000
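Note that every OSD in this tree sits under a single host bucket, cab23-r720-11. The default replicated CRUSH rule places each replica on a distinct host, so a size-3 pool can never complete placement on a one-host cluster, which is consistent with the undersized PGs below. A way to confirm the rule's failure domain, assuming the default Luminous rule name replicated_rule, is:

kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd crush rule ls
kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd crush rule dump replicated_rule
# a step like {"op": "chooseleaf_firstn", "num": 0, "type": "host"} in the
# dump output means the failure domain is "host"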

root@cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd dump
epoch 231
fsid 7b7576f4-3358-4668-9112-100440079807
created 2018-11-07 09:08:39.208517
modified 2018-11-09 09:40:10.639284
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 21
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client hammer
require_osd_release luminous
pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 40 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rbd
pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 223 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs
pool 3 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 223 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application cephfs
pool 4 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 72 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 5 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 83 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 6 'default.rgw.data.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 93 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 7 'default.rgw.gc' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 104 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 8 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 114 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 9 'default.rgw.intent-log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 124 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 10 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 135 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 11 'default.rgw.usage' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 146 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 12 'default.rgw.users.keys' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 156 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 13 'default.rgw.users.email' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 167 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 14 'default.rgw.users.swift' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 177 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 15 'default.rgw.users.uid' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 188 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 16 'default.rgw.buckets.extra' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 199 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 17 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 211 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
pool 18 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 221 flags hashpspool,nodelete,nopgchange,nosizechange stripe_width 0 application rgw
max_osd 8
osd.0 up   in  weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [8,228) 10.23.23.11:6800/15964 10.23.23.11:6814/2015964 10.23.23.11:6822/2015964 10.23.23.11:6823/2015964 exists,up fea47975-0810-47c9-ad43-e76ce81764a1
osd.1 up   in  weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [9,228) 10.23.23.11:6808/16162 10.23.23.11:6807/2016162 10.23.23.11:6819/2016162 10.23.23.11:6801/2016162 exists,up cec98e14-83d5-4785-b8a7-a6f201170ac4
osd.2 up   in  weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [9,228) 10.23.23.11:6804/16160 10.23.23.11:6806/2016160 10.23.23.11:6811/2016160 10.23.23.11:6834/2016160 exists,up 97315996-1cb9-4942-9786-8edc5a3862e3
osd.3 up   in  weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [10,228) 10.23.23.11:6812/16588 10.23.23.11:6815/2016588 10.23.23.11:6805/2016588 10.23.23.11:6817/2016588 exists,up 49082e4c-7827-4c4c-85c9-16ea134289b4
osd.4 up   in  weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [13,228) 10.23.23.11:6816/17053 10.23.23.11:6803/2017053 10.23.23.11:6813/2017053 10.23.23.11:6821/2017053 exists,up 8f9a5a7d-c97d-40c6-912e-33b6ab68d9e7
osd.5 up   in  weight 1 up_from 229 up_thru 229 down_at 228 last_clean_interval [16,228) 10.23.23.11:6820/17600 10.23.23.11:6810/2017600 10.23.23.11:6809/2017600 10.23.23.11:6818/2017600 exists,up b4602bfb-075f-4303-9f76-946576c4ef43
osd.6 up   in  weight 1 up_from 16 up_thru 212 down_at 0 last_clean_interval [0,0) 10.23.23.11:6824/17601 10.23.23.11:6825/17601 10.23.23.11:6826/17601 10.23.23.11:6827/17601 exists,up 2a853bad-7d97-43de-85f3-96e0f9e16c0d
osd.7 up   in  weight 1 up_from 20 up_thru 212 down_at 0 last_clean_interval [0,0) 10.23.23.11:6828/18682 10.23.23.11:6829/18682 10.23.23.11:6830/18682 10.23.23.11:6831/18682 exists,up dfee9a9c-7587-421b-a0dc-eda2314174d9
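All of the pools above are replicated with size 3 and min_size 2 against crush_rule 0, yet there is only one host. For a single-node test deployment, a generic Ceph workaround (a sketch, not an Airship-specific fix; the rule name single-node is illustrative) is a rule whose failure domain is the OSD rather than the host. The pools' nopgchange/nosizechange flags block pg_num and size changes, but not a crush_rule change:

# create a replicated rule that picks leaves at the "osd" level
# under the "default" root
kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd crush rule create-replicated single-node default osd
# point each pool at the new rule; "rbd" shown here, repeat for the others
kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph osd pool set rbd crush_rule single-node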

root@cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph health detail
HEALTH_WARN Reduced data availability: 338 pgs inactive; Degraded data redundancy: 338 pgs undersized
PG_AVAILABILITY Reduced data availability: 338 pgs inactive
    pg 1.47 is stuck inactive for 174532.928425, current state undersized+peered, last acting [0]
    pg 1.48 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5]
    pg 1.49 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3]
    pg 1.4a is stuck inactive for 174532.928425, current state undersized+peered, last acting [4]
    pg 1.4b is stuck inactive for 174532.928425, current state undersized+peered, last acting [5]
    pg 1.4c is stuck inactive for 174532.928425, current state undersized+peered, last acting [7]
    pg 1.4d is stuck inactive for 174532.928425, current state undersized+peered, last acting [7]
    pg 1.4e is stuck inactive for 174532.928425, current state undersized+peered, last acting [4]
    pg 1.4f is stuck inactive for 174532.928425, current state undersized+peered, last acting [6]
    pg 1.50 is stuck inactive for 174532.928425, current state undersized+peered, last acting [0]
    pg 1.51 is stuck inactive for 174532.928425, current state undersized+peered, last acting [2]
    pg 1.52 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5]
    pg 1.53 is stuck inactive for 174532.928425, current state undersized+peered, last acting [6]
    pg 1.54 is stuck inactive for 174532.928425, current state undersized+peered, last acting [4]
    pg 1.55 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3]
    pg 1.56 is stuck inactive for 174532.928425, current state undersized+peered, last acting [2]
    pg 1.57 is stuck inactive for 174532.928425, current state undersized+peered, last acting [5]
    pg 1.58 is stuck inactive for 174532.928425, current state undersized+peered, last acting [3]
    pg 1.59 is stuck inactive for 174532.928425, current state undersized+peered, last acting [7]
    pg 1.5a is stuck inactive for 174532.928425, current state undersized+peered, last acting [5]
    pg 1.5b is stuck inactive for 174532.928425, current state undersized+peered, last acting [3]
    pg 1.5c is stuck inactive for 174532.928425, current state undersized+peered, last acting [7]
    pg 1.5d is stuck inactive for 174532.928425, current state undersized+peered, last acting [3]
    pg 1.5e is stuck inactive for 174532.928425, current state undersized+peered, last acting [0]
    pg 1.5f is stuck inactive for 174532.928425, current state undersized+peered, last acting [6]
    pg 18.40 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1]
    pg 18.41 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6]
    pg 18.42 is stuck inactive for 174337.349457, current state undersized+peered, last acting [5]
    pg 18.43 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6]
    pg 18.44 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7]
    pg 18.45 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7]
    pg 18.46 is stuck inactive for 174337.349457, current state undersized+peered, last acting [4]
    pg 18.47 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1]
    pg 18.48 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1]
    pg 18.49 is stuck inactive for 174337.349457, current state undersized+peered, last acting [7]
    pg 18.4a is stuck inactive for 174337.349457, current state undersized+peered, last acting [0]
    pg 18.4b is stuck inactive for 174337.349457, current state undersized+peered, last acting [7]
    pg 18.4c is stuck inactive for 174337.349457, current state undersized+peered, last acting [6]
    pg 18.4d is stuck inactive for 174337.349457, current state undersized+peered, last acting [4]
    pg 18.4e is stuck inactive for 174337.349457, current state undersized+peered, last acting [0]
    pg 18.4f is stuck inactive for 174337.349457, current state undersized+peered, last acting [6]
    pg 18.54 is stuck inactive for 174337.349457, current state undersized+peered, last acting [5]
    pg 18.55 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6]
    pg 18.58 is stuck inactive for 174337.349457, current state undersized+peered, last acting [6]
    pg 18.59 is stuck inactive for 174337.349457, current state undersized+peered, last acting [1]
    pg 18.5a is stuck inactive for 174337.349457, current state undersized+peered, last acting [5]
    pg 18.5b is stuck inactive for 174337.349457, current state undersized+peered, last acting [6]
    pg 18.5c is stuck inactive for 174337.349457, current state undersized+peered, last acting [6]
    pg 18.5d is stuck inactive for 174337.349457, current state undersized+peered, last acting [5]
    pg 18.5e is stuck inactive for 174337.349457, current state undersized+peered, last acting [4]
    pg 18.5f is stuck inactive for 174337.349457, current state undersized+peered, last acting [7]
PG_DEGRADED Degraded data redundancy: 338 pgs undersized
    pg 1.47 is stuck undersized for 175.198010, current state undersized+peered, last acting [0]
    pg 1.48 is stuck undersized for 175.208624, current state undersized+peered, last acting [5]
    pg 1.49 is stuck undersized for 175.220652, current state undersized+peered, last acting [3]
    pg 1.4a is stuck undersized for 175.187294, current state undersized+peered, last acting [4]
    pg 1.4b is stuck undersized for 175.208051, current state undersized+peered, last acting [5]
    pg 1.4c is stuck undersized for 174531.317358, current state undersized+peered, last acting [7]
    pg 1.4d is stuck undersized for 174531.318742, current state undersized+peered, last acting [7]
    pg 1.4e is stuck undersized for 175.202431, current state undersized+peered, last acting [4]
    pg 1.4f is stuck undersized for 174531.331123, current state undersized+peered, last acting [6]
    pg 1.50 is stuck undersized for 175.207213, current state undersized+peered, last acting [0]
    pg 1.51 is stuck undersized for 175.215944, current state undersized+peered, last acting [2]
    pg 1.52 is stuck undersized for 175.209013, current state undersized+peered, last acting [5]
    pg 1.53 is stuck undersized for 174531.331587, current state undersized+peered, last acting [6]
    pg 1.54 is stuck undersized for 175.202884, current state undersized+peered, last acting [4]
    pg 1.55 is stuck undersized for 175.222033, current state undersized+peered, last acting [3]
    pg 1.56 is stuck undersized for 175.215670, current state undersized+peered, last acting [2]
    pg 1.57 is stuck undersized for 175.196356, current state undersized+peered, last acting [5]
    pg 1.58 is stuck undersized for 175.218886, current state undersized+peered, last acting [3]
    pg 1.59 is stuck undersized for 174531.316843, current state undersized+peered, last acting [7]
    pg 1.5a is stuck undersized for 175.209613, current state undersized+peered, last acting [5]
    pg 1.5b is stuck undersized for 175.219138, current state undersized+peered, last acting [3]
    pg 1.5c is stuck undersized for 174531.319395, current state undersized+peered, last acting [7]
    pg 1.5d is stuck undersized for 175.219426, current state undersized+peered, last acting [3]
    pg 1.5e is stuck undersized for 175.219873, current state undersized+peered, last acting [0]
    pg 1.5f is stuck undersized for 174531.331739, current state undersized+peered, last acting [6]
    pg 18.40 is stuck undersized for 175.211281, current state undersized+peered, last acting [1]
    pg 18.41 is stuck undersized for 174335.530906, current state undersized+peered, last acting [6]
    pg 18.42 is stuck undersized for 175.189316, current state undersized+peered, last acting [5]
    pg 18.43 is stuck undersized for 174335.529402, current state undersized+peered, last acting [6]
    pg 18.44 is stuck undersized for 174335.520749, current state undersized+peered, last acting [7]
    pg 18.45 is stuck undersized for 174335.520646, current state undersized+peered, last acting [7]
    pg 18.46 is stuck undersized for 175.204679, current state undersized+peered, last acting [4]
    pg 18.47 is stuck undersized for 175.211629, current state undersized+peered, last acting [1]
    pg 18.48 is stuck undersized for 175.213907, current state undersized+peered, last acting [1]
    pg 18.49 is stuck undersized for 174335.521334, current state undersized+peered, last acting [7]
    pg 18.4a is stuck undersized for 175.218699, current state undersized+peered, last acting [0]
    pg 18.4b is stuck undersized for 174335.527174, current state undersized+peered, last acting [7]
    pg 18.4c is stuck undersized for 174335.528996, current state undersized+peered, last acting [6]
    pg 18.4d is stuck undersized for 175.204937, current state undersized+peered, last acting [4]
    pg 18.4e is stuck undersized for 175.219027, current state undersized+peered, last acting [0]
    pg 18.4f is stuck undersized for 174335.531066, current state undersized+peered, last acting [6]
    pg 18.54 is stuck undersized for 175.189185, current state undersized+peered, last acting [5]
    pg 18.55 is stuck undersized for 174335.531222, current state undersized+peered, last acting [6]
    pg 18.58 is stuck undersized for 174335.530357, current state undersized+peered, last acting [6]
    pg 18.59 is stuck undersized for 175.204978, current state undersized+peered, last acting [1]
    pg 18.5a is stuck undersized for 175.192362, current state undersized+peered, last acting [5]
    pg 18.5b is stuck undersized for 174335.531432, current state undersized+peered, last acting [6]
    pg 18.5c is stuck undersized for 174335.530149, current state undersized+peered, last acting [6]
    pg 18.5d is stuck undersized for 175.191412, current state undersized+peered, last acting [5]
    pg 18.5e is stuck undersized for 175.205141, current state undersized+peered, last acting [4]
    pg 18.5f is stuck undersized for 174335.527472, current state undersized+peered, last acting [7]
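Here undersized+peered means each PG has completed peering but its acting set (a single OSD, e.g. [0] or [5]) is smaller than the pool's min_size of 2, so I/O to it is blocked; this points at a placement problem rather than failed OSDs. A single PG can be inspected directly, for example:

kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph pg 1.47 query
# "up" and "acting" each list only one OSD; with a "host" failure
# domain, CRUSH cannot extend the set on a single host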

root@cab23-r720-11:~# kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph -s
  cluster:
    id:     7b7576f4-3358-4668-9112-100440079807
    health: HEALTH_WARN
            Reduced data availability: 338 pgs inactive
            Degraded data redundancy: 338 pgs undersized
  services:
    mon: 1 daemons, quorum cab23-r720-11
    mgr: cab23-r720-11(active)
    mds: cephfs-1/1/1 up  {0=mds-ceph-mds-6bfb74d9c7-gqgtl=up:creating}, 1 up:standby
    osd: 8 osds: 8 up, 8 in
  data:
    pools:   18 pools, 338 pgs
    objects: 0 objects, 0 bytes
    usage:   3229 MB used, 8184 GB / 8187 GB avail
    pgs:     100.000% pgs not active
             338 undersized+peered

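The summary is consistent with the placement explanation rather than daemon failure: all 8 OSDs are up and in, nothing has been written yet (0 objects), and 100% of the 338 PGs sit in undersized+peered. If the failure domain is relaxed as sketched above, the PGs would be expected to transition to active+clean, which can be tracked with:

kubectl exec -it ceph-mon-h2gsm -n ceph -- ceph pg stat
# or re-run "ceph -s" until "active+clean" replaces "undersized+peered"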

Best Regards!
Qiaolin Tu
NSB MN 5G ECE HZ CN2 SG04
Mobile:  +86 138 057 59684
E-Mail:   qiaolin.tu at nokia-sbell.com


From: Matthew H <matthew.heler at hotmail.com>
Sent: Friday, November 09, 2018 6:18 AM
To: airship-discuss at lists.airshipit.org
Cc: Tu, Qiaolin (NSB - CN/Hangzhou) <qiaolin.tu at nokia-sbell.com>
Subject: Re: [Airship-discuss] Airship installation Questions

Greetings,

Could you run the following commands from a MON pod:

ceph osd tree
ceph osd dump

Also, how many nodes did you deploy on? One node, or more than one?

Thanks,


-------------- next part --------------
A non-text attachment was scrubbed...
Name: deploy_1_master.rar
Type: application/octet-stream
Size: 4791973 bytes
Desc: deploy_1_master.rar
URL: <http://lists.airshipit.org/pipermail/airship-discuss/attachments/20181109/c6019739/attachment-0001.obj>