Hi,

 

root@cab23-r720-11:~# kubectl exec -it -n ceph ceph-mon-h2gsm -- ceph -s

  cluster:

    id:     7b7576f4-3358-4668-9112-100440079807

    health: HEALTH_WARN

            Reduced data availability: 338 pgs inactive

            Degraded data redundancy: 338 pgs undersized

 

  services:

    mon: 1 daemons, quorum cab23-r720-11

    mgr: cab23-r720-11(active)

    mds: cephfs-1/1/1 up  {0=mds-ceph-mds-6bfb74d9c7-gqgtl=up:creating}, 1 up:standby

    osd: 8 osds: 8 up, 8 in

  data:

    pools:   18 pools, 338 pgs

    objects: 0 objects, 0 bytes

    usage:   2973 MB used, 8185 GB / 8187 GB avail

    pgs:     100.000% pgs not active

             338 undersized+peered

 

The lines I marked in red above (HEALTH_WARN and the 338 inactive/undersized pgs) indicate that something is wrong with the Ceph state, right? And is this what leads to: Readiness probe failed: Get http://10.97.38.77:8088/: dial tcp 10.97.38.77:8088: getsockopt: connection refused ?
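My guess (not verified yet) is that with only one host the pools cannot place all their replicas, so the PGs stay undersized+peered. Maybe commands like these would confirm whether the pool replica size exceeds the number of hosts:

# replication size per pool
kubectl exec -it -n ceph ceph-mon-h2gsm -- ceph osd pool ls detail
# failure domain used by the CRUSH rules (host vs osd)
kubectl exec -it -n ceph ceph-mon-h2gsm -- ceph osd crush rule dump
# which PGs are stuck inactive
kubectl exec -it -n ceph ceph-mon-h2gsm -- ceph pg dump_stuck inactive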

 

 

Best Regards!

Qiaolin Tu

NSB MN 5G ECE HZ CN2 SG04

Mobile:  +86 138 057 59684

E-Mail:   qiaolin.tu@nokia-sbell.com

 

 

From: SKELS, KASPARS <ks3019@att.com>
Sent: Thursday, November 08, 2018 3:49 PM
To: Tu, Qiaolin (NSB - CN/Hangzhou) <qiaolin.tu@nokia-sbell.com>; Li, Maxwell (NSB - CN/Hangzhou) <maxwell.li@nokia-sbell.com>
Cc: GORSHUNOV, ROMAN <roman.gorshunov@att.com>; MEADOWS, ALAN <am240k@att.com>; MCEUEN, MATT <MM9745@att.com>; PACHECO, RODOLFO J <rp2723@att.com>; 'airship-discuss@lists.airshipit.org' <airship-discuss@lists.airshipit.org>; REDDY, CHINASUBBA <cr3938@att.com>; BIRLEY, PETE <pb269f@att.com>
Subject: RE: Airship installation Questions

 

Hey!

 

Have a look at whether Ceph is healthy in general with the following command (cluster status):

kubectl exec -it -n ceph ceph-mon-h2gsm -- ceph -s

 

I still see that you have 2 instances of ceph-mgr in your terminal output:

ceph-mgr-6dc44fc75b-jtv7w                                   0/1       Pending     0          22h

ceph-mgr-6dc44fc75b-jvstm                                   1/1       Running     0          22h

 

There are 2 ceph-client charts: one for the initial single-node/genesis state, and ceph-client-update for the final state (when additional nodes join). Feel free to explore:

https://github.com/openstack/airship-treasuremap/tree/master/global/software/charts/ucp/ceph

 

I think the simplest way for you to get Ceph running (if you are using a single control plane) would be to just stay with the initial Ceph chart, as it will also keep the CRUSH rules and other settings as needed (the final state requires 3 failure domains/hosts).

I would say change https://github.com/openstack/airship-treasuremap/blob/master/global/software/manifests/full-site.yaml#L22

from `ucp-ceph-update` to just `ucp-ceph`; this will keep using the initial single-node/genesis Ceph and keep it healthy.
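That change is just swapping one entry in the manifest's chart_groups list; roughly like this (a sketch, with the other entries elided):

# global/software/manifests/full-site.yaml (sketch)
data:
  chart_groups:
    # ... other chart groups ...
    - ucp-ceph          # instead of ucp-ceph-update
    # ... other chart groups ...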

 

For RGW it is similar: you will need to set it to 1 replica (right now there is no site override, so you would need to craft one or set it at the global level):

https://github.com/openstack/airship-treasuremap/blob/master/global/software/charts/ucp/ceph/ceph-rgw.yaml#L133
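A site override could look roughly like this (a sketch only; the document name, the parentSelector label, and the pod.replicas.rgw key are assumptions that would need to match your tree and the global chart's values):

# hypothetical site-level override for the RGW chart
schema: armada/Chart/v1
metadata:
  schema: metadata/Document/v1
  name: ucp-ceph-rgw
  layeringDefinition:
    abstract: false
    layer: site
    parentSelector:
      name: ucp-ceph-rgw-global
    actions:
      - method: merge
        path: .
  storagePolicy: cleartext
data:
  values:
    pod:
      replicas:
        rgw: 1        # single replica for a one-node control plane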

 

Kindly, Kaspars

 

 

From: Tu, Qiaolin (NSB - CN/Hangzhou) [mailto:qiaolin.tu@nokia-sbell.com]
Sent: Thursday, November 8, 2018 1:25 AM
To: SKELS, KASPARS <ks3019@att.com>; Li, Maxwell (NSB - CN/Hangzhou) <maxwell.li@nokia-sbell.com>
Cc: GORSHUNOV, ROMAN <roman.gorshunov@att.com>; MEADOWS, ALAN <am240k@att.com>; MCEUEN, MATT <MM9745@att.com>; PACHECO, RODOLFO J <rp2723@att.com>; 'airship-discuss@lists.airshipit.org' <airship-discuss@lists.airshipit.org>
Subject: RE: Airship installation Questions

 

Hi,

Thanks very much for your reply.

I adjusted the replica count at the site level (set to 1 instead of 2), but the ceph-rgw pod still has the getsockopt: connection refused error. Below are the log details:

 

root@cab23-r720-11:~# kubectl describe pod ceph-rgw-5b6645c456-rsv4k -n ceph

Name:           ceph-rgw-5b6645c456-rsv4k

Namespace:      ceph

Node:           cab23-r720-11/10.23.22.11

Start Time:     Wed, 07 Nov 2018 23:07:06 +0000

Labels:         application=ceph

                component=rgw

                pod-template-hash=1622017012

                release_group=airship-ucp-ceph-rgw

Annotations:    configmap-bin-hash=aee34fa624622fb03c48b85c014e805a1012f74f772bc0f3a1e91597839ce49e

                configmap-etc-client-hash=38eb0b2eb23d31bcd3c4238c2bf9f6b8acd335659d94b3fcfff6c333ed1339b2

Status:         Running

IP:             10.97.38.77

Controlled By:  ReplicaSet/ceph-rgw-5b6645c456

 

Events:

  Type     Reason     Age                  From                    Message

  ----     ------     ----                 ----                    -------

  Warning  BackOff    5m (x737 over 7h)    kubelet, cab23-r720-11  Back-off restarting failed container

  Warning  Unhealthy  39s (x1154 over 7h)  kubelet, cab23-r720-11  Readiness probe failed: Get http://10.97.38.77:8088/: dial tcp 10.97.38.77:8088: getsockopt: connection refused

 

 

root@cab23-r720-11:~# kubectl logs -f ceph-rgw-5b6645c456-rsv4k -n ceph

+ export LC_ALL=C

+ LC_ALL=C

+ : 0

++ uname -n

+ : ceph-rgw-5b6645c456-rsv4k

+ : ''

+ : ''

+ : 0

+ : 9000

+ : 0.0.0.0

+ : /etc/ceph/ceph.client.admin.keyring

+ : /var/lib/ceph/radosgw/ceph-rgw-5b6645c456-rsv4k/keyring

+ : /var/lib/ceph/bootstrap-rgw/ceph.keyring

+ [[ ! -e /etc/ceph/ceph.conf ]]

+ '[' 0 -eq 1 ']'

+ '[' '!' -e /var/lib/ceph/radosgw/ceph-rgw-5b6645c456-rsv4k/keyring ']'

+ RGW_FRONTENDS='civetweb port=8088'

+ '[' 0 -eq 1 ']'

+ /usr/bin/radosgw --cluster ceph --setuser ceph --setgroup ceph -d -n client.rgw.ceph-rgw-5b6645c456-rsv4k -k /var/lib/ceph/radosgw/ceph-rgw-5b6645c456-rsv4k/keyring --rgw-socket-path= --rgw-zonegroup= --rgw-zone= '--rgw-frontends=civetweb port=8088'

2018-11-08 06:53:53.769321 7f3c7826de80  0 deferred set uid:gid to 64045:64045 (ceph:ceph)

2018-11-08 06:53:53.769371 7f3c7826de80  0 ceph version 12.2.3 (2dab17a455c09584f2a85e6b10888337d1ec8949) luminous (stable), process (unknown), pid 8

 

 

 

root@cab23-r720-11:~# kubectl get pods -n ceph

NAME                                                        READY     STATUS      RESTARTS   AGE

airship-ucp-ceph-provisioners-ceph-ns-key-generator-8vm8b   0/1       Completed   0          20h

ceph-bootstrap-mp5tn                                        0/1       Completed   0          22h

ceph-cephfs-client-key-generator-4sqrd                      0/1       Completed   0          20h

ceph-cephfs-provisioner-676684f6bd-48fq8                    1/1       Running     0          20h

ceph-cephfs-provisioner-676684f6bd-mqwnj                    1/1       Running     0          20h

ceph-mds-6bfb74d9c7-gqgtl                                   1/1       Running     0          22h

ceph-mds-6bfb74d9c7-sk4pw                                   1/1       Running     0          22h

ceph-mds-keyring-generator-m2rjj                            0/1       Completed   0          22h

ceph-mgr-6dc44fc75b-jtv7w                                   0/1       Pending     0          22h

ceph-mgr-6dc44fc75b-jvstm                                   1/1       Running     0          22h

ceph-mgr-keyring-generator-m56p5                            0/1       Completed   0          22h

ceph-mon-check-6db6b569b6-flg76                             1/1       Running     0          22h

ceph-mon-h2gsm                                              1/1       Running     0          21h

ceph-mon-keyring-generator-5phjm                            0/1       Completed   0          22h

ceph-osd-default-64779b8c-c74ms                             1/1       Running     0          22h

ceph-osd-default-6ea9de2c-hn4tm                             1/1       Running     0          22h

ceph-osd-default-70a54190-t54l7                             1/1       Running     0          22h

ceph-osd-default-7544b6da-mxfj6                             1/1       Running     1          22h

ceph-osd-default-7cfc44c1-vbqs4                             1/1       Running     0          22h

ceph-osd-default-83945928-66m9s                             1/1       Running     0          22h

ceph-osd-default-be8e8cc4-2vnxb                             1/1       Running     0          22h

ceph-osd-default-f9249fa9-8gw25                             1/1       Running     0          22h

ceph-osd-keyring-generator-fnrgk                            0/1       Completed   0          22h

ceph-rbd-pool-4dzjz                                         0/1       Completed   0          22h

ceph-rbd-provisioner-84bc5c88c7-smfcn                       1/1       Running     0          20h

ceph-rbd-provisioner-84bc5c88c7-vtc5g                       1/1       Running     0          20h

ceph-rgw-5b6645c456-rsv4k                                   0/1       Running     74         8h

ceph-rgw-5b6645c456-sgzpt                                   0/1       Running     74         8h

ceph-rgw-storage-init-pqf6q                                 0/1       Completed   0          8h

ceph-storage-keys-generator-9fx4k                           0/1       Completed   0          22h

ingress-65dc849968-9zlqn                                    1/1       Running     0          22h

ingress-65dc849968-cc8xp                                    1/1       Running     0          22h

ingress-error-pages-796b76c856-dnpcp                        1/1       Running     0          22h

ingress-error-pages-796b76c856-tt9x4                        1/1       Running     0          22h

 

Best Regards!

Qiaolin Tu

NSB MN 5G ECE HZ CN2 SG04

Mobile:  +86 138 057 59684

E-Mail:   qiaolin.tu@nokia-sbell.com

 

 

From: SKELS, KASPARS <ks3019@att.com>
Sent: Wednesday, November 07, 2018 4:28 AM
To: Tu, Qiaolin (NSB - CN/Hangzhou) <qiaolin.tu@nokia-sbell.com>; Li, Maxwell (NSB - CN/Hangzhou) <maxwell.li@nokia-sbell.com>
Cc: GORSHUNOV, ROMAN <roman.gorshunov@att.com>; MEADOWS, ALAN <am240k@att.com>; MCEUEN, MATT <MM9745@att.com>; PACHECO, RODOLFO J <rp2723@att.com>; 'airship-discuss@lists.airshipit.org' <airship-discuss@lists.airshipit.org>
Subject: RE: Airship installation Questions

 

Hi Qiaolin/Maxwell,

 

It seems like you may be using 1 control node for your deployment.

The ceph-mgr chart is set to have 2 replicas and is trying to launch both on the same node (hence the port clash).

 

Generally speaking, the manifests (by default) are targeted at a 3-control-node (HA) configuration, including Ceph, to reflect a production-like HA deployment:

https://github.com/openstack/airship-treasuremap/blob/master/global/software/charts/ucp/ceph/ceph-client-update.yaml#L145

 

That said, you may adjust the replica count at the site level (set 1 instead of 2), as well as the OSD count, by doing overrides here (matching how many OSDs you have):

https://github.com/openstack/airship-treasuremap/blob/master/site/airship-seaworthy/software/charts/ucp/ceph/ceph-client-update.yaml
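The override in that file boils down to values of roughly this shape (a sketch; the pod.replicas.mgr and conf.pool.target.osd key names are assumed from the openstack-helm ceph-client chart):

data:
  values:
    pod:
      replicas:
        mgr: 1        # one ceph-mgr on a single control node
    conf:
      pool:
        target:
          osd: 8      # match the number of OSDs you actually have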

 

Also, adjustments in other charts may be required to set this up for a single control-plane node; there is more info on the reference site here:

https://airship-treasuremap.readthedocs.io/en/latest/seaworthy.html

 

Kindly, Kaspars

 

 

 

From: Kaspars Skels [mailto:kaspars.skels@gmail.com]
Sent: Tuesday, November 6, 2018 2:10 PM
To: qiaolin.tu@nokia-sbell.com
Cc: MEADOWS, ALAN <am240k@att.com>; MONTEIRO, FELIPE C <fm577c@att.com>; BARTRA, RICK <rb560u@att.com>; REDDY, CHINASUBBA <cr3938@att.com>; GORSHUNOV, ROMAN <roman.gorshunov@att.com>; KHUNTIA, SOUMITRA <sk698p@att.com>; KABANOV, DMITRII <dk370c@att.com>; Mark Burnett <mark.m.burnett@gmail.com>; VOLKOV, ANDREY <av903u@att.com>; avolkov@mirantis.com; Chris Wedgwood <cw@f00f.org>; PALLAV GUPTA <pallavgupta84@gmail.com>; GUPTA, SANGEET <sg774j@att.com>; zuul@review01.openstack.org; HUSSEY, SCOTT T <sh8121@att.com>; PACHECO, RODOLFO J <rp2723@att.com>; maxwell.li@nokia-sbell.com; SKELS, KASPARS <ks3019@att.com>
Subject: Re: Airship installation Questions

 

 

 

On Tue, Nov 6, 2018 at 3:42 AM Tu, Qiaolin (NSB - CN/Hangzhou) <qiaolin.tu@nokia-sbell.com> wrote:

Hi,

I also found that the pod ceph-mgr-6dc44fc75b-25n24 is always in the Pending state. It reports this error: Warning FailedScheduling 1m (x125 over 36m) default-scheduler 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
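I suppose the clash could be confirmed on the node itself, since the mgr ports (7000/9283, per the Host Ports in the describe output below) are bound on the host; for example:

# show which process already listens on the ceph-mgr host ports
ss -ltnp | grep -E ':(7000|9283)'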

 

 

root@cab23-r720-11:~# kubectl get pods --all-namespaces

NAMESPACE     NAME                                                        READY     STATUS      RESTARTS   AGE

ceph          airship-ucp-ceph-provisioners-ceph-ns-key-generator-n5vzj   0/1       Completed   0          20m

ceph          ceph-cephfs-client-key-generator-656rd                      0/1       Completed   0          20m

ceph          ceph-cephfs-provisioner-676684f6bd-trf5h                    1/1       Running     0          20m

ceph          ceph-cephfs-provisioner-676684f6bd-zfk2s                    1/1       Running     0          20m

ceph          ceph-mds-6bfb74d9c7-btbt5                                   1/1       Running     0          37m

ceph          ceph-mds-6bfb74d9c7-qw4xl                                   1/1       Running     0          37m

ceph          ceph-mds-keyring-generator-dlqjt                            0/1       Completed   0          53m

ceph          ceph-mgr-6dc44fc75b-25n24                                   0/1       Pending     0          37m

ceph          ceph-mgr-6dc44fc75b-42mqz                                   1/1       Running     0          37m

ceph          ceph-mgr-keyring-generator-7dftx                            0/1       Completed   0          53m

ceph          ceph-mon-cfrf6                                              1/1       Running     0          41m

ceph          ceph-mon-check-6db6b569b6-67vvm                             1/1       Running     0          53m

ceph          ceph-mon-keyring-generator-55ps8                            0/1       Completed   0          53m

 

 

 

root@cab23-r720-11:~# kubectl describe pod ceph-mgr-6dc44fc75b-25n24 -n ceph

Name:           ceph-mgr-6dc44fc75b-25n24

Namespace:      ceph

Node:           <none>

Labels:         application=ceph

                component=mgr

                pod-template-hash=2870097316

                release_group=airship-ucp-ceph-client

Annotations:    configmap-bin-hash=d23610f5cc67014b596eb45b9d47ddb97997d6ec5fd3289fb264429a260f39a9

                configmap-etc-client-hash=0174be85f29398aa5cea5fccb9537e0e0f25ea2d2f280182e9e2506a1be1b115

Status:         Pending

IP:            

Controlled By:  ReplicaSet/ceph-mgr-6dc44fc75b

Init Containers:

  init:

    Image:      quay.io/stackanetes/kubernetes-entrypoint:v0.3.1

    Port:       <none>

    Host Port:  <none>

    Command:

      kubernetes-entrypoint

    Environment:

      POD_NAME:              ceph-mgr-6dc44fc75b-25n24 (v1:metadata.name)

      NAMESPACE:             ceph (v1:metadata.namespace)

      INTERFACE_NAME:        eth0

      PATH:                  /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/

      DEPENDENCY_SERVICE:    ceph:ceph-mon

      DEPENDENCY_JOBS:       ceph-storage-keys-generator,ceph-mgr-keyring-generator

      DEPENDENCY_DAEMONSET: 

      DEPENDENCY_CONTAINER: 

      DEPENDENCY_POD_JSON:  

      COMMAND:               echo done

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from ceph-mgr-token-pgbwp (ro)

  ceph-init-dirs:

    Image:      docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04

    Port:       <none>

    Host Port:  <none>

    Command:

      /tmp/init-dirs.sh

    Environment:

      CLUSTER:  ceph

    Mounts:

      /etc/ceph from pod-etc-ceph (rw)

      /run from pod-run (rw)

      /tmp/init-dirs.sh from ceph-client-bin (ro)

      /var/lib/ceph from pod-var-lib-ceph (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from ceph-mgr-token-pgbwp (ro)

Containers:

  ceph-mgr:

    Image:       docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04

    Ports:       7000/TCP, 9283/TCP

    Host Ports:  7000/TCP, 9283/TCP

    Command:

      /mgr-start.sh

    Liveness:   exec [/tmp/mgr-check.sh liveness] delay=30s timeout=5s period=10s #success=1 #failure=3

    Readiness:  exec [/tmp/mgr-check.sh readiness] delay=30s timeout=5s period=10s #success=1 #failure=3

    Environment:

      CLUSTER:          ceph

      ENABLED_MODULES:  restful
                        status
                        prometheus

    Mounts:

      /etc/ceph from pod-etc-ceph (rw)

      /etc/ceph/ceph.client.admin.keyring from ceph-client-admin-keyring (ro)

      /etc/ceph/ceph.conf from ceph-client-etc (ro)

      /mgr-start.sh from ceph-client-bin (ro)

      /run from pod-run (rw)

      /tmp/mgr-check.sh from ceph-client-bin (ro)

      /var/lib/ceph from pod-var-lib-ceph (rw)

      /var/lib/ceph/bootstrap-mgr/ceph.keyring from ceph-bootstrap-mgr-keyring (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from ceph-mgr-token-pgbwp (ro)

Conditions:

  Type           Status

  PodScheduled   False

Volumes:

  pod-etc-ceph:

    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)

    Medium: 

  ceph-client-bin:

    Type:      ConfigMap (a volume populated by a ConfigMap)

    Name:      ceph-client-bin

    Optional:  false

  ceph-client-etc:

    Type:      ConfigMap (a volume populated by a ConfigMap)

    Name:      ceph-client-etc

    Optional:  false

  pod-var-lib-ceph:

    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)

    Medium: 

  pod-run:

    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)

    Medium:  Memory

  ceph-client-admin-keyring:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  ceph-client-admin-keyring

    Optional:    false

  ceph-bootstrap-mgr-keyring:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  ceph-bootstrap-mgr-keyring

    Optional:    false

  ceph-mgr-token-pgbwp:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  ceph-mgr-token-pgbwp

    Optional:    false

QoS Class:       BestEffort

Node-Selectors:  ceph-mgr=enabled

Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s

                 node.kubernetes.io/unreachable:NoExecute for 300s

Events:

  Type     Reason            Age                 From               Message

  ----     ------            ----                ----               -------

  Warning  FailedScheduling  1m (x125 over 36m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.

 

Best Regards!

Qiaolin Tu

NSB MN 5G ECE HZ CN2 SG04

Mobile:  +86 138 057 59684

E-Mail:   qiaolin.tu@nokia-sbell.com

 

 

From: Tu, Qiaolin (NSB - CN/Hangzhou)
Sent: Tuesday, November 06, 2018 1:46 PM
To: 'MEADOWS, ALAN' <am240k@att.com>; Kaspars Skels <kaspars.skels@gmail.com>
Cc: MONTEIRO, FELIPE C <fm577c@att.com>; BARTRA, RICK <rb560u@att.com>; REDDY, CHINASUBBA <cr3938@att.com>; GORSHUNOV, ROMAN <roman.gorshunov@att.com>; KHUNTIA, SOUMITRA <sk698p@att.com>; KABANOV, DMITRII <dk370c@att.com>; mark.m.burnett@gmail.com; VOLKOV, ANDREY <av903u@att.com>; avolkov@mirantis.com; cw@f00f.org; pallavgupta84@gmail.com; GUPTA, SANGEET <sg774j@att.com>; pete@port.direct; zuul@review01.openstack.org; HUSSEY, SCOTT T <sh8121@att.com>; PACHECO, RODOLFO J <rp2723@att.com>; Li, Maxwell (NSB - CN/Hangzhou) <maxwell.li@nokia-sbell.com>
Subject: Airship installation Questions

 

Hi,

Thanks very much for your kindly support.

I also encountered a problem when installing multi-node Airship. It is blocked by 2 ceph-rgw pods, which crash and restart again and again. The pod error log is below:

 

root@cab23-r720-11:~# kubectl describe pod ceph-rgw-6ff5ff866d-4x66n -n ceph

Events:

  Type     Reason                 Age               From                    Message
  ----     ------                 ----              ----                    -------
  Normal   Scheduled              1m                default-scheduler       Successfully assigned ceph-rgw-6ff5ff866d-4x66n to cab23-r720-11
  Normal   SuccessfulMountVolume  1m                kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "pod-etc-ceph"
  Normal   SuccessfulMountVolume  1m                kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "pod-var-lib-ceph"
  Normal   SuccessfulMountVolume  1m                kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "pod-run"
  Normal   SuccessfulMountVolume  1m                kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "ceph-rgw-etc"
  Normal   SuccessfulMountVolume  1m                kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "ceph-rgw-bin"
  Normal   SuccessfulMountVolume  1m                kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "ceph-bootstrap-rgw-keyring"
  Normal   SuccessfulMountVolume  1m                kubelet, cab23-r720-11  MountVolume.SetUp succeeded for volume "ceph-rgw-token-sjg9x"
  Normal   Pulled                 1m                kubelet, cab23-r720-11  Container image "quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine
  Normal   Started                1m                kubelet, cab23-r720-11  Started container
  Normal   Pulled                 1m                kubelet, cab23-r720-11  Container image "docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04" already present on machine
  Normal   Created                1m                kubelet, cab23-r720-11  Created container
  Normal   Started                1m                kubelet, cab23-r720-11  Started container
  Warning  Unhealthy              18s (x6 over 1m)  kubelet, cab23-r720-11  Readiness probe failed: Get http://10.97.38.105:8088/: dial tcp 10.97.38.105:8088: getsockopt: connection refused

 

 

Best Regards!

Qiaolin Tu

NSB MN 5G ECE HZ CN2 SG04

Mobile:  +86 138 057 59684

E-Mail:   qiaolin.tu@nokia-sbell.com

 

 

From: MEADOWS, ALAN <am240k@att.com>
Sent: Saturday, November 03, 2018 5:39 AM
To: Kaspars Skels <kaspars.skels@gmail.com>; Li, Maxwell (NSB - CN/Hangzhou) <maxwell.li@nokia-sbell.com>
Cc: MONTEIRO, FELIPE C <fm577c@att.com>; BARTRA, RICK <rb560u@att.com>; REDDY, CHINASUBBA <cr3938@att.com>; GORSHUNOV, ROMAN <roman.gorshunov@att.com>; KHUNTIA, SOUMITRA <sk698p@att.com>; KABANOV, DMITRII <dk370c@att.com>; mark.m.burnett@gmail.com; VOLKOV, ANDREY <av903u@att.com>; avolkov@mirantis.com; cw@f00f.org; pallavgupta84@gmail.com; GUPTA, SANGEET <sg774j@att.com>; pete@port.direct; zuul@review01.openstack.org; HUSSEY, SCOTT T <sh8121@att.com>; Tu, Qiaolin (NSB - CN/Hangzhou) <qiaolin.tu@nokia-sbell.com>; PACHECO, RODOLFO J <rp2723@att.com>
Subject: RE: Airship Version Questions

 

Hi Maxwell!

 

Great to see additional people trying Airship out.

 

Treasuremap does change quite frequently. In fact, this is a result of a lot of the great work Kaspars has been doing: building a CI/CD pipeline to not only keep treasuremap fresh and referencing the latest working versions of Airship components, but also to validate that the YAML documents are able to provision a multi-node baremetal physical environment by way of third-party gates.

 

This gives us a high degree of confidence that, no matter what version of treasuremap you have decided to start working with to deploy your own environments, it should work.

 

To be sure, we do understand that when getting started it is nice to work with a stable target that isn't constantly changing. Kaspars has introduced tagging to provide a human-readable reference (using dates). These are not on any particular cadence, but usually follow a more in-depth tested release beyond just the automated CI/CD using the manifests:

 

https://github.com/openstack/airship-treasuremap/releases

 

For right now, with regard to Pegleg, the best way to leverage the version of Pegleg that has been tested with the treasuremap manifests is to pull the one used in the baremetal CI/CD gate directly from the target tag's Jenkinsfile. For example, for v18.11.01:

 

https://github.com/openstack/airship-treasuremap/blob/v18.11.01/tools/gate/Jenkinsfile#L15
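Something like this would fetch the tag and locate the Pegleg reference the gate uses (a sketch; the exact variable name inside the Jenkinsfile is an assumption, hence the broad grep):

git clone https://github.com/openstack/airship-treasuremap
cd airship-treasuremap
git checkout v18.11.01
# find the pegleg image/version the baremetal gate pins
grep -in pegleg tools/gate/Jenkinsfile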

 

Please feel free to get on the Airship mailing list (airshipit.org) or join our IRC channel to connect with a wider audience!

 

Alan Meadows 

 

From: Kaspars Skels [mailto:kaspars.skels@gmail.com]
Sent: Friday, November 02, 2018 2:27 PM
To: maxwell.li@nokia-sbell.com
Cc: MONTEIRO, FELIPE C <fm577c@att.com>; BARTRA, RICK <rb560u@att.com>; REDDY, CHINASUBBA <cr3938@att.com>; GORSHUNOV, ROMAN <roman.gorshunov@att.com>; KHUNTIA, SOUMITRA <sk698p@att.com>; KABANOV, DMITRII <dk370c@att.com>; mark.m.burnett@gmail.com; VOLKOV, ANDREY <av903u@att.com>; avolkov@mirantis.com; cw@f00f.org; pallavgupta84@gmail.com; GUPTA, SANGEET <sg774j@att.com>; pete@port.direct; zuul@review01.openstack.org; HUSSEY, SCOTT T <sh8121@att.com>; qiaolin.tu@nokia-sbell.com; MEADOWS, ALAN <am240k@att.com>; PACHECO, RODOLFO J <rp2723@att.com>
Subject: Re: Airship Version Questions

 

+ Alan/Rodolfo

 

 

 

On Thu, Nov 1, 2018 at 3:37 AM Li, Maxwell (NSB - CN/Hangzhou) <maxwell.li@nokia-sbell.com> wrote:

Hi Airship Team:

 

I have a problem deploying Airship on my server. There are new commits in airship-pegleg and airship-treasuremap almost every day, and there are no tags or branches in these code repositories. Could I get a commit ID for these repositories so that I can deploy Airship?

 

Thanks a lot!

 

Best Regards!

Maxwell Li