[Airship-discuss] airship-in-a-bottle deployment issue
Roman Gorshunov
paye600 at gmail.com
Tue May 14 12:46:40 UTC 2019
Hello Calvin,
Here is an asciinema recording of my run of AIAB:
https://paste.ubuntu.com/p/C7kWZpG33H/
https://asciinema.org/docs/usage - you can use it to play the recording
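For example (a minimal sketch; aiab-run.cast is a placeholder file name into
which you save the contents of the paste first):
asciinema play aiab-run.cast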
From your logs below, it seems Airflow is not running properly.
Airflow is a part of Shipyard:
==> n0: Reason: Airflow could not be contacted properly by Shipyard.
==> n0: - Error: <class 'requests.exceptions.ConnectTimeout'>
So something could be wrong with airflow-* pods or services:
kubectl get pods --all-namespaces | grep -i airfl
kubectl get svc --all-namespaces | grep airf
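If those pods are not all Running, their logs usually show why. For example
(the pod and service names here are placeholders, and the ucp namespace is my
assumption; substitute what the commands above actually print):
kubectl -n ucp logs airflow-scheduler-0 --tail=50
kubectl -n ucp describe svc airflow-web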
The manifests/common/deploy-airship.sh script contains a good sequence
of the steps that are run inside the VM. It is launched with the parameter
"demo" from manifests/dev_single_node/airship-in-a-bottle.sh. Check
the code here: https://opendev.org/airship/in-a-bottle/src/branch/master/manifests.
And yes, you can run parts of these scripts manually (I'd
recommend commenting out the sections which you believe have already
completed properly).
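A rough sketch of doing that by hand (which sections to comment out depends
on where your run stopped, and running as root is my assumption):
git clone https://opendev.org/airship/in-a-bottle
cd in-a-bottle/manifests/common
# comment out the stages that have already completed, then:
sudo ./deploy-airship.sh demo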
Specs of my installation (actually it's a laptop):
[roman at romanpc Airship]$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora-root 49G 21G 26G 45% /
[roman at romanpc Airship]$ free -h # AIAB VM is already running here and
consumes ~20GB RAM and 180-400% of CPU
total used free shared buff/cache available
Mem: 31Gi 23Gi 328Mi 874Mi 7.5Gi 6.5Gi
Swap: 15Gi 21Mi 15Gi
[roman at romanpc Airship]$ grep "model name" /proc/cpuinfo
model name : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
model name : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
model name : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
model name : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
model name : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
model name : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
model name : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
model name : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
[roman at romanpc Airship]$ cat /etc/fedora-release
Fedora release 30 (Thirty)
[roman at romanpc Airship]$ vagrant version | grep Version
Installed Version: 2.2.3
Latest Version: 2.2.4
[roman at romanpc Airship]$ vagrant plugin list
vagrant-libvirt (0.0.45, system)
[roman at romanpc Airship]$ rpm -q qemu-kvm
qemu-kvm-3.1.0-7.fc30.x86_64
[roman at romanpc Airship]$
Inside VM:
vagrant at n0:~$ sudo virt-what
kvm
vagrant at n0:~$
Best regards,
-- Roman Gorshunov
On Tue, May 14, 2019 at 7:59 AM calvin whole <calvinwhole at gmail.com> wrote:
>
> Hi Roman,
>
> Thanks a lot for looking into this issue.
>
> I re-ran the same process again and this time it successfully completed the Genesis phase. The postgresql-0 and nfs-provisioner logs did not show any apparent errors.
> So it seems to me there is a consistency issue, because all I did was destroy the n0 VM and "vagrant up" again. It finally succeeded once out of 5 tries.
>
> Btw, my virtualization environment is as below.
>
> vagrant at n0:~$ sudo virt-what
>
> virtualbox
>
> kvm
>
>
> However, the subsequent ./run_shipyard.sh commit configdocs failed as below.
> How can this failure be fixed?
>
> Since Genesis is complete, can we re-run the script while skipping the Genesis part?
>
> Btw, can you describe your environment setup, so we can try to follow your exact execution environment?
>
> (My environment: a physical server with Ubuntu 16.04.5 installed, plus VirtualBox and Vagrant, then run "vagrant up")
>
> Thanks again,
> Calvin
>
> =================================================
> ==> n0: + export max_shipyard_count=60
> ==> n0: + max_shipyard_count=60
> ==> n0: + export shipyard_query_time=90
> ==> n0: + shipyard_query_time=90
> ==> n0: + bash execute_shipyard_action.sh deploy_site
> ==> n0: + run_action deploy_site
> ==> n0: + action=deploy_site
> ==> n0: + action_args=
> ==> n0: + NC='\033[0m'
> ==> n0: + RED='\033[0;31m'
> ==> n0: + GREEN='\033[0;32m'
> ==> n0: +++ dirname execute_shipyard_action.sh
> ==> n0: ++ cd .
> ==> n0: ++ pwd
> ==> n0: + DIR=/root/deploy/site
> ==> n0: + cd /root/deploy/site
> ==> n0: + source shipyard_docker_base_command.sh
> ==> n0: ++ NAMESPACE=ucp
> ==> n0: ++ SHIPYARD_IMAGE=quay.io/airshipit/shipyard:master
> ==> n0: +++ cat
> ==> n0: Execute deploy_site Dag...
> ==> n0: ++ base_docker_command='sudo -E docker run -t --rm --net=host
> ==> n0: -e http_proxy=
> ==> n0: -e https_proxy=
> ==> n0: -e no_proxy=
> ==> n0: -e OS_AUTH_URL=http://keystone.ucp.svc.cluster.local:80/v3
> ==> n0: -e OS_USERNAME=shipyard
> ==> n0: -e OS_USER_DOMAIN_NAME=default
> ==> n0: -e OS_PASSWORD
> ==> n0: -e OS_PROJECT_DOMAIN_NAME=default
> ==> n0: -e OS_PROJECT_NAME=service'
> ==> n0: + echo -e 'Execute deploy_site Dag...\n'
> ==> n0: + sudo -E docker run -t --rm --net=host -e http_proxy= -e https_proxy= -e no_proxy= -e OS_AUTH_URL=http://keystone.ucp.svc.cluster.local:80/v3 -e OS_USERNAME=shipyard -e OS_USER_DOMAIN_NAME=default -e OS_PASSWORD -e OS_PROJECT_DOMAIN_NAME=default -e OS_PROJECT_NAME=service quay.io/airshipit/shipyard:master create action deploy_site
> ==> n0: Error: Unable to complete request to Airflow <======================== Failed
> ==> n0: Reason: Airflow could not be contacted properly by Shipyard.
> ==> n0: - Error: <class 'requests.exceptions.ConnectTimeout'>
> ==> n0:
> ==> n0: #### Errors: 1, Warnings: 0, Infos: 0, Other: 0 ####
>
> On Mon, May 13, 2019 at 10:13 PM Roman Gorshunov <paye600 at gmail.com> wrote:
>>
>> Hello Calvin,
>>
>> Seems like the PostgreSQL database was not able to properly write data onto
>> the disk. PostgreSQL runs as the postgresql-0 pod in the ucp namespace, uses
>> a persistent volume claim postgresql-data-postgresql-0, and a persistent
>> volume mounted via NFS.
>>
>> kubectl describe pod postgresql-0 -n ucp
>> kubectl logs -n ucp postgresql-0
>> kubectl -n ucp describe pvc postgresql-data-postgresql-0
>> kubectl describe pv pvc-0382c985-7572-11e9-b431-525400681552 # volume
>> name could be different
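>>
>> The "server log" that the db-init HINT refers to is PostgreSQL's own log; in
>> this containerized setup it typically ends up on the postgresql-0 pod's
>> stdout, so you can read it with (--previous assumes the pod has restarted
>> at least once):
>> kubectl logs -n ucp postgresql-0 --previous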
>>
>> NFS is provisioned by nfs-provisioner-7799d64d59-ptsgk (last two parts
>> would be different in your case):
>> kubectl get pods -n kube-system | grep nfs
>> kubectl -n kube-system describe pod nfs-provisioner-7799d64d59-ptsgk
>> kubectl -n kube-system logs nfs-provisioner-7799d64d59-ptsgk
>>
>> Check if there are any problems with it (e.g. unable to mount NFS
>> share, or lack of free storage space - `df -h`).
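>>
>> A quick sketch of those checks (the pod name is the example one from above,
>> and the exec assumes df exists in the provisioner image):
>> df -h # free space on the n0 node itself
>> kubectl -n kube-system exec nfs-provisioner-7799d64d59-ptsgk -- df -h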
>>
>> Also, running `kubectl get events --all-namespaces` could help to
>> understand what went wrong.
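>> For example, sorting by timestamp puts the most recent events last:
>> kubectl get events --all-namespaces --sort-by='.lastTimestamp'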
>>
>> I have run the AIAB installation twice today, and it all worked fine. I
>> use `vagrant up`, and my hypervisor is KVM, if that helps you.
>>
>> I hope it helps.
>>
>> Best regards,
>> -- Roman Gorshunov
>>
>> On Fri, May 10, 2019 at 7:01 AM calvin whole <calvinwhole at gmail.com> wrote:
>> >
>> > Hi Roman,
>> >
>> > Not sure if my last email went out properly; its size was too big. Here is a shorter one. Thanks in advance for responding.
>> >
>> > I re-ran "vagrant up" and looked into the logs for "deckhand-db-init-zs499", as shown below.
>> > It showed ERROR: checkpoint request failed
>> > HINT: Consult recent messages in the server log for details.
>> >
>> > What is the specific "server" log we should look into for details?
>> >
>> > Thanks for help.
>> >
>> > Sincerely,
>> > Calvin
>> >
>> >
>> > On Thu, May 9, 2019 at 12:17 PM calvin whole <calvinwhole at gmail.com> wrote:
>> >>
>> >> Hi Roman,
>> >>
>> >> Btw, continuing my last post, the kubectl describe pod deckhand-db-init-zs499 output is as follows.
>> >>
>> >> Thanks,
>> >> Calvin
>> >> =========== kubectl describe pod deckhand-db-init-zs499 =================
>> >> root at n0:/home/vagrant# kubectl describe pod deckhand-db-init-zs499 -n ucp
>> >> Name: deckhand-db-init-zs499
>> >> Namespace: ucp
>> >> Node: n0/10.0.2.15
>> >> Start Time: Thu, 09 May 2019 03:48:48 +0000
>> >> Labels: application=deckhand
>> >> component=db-init
>> >> controller-uid=59f1bee0-720d-11e9-92ac-080027fc876e
>> >> job-name=deckhand-db-init
>> >> release_group=airship-ucp-deckhand
>> >> Annotations: <none>
>> >> Status: Running
>> >> IP: 10.97.26.50
>> >> Controlled By: Job/deckhand-db-init
>> >> Init Containers:
>> >> init:
>> >> Container ID: docker://b58e8b6b7296df618cb8120b5226370afeba2a4e79dd70ee6894b5afd853c0db
>> >> Image: quay.io/stackanetes/kubernetes-entrypoint:v0.3.1
>> >> Image ID: docker-pullable://quay.io/stackanetes/kubernetes-entrypoint@sha256:32b1b657ee4bcc9cc7a1529e31d8e1a06376172373ee020f97f3e78168fde4b6
>> >> Port: <none>
>> >> Host Port: <none>
>> >> Command:
>> >> kubernetes-entrypoint
>> >> State: Terminated
>> >> Reason: Completed
>> >> Exit Code: 0
>> >> Started: Thu, 09 May 2019 03:48:52 +0000
>> >> Finished: Thu, 09 May 2019 03:48:54 +0000
>> >> Ready: True
>> >> Restart Count: 0
>> >> Environment:
>> >> POD_NAME: deckhand-db-init-zs499 (v1:metadata.name)
>> >> NAMESPACE: ucp (v1:metadata.namespace)
>> >> INTERFACE_NAME: eth0
>> >> PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/
>> >> DEPENDENCY_SERVICE: ucp:postgresql
>> >> DEPENDENCY_DAEMONSET:
>> >> DEPENDENCY_CONTAINER:
>> >> DEPENDENCY_POD_JSON:
>> >> COMMAND: echo done
>> >> Mounts:
>> >> /var/run/secrets/kubernetes.io/serviceaccount from deckhand-db-init-token-gczr5 (ro)
>> >> Containers:
>> >> deckhand-db-init:
>> >> Container ID: docker://5dea2aa975c3718ca298536005b9cc0b21de47e08b2260cc73005e3455bb1350
>> >> Image: docker.io/postgres:9.5
>> >> Image ID: docker-pullable://postgres@sha256:0605b4b20a205c09ddd10eeeddd3ed7bf3cc442a8e9896ec34862ca882658be4
>> >> Port: <none>
>> >> Host Port: <none>
>> >> Command:
>> >> /tmp/db-init.sh
>> >> State: Waiting
>> >> Reason: CrashLoopBackOff
>> >> Last State: Terminated
>> >> Reason: Error <========
>> >> Exit Code: 1
>> >> Started: Thu, 09 May 2019 04:10:29 +0000
>> >> Finished: Thu, 09 May 2019 04:10:30 +0000
>> >> Ready: False
>> >> Restart Count: 9
>> >> Environment:
>> >> DECKHAND_DB_URL: <set to the key 'DATABASE_URI' in secret 'deckhand-db-user'> Optional: false
>> >> DB_NAME: <set to the key 'DATABASE_NAME' in secret 'deckhand-db-user'> Optional: false
>> >> DB_SERVICE_USER: <set to the key 'DATABASE_USERNAME' in secret 'deckhand-db-user'> Optional: false
>> >> DB_SERVICE_PASSWORD: <set to the key 'DATABASE_PASSWORD' in secret 'deckhand-db-user'> Optional: false
>> >> DB_FQDN: <set to the key 'DATABASE_HOST' in secret 'deckhand-db-user'> Optional: false
>> >> DB_PORT: <set to the key 'DATABASE_PORT' in secret 'deckhand-db-user'> Optional: false
>> >> DB_ADMIN_USER: <set to the key 'DATABASE_USERNAME' in secret 'deckhand-db-admin'> Optional: false
>> >> PGPASSWORD: <set to the key 'DATABASE_PASSWORD' in secret 'deckhand-db-admin'> Optional: false
>> >> Mounts:
>> >> /etc/deckhand from etc-deckhand (rw)
>> >> /etc/deckhand/deckhand.conf from deckhand-etc (ro)
>> >> /tmp/db-init.sh from deckhand-bin (ro)
>> >> /var/run/secrets/kubernetes.io/serviceaccount from deckhand-db-init-token-gczr5 (ro)
>> >> Conditions:
>> >> Type Status
>> >> Initialized True
>> >> Ready False
>> >> PodScheduled True
>> >> Volumes:
>> >> etc-deckhand:
>> >> Type: EmptyDir (a temporary directory that shares a pod's lifetime)
>> >> Medium:
>> >> deckhand-etc:
>> >> Type: Secret (a volume populated by a Secret)
>> >> SecretName: deckhand-etc
>> >> Optional: false
>> >> deckhand-bin:
>> >> Type: ConfigMap (a volume populated by a ConfigMap)
>> >> Name: deckhand-bin
>> >> Optional: false
>> >> deckhand-db-init-token-gczr5:
>> >> Type: Secret (a volume populated by a Secret)
>> >> SecretName: deckhand-db-init-token-gczr5
>> >> Optional: false
>> >> QoS Class: BestEffort
>> >> Node-Selectors: ucp-control-plane=enabled
>> >> Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
>> >> node.kubernetes.io/unreachable:NoExecute for 300s
>> >> Events:
>> >> Type Reason Age From Message
>> >> ---- ------ ---- ---- -------
>> >> Normal Scheduled 24m default-scheduler Successfully assigned deckhand-db-init-zs499 to n0
>> >> Normal SuccessfulMountVolume 24m kubelet, n0 MountVolume.SetUp succeeded for volume "etc-deckhand"
>> >> Normal SuccessfulMountVolume 24m kubelet, n0 MountVolume.SetUp succeeded for volume "deckhand-bin"
>> >> Normal SuccessfulMountVolume 24m kubelet, n0 MountVolume.SetUp succeeded for volume "deckhand-etc"
>> >> Normal SuccessfulMountVolume 24m kubelet, n0 MountVolume.SetUp succeeded for volume "deckhand-db-init-token-gczr5"
>> >> Normal Pulled 24m kubelet, n0 Container image "quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine
>> >> Normal Created 24m kubelet, n0 Created container
>> >> Normal Started 24m kubelet, n0 Started container
>> >> Normal Pulled 23m (x4 over 24m) kubelet, n0 Container image "docker.io/postgres:9.5" already present on machine
>> >> Normal Created 23m (x4 over 24m) kubelet, n0 Created container
>> >> Normal Started 23m (x4 over 24m) kubelet, n0 Started container
>> >> Warning BackOff 4m (x90 over 24m) kubelet, n0 Back-off restarting failed container
>> >> root at n0:/home/vagrant#
>> >>
>> >> On Thu, May 9, 2019 at 12:08 PM calvin whole <calvinwhole at gmail.com> wrote:
>> >>>
>> >>> Hi Roman,
>> >>>
>> >>> Thanks for looking into this and giving us suggestions.
>> >>>
>> >>> I re-ran "vagrant up" and looked into the logs for "deckhand-db-init-zs499", as shown below.
>> >>> It showed ERROR: checkpoint request failed
>> >>> HINT: Consult recent messages in the server log for details.
>> >>>
>> >>> What is the specific "server" log we should look into for details?
>> >>>
>> >>> Thanks for help.
>> >>>
>> >>> Sincerely,
>> >>> Calvin
>> >>>
>> >>> ================== log for deckhand-db-init-zs499 ==================================
>> >>> root at n0:/home/vagrant# kubectl logs deckhand-db-init-zs499 -n ucp
>> >>> + export HOME=/tmp
>> >>> + HOME=/tmp
>> >>> + pgsql_superuser_cmd 'SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\'''
>> >>> + grep -q 1
>> >>> + DB_COMMAND='SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\'''
>> >>> + [[ ! -z '' ]]
>> >>> + psql -h postgresql.ucp.svc.cluster.local -p 5432 -U postgres '--command=SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\'''
>> >>> + pgsql_superuser_cmd 'CREATE DATABASE deckhand'
>> >>> + DB_COMMAND='CREATE DATABASE deckhand'
>> >>> + [[ ! -z '' ]]
>> >>> + psql -h postgresql.ucp.svc.cluster.local -p 5432 -U postgres '--command=CREATE DATABASE deckhand'
>> >>> ERROR: checkpoint request failed
>> >>> HINT: Consult recent messages in the server log for details.
>> >>>
>> >>> =====================================================================================
>> >>> ==> n0: NAMESPACE NAME READY STATUS RESTARTS AGE
>> >>> ==> n0: kube-system auxiliary-etcd-n0 3/3 Running 0 49m
>> >>> ==> n0: kube-system bootstrap-armada-n0 4/4 Running 0 49m
>> >>> ==> n0: kube-system calico-etcd-anchor-ncl2p 1/1 Running 0 47m
>> >>> ==> n0: kube-system calico-etcd-n0 1/1 Running 0 46m
>> >>> ==> n0: kube-system calico-kube-controllers-56c54d8cf8-5csnn 1/1 Running 0 46m
>> >>> ==> n0: kube-system calico-node-m4rtf 1/1 Running 0 46m
>> >>> ==> n0: kube-system calico-settings-tkp6r 0/1 Completed 0 46m
>> >>> ==> n0: kube-system coredns-84bdd76f4d-hhbcs 1/1 Running 0 44m
>> >>> ==> n0: kube-system coredns-84bdd76f4d-k8tcc 1/1 Running 0 44m
>> >>> ==> n0: kube-system coredns-84bdd76f4d-qp2xd 1/1 Running 0 44m
>> >>> ==> n0: kube-system haproxy-n0 1/1 Running 0 50m
>> >>> ==> n0: kube-system ingress-error-pages-7c65f766d-dn2tw 1/1 Running 0 41m
>> >>> ==> n0: kube-system ingress-gtvp8 2/2 Running 0 41m
>> >>> ==> n0: kube-system kubernetes-apiserver-anchor-99jhn 1/1 Running 0 42m
>> >>> ==> n0: kube-system kubernetes-apiserver-n0 1/1 Running 0 41m
>> >>> ==> n0: kube-system kubernetes-controller-manager-anchor-vqddp 1/1 Running 0 42m
>> >>> ==> n0: kube-system kubernetes-controller-manager-n0 1/1 Running 0 41m
>> >>> ==> n0: kube-system kubernetes-etcd-anchor-9jcpl 1/1 Running 0 44m
>> >>> ==> n0: kube-system kubernetes-etcd-n0 1/1 Running 0 42m
>> >>> ==> n0: kube-system kubernetes-proxy-2m9t2 1/1 Running 0 47m
>> >>> ==> n0: kube-system kubernetes-scheduler-anchor-nl9fb 1/1 Running 0 42m
>> >>> ==> n0: kube-system kubernetes-scheduler-n0 1/1 Running 0 41m
>> >>> ==> n0: kube-system nfs-provisioner-7799d64d59-vtkbd 1/1 Running 0 40m
>> >>> ==> n0: kube-system tiller-deploy-7d88c6f956-qwfzb 1/1 Running 0 27m
>> >>> ==> n0: ucp airship-ucp-keystone-memcached-memcached-74d79d8896-vfl69 1/1 Running 0 34m
>> >>> ==> n0: ucp airship-ucp-rabbitmq-rabbitmq-0 1/1 Running 0 39m
>> >>> ==> n0: ucp armada-api-d5f757d5-6wl98 1/1 Running 0 15m
>> >>> ==> n0: ucp armada-ks-endpoints-vl9rs 0/3 Completed 0 15m
>> >>> ==> n0: ucp armada-ks-service-vpcjd 0/1 Completed 0 15m
>> >>> ==> n0: ucp armada-ks-user-rv4gs 0/1 Completed 0 15m
>> >>> ==> n0: ucp barbican-api-5d7b88d8ff-8dd6w 1/1 Running 0 13m
>> >>> ==> n0: ucp barbican-db-init-gqvt4 0/1 Completed 0 13m
>> >>> ==> n0: ucp barbican-db-sync-tqtgq 0/1 Completed 0 13m
>> >>> ==> n0: ucp barbican-ks-endpoints-rwtql 0/3 Completed 0 13m
>> >>> ==> n0: ucp barbican-ks-service-l2h6h 0/1 Completed 0 13m
>> >>> ==> n0: ucp barbican-ks-user-wwvc7 0/1 Completed 0 13m
>> >>> ==> n0: ucp barbican-rabbit-init-6spq4 0/1 Completed 0 13m
>> >>> ==> n0: ucp deckhand-api-78b9644f96-5686f 0/1 Running 0 11m
>> >>> ==> n0: ucp deckhand-db-init-zs499 0/1 CrashLoopBackOff 7 11m <===
>> >>> ==> n0: ucp deckhand-db-sync-ct7wl 0/1 Init:0/1 0 11m
>> >>> ==> n0: ucp deckhand-ks-endpoints-x4hd9 0/3 Completed 0 11m
>> >>> ==> n0: ucp deckhand-ks-service-ms6n5 0/1 Completed 0 11m
>> >>> ==> n0: ucp deckhand-ks-user-7fnvt 0/1 Completed 0 11m
>> >>> ==> n0: ucp divingbell-apparmor-default-hth8z 1/1 Running 0 27m
>> >>> ==> n0: ucp divingbell-apt-default-r965m 1/1 Running 0 27m
>> >>> ==> n0: ucp divingbell-ethtool-default-ldcmc 1/1 Running 0 27m
>> >>> ==> n0: ucp divingbell-exec-default-f7h7x 1/1 Running 0 27m
>> >>> ==> n0: ucp divingbell-limits-default-sp9mj 1/1 Running 0 27m
>> >>> ==> n0: ucp divingbell-mounts-default-8f5a00a2-frbl2 1/1 Running 0 27m
>> >>> ==> n0: ucp divingbell-perm-default-d7wxp 1/1 Running 0 27m
>> >>> ==> n0: ucp divingbell-sysctl-default-c8pnp 1/1 Running 0 27m
>> >>> ==> n0: ucp divingbell-uamlite-default-rfct6 1/1 Running 0 27m
>> >>> ==> n0: ucp ingress-86576d6599-mdgj4 1/1 Running 0 39m
>> >>> ==> n0: ucp ingress-error-pages-5c97bb46bb-7lg5l 1/1 Running 0 39m
>> >>> ==> n0: ucp keystone-api-678fc44bdd-594bb 1/1 Running 0 34m
>> >>> ==> n0: ucp keystone-bootstrap-rprr6 0/1 Completed 0 34m
>> >>> ==> n0: ucp keystone-credential-setup-zkjgs 0/1 Completed 0 34m
>> >>> ==> n0: ucp keystone-db-init-xkgxm 0/1 Completed 0 34m
>> >>> ==> n0: ucp keystone-db-sync-lm6xs 0/1 Completed 0 34m
>> >>> ==> n0: ucp keystone-domain-manage-9pzjq 0/1 Completed 0 34m
>> >>> ==> n0: ucp keystone-fernet-setup-q7t8p 0/1 Completed 0 34m
>> >>> ==> n0: ucp keystone-rabbit-init-qpvgt 0/1 Completed 0 34m
>> >>> ==> n0: ucp maas-bootstrap-admin-user-8npgw 0/1 Completed 0 26m
>> >>> ==> n0: ucp maas-db-init-9z86n 0/1 Completed 0 26m
>> >>> ==> n0: ucp maas-db-sync-r7rkg 0/1 Completed 0 26m
>> >>> ==> n0: ucp maas-export-api-key-n2gz4 0/1 Completed 1 26m
>> >>> ==> n0: ucp maas-import-resources-prlml 0/1 Completed 0 26m
>> >>> ==> n0: ucp maas-ingress-756f6f9d6-h65nj 2/2 Running 0 26m
>> >>> ==> n0: ucp maas-ingress-errors-8686d56d98-swfg9 1/1 Running 0 26m
>> >>> ==> n0: ucp maas-rack-0 1/1 Running 0 26m
>> >>> ==> n0: ucp maas-region-0 1/1 Running 0 26m
>> >>> ==> n0: ucp mariadb-ingress-55794d94c8-dsw5w 1/1 Running 0 39m
>> >>> ==> n0: ucp mariadb-ingress-55794d94c8-jczmh 1/1 Running 0 39m
>> >>> ==> n0: ucp mariadb-ingress-error-pages-85f96fbd-jrqsg 1/1 Running 0 39m
>> >>> ==> n0: ucp mariadb-server-0 1/1 Running 0 39m
>> >>> ==> n0: ucp postgresql-0 1/1 Running 1 39m
>> >>>
>> >>>
>> >>> On Tue, May 7, 2019 at 6:25 PM Roman Gorshunov <paye600 at gmail.com> wrote:
>> >>>>
>> >>>> Hello Calvin,
>> >>>>
>> >>>> Try to get some kubectl logs, and describe the deckhand-db-init-r9jvg pod:
>> >>>> kubectl describe pod deckhand-db-init-r9jvg -n ucp
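>> >>>> and, assuming the same pod name as in your listing, its logs:
>> >>>> kubectl logs deckhand-db-init-r9jvg -n ucp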
>> >>>> Maybe it would help to understand what is happening there.
>> >>>>
>> >>>> Thank you for trying Airship.
>> >>>>
>> >>>> Best regards,
>> >>>> -- Roman Gorshunov
>> >>>>
>> >>>> On Tue, May 7, 2019 at 7:51 AM calvin whole <calvinwhole at gmail.com> wrote:
>> >>>> >
>> >>>> > Hi,
>> >>>> >
>> >>>> > We are trying to deploy AIAB (Airship-in-a-Bottle).
>> >>>> >
>> >>>> > I have a physical server with Ubuntu 16.04.5, and installed VirtualBox and Vagrant.
>> >>>> > The process is straightforward, following https://opendev.org/airship/in-a-bottle/
>> >>>> > We created the ~/deploy directory, downloaded the Vagrantfile, and ran "vagrant up".
>> >>>> >
>> >>>> > However, it got stuck with the error below:
>> >>>> > deckhand-db-init-r9jvg 0/1 CrashLoopBackOff 16
>> >>>> >
>> >>>> > Could anyone help to resolve this? Many thanks in advance.
>> >>>> >
>> >>>> > Sincerely,
>> >>>> > Calvin
>> >>>> >
>> >>>> > ==> n0: NAMESPACE NAME READY STATUS RESTARTS AGE
>> >>>> > ==> n0: kube-system auxiliary-etcd-n0 3/3 Running 0 1h
>> >>>> > ==> n0: kube-system bootstrap-armada-n0 4/4 Running 0 1h
>> >>>> > ==> n0: kube-system calico-etcd-anchor-5tqhk 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system calico-etcd-n0 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system calico-kube-controllers-56c54d8cf8-5ssl6 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system calico-node-pbsxh 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system calico-settings-lzpk9 0/1 Completed 0 1h
>> >>>> > ==> n0: kube-system coredns-84bdd76f4d-6cwnl 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system coredns-84bdd76f4d-d4p8c 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system coredns-84bdd76f4d-xrknz 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system haproxy-n0 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system ingress-9pkmx 2/2 Running 0 1h
>> >>>> > ==> n0: kube-system ingress-error-pages-7c65f766d-2pqfx 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system kubernetes-apiserver-anchor-hszbf 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system kubernetes-apiserver-n0 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system kubernetes-controller-manager-anchor-h49vz 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system kubernetes-controller-manager-n0 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system kubernetes-etcd-anchor-nnjbb 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system kubernetes-etcd-n0 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system kubernetes-proxy-vgzjp 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system kubernetes-scheduler-anchor-bq2gk 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system kubernetes-scheduler-n0 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system nfs-provisioner-7799d64d59-jx7hq 1/1 Running 0 1h
>> >>>> > ==> n0: kube-system tiller-deploy-7d88c6f956-d9kzg 1/1 Running 0 1h
>> >>>> > ==> n0: ucp airship-ucp-keystone-memcached-memcached-74d79d8896-q9wqx 1/1 Running 0 1h
>> >>>> > ==> n0: ucp airship-ucp-rabbitmq-rabbitmq-0 1/1 Running 0 1h
>> >>>> > ==> n0: ucp armada-api-d5f757d5-d9l9h 1/1 Running 0 1h
>> >>>> > ==> n0: ucp armada-ks-endpoints-qwbtg 0/3 Completed 0 1h
>> >>>> > ==> n0: ucp armada-ks-service-lg8kq 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp armada-ks-user-g2j6v 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp barbican-api-84665dd99d-qv5fz 1/1 Running 0 1h
>> >>>> > ==> n0: ucp barbican-db-init-ndx58 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp barbican-db-sync-sh7c9 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp barbican-ks-endpoints-bv7xv 0/3 Completed 0 1h
>> >>>> > ==> n0: ucp barbican-ks-service-46hjk 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp barbican-ks-user-6df74 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp barbican-rabbit-init-gnvfl 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp deckhand-api-6cd9c4479d-wc5cw 0/1 Running 0 1h
>> >>>> > ==> n0: ucp deckhand-db-init-r9jvg 0/1 CrashLoopBackOff 17 1h <=====
>> >>>> > ==> n0: ucp deckhand-db-sync-llstv 0/1 Init:0/1 0 1h
>> >>>> > ==> n0: ucp deckhand-ks-endpoints-4gqfj 0/3 Completed 0 1h
>> >>>> > ==> n0: ucp deckhand-ks-service-c6gbq 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp deckhand-ks-user-5skng 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp divingbell-apparmor-default-lkcl6 1/1 Running 0 1h
>> >>>> > ==> n0: ucp divingbell-apt-default-7jgtv 1/1 Running 0 1h
>> >>>> > ==> n0: ucp divingbell-ethtool-default-tm2w4 1/1 Running 0 1h
>> >>>> > ==> n0: ucp divingbell-exec-default-l45m8 1/1 Running 0 1h
>> >>>> > ==> n0: ucp divingbell-limits-default-q84pr 1/1 Running 0 1h
>> >>>> > ==> n0: ucp divingbell-mounts-default-29420945-nrdsz 1/1 Running 0 1h
>> >>>> > ==> n0: ucp divingbell-perm-default-wdgld 1/1 Running 0 1h
>> >>>> > ==> n0: ucp divingbell-sysctl-default-t7f2m 1/1 Running 0 1h
>> >>>> > ==> n0: ucp divingbell-uamlite-default-fc4jx 1/1 Running 0 1h
>> >>>> > ==> n0: ucp ingress-86576d6599-q8ng4 1/1 Running 0 1h
>> >>>> > ==> n0: ucp ingress-error-pages-5c97bb46bb-pjz9m 1/1 Running 0 1h
>> >>>> > ==> n0: ucp keystone-api-678fc44bdd-ncxc2 1/1 Running 0 1h
>> >>>> > ==> n0: ucp keystone-bootstrap-28l4g 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp keystone-credential-setup-rq5d4 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp keystone-db-init-z8x4w 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp keystone-db-sync-9hvb5 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp keystone-domain-manage-tzcnf 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp keystone-fernet-setup-bzdpb 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp keystone-rabbit-init-cxpc6 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp maas-bootstrap-admin-user-g99rl 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp maas-db-init-h4llm 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp maas-db-sync-6tsqj 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp maas-export-api-key-c8rdb 0/1 Completed 0 1h
>> >>>> > ==> n0: ucp maas-import-resources-hhq7f 0/1 Completed 1 1h
>> >>>> > ==> n0: ucp maas-ingress-756f6f9d6-dpcp9 2/2 Running 0 1h
>> >>>> > ==> n0: ucp maas-ingress-errors-8686d56d98-jr6xx 1/1 Running 0 1h
>> >>>> > ==> n0: ucp maas-rack-0 1/1 Running 0 1h
>> >>>> > ==> n0: ucp maas-region-0 1/1 Running 0 1h
>> >>>> > ==> n0: ucp mariadb-ingress-55794d94c8-mhjjf 1/1 Running 0 1h
>> >>>> > ==> n0: ucp mariadb-ingress-55794d94c8-vglbv 1/1 Running 0 1h
>> >>>> > ==> n0: ucp mariadb-ingress-error-pages-85f96fbd-28cdv 1/1 Running 0 1h
>> >>>> > ==> n0: ucp mariadb-server-0 1/1 Running 0 1h
>> >>>> > ==> n0: ucp postgresql-0 1/1 Running 1 1h
>> >>>> > _______________________________________________
>> >>>> > Airship-discuss mailing list
>> >>>> > Airship-discuss at lists.airshipit.org
>> >>>> > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss