From adrien.macor at hotmail.com Thu May 2 07:22:04 2019
From: adrien.macor at hotmail.com (Adrien Macor)
Date: Thu, 2 May 2019 07:22:04 +0000
Subject: [Airship-discuss] New installation of Airship

Hello,

I am currently trying to install Airship-in-a-bottle on my server, and I get the following error:

"message": "Invalid document [armada/Chart/v1] nova: {'timeout': 2700} is not valid under any of the given schemas.",
"error": true,
"name": "ARM100",
"documents": [ { "schema": "armada/Chart/v1", "name": "nova" } ],
"level": "Error",
"kind": "ValidationMessage"
}
2019-05-02 07:20:52.246 7 INFO armada.utils.validate [-] ValidationMessage: {
"message": "Invalid document [armada/Chart/v1] neutron: {'timeout': 2700} is not valid under any of the given schemas.",
"error": true,
"name": "ARM100",
"documents": [ { "schema": "armada/Chart/v1", "name": "neutron" } ],
"level": "Error",
"kind": "ValidationMessage"
}
2019-05-02 07:20:52.333 7 ERROR armada.cli [-] Caught internal exception: armada.exceptions.validate_exceptions.InvalidManifestException: Armada manifest(s) failed validation. Details: [{'message': "Invalid document [armada/Chart/v1] nova: {'timeout': 2700} is not valid under any of the given schemas.", 'error': True, 'name': 'ARM100', 'documents': [{'schema': 'armada/Chart/v1', 'name': 'nova'}], 'level': 'Error', 'kind': 'ValidationMessage'}, {'message': "Invalid document [armada/Chart/v1] neutron: {'timeout': 2700} is not valid under any of the given schemas.", 'error': True, 'name': 'ARM100', 'documents': [{'schema': 'armada/Chart/v1', 'name': 'neutron'}], 'level': 'Error', 'kind': 'ValidationMessage'}].
2019-05-02 07:20:52.333 7 ERROR armada.cli Traceback (most recent call last):
2019-05-02 07:20:52.333 7 ERROR armada.cli File "/usr/local/lib/python3.6/site-packages/armada/cli/__init__.py", line 39, in safe_invoke
2019-05-02 07:20:52.333 7 ERROR armada.cli self.invoke()
2019-05-02 07:20:52.333 7 ERROR armada.cli File "/usr/local/lib/python3.6/site-packages/armada/cli/apply.py", line 218, in invoke
2019-05-02 07:20:52.333 7 ERROR armada.cli target_manifest=self.target_manifest)
2019-05-02 07:20:52.333 7 ERROR armada.cli File "/usr/local/lib/python3.6/site-packages/armada/handlers/armada.py", line 85, in __init__
2019-05-02 07:20:52.333 7 ERROR armada.cli values=values).update_manifests()
2019-05-02 07:20:52.333 7 ERROR armada.cli File "/usr/local/lib/python3.6/site-packages/armada/handlers/override.py", line 178, in update_manifests
2019-05-02 07:20:52.333 7 ERROR armada.cli self._document_checker(self.documents)
2019-05-02 07:20:52.333 7 ERROR armada.cli File "/usr/local/lib/python3.6/site-packages/armada/handlers/override.py", line 53, in _document_checker
2019-05-02 07:20:52.333 7 ERROR armada.cli error_messages=details)
2019-05-02 07:20:52.333 7 ERROR armada.cli armada.exceptions.validate_exceptions.InvalidManifestException: Armada manifest(s) failed validation. Details: [{'message': "Invalid document [armada/Chart/v1] nova: {'timeout': 2700} is not valid under any of the given schemas.", 'error': True, 'name': 'ARM100', 'documents': [{'schema': 'armada/Chart/v1', 'name': 'nova'}], 'level': 'Error', 'kind': 'ValidationMessage'}, {'message': "Invalid document [armada/Chart/v1] neutron: {'timeout': 2700} is not valid under any of the given schemas.", 'error': True, 'name': 'ARM100', 'documents': [{'schema': 'armada/Chart/v1', 'name': 'neutron'}], 'level': 'Error', 'kind': 'ValidationMessage'}].
2019-05-02 07:20:52.333 7 ERROR armada.cli

Can anyone help me?
Thanks
Adriano

From MM9745 at att.com Thu May 2 13:39:11 2019
From: MM9745 at att.com (MCEUEN, MATT)
Date: Thu, 2 May 2019 13:39:11 +0000
Subject: [Airship-discuss] New installation of Airship
Message-ID: <7C64A75C21BB8D43BD75BB18635E4D8970912FC5@MOSTLS1MSGUSRFF.ITServices.sbc.com>

Hey Adrien,

We added the 2700s timeouts to airship-in-a-bottle nova and neutron a few days ago to match observed deployment times. However, I'm not sure offhand what would cause document validation to complain about the timeout - it's allowed. Is this a fresh installation of airship-in-a-bottle in a new VM?

+Sean - I don't think this would be related to your recent v2 schema addition, but can you please sanity check me on that?

I'll give aiab a try today; it was working consistently for me earlier this week.

Thanks,
Matt
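For context, the timeout under discussion lives in the chart document itself. A minimal armada/Chart/v1 sketch with the shape the error message implies - the values below are illustrative, not the actual airship-in-a-bottle manifest:

schema: armada/Chart/v1
metadata:
  schema: metadata/Document/v1
  name: nova
data:
  chart_name: nova
  release: nova
  namespace: openstack
  timeout: 2700          # seconds; the value the validator rejected
  source:
    type: git
    location: https://opendev.org/openstack/openstack-helm
    subpath: nova
  dependencies: []

A schema rejection on an allowed key like this usually points at a version skew between the manifests and the Armada image doing the validation, which is consistent with the fix that follows later in the thread.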
From drewwalters96 at gmail.com Thu May 2 15:26:46 2019
From: drewwalters96 at gmail.com (Drew Walters)
Date: Thu, 2 May 2019 09:26:46 -0600
Subject: [Airship-discuss] New installation of Airship
In-Reply-To: <7C64A75C21BB8D43BD75BB18635E4D8970912FC5@MOSTLS1MSGUSRFF.ITServices.sbc.com>
References: <7C64A75C21BB8D43BD75BB18635E4D8970912FC5@MOSTLS1MSGUSRFF.ITServices.sbc.com>

> "message": "Invalid document [armada/Chart/v1] nova: {'timeout': 2700}
> is not valid under any of the given schemas.",

This validation error does not appear to match Matt's patch from the other day [0]. Also, the patch only added timeout values to the demo documents.

Adrien, can you also confirm which set of documents/site in AIAB you are deploying?

Thanks,
Drew

[0] https://review.opendev.org/656060

From adrien.macor at hotmail.com Thu May 2 20:37:17 2019
From: adrien.macor at hotmail.com (Adrien Macor)
Date: Thu, 2 May 2019 20:37:17 +0000
Subject: [Airship-discuss] New installation of Airship
References: <7C64A75C21BB8D43BD75BB18635E4D8970912FC5@MOSTLS1MSGUSRFF.ITServices.sbc.com>

I just followed the steps from here: https://opendev.org/airship/in-a-bottle/
(If it's not the answer you were expecting, sorry - I don't know where to find the information.)
From valleru at cbio.mskcc.org Thu May 2 20:46:43 2019
From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org)
Date: Thu, 2 May 2019 15:46:43 -0500
Subject: [Airship-discuss] Airship installation

Hello Everyone,

I have tried researching as much as possible within the documentation - but is Airship only available to install on Ubuntu as of now? Can I get it working on a CentOS7 VM?

Regards,
Lohit

From eli at mirantis.com Thu May 2 20:50:29 2019
From: eli at mirantis.com (Evgeny L)
Date: Thu, 2 May 2019 14:50:29 -0600
Subject: [Airship-discuss] New installation of Airship
References: <7C64A75C21BB8D43BD75BB18635E4D8970912FC5@MOSTLS1MSGUSRFF.ITServices.sbc.com>

Hi Adrien,

We've fixed some issues in the project, can you please recreate your VM and try running it again?

Thanks!
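For anyone following along, the recreation Evgeny suggests amounts to two Vagrant commands run from the directory holding the Vagrantfile (~/deploy in this thread). A sketch, assuming there is no state in the VM worth keeping:

cd ~/deploy
vagrant destroy -f   # throw away the existing VM and its disk
vagrant up           # re-provision from scratch, picking up the fixed manifests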
From eli at mirantis.com Thu May 2 21:00:00 2019
From: eli at mirantis.com (Evgeny L)
Date: Thu, 2 May 2019 15:00:00 -0600
Subject: [Airship-discuss] Airship installation

Hi,

As of now we do not have an official way to run Airship on CentOS7; however, there is an ongoing collaboration with SUSE to add multi-OS support for both OSH & Airship. This work should make it easier to add other operating systems in the future if there is interest and support from the community.

Thanks,

From valleru at cbio.mskcc.org Thu May 2 21:22:23 2019
From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org)
Date: Thu, 2 May 2019 16:22:23 -0500
Subject: [Airship-discuss] Airship installation
Message-ID: <9865520f-4fd7-4922-9cd6-cfe87cbfcbab@Spark>

Thank you. The reason I am asking is that my goals for Airship go much beyond using it just for OpenStack.

We are a full CentOS7 HPC cluster with about 800 servers/nodes working together as a supercomputer.

Currently, for baremetal/VM deployment I use Foreman together with Hiera and various YAML files/tags/facts that I get for different servers. Once deployed, Puppet then configures the OS on the baremetal nodes depending on the same tags/facts/YAML files from Hiera.

I see that Airship uses a similar declarative design with YAML, but deploys services in containers.

My goal is to use Airship and see if I can deploy both baremetal servers and OpenStack/VMs using the same YAML files. It would thus be more like infrastructure as code.

I am not sure if this would be the correct approach, or if I have to deploy OpenStack and instead use OpenStack Ironic to deploy the baremetal nodes. The issue with the OpenStack Ironic approach is that I want to use the same declarative YAML design that Airship uses to deploy Ironic nodes too, and I am not sure how far that is possible. Also, I want to use the same approach to deploy instances on public cloud too, so maybe using OpenStack Ironic would be a better way to go.

If it is currently not possible to do any of the above, I would like to know if there is a way I could get involved in future road-maps and see if this could be one of the directions in which Airship can help me.

Also, would it be too difficult for me to try whether Airship can work with MAAS and deploy CentOS7 instead of Ubuntu?

Regards,
Lohit
From rp2723 at att.com Thu May 2 21:38:12 2019
From: rp2723 at att.com (PACHECO, RODOLFO J)
Date: Thu, 2 May 2019 21:38:12 +0000
Subject: [Airship-discuss] Airship installation
In-Reply-To: <9865520f-4fd7-4922-9cd6-cfe87cbfcbab@Spark>
References: <9865520f-4fd7-4922-9cd6-cfe87cbfcbab@Spark>
Message-ID: <848AFE13-F4CC-4EA0-BE53-837D1966BD0B@att.com>

Lohit,

You will be glad to know that the direction of Airship for the next release includes integration with projects such as cluster-api, baremetal-operator and ironic. Part of the evolution takes us in a direction where support of CentOS is mostly a given.

From what you describe, it sounds as if you could definitely take advantage of what Airship provides.

We encourage you to start attending our Airship design calls on Thursdays at 11 EST, or the rest of our meetings listed here: https://wiki.openstack.org/wiki/Airship#Get_in_Touch

We will be starting to focus on Airship 2.0 features, and I feel that we are going in a direction you would agree with and benefit from.

Regards

Rodolfo Pacheco
Home/Office 732 5337671
From valleru at cbio.mskcc.org Thu May 2 21:56:18 2019
From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org)
Date: Thu, 2 May 2019 16:56:18 -0500
Subject: [Airship-discuss] Airship installation
In-Reply-To: <848AFE13-F4CC-4EA0-BE53-837D1966BD0B@att.com>
References: <9865520f-4fd7-4922-9cd6-cfe87cbfcbab@Spark> <848AFE13-F4CC-4EA0-BE53-837D1966BD0B@att.com>
Message-ID: <71262a09-1fbf-41d2-9d94-99b0419694be@Spark>

Thanks a lot Rodolfo, that's promising to hear. I will try my best to attend the design calls, but unfortunately I have my team's weekly meeting at the same time on Thursdays. I will see if I can join any other meetings, or track the progress of the Airship 2 features.

Regards,
Lohit
From adrien.macor at hotmail.com Fri May 3 10:34:13 2019
From: adrien.macor at hotmail.com (Adrien Macor)
Date: Fri, 3 May 2019 10:34:13 +0000
Subject: [Airship-discuss] New installation of Airship
References: <7C64A75C21BB8D43BD75BB18635E4D8970912FC5@MOSTLS1MSGUSRFF.ITServices.sbc.com>

Apparently this worked fine - I just finished the installation of AIAB. Thanks 🙂
From hlini at vivaldi.com Mon May 6 12:55:38 2019
From: hlini at vivaldi.com (Hlini Melsteð Jóngeirsson)
Date: Mon, 06 May 2019 12:55:38 +0000
Subject: [Airship-discuss] Airship in a bottle MAAS missing port
Message-ID: <1557147041021.3903304646.1499301416@vivaldi.com>

Hi there,

I was wondering if it was intentional that MAAS was not working in the "Airship in a bottle" setup. When it finishes, I get the following text output:

n0: Other dashboards:
n0:
n0: MAAS: http://192.168.121.127:/MAAS/ admin/password12

That link is missing the port. I tried most of the open ports but didn't find the right one; it seemed like the mapping was missing when I had a closer look at Kubernetes.

--
All the best,
Hlini

From paye600 at gmail.com Mon May 6 15:14:28 2019
From: paye600 at gmail.com (Roman Gorshunov)
Date: Mon, 6 May 2019 16:14:28 +0100
Subject: [Airship-discuss] Airship in a bottle MAAS missing port
In-Reply-To: <1557147041021.3903304646.1499301416@vivaldi.com>
References: <1557147041021.3903304646.1499301416@vivaldi.com>

Hi Hlini,

You are missing the MaaS dashboard port.

Apply this patch: https://review.opendev.org/#/c/632457/

Publish the MaaS dashboard:
kubectl -n ucp expose service/maas --type=NodePort --name=maas-dashboard

Get the MaaS dashboard port:
kubectl -n ucp get service maas-dashboard -o jsonpath="{.spec.ports[?(@.port==80)].nodePort}"

Hope it will help.

Best regards,
-- Roman Gorshunov

From hlini at vivaldi.com Mon May 6 15:47:33 2019
From: hlini at vivaldi.com (Hlini Melsteð Jóngeirsson)
Date: Mon, 06 May 2019 15:47:33 +0000
Subject: [Airship-discuss] Airship in a bottle MAAS missing port
Message-ID: <1557157486818.1997167599.362851818@vivaldi.com>

Hi Roman,

That's great - that worked with one correction. I had to run it with service/maas-region instead of just service/maas in the default setup:

# kubectl -n ucp expose service/maas-region --type=NodePort --name=maas-dashboard

--
Hlini
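Combining Roman's port lookup with Hlini's corrected service name, the full dashboard URL can be assembled in one short sequence. A sketch: it assumes the service's HTTP port is 80, as in Roman's jsonpath, and reuses the node IP from Hlini's output:

# Expose the MaaS region service on a NodePort
kubectl -n ucp expose service/maas-region --type=NodePort --name=maas-dashboard

# Look up the NodePort assigned to the HTTP port and build the URL
PORT=$(kubectl -n ucp get service maas-dashboard -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}')
echo "MAAS dashboard: http://192.168.121.127:${PORT}/MAAS/"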
From hlini at vivaldi.com Mon May 6 15:53:35 2019
From: hlini at vivaldi.com (Hlini Melsteð Jóngeirsson)
Date: Mon, 06 May 2019 15:53:35 +0000
Subject: [Airship-discuss] Airship in a bottle MAAS missing port
In-Reply-To: <1557157486818.1997167599.362851818@vivaldi.com>
References: <1557147041021.3903304646.1499301416@vivaldi.com> <1557157486818.1997167599.362851818@vivaldi.com>
Message-ID: <1557157937301.3069837173.3481104694@vivaldi.com>

Hi again,

Also, so it's here for those looking for the same info: the default user/pass is not what the CLI output states. It is given there as admin/password12; it is in fact admin/admin.
--
Hlini Melsteð Jóngeirsson
System Administrator
Vivaldi Technologies
Hlini at vivaldi.com
+354-6962200
Reykjavik-Oslo-Gloucester
https://vivaldi.com

From calvinwhole at gmail.com Tue May 7 03:14:08 2019
From: calvinwhole at gmail.com (calvin whole)
Date: Tue, 7 May 2019 11:14:08 +0800
Subject: [Airship-discuss] Airship 1.0 Documentation

Hi,

We are interested in Airship development.

Since Airship 1.0 is officially released now, we are trying to look for the release 1.0 deployment documentation but cannot find it. Is there documentation updated for 1.0? Please advise and give us a pointer. Much appreciated.

Sincerely,
Calvin

From calvinwhole at gmail.com Tue May 7 05:50:28 2019
From: calvinwhole at gmail.com (calvin whole)
Date: Tue, 7 May 2019 13:50:28 +0800
Subject: [Airship-discuss] airship-in-a-bottle deployment issue

Hi,

We are trying to deploy AIAB (airship-in-a-bottle).

I have a physical server with Ubuntu 16.04.5, and installed VirtualBox and Vagrant. The process is straightforward, following https://opendev.org/airship/in-a-bottle/ : we created a ~/deploy directory, downloaded the Vagrantfile, and ran "vagrant up".

However, it gets stuck on the error below:

deckhand-db-init-r9jvg 0/1 CrashLoopBackOff 16

Could anyone help to resolve this? Many thanks in advance.

Sincerely,
Calvin

==> n0: NAMESPACE NAME READY STATUS RESTARTS AGE
==> n0: kube-system auxiliary-etcd-n0 3/3 Running 0 1h
==> n0: kube-system bootstrap-armada-n0 4/4 Running 0 1h
==> n0: kube-system calico-etcd-anchor-5tqhk 1/1 Running 0 1h
==> n0: kube-system calico-etcd-n0 1/1 Running 0 1h
==> n0: kube-system calico-kube-controllers-56c54d8cf8-5ssl6 1/1 Running 0 1h
==> n0: kube-system calico-node-pbsxh 1/1 Running 0 1h
==> n0: kube-system calico-settings-lzpk9 0/1 Completed 0 1h
==> n0: kube-system coredns-84bdd76f4d-6cwnl 1/1 Running 0 1h
==> n0: kube-system coredns-84bdd76f4d-d4p8c 1/1 Running 0 1h
==> n0: kube-system coredns-84bdd76f4d-xrknz 1/1 Running 0 1h
==> n0: kube-system haproxy-n0 1/1 Running 0 1h
==> n0: kube-system ingress-9pkmx 2/2 Running 0 1h
==> n0: kube-system ingress-error-pages-7c65f766d-2pqfx 1/1 Running 0 1h
==> n0: kube-system kubernetes-apiserver-anchor-hszbf 1/1 Running 0 1h
==> n0: kube-system kubernetes-apiserver-n0 1/1 Running 0 1h
==> n0: kube-system kubernetes-controller-manager-anchor-h49vz 1/1 Running 0 1h
==> n0: kube-system kubernetes-controller-manager-n0 1/1 Running 0 1h
==> n0: kube-system kubernetes-etcd-anchor-nnjbb 1/1 Running 0 1h
==> n0: kube-system kubernetes-etcd-n0 1/1 Running 0 1h
==> n0: kube-system kubernetes-proxy-vgzjp 1/1 Running 0 1h
==> n0: kube-system kubernetes-scheduler-anchor-bq2gk 1/1 Running 0 1h
==> n0: kube-system kubernetes-scheduler-n0 1/1 Running 0 1h
==> n0: kube-system nfs-provisioner-7799d64d59-jx7hq 1/1 Running 0 1h
==> n0: kube-system tiller-deploy-7d88c6f956-d9kzg 1/1 Running 0 1h
==> n0: ucp airship-ucp-keystone-memcached-memcached-74d79d8896-q9wqx 1/1 Running 0 1h
==> n0: ucp airship-ucp-rabbitmq-rabbitmq-0 1/1 Running 0 1h
==> n0: ucp armada-api-d5f757d5-d9l9h 1/1 Running 0 1h
==> n0: ucp armada-ks-endpoints-qwbtg 0/3 Completed 0 1h
==> n0: ucp armada-ks-service-lg8kq 0/1 Completed 0 1h
==> n0: ucp armada-ks-user-g2j6v 0/1 Completed 0 1h
==> n0: ucp barbican-api-84665dd99d-qv5fz 1/1 Running 0 1h
==> n0: ucp barbican-db-init-ndx58 0/1 Completed 0 1h
==> n0: ucp barbican-db-sync-sh7c9 0/1 Completed 0 1h
==> n0: ucp barbican-ks-endpoints-bv7xv 0/3 Completed 0 1h
==> n0: ucp barbican-ks-service-46hjk 0/1 Completed 0 1h
==> n0: ucp barbican-ks-user-6df74 0/1 Completed 0 1h
==> n0: ucp barbican-rabbit-init-gnvfl 0/1 Completed 0 1h
==> n0: ucp deckhand-api-6cd9c4479d-wc5cw 0/1 Running 0 1h
==> n0: ucp deckhand-db-init-r9jvg 0/1 CrashLoopBackOff 17 1h <=====
==> n0: ucp deckhand-db-sync-llstv 0/1 Init:0/1 0 1h
==> n0: ucp deckhand-ks-endpoints-4gqfj 0/3 Completed 0 1h
==> n0: ucp deckhand-ks-service-c6gbq 0/1 Completed 0 1h
==> n0: ucp deckhand-ks-user-5skng 0/1 Completed 0 1h
==> n0: ucp divingbell-apparmor-default-lkcl6 1/1 Running 0 1h
==> n0: ucp divingbell-apt-default-7jgtv 1/1 Running 0 1h
==> n0: ucp divingbell-ethtool-default-tm2w4 1/1 Running 0 1h
==> n0: ucp divingbell-exec-default-l45m8 1/1 Running 0 1h
==> n0: ucp divingbell-limits-default-q84pr 1/1 Running 0 1h
==> n0: ucp divingbell-mounts-default-29420945-nrdsz 1/1 Running 0 1h
==> n0: ucp divingbell-perm-default-wdgld 1/1 Running 0 1h
==> n0: ucp divingbell-sysctl-default-t7f2m 1/1 Running 0 1h
==> n0: ucp divingbell-uamlite-default-fc4jx 1/1 Running 0 1h
==> n0: ucp ingress-86576d6599-q8ng4 1/1 Running 0 1h
==> n0: ucp ingress-error-pages-5c97bb46bb-pjz9m 1/1 Running 0 1h
==> n0: ucp keystone-api-678fc44bdd-ncxc2 1/1 Running 0 1h
==> n0: ucp keystone-bootstrap-28l4g 0/1 Completed 0 1h
==> n0: ucp keystone-credential-setup-rq5d4 0/1 Completed 0 1h
==> n0: ucp keystone-db-init-z8x4w 0/1 Completed 0 1h
==> n0: ucp keystone-db-sync-9hvb5 0/1 Completed 0 1h
==> n0: ucp keystone-domain-manage-tzcnf 0/1 Completed 0 1h
==> n0: ucp keystone-fernet-setup-bzdpb 0/1 Completed 0 1h
==> n0: ucp keystone-rabbit-init-cxpc6 0/1 Completed 0 1h
==> n0: ucp maas-bootstrap-admin-user-g99rl 0/1 Completed 0 1h
==> n0: ucp maas-db-init-h4llm 0/1 Completed 0 1h
==> n0: ucp maas-db-sync-6tsqj 0/1 Completed 0 1h
==> n0: ucp maas-export-api-key-c8rdb 0/1 Completed 0 1h
==> n0: ucp maas-import-resources-hhq7f 0/1 Completed 1 1h
==> n0: ucp maas-ingress-756f6f9d6-dpcp9 2/2 Running 0 1h
==> n0: ucp maas-ingress-errors-8686d56d98-jr6xx 1/1 Running 0 1h
==> n0: ucp maas-rack-0 1/1 Running 0 1h
==> n0: ucp maas-region-0 1/1 Running 0 1h
==> n0: ucp mariadb-ingress-55794d94c8-mhjjf 1/1 Running 0 1h
==> n0: ucp mariadb-ingress-55794d94c8-vglbv 1/1 Running 0 1h
==> n0: ucp mariadb-ingress-error-pages-85f96fbd-28cdv 1/1 Running 0 1h
==> n0: ucp mariadb-server-0 1/1 Running 0 1h
==> n0: ucp postgresql-0 1/1 Running 1 1h

From paye600 at gmail.com Tue May 7 10:19:16 2019
From: paye600 at gmail.com (Roman Gorshunov)
Date: Tue, 7 May 2019 12:19:16 +0200
Subject: [Airship-discuss] Airship 1.0 Documentation

Hello Calvin,

The Airship Treasuremap repository has a v1.0 tag, and the corresponding tagged documentation is published on the RTD website: https://airship-treasuremap.readthedocs.io/en/v1.0/ . Individual components of Airship have not been tagged with a release tag, as far as I know.

Best regards,
-- Roman Gorshunov
From paye600 at gmail.com Tue May 7 10:25:34 2019
From: paye600 at gmail.com (Roman Gorshunov)
Date: Tue, 7 May 2019 12:25:34 +0200
Subject: [Airship-discuss] airship-in-a-bottle deployment issue

Hello Calvin,

Try to get some kubectl logs, and describe the deckhand-db-init-r9jvg pod:

kubectl describe pod deckhand-db-init-r9jvg -n ucp

Maybe it will help to understand what is happening there.

Thank you for trying Airship.

Best regards,
-- Roman Gorshunov
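To complement the describe step, the init container's own logs usually name the concrete failure. A short sketch using the pod name from Calvin's listing; for a pod in CrashLoopBackOff, --previous shows output from the last crashed attempt:

# Events and container state for the failing pod
kubectl -n ucp describe pod deckhand-db-init-r9jvg

# Logs from the current attempt, then from the previous crashed one
kubectl -n ucp logs deckhand-db-init-r9jvg
kubectl -n ucp logs deckhand-db-init-r9jvg --previous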
From MM9745 at att.com Tue May 7 11:57:30 2019
From: MM9745 at att.com (MCEUEN, MATT)
Date: Tue, 7 May 2019 11:57:30 +0000
Subject: [Airship-discuss] Airship 1.0 Documentation

Agreed - I would recommend using the v1.0 tag of the treasuremap repository, which integrates pinned versions of the Airship components, OpenStack-Helm, Kubernetes, etc., and the deployment documentation on https://airship-treasuremap.readthedocs.io. The "sloop" deployment manifests in particular are intended to be simplified and easy to get started with from a bare metal perspective.
Thanks,
Matt

Sent from MyOwn, an AT&T BYOD solution

From ks3019 at att.com Tue May 7 14:31:47 2019
From: ks3019 at att.com (SKELS, KASPARS)
Date: Tue, 7 May 2019 14:31:47 +0000
Subject: [Airship-discuss] Airship 1.0 Documentation
Message-ID: <2ADBF0C373B7E84E944B1E06D3CDDFC91E7481B6@MOKSCY3MSGUSRGI.ITServices.sbc.com>

Good morning! Just to add: I would suggest using the latest documentation, which is improved but still very much relevant for the v1.0 code base: https://airship-treasuremap.readthedocs.io/en/latest/

Have fun!
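For completeness, pinning a checkout to the tag Roman and Matt recommend is a short sequence against the repository they reference (repo URL as published on opendev.org):

git clone https://opendev.org/airship/treasuremap
cd treasuremap
git checkout v1.0   # pinned Airship components, OpenStack-Helm, Kubernetes versions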
From rp2723 at att.com Tue May 7 16:20:26 2019
From: rp2723 at att.com (PACHECO, RODOLFO J)
Date: Tue, 7 May 2019 16:20:26 +0000
Subject: [Airship-discuss] Airship Design Meeting Schedule Voting
Message-ID: <4D602948-74B8-4AD2-8D77-6597C381836A@att.com>

At the PTG we had the discussion of the Design Call times and number of days. The following options were proposed at the PTG:

· Design Call evolution
  * Move to at least 2 calls a week (Suggested)
    § Thursday - 11:00 AM EST / 10:00 AM CST / 8:00 AM PST / 12:00 AM UTC+9 (Asia) / 3-4 PM UTC+1/2 (Europe) +1+1+1+1+1+1+1
    § Tuesday - 9:00 AM EST / 8:00 AM CST / 6:00 AM PST / 10:00 PM UTC+9 (Asia) / 1-2 PM UTC+1/2 (Europe) +1+1
  * Extend length of the calls?
    § Remain 60 mins +1
    § Extend 90 mins +1

The key question remaining is the length of the calls.

Please vote here for the calls' length. Ignore the Tuesday/Thursday May dates; all I am interested in deciding is the length of the calls at this point.

https://doodle.com/poll/z42tprkkhkatbdc6

I will close this voting Sunday, May 12 at 12 AM EST, and will send an updated invite on the mailing list afterwards.

Reminder: we will have the Design Call Thursday, May 9 at 11 EST as per usual.

Regards

Rodolfo Pacheco
Home/Office 732 5337671
From rp2723 at att.com Tue May 7 16:49:39 2019
From: rp2723 at att.com (PACHECO, RODOLFO J)
Date: Tue, 7 May 2019 16:49:39 +0000
Subject: [Airship-discuss] Airship Design Meeting Schedule Voting
In-Reply-To: <4D602948-74B8-4AD2-8D77-6597C381836A@att.com>
References: <4D602948-74B8-4AD2-8D77-6597C381836A@att.com>
Message-ID: <5B76FBD3-5FD4-4902-AB3E-351768493E30@att.com>

RESENDING -> ISSUES with the PREVIOUS DOODLE (confusing and not working)

The replacement poll is here: https://doodle.com/poll/wbdwym6x4dqnhe6f

Voting closes Sunday, May 12 at 12 AM EST; an updated invite will follow on the mailing list. Reminder: we will have the Design Call Thursday, May 9 at 11 EST as per usual.

Regards

Rodolfo Pacheco
Home/Office 732 5337671

From bluejay.ahn at gmail.com Wed May 8 02:13:24 2019
From: bluejay.ahn at gmail.com (Jaesuk Ahn)
Date: Wed, 8 May 2019 11:13:24 +0900
Subject: [Airship-discuss] Airship Design Meeting Schedule Voting
In-Reply-To: <5B76FBD3-5FD4-4902-AB3E-351768493E30@att.com>
References: <4D602948-74B8-4AD2-8D77-6597C381836A@att.com> <5B76FBD3-5FD4-4902-AB3E-351768493E30@att.com>

Hi,

Just to be clear on meeting time: I suggest the meeting time should be based on UTC, not on any local time, to avoid region-specific summer time confusion. Having said that, the meeting schedule proposal would be:

- Thursday 15:00 UTC (11:00 AM EST / 10:00 AM CST / 8:00 AM PST / 12:00 AM UTC+9 (Asia) / 3-4 PM UTC+1/2 (Europe))
- Tuesday 13:00 UTC (9:00 AM EST / 8:00 AM CST / 6:00 AM PST / 10:00 PM UTC+9 (Asia) / 1-2 PM UTC+1/2 (Europe))

Q1. Is this "2 calls per week" a final decision?
Q2. If we do twice a week, it seems like the Thursday meeting is the main one, especially judging from the number of "+1"s. Do we have any idea how we can synchronize the two meetings, and how we want to run them?

Anyway, I voted for the calls' length.

Regards,

Jaesuk Ahn

--
Jaesuk Ahn, Ph.D.
Software R&D Center, SK Telecom
https://doodle.com/poll/wbdwym6x4dqnhe6f I will close this voting Sunday May 12 at 12 AM EST, and will send an updated invite on the mailing list afterwards. Reminder: we will have the Design Call Thursday May 9 at 11 EST as per usual. Regards Rodolfo Pacheco Home/Office 732 5337671 _______________________________________________ Airship-discuss mailing list Airship-discuss at lists.airshipit.org http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss -- Jaesuk Ahn, Ph.D. Software R&D Center, SK Telecom -------------- next part -------------- An HTML attachment was scrubbed... URL:
From rp2723 at att.com Wed May 8 17:13:06 2019 From: rp2723 at att.com (PACHECO, RODOLFO J) Date: Wed, 8 May 2019 17:13:06 +0000 Subject: [Airship-discuss] Airship Design Meeting Schedule Voting In-Reply-To: References: <4D602948-74B8-4AD2-8D77-6597C381836A@att.com> <5B76FBD3-5FD4-4902-AB3E-351768493E30@att.com> Message-ID: <840FA9A3-3854-44A9-8D75-67E417025313@att.com> Jaesuk, The UTC time zone for the meetings makes sense; we will use that. As far as the design calls go, I believe the expectation is that after a couple of weeks we will probably start splintering into smaller groups as we properly define the work areas in more detail. We will have the 2 calls, plus more as needed. We can discuss which areas of work are dealt with on which calls; it should be OK to use the call that better suits APAC time zones for whichever topics you and others from that time zone will be collaborating on. Sent from my iPhone On May 7, 2019, at 10:13 PM, Jaesuk Ahn wrote: Hi, Just to be clear on the meeting time, I suggest the meeting time be based on UTC, not on any local time, to avoid region-specific summer-time confusion. Having said that, the meeting schedule proposal would be: - Thursday 15:00 UTC (11:00 AM EST/ 10:00 AM CST / 8:00 AM PST/ 12:00 AM UTC +9 (Asia)/ 3-4 PM UTC+1/2 (Europe)) - Tuesday 13:00 UTC (9:00 AM EST/ 8:00 AM CST / 6:00 AM PST/ 10:00 PM UTC +9 (Asia)/ 1-2 PM UTC+1/2 (Europe)) Q1. Is this "2 calls per week" a final decision? Q2. If we do twice a week, it seems like the Thursday meeting is the main one, especially from the number of "+1"s. Do we have any idea on how we can synchronize the two meetings, and how we want to run them? Anyway, I voted for the call length. Regards, Jaesuk Ahn On Wed, May 8, 2019 at 1:51 AM PACHECO, RODOLFO J wrote: RESENDING -> ISSUES with the PREVIOUS DOODLE (confusing and not working) At the PTG we had the discussion for the Design Call times and number of days. The following options were proposed at the PTG: • Design Call evolution * Move to at least 2 calls a week (Suggested) • Thursday - 11:00 AM EST/ 10:00 AM CST / 8:00 AM PST/ 12:00 AM UTC +9 (Asia)/ 3-4 PM UTC+1/2 (Europe)+1+1+1+1+1+1+1 • Tuesday - 9:00 AM EST/ 8:00 AM CST / 6:00 AM PST/ 10:00 PM UTC +9 (Asia)/ 1-2 PM UTC+1/2 (Europe)+1+1 * Extend length of the calls? • Remain 60 mins+1 • Extend 90 mins +1 The key question remaining is the length of the calls. Please vote here for the call length. Ignore the Tuesday/Thursday May dates; all I am interested in deciding at this point is the length of the calls. https://doodle.com/poll/wbdwym6x4dqnhe6f I will close this voting Sunday May 12 at 12 AM EST, and will send an updated invite on the mailing list afterwards. Reminder: we will have the Design Call Thursday May 9 at 11 EST as per usual.
Regards Rodolfo Pacheco Home/Office 732 5337671 _______________________________________________ Airship-discuss mailing list Airship-discuss at lists.airshipit.org http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss -- Jaesuk Ahn, Ph.D. Software R&D Center, SK Telecom -------------- next part -------------- An HTML attachment was scrubbed... URL:
From bluejay.ahn at gmail.com Wed May 8 21:38:44 2019 From: bluejay.ahn at gmail.com (Jaesuk Ahn) Date: Thu, 9 May 2019 06:38:44 +0900 Subject: [Airship-discuss] Airship Design Meeting Schedule Voting In-Reply-To: <840FA9A3-3854-44A9-8D75-67E417025313@att.com> References: <4D602948-74B8-4AD2-8D77-6597C381836A@att.com> <5B76FBD3-5FD4-4902-AB3E-351768493E30@att.com> <840FA9A3-3854-44A9-8D75-67E417025313@att.com> Message-ID: Thanks for the detailed explanation. :) As you wrote, we can certainly see and discuss the best combination of call schedules along the way. One thing, though: for the Thursday call, 12am - 1:30am is really difficult for us; 11pm - 12:30am would be a more realistic scenario. Over the next few design calls, let's figure out how much we need to be involved in the Thursday discussion. If it turns out we need to participate in the Thursday call, I will sincerely ask for a slight time adjustment. :) Thanks! Jaesuk Ahn, Ph.D. Software Labs, SK Telecom On Thu, May 9, 2019 at 2:13 AM, PACHECO, RODOLFO J wrote: > Jaesuk, > > The UTC time zone for the meetings makes sense; we will use that. > > As far as the design calls go, I believe the expectation is that after a couple of weeks we will probably start splintering into smaller groups as we properly define the work areas in more detail. > > We will have the 2 calls, plus more as needed. > > We can discuss which areas of work are dealt with on which calls; it should be OK to use the call that better suits APAC time zones for whichever topics you and others from that time zone will be collaborating on. > > Sent from my iPhone > > On May 7, 2019, at 10:13 PM, Jaesuk Ahn wrote: > > Hi, > > Just to be clear on the meeting time, I suggest the meeting time be based on UTC, not on any local time, to avoid region-specific summer-time confusion. > Having said that, the meeting schedule proposal would be: > - Thursday 15:00 UTC (11:00 AM EST/ 10:00 AM CST / 8:00 AM PST/ 12:00 AM UTC +9 (Asia)/ 3-4 PM UTC+1/2 (Europe)) > - Tuesday 13:00 UTC (9:00 AM EST/ 8:00 AM CST / 6:00 AM PST/ 10:00 PM UTC +9 (Asia)/ 1-2 PM UTC+1/2 (Europe)) > > Q1. Is this "2 calls per week" a final decision? > Q2. If we do twice a week, it seems like the Thursday meeting is the main one, especially from the number of "+1"s. Do we have any idea on how we can synchronize the two meetings, and how we want to run them? > > Anyway, I voted for the call length. > > Regards, > > Jaesuk Ahn > > > On Wed, May 8, 2019 at 1:51 AM PACHECO, RODOLFO J wrote: > >> RESENDING -> ISSUES with the PREVIOUS DOODLE (confusing and not working) >> >> At the PTG we had the discussion for the Design Call times and number of days. >> The following options were proposed at the PTG: >> >> · Design Call evolution >> >> - Move to at least 2 calls a week (Suggested) >> >> § Thursday - 11:00 AM EST/ 10:00 AM CST / 8:00 AM PST/ 12:00 AM UTC +9 (Asia)/ 3-4 PM UTC+1/2 (Europe)+1+1+1+1+1+1+1 >> >> § Tuesday - 9:00 AM EST/ 8:00 AM CST / 6:00 AM PST/ 10:00 PM UTC +9 (Asia)/ 1-2 PM UTC+1/2 (Europe)+1+1 >> >> - Extend length of the calls?
>> § Remain 60 mins+1 >> § Extend 90 mins +1 >> The key question remaining is the length of the calls. >> Please vote here for the call length. >> Ignore the Tuesday/Thursday May dates; all I am interested in deciding at this point is the length of the calls. >> https://doodle.com/poll/wbdwym6x4dqnhe6f >> I will close this voting Sunday May 12 at 12 AM EST, and will send an updated invite on the mailing list afterwards. >> Reminder: we will have the Design Call Thursday May 9 at 11 EST as per usual. >> Regards >> Rodolfo Pacheco >> Home/Office 732 5337671 >> _______________________________________________ >> Airship-discuss mailing list >> Airship-discuss at lists.airshipit.org >> http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss -- Jaesuk Ahn, Ph.D. Software R&D Center, SK Telecom -------------- next part -------------- An HTML attachment was scrubbed... URL:
From calvinwhole at gmail.com Fri May 10 05:18:42 2019 From: calvinwhole at gmail.com (calvin whole) Date: Fri, 10 May 2019 13:18:42 +0800 Subject: [Airship-discuss] airship-in-a-bottle deployment issue [resend] Message-ID: Hi Roman, Since my last emails were too big and may have been blocked, I have reformatted the original mails as below and am sending them to you to continue the discussion. Thanks in advance for your help. Hi Roman, Thanks for looking into this and giving us suggestions. I re-ran "vagrant up" and looked into the logs for "deckhand-db-init-zs499", as shown below. It showed: ERROR: checkpoint request failed HINT: Consult recent messages in the server log for details. What is the specific "server" log we should look into for details? ================== log for deckhand-db-init-zs499 ================================== root at n0:/home/vagrant# kubectl logs deckhand-db-init-zs499 -n ucp + export HOME=/tmp + HOME=/tmp + pgsql_superuser_cmd 'SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\''' + grep -q 1 + DB_COMMAND='SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\''' + [[ ! -z '' ]] + psql -h postgresql.ucp.svc.cluster.local -p 5432 -U postgres '--command=SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\''' + pgsql_superuser_cmd 'CREATE DATABASE deckhand' + DB_COMMAND='CREATE DATABASE deckhand' + [[ ! -z '' ]] + psql -h postgresql.ucp.svc.cluster.local -p 5432 -U postgres '--command=CREATE DATABASE deckhand' ERROR: checkpoint request failed HINT: Consult recent messages in the server log for details. =========== kubectl describe pod deckhand-db-init-zs499 ================= ...... Containers: deckhand-db-init: Container ID: docker://5dea2aa975c3718ca298536005b9cc0b21de47e08b2260cc73005e3455bb1350 Image: docker.io/postgres:9.5 Image ID: docker-pullable://postgres at sha256:0605b4b20a205c09ddd10eeeddd3ed7bf3cc442a8e9896ec34862ca882658be4 Port: Host Port: Command: /tmp/db-init.sh State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error <======== Exit Code: 1 Started: Thu, 09 May 2019 04:10:29 +0000 Finished: Thu, 09 May 2019 04:10:30 +0000 Ready: False Restart Count: 9 Sincerely, Calvin ========================================== Hi, We are trying to deploy AIAB (Airship-in-a-Bottle). I have a physical server with Ubuntu 16.04.5 OS, and installed VirtualBox and Vagrant. The process is straightforward, following https://opendev.org/airship/in-a-bottle/ : we created a ~/deploy directory, downloaded the Vagrantfile, and ran "vagrant up".
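For reference, the bring-up described above boils down to a few shell commands. A minimal sketch, assuming the Vagrantfile is fetched straight from the in-a-bottle repository (the raw-file URL below is an illustrative guess at the repository layout, not confirmed; take it from the quickstart page if it differs):

mkdir -p ~/deploy && cd ~/deploy
# Fetch the single-node Vagrantfile (URL is an assumption)
curl -LO https://opendev.org/airship/in-a-bottle/raw/branch/master/Vagrantfile
# Bring up the n0 VM; the full deployment takes on the order of an hour
vagrant up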
However it stuck in the error below: deckhand-db-init-r9jvg 0/1 CrashLoopBackOff 16 Could anyone help to resolve this? Many thanks in advance. Sincerely, Calvin ====================================== Hello Calvin, Try to get some kubectl logs and describe deckhand-db-init-r9jvg pod. kubectl describe pod deckhand-db-init-r9jvg -u ucp May be it would help to understand what is happening there. Thank you for trying Airship. Best regards, -- Roman Gorshunov -------------- next part -------------- An HTML attachment was scrubbed... URL: From paye600 at gmail.com Fri May 10 13:45:51 2019 From: paye600 at gmail.com (Roman Gorshunov) Date: Fri, 10 May 2019 15:45:51 +0200 Subject: [Airship-discuss] airship-in-a-bottle deployment issue In-Reply-To: References: Message-ID: Hi Calvin, I don't know which logs does it suggest to look at. I will run AIAB myself and hopefully understand what the error means. Best regards, -- Roman Gorshunov On Fri, May 10, 2019 at 7:01 AM calvin whole wrote: > > Hi Roman, > > Not sure if my last email were out properly, its size is too big. Here is a short one. Thanks for responding in advance. > > I re-ran the "vagrant up" and looking into the logs for "deckhand-db-init-zs499" as showed below. > It showed ERROR: checkpoint request failed > HINT: Consult recent messages in the server log for details. > > What is the specific "server" log we should look into for details? > > Thanks for help. > > Sincerely, > Calvin > > > On Thu, May 9, 2019 at 12:17 PM calvin whole wrote: >> >> Hi Roman, >> >> Btw, continue my last post, the kubectl describe pod deckhand-db-init-zs499 output is as follows. >> >> Thanks, >> Calvin >> =========== kubectl describe pod deckhand-db-init-zs499 ================= >> root at n0:/home/vagrant# kubectl describe pod deckhand-db-init-zs499 -n ucp >> Name: deckhand-db-init-zs499 >> Namespace: ucp >> Node: n0/10.0.2.15 >> Start Time: Thu, 09 May 2019 03:48:48 +0000 >> Labels: application=deckhand >> component=db-init >> controller-uid=59f1bee0-720d-11e9-92ac-080027fc876e >> job-name=deckhand-db-init >> release_group=airship-ucp-deckhand >> Annotations: >> Status: Running >> IP: 10.97.26.50 >> Controlled By: Job/deckhand-db-init >> Init Containers: >> init: >> Container ID: docker://b58e8b6b7296df618cb8120b5226370afeba2a4e79dd70ee6894b5afd853c0db >> Image: quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 >> Image ID: docker-pullable://quay.io/stackanetes/kubernetes-entrypoint at sha256:32b1b657ee4bcc9cc7a1529e31d8e1a06376172373ee020f97f3e78168fde4b6 >> Port: >> Host Port: >> Command: >> kubernetes-entrypoint >> State: Terminated >> Reason: Completed >> Exit Code: 0 >> Started: Thu, 09 May 2019 03:48:52 +0000 >> Finished: Thu, 09 May 2019 03:48:54 +0000 >> Ready: True >> Restart Count: 0 >> Environment: >> POD_NAME: deckhand-db-init-zs499 (v1:metadata.name) >> NAMESPACE: ucp (v1:metadata.namespace) >> INTERFACE_NAME: eth0 >> PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/ >> DEPENDENCY_SERVICE: ucp:postgresql >> DEPENDENCY_DAEMONSET: >> DEPENDENCY_CONTAINER: >> DEPENDENCY_POD_JSON: >> COMMAND: echo done >> Mounts: >> /var/run/secrets/kubernetes.io/serviceaccount from deckhand-db-init-token-gczr5 (ro) >> Containers: >> deckhand-db-init: >> Container ID: docker://5dea2aa975c3718ca298536005b9cc0b21de47e08b2260cc73005e3455bb1350 >> Image: docker.io/postgres:9.5 >> Image ID: docker-pullable://postgres at sha256:0605b4b20a205c09ddd10eeeddd3ed7bf3cc442a8e9896ec34862ca882658be4 >> Port: >> Host Port: >> Command: >> /tmp/db-init.sh >> State: 
Waiting >> Reason: CrashLoopBackOff >> Last State: Terminated >> Reason: Error <======== >> Exit Code: 1 >> Started: Thu, 09 May 2019 04:10:29 +0000 >> Finished: Thu, 09 May 2019 04:10:30 +0000 >> Ready: False >> Restart Count: 9 >> Environment: >> DECKHAND_DB_URL: Optional: false >> DB_NAME: Optional: false >> DB_SERVICE_USER: Optional: false >> DB_SERVICE_PASSWORD: Optional: false >> DB_FQDN: Optional: false >> DB_PORT: Optional: false >> DB_ADMIN_USER: Optional: false >> PGPASSWORD: Optional: false >> Mounts: >> /etc/deckhand from etc-deckhand (rw) >> /etc/deckhand/deckhand.conf from deckhand-etc (ro) >> /tmp/db-init.sh from deckhand-bin (ro) >> /var/run/secrets/kubernetes.io/serviceaccount from deckhand-db-init-token-gczr5 (ro) >> Conditions: >> Type Status >> Initialized True >> Ready False >> PodScheduled True >> Volumes: >> etc-deckhand: >> Type: EmptyDir (a temporary directory that shares a pod's lifetime) >> Medium: >> deckhand-etc: >> Type: Secret (a volume populated by a Secret) >> SecretName: deckhand-etc >> Optional: false >> deckhand-bin: >> Type: ConfigMap (a volume populated by a ConfigMap) >> Name: deckhand-bin >> Optional: false >> deckhand-db-init-token-gczr5: >> Type: Secret (a volume populated by a Secret) >> SecretName: deckhand-db-init-token-gczr5 >> Optional: false >> QoS Class: BestEffort >> Node-Selectors: ucp-control-plane=enabled >> Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s >> node.kubernetes.io/unreachable:NoExecute for 300s >> Events: >> Type Reason Age From Message >> ---- ------ ---- ---- ------- >> Normal Scheduled 24m default-scheduler Successfully assigned deckhand-db-init-zs499 to n0 >> Normal SuccessfulMountVolume 24m kubelet, n0 MountVolume.SetUp succeeded for volume "etc-deckhand" >> Normal SuccessfulMountVolume 24m kubelet, n0 MountVolume.SetUp succeeded for volume "deckhand-bin" >> Normal SuccessfulMountVolume 24m kubelet, n0 MountVolume.SetUp succeeded for volume "deckhand-etc" >> Normal SuccessfulMountVolume 24m kubelet, n0 MountVolume.SetUp succeeded for volume "deckhand-db-init-token-gczr5" >> Normal Pulled 24m kubelet, n0 Container image "quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine >> Normal Created 24m kubelet, n0 Created container >> Normal Started 24m kubelet, n0 Started container >> Normal Pulled 23m (x4 over 24m) kubelet, n0 Container image "docker.io/postgres:9.5" already present on machine >> Normal Created 23m (x4 over 24m) kubelet, n0 Created container >> Normal Started 23m (x4 over 24m) kubelet, n0 Started container >> Warning BackOff 4m (x90 over 24m) kubelet, n0 Back-off restarting failed container >> root at n0:/home/vagrant# >> >> On Thu, May 9, 2019 at 12:08 PM calvin whole wrote: >>> >>> Hi Roman, >>> >>> Thanks for looking into this and gave us suggestions. >>> >>> I re-ran the "vagrant up" and looking into the logs for "deckhand-db-init-zs499" as showed below. >>> It showed ERROR: checkpoint request failed >>> HINT: Consult recent messages in the server log for details. >>> >>> What is the specific "server" log we should look into for details? >>> >>> Thanks for help. 
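For reference, one way to pin down whether this failure lives in the db-init job or in PostgreSQL itself is to re-issue the failing SQL by hand. A minimal sketch, assuming the pod names used in this thread, and noting that kubectl's namespace flag is -n (the "-u ucp" suggested earlier in the thread appears to be a typo):

# Inspect the failing job pod in the ucp namespace
kubectl -n ucp describe pod deckhand-db-init-zs499
# Re-run the exact statement the init script fails on, from inside the postgres pod
kubectl -n ucp exec -it postgresql-0 -- psql -U postgres -c 'CREATE DATABASE deckhand;'
# If the same checkpoint error appears, the problem is in the database's storage, not in Deckhand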
>>> >>> Sincerely, >>> Calvin >>> >>> ================== log for deckhand-db-init-zs499 ================================== >>> root at n0:/home/vagrant# kubectl logs deckhand-db-init-zs499 -n ucp >>> + export HOME=/tmp >>> + HOME=/tmp >>> + pgsql_superuser_cmd 'SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\''' >>> + grep -q 1 >>> + DB_COMMAND='SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\''' >>> + [[ ! -z '' ]] >>> + psql -h postgresql.ucp.svc.cluster.local -p 5432 -U postgres '--command=SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\''' >>> + pgsql_superuser_cmd 'CREATE DATABASE deckhand' >>> + DB_COMMAND='CREATE DATABASE deckhand' >>> + [[ ! -z '' ]] >>> + psql -h postgresql.ucp.svc.cluster.local -p 5432 -U postgres '--command=CREATE DATABASE deckhand' >>> ERROR: checkpoint request failed >>> HINT: Consult recent messages in the server log for details. >>> >>> ===================================================================================== >>> ==> n0: NAMESPACE NAME READY STATUS RESTARTS AGE >>> ==> n0: kube-system auxiliary-etcd-n0 3/3 Running 0 49m >>> ==> n0: kube-system bootstrap-armada-n0 4/4 Running 0 49m >>> ==> n0: kube-system calico-etcd-anchor-ncl2p 1/1 Running 0 47m >>> ==> n0: kube-system calico-etcd-n0 1/1 Running 0 46m >>> ==> n0: kube-system calico-kube-controllers-56c54d8cf8-5csnn 1/1 Running 0 46m >>> ==> n0: kube-system calico-node-m4rtf 1/1 Running 0 46m >>> ==> n0: kube-system calico-settings-tkp6r 0/1 Completed 0 46m >>> ==> n0: kube-system coredns-84bdd76f4d-hhbcs 1/1 Running 0 44m >>> ==> n0: kube-system coredns-84bdd76f4d-k8tcc 1/1 Running 0 44m >>> ==> n0: kube-system coredns-84bdd76f4d-qp2xd 1/1 Running 0 44m >>> ==> n0: kube-system haproxy-n0 1/1 Running 0 50m >>> ==> n0: kube-system ingress-error-pages-7c65f766d-dn2tw 1/1 Running 0 41m >>> ==> n0: kube-system ingress-gtvp8 2/2 Running 0 41m >>> ==> n0: kube-system kubernetes-apiserver-anchor-99jhn 1/1 Running 0 42m >>> ==> n0: kube-system kubernetes-apiserver-n0 1/1 Running 0 41m >>> ==> n0: kube-system kubernetes-controller-manager-anchor-vqddp 1/1 Running 0 42m >>> ==> n0: kube-system kubernetes-controller-manager-n0 1/1 Running 0 41m >>> ==> n0: kube-system kubernetes-etcd-anchor-9jcpl 1/1 Running 0 44m >>> ==> n0: kube-system kubernetes-etcd-n0 1/1 Running 0 42m >>> ==> n0: kube-system kubernetes-proxy-2m9t2 1/1 Running 0 47m >>> ==> n0: kube-system kubernetes-scheduler-anchor-nl9fb 1/1 Running 0 42m >>> ==> n0: kube-system kubernetes-scheduler-n0 1/1 Running 0 41m >>> ==> n0: kube-system nfs-provisioner-7799d64d59-vtkbd 1/1 Running 0 40m >>> ==> n0: kube-system tiller-deploy-7d88c6f956-qwfzb 1/1 Running 0 27m >>> ==> n0: ucp airship-ucp-keystone-memcached-memcached-74d79d8896-vfl69 1/1 Running 0 34m >>> ==> n0: ucp airship-ucp-rabbitmq-rabbitmq-0 1/1 Running 0 39m >>> ==> n0: ucp armada-api-d5f757d5-6wl98 1/1 Running 0 15m >>> ==> n0: ucp armada-ks-endpoints-vl9rs 0/3 Completed 0 15m >>> ==> n0: ucp armada-ks-service-vpcjd 0/1 Completed 0 15m >>> ==> n0: ucp armada-ks-user-rv4gs 0/1 Completed 0 15m >>> ==> n0: ucp barbican-api-5d7b88d8ff-8dd6w 1/1 Running 0 13m >>> ==> n0: ucp barbican-db-init-gqvt4 0/1 Completed 0 13m >>> ==> n0: ucp barbican-db-sync-tqtgq 0/1 Completed 0 13m >>> ==> n0: ucp barbican-ks-endpoints-rwtql 0/3 >>> ==> n0: Completed 0 13m >>> ==> n0: ucp barbican-ks-service-l2h6h 0/1 Completed 0 13m >>> ==> n0: ucp barbican-ks-user-wwvc7 0/1 Completed 0 13m >>> ==> n0: ucp barbican-rabbit-init-6spq4 0/1 Completed 0 13m >>> ==> n0: ucp 
deckhand-api-78b9644f96-5686f 0/1 Running 0 11m >>> ==> n0: ucp deckhand-db-init-zs499 0/1 CrashLoopBackOff 7 11m <=== >>> ==> n0: ucp deckhand-db-sync-ct7wl 0/1 Init:0/1 0 11m >>> ==> n0: ucp deckhand-ks-endpoints-x4hd9 0/3 Completed 0 11m >>> ==> n0: ucp deckhand-ks-service-ms6n5 0/1 Completed 0 11m >>> ==> n0: ucp deckhand-ks-user-7fnvt 0/1 Completed 0 11m >>> ==> n0: ucp divingbell-apparmor-default-hth8z 1/1 Running 0 27m >>> ==> n0: ucp divingbell-apt-default-r965m 1/1 Running 0 27m >>> ==> n0: ucp divingbell-ethtool-default-ldcmc 1/1 Running 0 27m >>> ==> n0: ucp divingbell-exec-default-f7h7x 1/1 Running 0 27m >>> ==> n0: ucp divingbell-limits-default-sp9mj 1/1 Running 0 27m >>> ==> n0: ucp divingbell-mounts-default-8f5a00a2-frbl2 1/1 Running 0 27m >>> ==> n0: ucp divingbell-perm-default-d7wxp 1/1 Running 0 27m >>> ==> n0: ucp divingbell-sysctl-default-c8pnp 1/1 Running 0 27m >>> ==> n0: ucp divingbell-uamlite-default-rfct6 1/1 Running 0 27m >>> ==> n0: ucp ingress-86576d6599-mdgj4 1/1 Running 0 39m >>> ==> n0: ucp ingress-error-pages-5c97bb46bb-7lg5l 1/1 Running 0 39m >>> ==> n0: ucp keystone-api-678fc44bdd-594bb 1/1 Running 0 34m >>> ==> n0: ucp keystone-bootstrap-rprr6 0/1 Completed 0 34m >>> ==> n0: ucp keystone-credential-setup-zkjgs 0/1 Completed 0 34m >>> ==> n0: ucp keystone-db-init-xkgxm 0/1 Completed 0 34m >>> ==> n0: ucp keystone-db-sync-lm6xs 0/1 Completed 0 34m >>> ==> n0: ucp keystone-domain-manage-9pzjq 0/1 Completed 0 34m >>> ==> n0: ucp keystone-fernet-setup-q7t8p 0/1 Completed 0 34m >>> ==> n0: ucp keystone-rabbit-init-qpvgt 0/1 Completed 0 34m >>> ==> n0: ucp maas-bootstrap-admin-user-8npgw 0/1 Completed 0 26m >>> ==> n0: ucp maas-db-init-9z86n 0/1 Completed 0 26m >>> ==> n0: ucp maas-db-sync-r7rkg 0/1 Completed 0 26m >>> ==> n0: ucp maas-export-api-key-n2gz4 0/1 Completed 1 26m >>> ==> n0: ucp maas-import-resources-prlml 0/1 Completed 0 26m >>> ==> n0: ucp maas-ingress-756f6f9d6-h65nj 2/2 Running 0 26m >>> ==> n0: ucp maas-ingress-errors-8686d56d98-swfg9 >>> ==> n0: 1/1 Running 0 26m >>> ==> n0: ucp maas-rack-0 1/1 Running 0 26m >>> ==> n0: ucp maas-region-0 1/1 Running 0 26m >>> ==> n0: ucp mariadb-ingress-55794d94c8-dsw5w 1/1 Running 0 39m >>> ==> n0: ucp mariadb-ingress-55794d94c8-jczmh 1/1 Running 0 39m >>> ==> n0: ucp mariadb-ingress-error-pages-85f96fbd-jrqsg 1/1 Running 0 39m >>> ==> n0: ucp mariadb-server-0 1/1 Running 0 39m >>> ==> n0: ucp postgresql-0 1/1 Running 1 39m >>> >>> >>> On Tue, May 7, 2019 at 6:25 PM Roman Gorshunov wrote: >>>> >>>> Hello Calvin, >>>> >>>> Try to get some kubectl logs and describe deckhand-db-init-r9jvg pod. >>>> kubectl describe pod deckhand-db-init-r9jvg -u ucp >>>> May be it would help to understand what is happening there. >>>> >>>> Thank you for trying Airship. >>>> >>>> Best regards, >>>> -- Roman Gorshunov >>>> >>>> On Tue, May 7, 2019 at 7:51 AM calvin whole wrote: >>>> > >>>> > Hi, >>>> > >>>> > We are trying to deploy AIIB. >>>> > >>>> > I have a physical server with Ubuntu 16.04.5 OS, installed virtualbox and vagrant. >>>> > The process is straightforward by following https://opendev.org/airship/in-a-bottle/ >>>> > We created ~/deploy directory, downloaded Vagrantfile, and do "vagrant up". >>>> > >>>> > However it stuck in the error below: >>>> > deckhand-db-init-r9jvg 0/1 CrashLoopBackOff 16 >>>> > >>>> > Could anyone help to resolve this? Many thanks in advance. 
>>>> > >>>> > Sincerely, >>>> > Calvin >>>> > >>>> > ==> n0: NAMESPACE NAME READY STATUS RESTARTS AGE >>>> > ==> n0: kube-system auxiliary-etcd-n0 3/3 Running 0 1h >>>> > ==> n0: kube-system bootstrap-armada-n0 4/4 Running 0 1h >>>> > ==> n0: kube-system calico-etcd-anchor-5tqhk 1/1 Running 0 1h >>>> > ==> n0: kube-system calico-etcd-n0 1/1 Running 0 1h >>>> > ==> n0: kube-system calico-kube-controllers-56c54d8cf8-5ssl6 1/1 Running 0 1h >>>> > ==> n0: kube-system calico-node-pbsxh 1/1 Running 0 1h >>>> > ==> n0: kube-system calico-settings-lzpk9 0/1 Completed 0 1h >>>> > ==> n0: kube-system coredns-84bdd76f4d-6cwnl 1/1 Running 0 1h >>>> > ==> n0: kube-system coredns-84bdd76f4d-d4p8c 1/1 Running 0 1h >>>> > ==> n0: kube-system coredns-84bdd76f4d-xrknz 1/1 Running 0 1h >>>> > ==> n0: kube-system haproxy-n0 1/1 Running 0 1h >>>> > ==> n0: kube-system ingress-9pkmx 2/2 Running 0 1h >>>> > ==> n0: kube-system ingress-error-pages-7c65f766d-2pqfx 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-apiserver-anchor-hszbf 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-apiserver-n0 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-controller-manager-anchor-h49vz 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-controller-manager-n0 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-etcd-anchor-nnjbb 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-etcd-n0 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-proxy-vgzjp 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-scheduler-anchor-bq2gk 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-scheduler-n0 1/1 Running 0 1h >>>> > ==> n0: kube-system nfs-provisioner-7799d64d59-jx7hq 1/1 Running 0 1h >>>> > ==> n0: kube-system tiller-deploy-7d88c6f956-d9kzg 1/1 Running 0 1h >>>> > ==> n0: ucp airship-ucp-keystone-memcached-memcached-74d79d8896-q9wqx 1/1 Running 0 1h >>>> > ==> n0: ucp airship-ucp-rabbitmq-rabbitmq-0 1/1 Running 0 1h >>>> > ==> n0: ucp armada-api-d5f757d5-d9l9h 1/1 Running 0 1h >>>> > ==> n0: ucp armada-ks-endpoints-qwbtg 0/3 Completed 0 1h >>>> > ==> n0: ucp armada-ks-service-lg8kq 0/1 Completed 0 1h >>>> > ==> n0: ucp armada-ks-user-g2j6v 0/1 Completed 0 1h >>>> > ==> n0: ucp barbican-api-84665dd99d-qv5fz 1/1 Running 0 1h >>>> > ==> n0: ucp barbican-db-init-ndx58 0/1 Completed 0 1h >>>> > ==> n0: ucp barbican-db-sync-sh7c9 0/1 Completed 0 1h >>>> > ==> n0: ucp barbican-ks-endpoints-bv7xv 0/3 Completed 0 1h >>>> > ==> n0: ucp barbican-ks-service-46hjk 0/1 Completed 0 1h >>>> > ==> n0: ucp barbican-ks-user-6df74 0/1 Completed 0 1h >>>> > ==> n0: ucp barbican-rabbit-init-gnvfl 0/1 Completed 0 1h >>>> > ==> n0: ucp deckhand-api-6cd9c4479d-wc5cw 0/1 Running 0 1h >>>> > ==> n0: ucp deckhand-db-init-r9jvg 0/1 CrashLoopBackOff 17 1h <===== >>>> > ==> n0: ucp deckhand-db-sync-llstv 0/1 Init:0/1 0 1h >>>> > ==> n0: ucp deckhand-ks-endpoints-4gqfj 0/3 Completed 0 1h >>>> > ==> n0: ucp deckhand-ks-service-c6gbq 0/1 Completed 0 1h >>>> > ==> n0: ucp deckhand-ks-user-5skng 0/1 Completed 0 1h >>>> > ==> n0: ucp divingbell-apparmor-default-lkcl6 1/1 Running 0 1h >>>> > ==> n0: ucp divingbell-apt-default-7jgtv 1/1 Running 0 1h >>>> > ==> n0: ucp divingbell-ethtool-default-tm2w4 1/1 Running 0 1h >>>> > ==> n0: ucp divingbell-exec-default-l45m8 1/1 Running 0 1h >>>> > ==> n0: ucp divingbell-limits-default-q84pr 1/1 Running 0 1h >>>> > ==> n0: ucp divingbell-mounts-default-29420945-nrdsz 1/1 Running 0 1h >>>> > ==> n0: ucp divingbell-perm-default-wdgld 1/1 Running 0 1h >>>> > ==> n0: ucp 
divingbell-sysctl-default-t7f2m 1/1 Running 0 1h >>>> > ==> n0: ucp divingbell-uamlite-default-fc4jx 1/1 Running 0 1h >>>> > ==> n0: ucp ingress-86576d6599-q8ng4 1/1 Running 0 1h >>>> > ==> n0: ucp ingress-error-pages-5c97bb46bb-pjz9m 1/1 Running 0 1h >>>> > ==> n0: ucp keystone-api-678fc44bdd-ncxc2 1/1 Running 0 1h >>>> > ==> n0: ucp keystone-bootstrap-28l4g 0/1 Completed 0 1h >>>> > ==> n0: ucp keystone-credential-setup-rq5d4 0/1 Completed 0 1h >>>> > ==> n0: ucp keystone-db-init-z8x4w 0/1 Completed 0 1h >>>> > ==> n0: ucp keystone-db-sync-9hvb5 0/1 Completed 0 1h >>>> > ==> n0: ucp keystone-domain-manage-tzcnf 0/1 Completed 0 1h >>>> > ==> n0: ucp keystone-fernet-setup-bzdpb 0/1 Completed 0 1h >>>> > ==> n0: ucp keystone-rabbit-init-cxpc6 0/1 Completed 0 1h >>>> > ==> n0: ucp maas-bootstrap-admin-user-g99rl 0/1 Completed 0 1h >>>> > ==> n0: ucp maas-db-init-h4llm 0/1 Completed 0 1h >>>> > ==> n0: ucp maas-db-sync-6tsqj 0/1 Completed 0 1h >>>> > ==> n0: ucp maas-export-api-key-c8rdb 0/1 Completed 0 1h >>>> > ==> n0: ucp maas-import-resources-hhq7f 0/1 Completed 1 1h >>>> > ==> n0: ucp maas-ingress-756f6f9d6-dpcp9 2/2 Running 0 1h >>>> > ==> n0: ucp maas-ingress-errors-8686d56d98-jr6xx 1/1 Running 0 1h >>>> > ==> n0: u >>>> > ==> n0: cp maas-rack-0 1/1 Running 0 1h >>>> > ==> n0: ucp maas-region-0 1/1 Running 0 1h >>>> > ==> n0: ucp mariadb-ingress-55794d94c8-mhjjf 1/1 Running 0 1h >>>> > ==> n0: ucp mariadb-ingress-55794d94c8-vglbv 1/1 Running 0 1h >>>> > ==> n0: ucp mariadb-ingress-error-pages-85f96fbd-28cdv 1/1 Running 0 1h >>>> > ==> n0: ucp mariadb-server-0 1/1 Running 0 1h >>>> > ==> n0: ucp postgresql-0 1/1 Running 1 1h >>>> > _______________________________________________ >>>> > Airship-discuss mailing list >>>> > Airship-discuss at lists.airshipit.org >>>> > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss From rp2723 at att.com Mon May 13 13:33:43 2019 From: rp2723 at att.com (PACHECO, RODOLFO J) Date: Mon, 13 May 2019 13:33:43 +0000 Subject: [Airship-discuss] Airship - Open Design Call - Thursdays Message-ID: <99088997CCAD0C4BA20008FD094344963C243958@MISOUT7MSGUSRDI.ITServices.sbc.com> When: Occurs every Thursday from 11:00 AM to 12:30 PM effective 8/30/2018 until 1/2/2020. (UTC-05:00) Eastern Time (US & Canada) Where: https://attcorp.webex.com/meet/rp2723 *~*~*~*~*~*~*~*~*~* REMINDER –Airship Design Call Based on the doodle votes the meeting length will be 90 mins Join us to continue Airship 2.0 Design discussions Etherpad for the Airship Open Design discussion https://etherpad.openstack.org/p/Airship_OpenDesignDiscussions Storyboard in flight Specs https://storyboard.openstack.org/#!/project/openstack/airship-specs Github Airship Specs https://github.com/openstack/airship-specs/tree/master/specs Inflight/reviewing specs https://review.openstack.org/#/q/status:open+airship-specs __________________________________________ Join by video system i Dial rp2723 at attcorp.webex.com and enter your host PIN 02083790. You can also dial 173.243.2.68 and enter your meeting number. Join by phone 1-844-517-1415 United States Toll Free 1-618-230-6039 United States Toll Access code: 733 333 726 Host PIN: 02083790 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: text/calendar Size: 2696 bytes Desc: not available URL: From rp2723 at att.com Mon May 13 13:33:39 2019 From: rp2723 at att.com (PACHECO, RODOLFO J) Date: Mon, 13 May 2019 13:33:39 +0000 Subject: [Airship-discuss] Airship - Open Design Call - Tuesdays Message-ID: <99088997CCAD0C4BA20008FD094344963C243945@MISOUT7MSGUSRDI.ITServices.sbc.com> When: Occurs every Tuesday from 9:00 AM to 10:30 AM effective 5/14/2019 until 1/1/2020. (UTC-05:00) Eastern Time (US & Canada) Where: https://attcorp.webex.com/meet/rp2723 *~*~*~*~*~*~*~*~*~* REMINDER –Airship Design Call – New Added Call Based on the doodle votes the meeting length will be 90 mins Join us to continue Airship 2.0 Design discussions Etherpad for the Airship Open Design discussion https://etherpad.openstack.org/p/Airship_OpenDesignDiscussions Storyboard in flight Specs https://storyboard.openstack.org/#!/project/openstack/airship-specs Github Airship Specs https://github.com/openstack/airship-specs/tree/master/specs Inflight/reviewing specs https://review.openstack.org/#/q/status:open+airship-specs __________________________________________ Join by video system i Dial rp2723 at attcorp.webex.com and enter your host PIN 02083790. You can also dial 173.243.2.68 and enter your meeting number. Join by phone 1-844-517-1415 United States Toll Free 1-618-230-6039 United States Toll Access code: 733 333 726 Host PIN: 02083790 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2803 bytes Desc: not available URL: From paye600 at gmail.com Mon May 13 14:12:53 2019 From: paye600 at gmail.com (Roman Gorshunov) Date: Mon, 13 May 2019 16:12:53 +0200 Subject: [Airship-discuss] airship-in-a-bottle deployment issue In-Reply-To: References: Message-ID: Hello Calvin, Seems like PosgreSQL database was not able to properly write data onto the disk. PostgreSQL runs as a postgresql-0 pod in ucp namespace, uses a persistent volume claim postgresql-data-postgresql-0, and persistent volume mounted via NFS. kubectl describe pod postgresql-0 -n ucp kubectl logs -n ucp postgresql-0 kubectl -n ucp describe pvc postgresql-data-postgresql-0 kubectl describe pv pvc-0382c985-7572-11e9-b431-525400681552 # volume name could be different) NFS is provisioned by nfs-provisioner-7799d64d59-ptsgk (last two parts would be different in your case): kubectl get pods -n kube-system | grep nfs kubectl -n kube-system describe pod nfs-provisioner-7799d64d59-ptsgk kubectl -n kube-system logs nfs-provisioner-7799d64d59-ptsgk Check if there are any problems with it (e.g. unable to mount NFS share, or lack of free storage space - `df -h`). Also running kubectl get events --all-namespaces could help to understand what went wrong. I have run an AIAB installation today twice, and it all worked fine. I use `vagrant up` and my hypervisor is KVM, if that could help you. I hope it helps. Best regards, -- Roman Gorshunov On Fri, May 10, 2019 at 7:01 AM calvin whole wrote: > > Hi Roman, > > Not sure if my last email were out properly, its size is too big. Here is a short one. Thanks for responding in advance. > > I re-ran the "vagrant up" and looking into the logs for "deckhand-db-init-zs499" as showed below. > It showed ERROR: checkpoint request failed > HINT: Consult recent messages in the server log for details. > > What is the specific "server" log we should look into for details? > > Thanks for help. 
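For reference, the checks suggested above can be strung together into one snippet. A minimal sketch, assuming the provisioner still runs in kube-system under a generated pod-name suffix:

# Resolve the provisioner pod name instead of hard-coding the hash suffix
NFS_POD=$(kubectl -n kube-system get pods -o name | grep nfs-provisioner | head -n1)
kubectl -n kube-system logs "$NFS_POD" --tail=50
# Recent cluster events, oldest first, often show mount or scheduling failures
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp | tail -n 30
# Free space on the host backing the NFS export
df -h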
> > Sincerely, > Calvin > > > On Thu, May 9, 2019 at 12:17 PM calvin whole wrote: >> >> Hi Roman, >> >> Btw, continue my last post, the kubectl describe pod deckhand-db-init-zs499 output is as follows. >> >> Thanks, >> Calvin >> =========== kubectl describe pod deckhand-db-init-zs499 ================= >> root at n0:/home/vagrant# kubectl describe pod deckhand-db-init-zs499 -n ucp >> Name: deckhand-db-init-zs499 >> Namespace: ucp >> Node: n0/10.0.2.15 >> Start Time: Thu, 09 May 2019 03:48:48 +0000 >> Labels: application=deckhand >> component=db-init >> controller-uid=59f1bee0-720d-11e9-92ac-080027fc876e >> job-name=deckhand-db-init >> release_group=airship-ucp-deckhand >> Annotations: >> Status: Running >> IP: 10.97.26.50 >> Controlled By: Job/deckhand-db-init >> Init Containers: >> init: >> Container ID: docker://b58e8b6b7296df618cb8120b5226370afeba2a4e79dd70ee6894b5afd853c0db >> Image: quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 >> Image ID: docker-pullable://quay.io/stackanetes/kubernetes-entrypoint at sha256:32b1b657ee4bcc9cc7a1529e31d8e1a06376172373ee020f97f3e78168fde4b6 >> Port: >> Host Port: >> Command: >> kubernetes-entrypoint >> State: Terminated >> Reason: Completed >> Exit Code: 0 >> Started: Thu, 09 May 2019 03:48:52 +0000 >> Finished: Thu, 09 May 2019 03:48:54 +0000 >> Ready: True >> Restart Count: 0 >> Environment: >> POD_NAME: deckhand-db-init-zs499 (v1:metadata.name) >> NAMESPACE: ucp (v1:metadata.namespace) >> INTERFACE_NAME: eth0 >> PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/ >> DEPENDENCY_SERVICE: ucp:postgresql >> DEPENDENCY_DAEMONSET: >> DEPENDENCY_CONTAINER: >> DEPENDENCY_POD_JSON: >> COMMAND: echo done >> Mounts: >> /var/run/secrets/kubernetes.io/serviceaccount from deckhand-db-init-token-gczr5 (ro) >> Containers: >> deckhand-db-init: >> Container ID: docker://5dea2aa975c3718ca298536005b9cc0b21de47e08b2260cc73005e3455bb1350 >> Image: docker.io/postgres:9.5 >> Image ID: docker-pullable://postgres at sha256:0605b4b20a205c09ddd10eeeddd3ed7bf3cc442a8e9896ec34862ca882658be4 >> Port: >> Host Port: >> Command: >> /tmp/db-init.sh >> State: Waiting >> Reason: CrashLoopBackOff >> Last State: Terminated >> Reason: Error <======== >> Exit Code: 1 >> Started: Thu, 09 May 2019 04:10:29 +0000 >> Finished: Thu, 09 May 2019 04:10:30 +0000 >> Ready: False >> Restart Count: 9 >> Environment: >> DECKHAND_DB_URL: Optional: false >> DB_NAME: Optional: false >> DB_SERVICE_USER: Optional: false >> DB_SERVICE_PASSWORD: Optional: false >> DB_FQDN: Optional: false >> DB_PORT: Optional: false >> DB_ADMIN_USER: Optional: false >> PGPASSWORD: Optional: false >> Mounts: >> /etc/deckhand from etc-deckhand (rw) >> /etc/deckhand/deckhand.conf from deckhand-etc (ro) >> /tmp/db-init.sh from deckhand-bin (ro) >> /var/run/secrets/kubernetes.io/serviceaccount from deckhand-db-init-token-gczr5 (ro) >> Conditions: >> Type Status >> Initialized True >> Ready False >> PodScheduled True >> Volumes: >> etc-deckhand: >> Type: EmptyDir (a temporary directory that shares a pod's lifetime) >> Medium: >> deckhand-etc: >> Type: Secret (a volume populated by a Secret) >> SecretName: deckhand-etc >> Optional: false >> deckhand-bin: >> Type: ConfigMap (a volume populated by a ConfigMap) >> Name: deckhand-bin >> Optional: false >> deckhand-db-init-token-gczr5: >> Type: Secret (a volume populated by a Secret) >> SecretName: deckhand-db-init-token-gczr5 >> Optional: false >> QoS Class: BestEffort >> Node-Selectors: ucp-control-plane=enabled >> Tolerations: node.kubernetes.io/not-ready:NoExecute 
for 300s >> node.kubernetes.io/unreachable:NoExecute for 300s >> Events: >> Type Reason Age From Message >> ---- ------ ---- ---- ------- >> Normal Scheduled 24m default-scheduler Successfully assigned deckhand-db-init-zs499 to n0 >> Normal SuccessfulMountVolume 24m kubelet, n0 MountVolume.SetUp succeeded for volume "etc-deckhand" >> Normal SuccessfulMountVolume 24m kubelet, n0 MountVolume.SetUp succeeded for volume "deckhand-bin" >> Normal SuccessfulMountVolume 24m kubelet, n0 MountVolume.SetUp succeeded for volume "deckhand-etc" >> Normal SuccessfulMountVolume 24m kubelet, n0 MountVolume.SetUp succeeded for volume "deckhand-db-init-token-gczr5" >> Normal Pulled 24m kubelet, n0 Container image "quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine >> Normal Created 24m kubelet, n0 Created container >> Normal Started 24m kubelet, n0 Started container >> Normal Pulled 23m (x4 over 24m) kubelet, n0 Container image "docker.io/postgres:9.5" already present on machine >> Normal Created 23m (x4 over 24m) kubelet, n0 Created container >> Normal Started 23m (x4 over 24m) kubelet, n0 Started container >> Warning BackOff 4m (x90 over 24m) kubelet, n0 Back-off restarting failed container >> root at n0:/home/vagrant# >> >> On Thu, May 9, 2019 at 12:08 PM calvin whole wrote: >>> >>> Hi Roman, >>> >>> Thanks for looking into this and gave us suggestions. >>> >>> I re-ran the "vagrant up" and looking into the logs for "deckhand-db-init-zs499" as showed below. >>> It showed ERROR: checkpoint request failed >>> HINT: Consult recent messages in the server log for details. >>> >>> What is the specific "server" log we should look into for details? >>> >>> Thanks for help. >>> >>> Sincerely, >>> Calvin >>> >>> ================== log for deckhand-db-init-zs499 ================================== >>> root at n0:/home/vagrant# kubectl logs deckhand-db-init-zs499 -n ucp >>> + export HOME=/tmp >>> + HOME=/tmp >>> + pgsql_superuser_cmd 'SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\''' >>> + grep -q 1 >>> + DB_COMMAND='SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\''' >>> + [[ ! -z '' ]] >>> + psql -h postgresql.ucp.svc.cluster.local -p 5432 -U postgres '--command=SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\''' >>> + pgsql_superuser_cmd 'CREATE DATABASE deckhand' >>> + DB_COMMAND='CREATE DATABASE deckhand' >>> + [[ ! -z '' ]] >>> + psql -h postgresql.ucp.svc.cluster.local -p 5432 -U postgres '--command=CREATE DATABASE deckhand' >>> ERROR: checkpoint request failed >>> HINT: Consult recent messages in the server log for details. 
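For reference, the "server log" this HINT points to is PostgreSQL's own log; the stock postgres image writes it to the container's stdout/stderr, so kubectl can read it directly. A minimal sketch, assuming the postgresql-0 pod name used throughout this thread:

# Most recent server-side messages, including the cause of the failed checkpoint
kubectl -n ucp logs postgresql-0 --tail=100
# If the pod shows RESTARTS > 0, the previous container's log may hold the original error
kubectl -n ucp logs postgresql-0 --previous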
>>> >>> ===================================================================================== >>> ==> n0: NAMESPACE NAME READY STATUS RESTARTS AGE >>> ==> n0: kube-system auxiliary-etcd-n0 3/3 Running 0 49m >>> ==> n0: kube-system bootstrap-armada-n0 4/4 Running 0 49m >>> ==> n0: kube-system calico-etcd-anchor-ncl2p 1/1 Running 0 47m >>> ==> n0: kube-system calico-etcd-n0 1/1 Running 0 46m >>> ==> n0: kube-system calico-kube-controllers-56c54d8cf8-5csnn 1/1 Running 0 46m >>> ==> n0: kube-system calico-node-m4rtf 1/1 Running 0 46m >>> ==> n0: kube-system calico-settings-tkp6r 0/1 Completed 0 46m >>> ==> n0: kube-system coredns-84bdd76f4d-hhbcs 1/1 Running 0 44m >>> ==> n0: kube-system coredns-84bdd76f4d-k8tcc 1/1 Running 0 44m >>> ==> n0: kube-system coredns-84bdd76f4d-qp2xd 1/1 Running 0 44m >>> ==> n0: kube-system haproxy-n0 1/1 Running 0 50m >>> ==> n0: kube-system ingress-error-pages-7c65f766d-dn2tw 1/1 Running 0 41m >>> ==> n0: kube-system ingress-gtvp8 2/2 Running 0 41m >>> ==> n0: kube-system kubernetes-apiserver-anchor-99jhn 1/1 Running 0 42m >>> ==> n0: kube-system kubernetes-apiserver-n0 1/1 Running 0 41m >>> ==> n0: kube-system kubernetes-controller-manager-anchor-vqddp 1/1 Running 0 42m >>> ==> n0: kube-system kubernetes-controller-manager-n0 1/1 Running 0 41m >>> ==> n0: kube-system kubernetes-etcd-anchor-9jcpl 1/1 Running 0 44m >>> ==> n0: kube-system kubernetes-etcd-n0 1/1 Running 0 42m >>> ==> n0: kube-system kubernetes-proxy-2m9t2 1/1 Running 0 47m >>> ==> n0: kube-system kubernetes-scheduler-anchor-nl9fb 1/1 Running 0 42m >>> ==> n0: kube-system kubernetes-scheduler-n0 1/1 Running 0 41m >>> ==> n0: kube-system nfs-provisioner-7799d64d59-vtkbd 1/1 Running 0 40m >>> ==> n0: kube-system tiller-deploy-7d88c6f956-qwfzb 1/1 Running 0 27m >>> ==> n0: ucp airship-ucp-keystone-memcached-memcached-74d79d8896-vfl69 1/1 Running 0 34m >>> ==> n0: ucp airship-ucp-rabbitmq-rabbitmq-0 1/1 Running 0 39m >>> ==> n0: ucp armada-api-d5f757d5-6wl98 1/1 Running 0 15m >>> ==> n0: ucp armada-ks-endpoints-vl9rs 0/3 Completed 0 15m >>> ==> n0: ucp armada-ks-service-vpcjd 0/1 Completed 0 15m >>> ==> n0: ucp armada-ks-user-rv4gs 0/1 Completed 0 15m >>> ==> n0: ucp barbican-api-5d7b88d8ff-8dd6w 1/1 Running 0 13m >>> ==> n0: ucp barbican-db-init-gqvt4 0/1 Completed 0 13m >>> ==> n0: ucp barbican-db-sync-tqtgq 0/1 Completed 0 13m >>> ==> n0: ucp barbican-ks-endpoints-rwtql 0/3 >>> ==> n0: Completed 0 13m >>> ==> n0: ucp barbican-ks-service-l2h6h 0/1 Completed 0 13m >>> ==> n0: ucp barbican-ks-user-wwvc7 0/1 Completed 0 13m >>> ==> n0: ucp barbican-rabbit-init-6spq4 0/1 Completed 0 13m >>> ==> n0: ucp deckhand-api-78b9644f96-5686f 0/1 Running 0 11m >>> ==> n0: ucp deckhand-db-init-zs499 0/1 CrashLoopBackOff 7 11m <=== >>> ==> n0: ucp deckhand-db-sync-ct7wl 0/1 Init:0/1 0 11m >>> ==> n0: ucp deckhand-ks-endpoints-x4hd9 0/3 Completed 0 11m >>> ==> n0: ucp deckhand-ks-service-ms6n5 0/1 Completed 0 11m >>> ==> n0: ucp deckhand-ks-user-7fnvt 0/1 Completed 0 11m >>> ==> n0: ucp divingbell-apparmor-default-hth8z 1/1 Running 0 27m >>> ==> n0: ucp divingbell-apt-default-r965m 1/1 Running 0 27m >>> ==> n0: ucp divingbell-ethtool-default-ldcmc 1/1 Running 0 27m >>> ==> n0: ucp divingbell-exec-default-f7h7x 1/1 Running 0 27m >>> ==> n0: ucp divingbell-limits-default-sp9mj 1/1 Running 0 27m >>> ==> n0: ucp divingbell-mounts-default-8f5a00a2-frbl2 1/1 Running 0 27m >>> ==> n0: ucp divingbell-perm-default-d7wxp 1/1 Running 0 27m >>> ==> n0: ucp divingbell-sysctl-default-c8pnp 1/1 Running 0 27m >>> ==> n0: ucp 
divingbell-uamlite-default-rfct6 1/1 Running 0 27m >>> ==> n0: ucp ingress-86576d6599-mdgj4 1/1 Running 0 39m >>> ==> n0: ucp ingress-error-pages-5c97bb46bb-7lg5l 1/1 Running 0 39m >>> ==> n0: ucp keystone-api-678fc44bdd-594bb 1/1 Running 0 34m >>> ==> n0: ucp keystone-bootstrap-rprr6 0/1 Completed 0 34m >>> ==> n0: ucp keystone-credential-setup-zkjgs 0/1 Completed 0 34m >>> ==> n0: ucp keystone-db-init-xkgxm 0/1 Completed 0 34m >>> ==> n0: ucp keystone-db-sync-lm6xs 0/1 Completed 0 34m >>> ==> n0: ucp keystone-domain-manage-9pzjq 0/1 Completed 0 34m >>> ==> n0: ucp keystone-fernet-setup-q7t8p 0/1 Completed 0 34m >>> ==> n0: ucp keystone-rabbit-init-qpvgt 0/1 Completed 0 34m >>> ==> n0: ucp maas-bootstrap-admin-user-8npgw 0/1 Completed 0 26m >>> ==> n0: ucp maas-db-init-9z86n 0/1 Completed 0 26m >>> ==> n0: ucp maas-db-sync-r7rkg 0/1 Completed 0 26m >>> ==> n0: ucp maas-export-api-key-n2gz4 0/1 Completed 1 26m >>> ==> n0: ucp maas-import-resources-prlml 0/1 Completed 0 26m >>> ==> n0: ucp maas-ingress-756f6f9d6-h65nj 2/2 Running 0 26m >>> ==> n0: ucp maas-ingress-errors-8686d56d98-swfg9 >>> ==> n0: 1/1 Running 0 26m >>> ==> n0: ucp maas-rack-0 1/1 Running 0 26m >>> ==> n0: ucp maas-region-0 1/1 Running 0 26m >>> ==> n0: ucp mariadb-ingress-55794d94c8-dsw5w 1/1 Running 0 39m >>> ==> n0: ucp mariadb-ingress-55794d94c8-jczmh 1/1 Running 0 39m >>> ==> n0: ucp mariadb-ingress-error-pages-85f96fbd-jrqsg 1/1 Running 0 39m >>> ==> n0: ucp mariadb-server-0 1/1 Running 0 39m >>> ==> n0: ucp postgresql-0 1/1 Running 1 39m >>> >>> >>> On Tue, May 7, 2019 at 6:25 PM Roman Gorshunov wrote: >>>> >>>> Hello Calvin, >>>> >>>> Try to get some kubectl logs and describe deckhand-db-init-r9jvg pod. >>>> kubectl describe pod deckhand-db-init-r9jvg -u ucp >>>> May be it would help to understand what is happening there. >>>> >>>> Thank you for trying Airship. >>>> >>>> Best regards, >>>> -- Roman Gorshunov >>>> >>>> On Tue, May 7, 2019 at 7:51 AM calvin whole wrote: >>>> > >>>> > Hi, >>>> > >>>> > We are trying to deploy AIIB. >>>> > >>>> > I have a physical server with Ubuntu 16.04.5 OS, installed virtualbox and vagrant. >>>> > The process is straightforward by following https://opendev.org/airship/in-a-bottle/ >>>> > We created ~/deploy directory, downloaded Vagrantfile, and do "vagrant up". >>>> > >>>> > However it stuck in the error below: >>>> > deckhand-db-init-r9jvg 0/1 CrashLoopBackOff 16 >>>> > >>>> > Could anyone help to resolve this? Many thanks in advance. 
>>>> > >>>> > Sincerely, >>>> > Calvin >>>> > >>>> > ==> n0: NAMESPACE NAME READY STATUS RESTARTS AGE >>>> > ==> n0: kube-system auxiliary-etcd-n0 3/3 Running 0 1h >>>> > ==> n0: kube-system bootstrap-armada-n0 4/4 Running 0 1h >>>> > ==> n0: kube-system calico-etcd-anchor-5tqhk 1/1 Running 0 1h >>>> > ==> n0: kube-system calico-etcd-n0 1/1 Running 0 1h >>>> > ==> n0: kube-system calico-kube-controllers-56c54d8cf8-5ssl6 1/1 Running 0 1h >>>> > ==> n0: kube-system calico-node-pbsxh 1/1 Running 0 1h >>>> > ==> n0: kube-system calico-settings-lzpk9 0/1 Completed 0 1h >>>> > ==> n0: kube-system coredns-84bdd76f4d-6cwnl 1/1 Running 0 1h >>>> > ==> n0: kube-system coredns-84bdd76f4d-d4p8c 1/1 Running 0 1h >>>> > ==> n0: kube-system coredns-84bdd76f4d-xrknz 1/1 Running 0 1h >>>> > ==> n0: kube-system haproxy-n0 1/1 Running 0 1h >>>> > ==> n0: kube-system ingress-9pkmx 2/2 Running 0 1h >>>> > ==> n0: kube-system ingress-error-pages-7c65f766d-2pqfx 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-apiserver-anchor-hszbf 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-apiserver-n0 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-controller-manager-anchor-h49vz 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-controller-manager-n0 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-etcd-anchor-nnjbb 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-etcd-n0 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-proxy-vgzjp 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-scheduler-anchor-bq2gk 1/1 Running 0 1h >>>> > ==> n0: kube-system kubernetes-scheduler-n0 1/1 Running 0 1h >>>> > ==> n0: kube-system nfs-provisioner-7799d64d59-jx7hq 1/1 Running 0 1h >>>> > ==> n0: kube-system tiller-deploy-7d88c6f956-d9kzg 1/1 Running 0 1h >>>> > ==> n0: ucp airship-ucp-keystone-memcached-memcached-74d79d8896-q9wqx 1/1 Running 0 1h >>>> > ==> n0: ucp airship-ucp-rabbitmq-rabbitmq-0 1/1 Running 0 1h >>>> > ==> n0: ucp armada-api-d5f757d5-d9l9h 1/1 Running 0 1h >>>> > ==> n0: ucp armada-ks-endpoints-qwbtg 0/3 Completed 0 1h >>>> > ==> n0: ucp armada-ks-service-lg8kq 0/1 Completed 0 1h >>>> > ==> n0: ucp armada-ks-user-g2j6v 0/1 Completed 0 1h >>>> > ==> n0: ucp barbican-api-84665dd99d-qv5fz 1/1 Running 0 1h >>>> > ==> n0: ucp barbican-db-init-ndx58 0/1 Completed 0 1h >>>> > ==> n0: ucp barbican-db-sync-sh7c9 0/1 Completed 0 1h >>>> > ==> n0: ucp barbican-ks-endpoints-bv7xv 0/3 Completed 0 1h >>>> > ==> n0: ucp barbican-ks-service-46hjk 0/1 Completed 0 1h >>>> > ==> n0: ucp barbican-ks-user-6df74 0/1 Completed 0 1h >>>> > ==> n0: ucp barbican-rabbit-init-gnvfl 0/1 Completed 0 1h >>>> > ==> n0: ucp deckhand-api-6cd9c4479d-wc5cw 0/1 Running 0 1h >>>> > ==> n0: ucp deckhand-db-init-r9jvg 0/1 CrashLoopBackOff 17 1h <===== >>>> > ==> n0: ucp deckhand-db-sync-llstv 0/1 Init:0/1 0 1h >>>> > ==> n0: ucp deckhand-ks-endpoints-4gqfj 0/3 Completed 0 1h >>>> > ==> n0: ucp deckhand-ks-service-c6gbq 0/1 Completed 0 1h >>>> > ==> n0: ucp deckhand-ks-user-5skng 0/1 Completed 0 1h >>>> > ==> n0: ucp divingbell-apparmor-default-lkcl6 1/1 Running 0 1h >>>> > ==> n0: ucp divingbell-apt-default-7jgtv 1/1 Running 0 1h >>>> > ==> n0: ucp divingbell-ethtool-default-tm2w4 1/1 Running 0 1h >>>> > ==> n0: ucp divingbell-exec-default-l45m8 1/1 Running 0 1h >>>> > ==> n0: ucp divingbell-limits-default-q84pr 1/1 Running 0 1h >>>> > ==> n0: ucp divingbell-mounts-default-29420945-nrdsz 1/1 Running 0 1h >>>> > ==> n0: ucp divingbell-perm-default-wdgld 1/1 Running 0 1h >>>> > ==> n0: ucp 
divingbell-sysctl-default-t7f2m 1/1 Running 0 1h >>>> > ==> n0: ucp divingbell-uamlite-default-fc4jx 1/1 Running 0 1h >>>> > ==> n0: ucp ingress-86576d6599-q8ng4 1/1 Running 0 1h >>>> > ==> n0: ucp ingress-error-pages-5c97bb46bb-pjz9m 1/1 Running 0 1h >>>> > ==> n0: ucp keystone-api-678fc44bdd-ncxc2 1/1 Running 0 1h >>>> > ==> n0: ucp keystone-bootstrap-28l4g 0/1 Completed 0 1h >>>> > ==> n0: ucp keystone-credential-setup-rq5d4 0/1 Completed 0 1h >>>> > ==> n0: ucp keystone-db-init-z8x4w 0/1 Completed 0 1h >>>> > ==> n0: ucp keystone-db-sync-9hvb5 0/1 Completed 0 1h >>>> > ==> n0: ucp keystone-domain-manage-tzcnf 0/1 Completed 0 1h >>>> > ==> n0: ucp keystone-fernet-setup-bzdpb 0/1 Completed 0 1h >>>> > ==> n0: ucp keystone-rabbit-init-cxpc6 0/1 Completed 0 1h >>>> > ==> n0: ucp maas-bootstrap-admin-user-g99rl 0/1 Completed 0 1h >>>> > ==> n0: ucp maas-db-init-h4llm 0/1 Completed 0 1h >>>> > ==> n0: ucp maas-db-sync-6tsqj 0/1 Completed 0 1h >>>> > ==> n0: ucp maas-export-api-key-c8rdb 0/1 Completed 0 1h >>>> > ==> n0: ucp maas-import-resources-hhq7f 0/1 Completed 1 1h >>>> > ==> n0: ucp maas-ingress-756f6f9d6-dpcp9 2/2 Running 0 1h >>>> > ==> n0: ucp maas-ingress-errors-8686d56d98-jr6xx 1/1 Running 0 1h >>>> > ==> n0: u >>>> > ==> n0: cp maas-rack-0 1/1 Running 0 1h >>>> > ==> n0: ucp maas-region-0 1/1 Running 0 1h >>>> > ==> n0: ucp mariadb-ingress-55794d94c8-mhjjf 1/1 Running 0 1h >>>> > ==> n0: ucp mariadb-ingress-55794d94c8-vglbv 1/1 Running 0 1h >>>> > ==> n0: ucp mariadb-ingress-error-pages-85f96fbd-28cdv 1/1 Running 0 1h >>>> > ==> n0: ucp mariadb-server-0 1/1 Running 0 1h >>>> > ==> n0: ucp postgresql-0 1/1 Running 1 1h >>>> > _______________________________________________ >>>> > Airship-discuss mailing list >>>> > Airship-discuss at lists.airshipit.org >>>> > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss From MM9745 at att.com Mon May 13 16:44:31 2019 From: MM9745 at att.com (MCEUEN, MATT) Date: Mon, 13 May 2019 16:44:31 +0000 Subject: [Airship-discuss] Spyglass/Pegleg core reviewer nominations Message-ID: <7C64A75C21BB8D43BD75BB18635E4D897092FA16@MOSTLS1MSGUSRFF.ITServices.sbc.com> Airship team, In line with our discussion at the PTG, I would like to nominate two project-specific core reviewers: Alex Hughes (alexanderhughes): Spyglass, Pegleg projects Ian Pittwood (ian-pittwood): Spyglass project As discussed, this will seed the core teams for these relatively young projects with folks who are actively focused on implementing them. Existing Airship core reviewers remain grandfathered in, but are encouraged to bow out of coreship if they deem it appropriate. Following OpenStack norms, current Airship core reviewers have 7 days (till EOD 5/20) to respond to this email with a +1 or -1 vote. Please consider this my +1. A simple +1/-1 will be interpreted as being for both of the folks/repos above; if you have more specific votes please specify in your response. 
Thanks, Matt McEuen From eli at mirantis.com Mon May 13 16:48:05 2019 From: eli at mirantis.com (Evgeny L) Date: Mon, 13 May 2019 09:48:05 -0700 Subject: [Airship-discuss] Spyglass/Pegleg core reviewer nominations In-Reply-To: <7C64A75C21BB8D43BD75BB18635E4D897092FA16@MOSTLS1MSGUSRFF.ITServices.sbc.com> References: <7C64A75C21BB8D43BD75BB18635E4D897092FA16@MOSTLS1MSGUSRFF.ITServices.sbc.com> Message-ID: +1 On Mon, May 13, 2019 at 9:45 AM MCEUEN, MATT wrote: > Airship team, > > In line with our discussion at the PTG, I would like to nominate two > project-specific core reviewers: > > Alex Hughes (alexanderhughes): Spyglass, Pegleg projects > Ian Pittwood (ian-pittwood): Spyglass project > > As discussed, this will seed the core teams for these relatively young > projects with folks who are actively focused on implementing them. > Existing Airship core reviewers remain grandfathered in, but are encouraged > to bow out of coreship if they deem it appropriate. > > Following OpenStack norms, current Airship core reviewers have 7 days > (till EOD 5/20) to respond to this email with a +1 or -1 vote. Please > consider this my +1. A simple +1/-1 will be interpreted as being for both > of the folks/repos above; if you have more specific votes please specify in > your response. > > Thanks, > Matt McEuen > > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bs4939 at att.com Mon May 13 17:28:18 2019 From: bs4939 at att.com (STRASSNER, BRYAN) Date: Mon, 13 May 2019 17:28:18 +0000 Subject: [Airship-discuss] Spyglass/Pegleg core reviewer nominations In-Reply-To: References: <7C64A75C21BB8D43BD75BB18635E4D897092FA16@MOSTLS1MSGUSRFF.ITServices.sbc.com> Message-ID: <651A1B0DF16CC24CBBA3ABFDB19EE69E4ED70F7D@MOSTLS1MSGUSRFE.ITServices.sbc.com> +1 On Mon, May 13, 2019 at 9:45 AM MCEUEN, MATT > wrote: Airship team, In line with our discussion at the PTG, I would like to nominate two project-specific core reviewers: Alex Hughes (alexanderhughes): Spyglass, Pegleg projects Ian Pittwood (ian-pittwood): Spyglass project As discussed, this will seed the core teams for these relatively young projects with folks who are actively focused on implementing them. Existing Airship core reviewers remain grandfathered in, but are encouraged to bow out of coreship if they deem it appropriate. Following OpenStack norms, current Airship core reviewers have 7 days (till EOD 5/20) to respond to this email with a +1 or -1 vote. Please consider this my +1. A simple +1/-1 will be interpreted as being for both of the folks/repos above; if you have more specific votes please specify in your response. Thanks, Matt McEuen _______________________________________________ Airship-discuss mailing list Airship-discuss at lists.airshipit.org http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From drewwalters96 at gmail.com Mon May 13 19:42:40 2019 From: drewwalters96 at gmail.com (Drew Walters) Date: Mon, 13 May 2019 14:42:40 -0500 Subject: [Airship-discuss] Spyglass/Pegleg core reviewer nominations In-Reply-To: <7C64A75C21BB8D43BD75BB18635E4D897092FA16@MOSTLS1MSGUSRFF.ITServices.sbc.com> References: <7C64A75C21BB8D43BD75BB18635E4D897092FA16@MOSTLS1MSGUSRFF.ITServices.sbc.com> Message-ID: > In line with our discussion at the PTG, I would like to nominate two > project-specific core reviewers: > > Alex Hughes (alexanderhughes): Spyglass, Pegleg projects > Ian Pittwood (ian-pittwood): Spyglass project > > A simple +1/-1 will be interpreted as being for both of the folks/repos > above; if you have more specific votes please specify in your response +1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From js5175 at att.com Mon May 13 22:43:31 2019 From: js5175 at att.com (STEIN, JARED W) Date: Mon, 13 May 2019 22:43:31 +0000 Subject: [Airship-discuss] Spyglass/Pegleg core reviewer nominations In-Reply-To: References: <7C64A75C21BB8D43BD75BB18635E4D897092FA16@MOSTLS1MSGUSRFF.ITServices.sbc.com> Message-ID: <2536A0D233A8D6499F8C1C72F0A49D1E35907306@MOSTLS1MSGUSREI.ITServices.sbc.com> +1 Thanks, Jared Stein Network Cloud Engineering | Principal Member of the Technical Staff jared.stein at att.com Lync: 573.723.7138 Cell: 314.277.3127 From: Drew Walters Sent: Monday, May 13, 2019 2:43 PM To: MCEUEN, MATT Cc: airship-discuss at lists.airshipit.org Subject: Re: [Airship-discuss] Spyglass/Pegleg core reviewer nominations In line with our discussion at the PTG, I would like to nominate two project-specific core reviewers: Alex Hughes (alexanderhughes): Spyglass, Pegleg projects Ian Pittwood (ian-pittwood): Spyglass project A simple +1/-1 will be interpreted as being for both of the folks/repos above; if you have more specific votes please specify in your response +1 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From purnendu at gmail.com Tue May 14 05:18:04 2019 From: purnendu at gmail.com (Purnendu) Date: Tue, 14 May 2019 10:48:04 +0530 Subject: [Airship-discuss] Spyglass/Pegleg core reviewer nominations In-Reply-To: <2536A0D233A8D6499F8C1C72F0A49D1E35907306@MOSTLS1MSGUSREI.ITServices.sbc.com> References: <7C64A75C21BB8D43BD75BB18635E4D897092FA16@MOSTLS1MSGUSRFF.ITServices.sbc.com> <2536A0D233A8D6499F8C1C72F0A49D1E35907306@MOSTLS1MSGUSREI.ITServices.sbc.com> Message-ID: +1 with best regards, Purnendu Ghosh On Tue, May 14, 2019 at 4:19 AM STEIN, JARED W wrote: > +1 > > > > Thanks, > > > > *Jared Stein* > > Network Cloud Engineering | Principal Member of the Technical Staff > > > > jared.stein at att.com > > Lync: 573.723.7138 > > Cell: 314.277.3127 > > > > *From:* Drew Walters > *Sent:* Monday, May 13, 2019 2:43 PM > *To:* MCEUEN, MATT > *Cc:* airship-discuss at lists.airshipit.org > *Subject:* Re: [Airship-discuss] Spyglass/Pegleg core reviewer nominations > > > > > > In line with our discussion at the PTG, I would like to nominate two > project-specific core reviewers: > > Alex Hughes (alexanderhughes): Spyglass, Pegleg projects > Ian Pittwood (ian-pittwood): Spyglass project > > A simple +1/-1 will be interpreted as being for both of the folks/repos > above; if you have more specific votes please specify in your response > > > > +1 > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bluejay.ahn at gmail.com Tue May 14 06:40:59 2019 From: bluejay.ahn at gmail.com (Jaesuk Ahn) Date: Tue, 14 May 2019 15:40:59 +0900 Subject: [Airship-discuss] Airship - Open Design Call - Tuesdays In-Reply-To: <99088997CCAD0C4BA20008FD094344963C243945@MISOUT7MSGUSRDI.ITServices.sbc.com> References: <99088997CCAD0C4BA20008FD094344963C243945@MISOUT7MSGUSRDI.ITServices.sbc.com> Message-ID: Hi, Rodolfo, There is no password information for May 9th meeting recording. Cloud you please provide one for us? Thank you On Mon, May 13, 2019 at 10:37 PM PACHECO, RODOLFO J wrote: > When: Occurs every Tuesday from 9:00 AM to 10:30 AM effective 5/14/2019 > until 1/1/2020. (UTC-05:00) Eastern Time (US & Canada) > Where: https://attcorp.webex.com/meet/rp2723 > > *~*~*~*~*~*~*~*~*~* > > *REMINDER –Airship Design Call – New Added Call * > > *Based on the doodle votes the meeting length will be 90 mins * > > *Join us to continue Airship 2.0 Design discussions * > > > *Etherpad for the Airship Open Design discussion * > *https://etherpad.openstack.org/p/Airship_OpenDesignDiscussions* > > > > *Storyboard in flight Specs * > *https://storyboard.openstack.org/#!/project/openstack/airship-specs* > > > > *Github Airship Specs * > *https://github.com/openstack/airship-specs/tree/master/specs* > > > > *Inflight/reviewing specs * > *https://review.openstack.org/#/q/status:open+airship-specs* > > > > > *__________________________________________ * > > Join by video system *i* > > Dial rp2723 at attcorp.webex.com and enter your host PIN 02083790. > You can also dial 173.243.2.68 and enter your meeting number. 
> Join by phone > 1-844-517-1415 *United States Toll Free* > 1-618-230-6039 *United States Toll* > Access code: 733 333 726 > Host PIN: 02083790 > > > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss -- *Jaesuk Ahn*, Ph.D. Software R&D Center, SK Telecom -------------- next part -------------- An HTML attachment was scrubbed... URL: From paye600 at gmail.com Tue May 14 12:46:40 2019 From: paye600 at gmail.com (Roman Gorshunov) Date: Tue, 14 May 2019 14:46:40 +0200 Subject: [Airship-discuss] airship-in-a-bottle deployment issue In-Reply-To: References: Message-ID: Hello Calvin, Here is an ascii recording of my run of AIAB: https://paste.ubuntu.com/p/C7kWZpG33H/ https://asciinema.org/docs/usage - you can use it to play the recording >From your logs below it seems like Airflow is not running properly, Airflow is a part of Shipyard: ==> n0: Reason: Airflow could not be contacted properly by Shipyard. ==> n0: - Error: So something could be wrong with airflow-* pods or services: kubectl get pods --all-namespaces | grep -i airfl kubectl get svc --all-namespaces | grep airf The manifests/common/deploy-airship.sh script contains good sequence of steps being run inside a VM. It is being launched with parameter "demo" from manifests/dev_single_node/airship-in-a-bottle.sh. Check the code here: https://opendev.org/airship/in-a-bottle/src/branch/master/manifests. And yes, you can run parts of this scripts, but manually (I'd recommend comment certain sections, which you believe have already completed properly). Specs of my installation (actually it's a laptop): [roman at romanpc Airship]$ df -h / Filesystem Size Used Avail Use% Mounted on /dev/mapper/fedora-root 49G 21G 26G 45% / [roman at romanpc Airship]$ free -h # AIAB VM is already running here and consumes ~20GB RAM and 180-400% of CPU total used free shared buff/cache available Mem: 31Gi 23Gi 328Mi 874Mi 7.5Gi 6.5Gi Swap: 15Gi 21Mi 15Gi [roman at romanpc Airship]$ grep "model name" /proc/cpuinfo model name : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz model name : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz model name : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz model name : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz model name : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz model name : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz model name : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz model name : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz [roman at romanpc Airship]$ cat /etc/fedora-release Fedora release 30 (Thirty) [roman at romanpc Airship]$ vagrant version | grep Version Installed Version: 2.2.3 Latest Version: 2.2.4 [roman at romanpc Airship]$ vagrant plugin list vagrant-libvirt (0.0.45, system) [roman at romanpc Airship]$ rpm -q qemu-kvm qemu-kvm-3.1.0-7.fc30.x86_64 [roman at romanpc Airship]$ Inside VM: vagrant at n0:~$ sudo virt-what kvm vagrant at n0:~$ Best regards, -- Roman Gorshunov On Tue, May 14, 2019 at 7:59 AM calvin whole wrote: > > Hi Roman, > > Thanks a lot for looking into this issue. > > I re-ran the same process again and this time it successfully completed Genesis phase. The postgresql-0 and nfs-provisioner logs did not show any apparent errors. > So It seems to me there is a consistency issue because all I did was destroy the n0 vm and "vagrant up" again. It finally succeeded once out of 5 tries. > > Btw, my virtualization environment is as below. 
> > vagrant at n0:~$ sudo virt-what > > virtualbox > > kvm > > > However, the subsequent ./run_shipyard.sh commit configdocs failed as below. > How can this failure be fixed? > > Since Genesis is complete, can we re-run the script without the Genesis part - i.e., skipping the Genesis part ? > > Btw, can you describe your environment setup, so we can try to follow your exact execution environment? > > (My environment: a physical server, installed ubuntu 16.04.5 , install virtualbox and vagrant, and run "vagrant up") > > Thanks again, > Calvin > > ================================================= > ==> n0: + export max_shipyard_count=60 > ==> n0: + max_shipyard_count=60 > ==> n0: + export shipyard_query_time=90 > ==> n0: + shipyard_query_time=90 > ==> n0: + bash execute_shipyard_action.sh deploy_site > ==> n0: + run_action deploy_site > ==> n0: + action=deploy_site > ==> n0: + action_args= > ==> n0: + NC='\033[0m' > ==> n0: + RED='\033[0;31m' > ==> n0: + GREEN='\033[0;32m' > ==> n0: +++ dirname execute_shipyard_action.sh > ==> n0: ++ cd . > ==> n0: ++ pwd > ==> n0: + DIR=/root/deploy/site > ==> n0: + cd /root/deploy/site > ==> n0: + source shipyard_docker_base_command.sh > ==> n0: ++ NAMESPACE=ucp > ==> n0: ++ SHIPYARD_IMAGE=quay.io/airshipit/shipyard:master > ==> n0: +++ cat > ==> n0: Execute deploy_site Dag... > ==> n0: ++ base_docker_command='sudo -E docker run -t --rm --net=host > ==> n0: -e http_proxy= > ==> n0: -e https_proxy= > ==> n0: -e no_proxy= > ==> n0: -e OS_AUTH_URL=http://keystone.ucp.svc.cluster.local:80/v3 > ==> n0: -e OS_USERNAME=shipyard > ==> n0: -e OS_USER_DOMAIN_NAME=default > ==> n0: -e OS_PASSWORD > ==> n0: -e OS_PROJECT_DOMAIN_NAME=default > ==> n0: -e OS_PROJECT_NAME=service' > ==> n0: + echo -e 'Execute deploy_site Dag...\n' > ==> n0: + sudo -E docker run -t --rm --net=host -e http_proxy= -e https_proxy= -e no_proxy= -e OS_AUTH_URL=http://keystone.ucp.svc./v3 -e OS_USERNAME=shipyard -e OS_USER_DOMAIN_NAME=default -e OS_PASSWORD -e OS_PROJECT_DOMAIN_NAME=default -e OS_PROJECT_NAME=servhipit/shipyard:master create action deploy_site > ==> n0: Error: Unable to complete request to Airflow <======================== Failed > ==> n0: Reason: Airflow could not be contacted properly by Shipyard. > ==> n0: - Error: > ==> n0: > ==> n0: #### Errors: 1, Warnings: 0, Infos: 0, Other: 0 #### > > On Mon, May 13, 2019 at 10:13 PM Roman Gorshunov wrote: >> >> Hello Calvin, >> >> Seems like PosgreSQL database was not able to properly write data onto >> the disk. PostgreSQL runs as a postgresql-0 pod in ucp namespace, uses >> a persistent volume claim postgresql-data-postgresql-0, and persistent >> volume mounted via NFS. >> >> kubectl describe pod postgresql-0 -n ucp >> kubectl logs -n ucp postgresql-0 >> kubectl -n ucp describe pvc postgresql-data-postgresql-0 >> kubectl describe pv pvc-0382c985-7572-11e9-b431-525400681552 # volume >> name could be different) >> >> NFS is provisioned by nfs-provisioner-7799d64d59-ptsgk (last two parts >> would be different in your case): >> kubectl get pods -n kube-system | grep nfs >> kubectl -n kube-system describe pod nfs-provisioner-7799d64d59-ptsgk >> kubectl -n kube-system logs nfs-provisioner-7799d64d59-ptsgk >> >> Check if there are any problems with it (e.g. unable to mount NFS >> share, or lack of free storage space - `df -h`). >> >> Also running kubectl get events --all-namespaces could help to >> understand what went wrong. >> >> I have run an AIAB installation today twice, and it all worked fine. 
I >> use `vagrant up` and my hypervisor is KVM, if that could help you. >> >> I hope it helps. >> >> Best regards, >> -- Roman Gorshunov >> >> On Fri, May 10, 2019 at 7:01 AM calvin whole wrote: >> > >> > Hi Roman, >> > >> > Not sure if my last email were out properly, its size is too big. Here is a short one. Thanks for responding in advance. >> > >> > I re-ran the "vagrant up" and looking into the logs for "deckhand-db-init-zs499" as showed below. >> > It showed ERROR: checkpoint request failed >> > HINT: Consult recent messages in the server log for details. >> > >> > What is the specific "server" log we should look into for details? >> > >> > Thanks for help. >> > >> > Sincerely, >> > Calvin >> > >> > >> > On Thu, May 9, 2019 at 12:17 PM calvin whole wrote: >> >> >> >> Hi Roman, >> >> >> >> Btw, continue my last post, the kubectl describe pod deckhand-db-init-zs499 output is as follows. >> >> >> >> Thanks, >> >> Calvin >> >> =========== kubectl describe pod deckhand-db-init-zs499 ================= >> >> root at n0:/home/vagrant# kubectl describe pod deckhand-db-init-zs499 -n ucp >> >> Name: deckhand-db-init-zs499 >> >> Namespace: ucp >> >> Node: n0/10.0.2.15 >> >> Start Time: Thu, 09 May 2019 03:48:48 +0000 >> >> Labels: application=deckhand >> >> component=db-init >> >> controller-uid=59f1bee0-720d-11e9-92ac-080027fc876e >> >> job-name=deckhand-db-init >> >> release_group=airship-ucp-deckhand >> >> Annotations: >> >> Status: Running >> >> IP: 10.97.26.50 >> >> Controlled By: Job/deckhand-db-init >> >> Init Containers: >> >> init: >> >> Container ID: docker://b58e8b6b7296df618cb8120b5226370afeba2a4e79dd70ee6894b5afd853c0db >> >> Image: quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 >> >> Image ID: docker-pullable://quay.io/stackanetes/kubernetes-entrypoint at sha256:32b1b657ee4bcc9cc7a1529e31d8e1a06376172373ee020f97f3e78168fde4b6 >> >> Port: >> >> Host Port: >> >> Command: >> >> kubernetes-entrypoint >> >> State: Terminated >> >> Reason: Completed >> >> Exit Code: 0 >> >> Started: Thu, 09 May 2019 03:48:52 +0000 >> >> Finished: Thu, 09 May 2019 03:48:54 +0000 >> >> Ready: True >> >> Restart Count: 0 >> >> Environment: >> >> POD_NAME: deckhand-db-init-zs499 (v1:metadata.name) >> >> NAMESPACE: ucp (v1:metadata.namespace) >> >> INTERFACE_NAME: eth0 >> >> PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/ >> >> DEPENDENCY_SERVICE: ucp:postgresql >> >> DEPENDENCY_DAEMONSET: >> >> DEPENDENCY_CONTAINER: >> >> DEPENDENCY_POD_JSON: >> >> COMMAND: echo done >> >> Mounts: >> >> /var/run/secrets/kubernetes.io/serviceaccount from deckhand-db-init-token-gczr5 (ro) >> >> Containers: >> >> deckhand-db-init: >> >> Container ID: docker://5dea2aa975c3718ca298536005b9cc0b21de47e08b2260cc73005e3455bb1350 >> >> Image: docker.io/postgres:9.5 >> >> Image ID: docker-pullable://postgres at sha256:0605b4b20a205c09ddd10eeeddd3ed7bf3cc442a8e9896ec34862ca882658be4 >> >> Port: >> >> Host Port: >> >> Command: >> >> /tmp/db-init.sh >> >> State: Waiting >> >> Reason: CrashLoopBackOff >> >> Last State: Terminated >> >> Reason: Error <======== >> >> Exit Code: 1 >> >> Started: Thu, 09 May 2019 04:10:29 +0000 >> >> Finished: Thu, 09 May 2019 04:10:30 +0000 >> >> Ready: False >> >> Restart Count: 9 >> >> Environment: >> >> DECKHAND_DB_URL: Optional: false >> >> DB_NAME: Optional: false >> >> DB_SERVICE_USER: Optional: false >> >> DB_SERVICE_PASSWORD: Optional: false >> >> DB_FQDN: Optional: false >> >> DB_PORT: Optional: false >> >> DB_ADMIN_USER: Optional: false >> >> PGPASSWORD: Optional: false >> >> 
Mounts: >> >> /etc/deckhand from etc-deckhand (rw) >> >> /etc/deckhand/deckhand.conf from deckhand-etc (ro) >> >> /tmp/db-init.sh from deckhand-bin (ro) >> >> /var/run/secrets/kubernetes.io/serviceaccount from deckhand-db-init-token-gczr5 (ro) >> >> Conditions: >> >> Type Status >> >> Initialized True >> >> Ready False >> >> PodScheduled True >> >> Volumes: >> >> etc-deckhand: >> >> Type: EmptyDir (a temporary directory that shares a pod's lifetime) >> >> Medium: >> >> deckhand-etc: >> >> Type: Secret (a volume populated by a Secret) >> >> SecretName: deckhand-etc >> >> Optional: false >> >> deckhand-bin: >> >> Type: ConfigMap (a volume populated by a ConfigMap) >> >> Name: deckhand-bin >> >> Optional: false >> >> deckhand-db-init-token-gczr5: >> >> Type: Secret (a volume populated by a Secret) >> >> SecretName: deckhand-db-init-token-gczr5 >> >> Optional: false >> >> QoS Class: BestEffort >> >> Node-Selectors: ucp-control-plane=enabled >> >> Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s >> >> node.kubernetes.io/unreachable:NoExecute for 300s >> >> Events: >> >> Type Reason Age From Message >> >> ---- ------ ---- ---- ------- >> >> Normal Scheduled 24m default-scheduler Successfully assigned deckhand-db-init-zs499 to n0 >> >> Normal SuccessfulMountVolume 24m kubelet, n0 MountVolume.SetUp succeeded for volume "etc-deckhand" >> >> Normal SuccessfulMountVolume 24m kubelet, n0 MountVolume.SetUp succeeded for volume "deckhand-bin" >> >> Normal SuccessfulMountVolume 24m kubelet, n0 MountVolume.SetUp succeeded for volume "deckhand-etc" >> >> Normal SuccessfulMountVolume 24m kubelet, n0 MountVolume.SetUp succeeded for volume "deckhand-db-init-token-gczr5" >> >> Normal Pulled 24m kubelet, n0 Container image "quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine >> >> Normal Created 24m kubelet, n0 Created container >> >> Normal Started 24m kubelet, n0 Started container >> >> Normal Pulled 23m (x4 over 24m) kubelet, n0 Container image "docker.io/postgres:9.5" already present on machine >> >> Normal Created 23m (x4 over 24m) kubelet, n0 Created container >> >> Normal Started 23m (x4 over 24m) kubelet, n0 Started container >> >> Warning BackOff 4m (x90 over 24m) kubelet, n0 Back-off restarting failed container >> >> root at n0:/home/vagrant# >> >> >> >> On Thu, May 9, 2019 at 12:08 PM calvin whole wrote: >> >>> >> >>> Hi Roman, >> >>> >> >>> Thanks for looking into this and gave us suggestions. >> >>> >> >>> I re-ran the "vagrant up" and looking into the logs for "deckhand-db-init-zs499" as showed below. >> >>> It showed ERROR: checkpoint request failed >> >>> HINT: Consult recent messages in the server log for details. >> >>> >> >>> What is the specific "server" log we should look into for details? >> >>> >> >>> Thanks for help. >> >>> >> >>> Sincerely, >> >>> Calvin >> >>> >> >>> ================== log for deckhand-db-init-zs499 ================================== >> >>> root at n0:/home/vagrant# kubectl logs deckhand-db-init-zs499 -n ucp >> >>> + export HOME=/tmp >> >>> + HOME=/tmp >> >>> + pgsql_superuser_cmd 'SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\''' >> >>> + grep -q 1 >> >>> + DB_COMMAND='SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\''' >> >>> + [[ ! -z '' ]] >> >>> + psql -h postgresql.ucp.svc.cluster.local -p 5432 -U postgres '--command=SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\''' >> >>> + pgsql_superuser_cmd 'CREATE DATABASE deckhand' >> >>> + DB_COMMAND='CREATE DATABASE deckhand' >> >>> + [[ ! 
-z '' ]] >> >>> + psql -h postgresql.ucp.svc.cluster.local -p 5432 -U postgres '--command=CREATE DATABASE deckhand' >> >>> ERROR: checkpoint request failed >> >>> HINT: Consult recent messages in the server log for details. >> >>> >> >>> ===================================================================================== >> >>> ==> n0: NAMESPACE NAME READY STATUS RESTARTS AGE >> >>> ==> n0: kube-system auxiliary-etcd-n0 3/3 Running 0 49m >> >>> ==> n0: kube-system bootstrap-armada-n0 4/4 Running 0 49m >> >>> ==> n0: kube-system calico-etcd-anchor-ncl2p 1/1 Running 0 47m >> >>> ==> n0: kube-system calico-etcd-n0 1/1 Running 0 46m >> >>> ==> n0: kube-system calico-kube-controllers-56c54d8cf8-5csnn 1/1 Running 0 46m >> >>> ==> n0: kube-system calico-node-m4rtf 1/1 Running 0 46m >> >>> ==> n0: kube-system calico-settings-tkp6r 0/1 Completed 0 46m >> >>> ==> n0: kube-system coredns-84bdd76f4d-hhbcs 1/1 Running 0 44m >> >>> ==> n0: kube-system coredns-84bdd76f4d-k8tcc 1/1 Running 0 44m >> >>> ==> n0: kube-system coredns-84bdd76f4d-qp2xd 1/1 Running 0 44m >> >>> ==> n0: kube-system haproxy-n0 1/1 Running 0 50m >> >>> ==> n0: kube-system ingress-error-pages-7c65f766d-dn2tw 1/1 Running 0 41m >> >>> ==> n0: kube-system ingress-gtvp8 2/2 Running 0 41m >> >>> ==> n0: kube-system kubernetes-apiserver-anchor-99jhn 1/1 Running 0 42m >> >>> ==> n0: kube-system kubernetes-apiserver-n0 1/1 Running 0 41m >> >>> ==> n0: kube-system kubernetes-controller-manager-anchor-vqddp 1/1 Running 0 42m >> >>> ==> n0: kube-system kubernetes-controller-manager-n0 1/1 Running 0 41m >> >>> ==> n0: kube-system kubernetes-etcd-anchor-9jcpl 1/1 Running 0 44m >> >>> ==> n0: kube-system kubernetes-etcd-n0 1/1 Running 0 42m >> >>> ==> n0: kube-system kubernetes-proxy-2m9t2 1/1 Running 0 47m >> >>> ==> n0: kube-system kubernetes-scheduler-anchor-nl9fb 1/1 Running 0 42m >> >>> ==> n0: kube-system kubernetes-scheduler-n0 1/1 Running 0 41m >> >>> ==> n0: kube-system nfs-provisioner-7799d64d59-vtkbd 1/1 Running 0 40m >> >>> ==> n0: kube-system tiller-deploy-7d88c6f956-qwfzb 1/1 Running 0 27m >> >>> ==> n0: ucp airship-ucp-keystone-memcached-memcached-74d79d8896-vfl69 1/1 Running 0 34m >> >>> ==> n0: ucp airship-ucp-rabbitmq-rabbitmq-0 1/1 Running 0 39m >> >>> ==> n0: ucp armada-api-d5f757d5-6wl98 1/1 Running 0 15m >> >>> ==> n0: ucp armada-ks-endpoints-vl9rs 0/3 Completed 0 15m >> >>> ==> n0: ucp armada-ks-service-vpcjd 0/1 Completed 0 15m >> >>> ==> n0: ucp armada-ks-user-rv4gs 0/1 Completed 0 15m >> >>> ==> n0: ucp barbican-api-5d7b88d8ff-8dd6w 1/1 Running 0 13m >> >>> ==> n0: ucp barbican-db-init-gqvt4 0/1 Completed 0 13m >> >>> ==> n0: ucp barbican-db-sync-tqtgq 0/1 Completed 0 13m >> >>> ==> n0: ucp barbican-ks-endpoints-rwtql 0/3 >> >>> ==> n0: Completed 0 13m >> >>> ==> n0: ucp barbican-ks-service-l2h6h 0/1 Completed 0 13m >> >>> ==> n0: ucp barbican-ks-user-wwvc7 0/1 Completed 0 13m >> >>> ==> n0: ucp barbican-rabbit-init-6spq4 0/1 Completed 0 13m >> >>> ==> n0: ucp deckhand-api-78b9644f96-5686f 0/1 Running 0 11m >> >>> ==> n0: ucp deckhand-db-init-zs499 0/1 CrashLoopBackOff 7 11m <=== >> >>> ==> n0: ucp deckhand-db-sync-ct7wl 0/1 Init:0/1 0 11m >> >>> ==> n0: ucp deckhand-ks-endpoints-x4hd9 0/3 Completed 0 11m >> >>> ==> n0: ucp deckhand-ks-service-ms6n5 0/1 Completed 0 11m >> >>> ==> n0: ucp deckhand-ks-user-7fnvt 0/1 Completed 0 11m >> >>> ==> n0: ucp divingbell-apparmor-default-hth8z 1/1 Running 0 27m >> >>> ==> n0: ucp divingbell-apt-default-r965m 1/1 Running 0 27m >> >>> ==> n0: ucp divingbell-ethtool-default-ldcmc 1/1 
Running 0 27m >> >>> ==> n0: ucp divingbell-exec-default-f7h7x 1/1 Running 0 27m >> >>> ==> n0: ucp divingbell-limits-default-sp9mj 1/1 Running 0 27m >> >>> ==> n0: ucp divingbell-mounts-default-8f5a00a2-frbl2 1/1 Running 0 27m >> >>> ==> n0: ucp divingbell-perm-default-d7wxp 1/1 Running 0 27m >> >>> ==> n0: ucp divingbell-sysctl-default-c8pnp 1/1 Running 0 27m >> >>> ==> n0: ucp divingbell-uamlite-default-rfct6 1/1 Running 0 27m >> >>> ==> n0: ucp ingress-86576d6599-mdgj4 1/1 Running 0 39m >> >>> ==> n0: ucp ingress-error-pages-5c97bb46bb-7lg5l 1/1 Running 0 39m >> >>> ==> n0: ucp keystone-api-678fc44bdd-594bb 1/1 Running 0 34m >> >>> ==> n0: ucp keystone-bootstrap-rprr6 0/1 Completed 0 34m >> >>> ==> n0: ucp keystone-credential-setup-zkjgs 0/1 Completed 0 34m >> >>> ==> n0: ucp keystone-db-init-xkgxm 0/1 Completed 0 34m >> >>> ==> n0: ucp keystone-db-sync-lm6xs 0/1 Completed 0 34m >> >>> ==> n0: ucp keystone-domain-manage-9pzjq 0/1 Completed 0 34m >> >>> ==> n0: ucp keystone-fernet-setup-q7t8p 0/1 Completed 0 34m >> >>> ==> n0: ucp keystone-rabbit-init-qpvgt 0/1 Completed 0 34m >> >>> ==> n0: ucp maas-bootstrap-admin-user-8npgw 0/1 Completed 0 26m >> >>> ==> n0: ucp maas-db-init-9z86n 0/1 Completed 0 26m >> >>> ==> n0: ucp maas-db-sync-r7rkg 0/1 Completed 0 26m >> >>> ==> n0: ucp maas-export-api-key-n2gz4 0/1 Completed 1 26m >> >>> ==> n0: ucp maas-import-resources-prlml 0/1 Completed 0 26m >> >>> ==> n0: ucp maas-ingress-756f6f9d6-h65nj 2/2 Running 0 26m >> >>> ==> n0: ucp maas-ingress-errors-8686d56d98-swfg9 >> >>> ==> n0: 1/1 Running 0 26m >> >>> ==> n0: ucp maas-rack-0 1/1 Running 0 26m >> >>> ==> n0: ucp maas-region-0 1/1 Running 0 26m >> >>> ==> n0: ucp mariadb-ingress-55794d94c8-dsw5w 1/1 Running 0 39m >> >>> ==> n0: ucp mariadb-ingress-55794d94c8-jczmh 1/1 Running 0 39m >> >>> ==> n0: ucp mariadb-ingress-error-pages-85f96fbd-jrqsg 1/1 Running 0 39m >> >>> ==> n0: ucp mariadb-server-0 1/1 Running 0 39m >> >>> ==> n0: ucp postgresql-0 1/1 Running 1 39m >> >>> >> >>> >> >>> On Tue, May 7, 2019 at 6:25 PM Roman Gorshunov wrote: >> >>>> >> >>>> Hello Calvin, >> >>>> >> >>>> Try to get some kubectl logs and describe deckhand-db-init-r9jvg pod. >> >>>> kubectl describe pod deckhand-db-init-r9jvg -u ucp >> >>>> May be it would help to understand what is happening there. >> >>>> >> >>>> Thank you for trying Airship. >> >>>> >> >>>> Best regards, >> >>>> -- Roman Gorshunov >> >>>> >> >>>> On Tue, May 7, 2019 at 7:51 AM calvin whole wrote: >> >>>> > >> >>>> > Hi, >> >>>> > >> >>>> > We are trying to deploy AIIB. >> >>>> > >> >>>> > I have a physical server with Ubuntu 16.04.5 OS, installed virtualbox and vagrant. >> >>>> > The process is straightforward by following https://opendev.org/airship/in-a-bottle/ >> >>>> > We created ~/deploy directory, downloaded Vagrantfile, and do "vagrant up". >> >>>> > >> >>>> > However it stuck in the error below: >> >>>> > deckhand-db-init-r9jvg 0/1 CrashLoopBackOff 16 >> >>>> > >> >>>> > Could anyone help to resolve this? Many thanks in advance. 
>> >>>> > >> >>>> > Sincerely, >> >>>> > Calvin >> >>>> > >> >>>> > ==> n0: NAMESPACE NAME READY STATUS RESTARTS AGE >> >>>> > ==> n0: kube-system auxiliary-etcd-n0 3/3 Running 0 1h >> >>>> > ==> n0: kube-system bootstrap-armada-n0 4/4 Running 0 1h >> >>>> > ==> n0: kube-system calico-etcd-anchor-5tqhk 1/1 Running 0 1h >> >>>> > ==> n0: kube-system calico-etcd-n0 1/1 Running 0 1h >> >>>> > ==> n0: kube-system calico-kube-controllers-56c54d8cf8-5ssl6 1/1 Running 0 1h >> >>>> > ==> n0: kube-system calico-node-pbsxh 1/1 Running 0 1h >> >>>> > ==> n0: kube-system calico-settings-lzpk9 0/1 Completed 0 1h >> >>>> > ==> n0: kube-system coredns-84bdd76f4d-6cwnl 1/1 Running 0 1h >> >>>> > ==> n0: kube-system coredns-84bdd76f4d-d4p8c 1/1 Running 0 1h >> >>>> > ==> n0: kube-system coredns-84bdd76f4d-xrknz 1/1 Running 0 1h >> >>>> > ==> n0: kube-system haproxy-n0 1/1 Running 0 1h >> >>>> > ==> n0: kube-system ingress-9pkmx 2/2 Running 0 1h >> >>>> > ==> n0: kube-system ingress-error-pages-7c65f766d-2pqfx 1/1 Running 0 1h >> >>>> > ==> n0: kube-system kubernetes-apiserver-anchor-hszbf 1/1 Running 0 1h >> >>>> > ==> n0: kube-system kubernetes-apiserver-n0 1/1 Running 0 1h >> >>>> > ==> n0: kube-system kubernetes-controller-manager-anchor-h49vz 1/1 Running 0 1h >> >>>> > ==> n0: kube-system kubernetes-controller-manager-n0 1/1 Running 0 1h >> >>>> > ==> n0: kube-system kubernetes-etcd-anchor-nnjbb 1/1 Running 0 1h >> >>>> > ==> n0: kube-system kubernetes-etcd-n0 1/1 Running 0 1h >> >>>> > ==> n0: kube-system kubernetes-proxy-vgzjp 1/1 Running 0 1h >> >>>> > ==> n0: kube-system kubernetes-scheduler-anchor-bq2gk 1/1 Running 0 1h >> >>>> > ==> n0: kube-system kubernetes-scheduler-n0 1/1 Running 0 1h >> >>>> > ==> n0: kube-system nfs-provisioner-7799d64d59-jx7hq 1/1 Running 0 1h >> >>>> > ==> n0: kube-system tiller-deploy-7d88c6f956-d9kzg 1/1 Running 0 1h >> >>>> > ==> n0: ucp airship-ucp-keystone-memcached-memcached-74d79d8896-q9wqx 1/1 Running 0 1h >> >>>> > ==> n0: ucp airship-ucp-rabbitmq-rabbitmq-0 1/1 Running 0 1h >> >>>> > ==> n0: ucp armada-api-d5f757d5-d9l9h 1/1 Running 0 1h >> >>>> > ==> n0: ucp armada-ks-endpoints-qwbtg 0/3 Completed 0 1h >> >>>> > ==> n0: ucp armada-ks-service-lg8kq 0/1 Completed 0 1h >> >>>> > ==> n0: ucp armada-ks-user-g2j6v 0/1 Completed 0 1h >> >>>> > ==> n0: ucp barbican-api-84665dd99d-qv5fz 1/1 Running 0 1h >> >>>> > ==> n0: ucp barbican-db-init-ndx58 0/1 Completed 0 1h >> >>>> > ==> n0: ucp barbican-db-sync-sh7c9 0/1 Completed 0 1h >> >>>> > ==> n0: ucp barbican-ks-endpoints-bv7xv 0/3 Completed 0 1h >> >>>> > ==> n0: ucp barbican-ks-service-46hjk 0/1 Completed 0 1h >> >>>> > ==> n0: ucp barbican-ks-user-6df74 0/1 Completed 0 1h >> >>>> > ==> n0: ucp barbican-rabbit-init-gnvfl 0/1 Completed 0 1h >> >>>> > ==> n0: ucp deckhand-api-6cd9c4479d-wc5cw 0/1 Running 0 1h >> >>>> > ==> n0: ucp deckhand-db-init-r9jvg 0/1 CrashLoopBackOff 17 1h <===== >> >>>> > ==> n0: ucp deckhand-db-sync-llstv 0/1 Init:0/1 0 1h >> >>>> > ==> n0: ucp deckhand-ks-endpoints-4gqfj 0/3 Completed 0 1h >> >>>> > ==> n0: ucp deckhand-ks-service-c6gbq 0/1 Completed 0 1h >> >>>> > ==> n0: ucp deckhand-ks-user-5skng 0/1 Completed 0 1h >> >>>> > ==> n0: ucp divingbell-apparmor-default-lkcl6 1/1 Running 0 1h >> >>>> > ==> n0: ucp divingbell-apt-default-7jgtv 1/1 Running 0 1h >> >>>> > ==> n0: ucp divingbell-ethtool-default-tm2w4 1/1 Running 0 1h >> >>>> > ==> n0: ucp divingbell-exec-default-l45m8 1/1 Running 0 1h >> >>>> > ==> n0: ucp divingbell-limits-default-q84pr 1/1 Running 0 1h >> >>>> > ==> n0: ucp 
divingbell-mounts-default-29420945-nrdsz 1/1 Running 0 1h >> >>>> > ==> n0: ucp divingbell-perm-default-wdgld 1/1 Running 0 1h >> >>>> > ==> n0: ucp divingbell-sysctl-default-t7f2m 1/1 Running 0 1h >> >>>> > ==> n0: ucp divingbell-uamlite-default-fc4jx 1/1 Running 0 1h >> >>>> > ==> n0: ucp ingress-86576d6599-q8ng4 1/1 Running 0 1h >> >>>> > ==> n0: ucp ingress-error-pages-5c97bb46bb-pjz9m 1/1 Running 0 1h >> >>>> > ==> n0: ucp keystone-api-678fc44bdd-ncxc2 1/1 Running 0 1h >> >>>> > ==> n0: ucp keystone-bootstrap-28l4g 0/1 Completed 0 1h >> >>>> > ==> n0: ucp keystone-credential-setup-rq5d4 0/1 Completed 0 1h >> >>>> > ==> n0: ucp keystone-db-init-z8x4w 0/1 Completed 0 1h >> >>>> > ==> n0: ucp keystone-db-sync-9hvb5 0/1 Completed 0 1h >> >>>> > ==> n0: ucp keystone-domain-manage-tzcnf 0/1 Completed 0 1h >> >>>> > ==> n0: ucp keystone-fernet-setup-bzdpb 0/1 Completed 0 1h >> >>>> > ==> n0: ucp keystone-rabbit-init-cxpc6 0/1 Completed 0 1h >> >>>> > ==> n0: ucp maas-bootstrap-admin-user-g99rl 0/1 Completed 0 1h >> >>>> > ==> n0: ucp maas-db-init-h4llm 0/1 Completed 0 1h >> >>>> > ==> n0: ucp maas-db-sync-6tsqj 0/1 Completed 0 1h >> >>>> > ==> n0: ucp maas-export-api-key-c8rdb 0/1 Completed 0 1h >> >>>> > ==> n0: ucp maas-import-resources-hhq7f 0/1 Completed 1 1h >> >>>> > ==> n0: ucp maas-ingress-756f6f9d6-dpcp9 2/2 Running 0 1h >> >>>> > ==> n0: ucp maas-ingress-errors-8686d56d98-jr6xx 1/1 Running 0 1h >> >>>> > ==> n0: u >> >>>> > ==> n0: cp maas-rack-0 1/1 Running 0 1h >> >>>> > ==> n0: ucp maas-region-0 1/1 Running 0 1h >> >>>> > ==> n0: ucp mariadb-ingress-55794d94c8-mhjjf 1/1 Running 0 1h >> >>>> > ==> n0: ucp mariadb-ingress-55794d94c8-vglbv 1/1 Running 0 1h >> >>>> > ==> n0: ucp mariadb-ingress-error-pages-85f96fbd-28cdv 1/1 Running 0 1h >> >>>> > ==> n0: ucp mariadb-server-0 1/1 Running 0 1h >> >>>> > ==> n0: ucp postgresql-0 1/1 Running 1 1h >> >>>> > _______________________________________________ >> >>>> > Airship-discuss mailing list >> >>>> > Airship-discuss at lists.airshipit.org >> >>>> > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss From ajs at sheffieldfamily.net Tue May 14 13:11:51 2019 From: ajs at sheffieldfamily.net (Aaron Sheffield) Date: Tue, 14 May 2019 08:11:51 -0500 Subject: [Airship-discuss] Spyglass/Pegleg core reviewer nominations Message-ID: <402BC49A-00C1-4AC7-B1FD-F8F502753991@att.com> +1 On 5/13/19, 11:44, "MCEUEN, MATT" wrote: Airship team, In line with our discussion at the PTG, I would like to nominate two project-specific core reviewers: Alex Hughes (alexanderhughes): Spyglass, Pegleg projects Ian Pittwood (ian-pittwood): Spyglass project As discussed, this will seed the core teams for these relatively young projects with folks who are actively focused on implementing them. Existing Airship core reviewers remain grandfathered in, but are encouraged to bow out of coreship if they deem it appropriate. Following OpenStack norms, current Airship core reviewers have 7 days (till EOD 5/20) to respond to this email with a +1 or -1 vote. Please consider this my +1. A simple +1/-1 will be interpreted as being for both of the folks/repos above; if you have more specific votes please specify in your response. 
Thanks, Matt McEuen _______________________________________________ Airship-discuss mailing list Airship-discuss at lists.airshipit.org http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss From rp2723 at att.com Tue May 14 13:51:51 2019 From: rp2723 at att.com (PACHECO, RODOLFO J) Date: Tue, 14 May 2019 13:51:51 +0000 Subject: [Airship-discuss] Airship - Open Design Call - Tuesdays In-Reply-To: References: <99088997CCAD0C4BA20008FD094344963C243945@MISOUT7MSGUSRDI.ITServices.sbc.com> Message-ID: <6FEDCCC2-C3D2-4390-BE03-8389E9E4757A@att.com> Thanks, I add it, Regards Rodolfo Pacheco Home/Office 732 5337671 From: Jaesuk Ahn Date: Tuesday, May 14, 2019 at 2:41 AM To: "PACHECO, RODOLFO J" Cc: "airship-discuss at lists.airshipit.org" Subject: Re: [Airship-discuss] Airship - Open Design Call - Tuesdays Hi, Rodolfo, There is no password information for May 9th meeting recording. Cloud you please provide one for us? Thank you On Mon, May 13, 2019 at 10:37 PM PACHECO, RODOLFO J > wrote: When: Occurs every Tuesday from 9:00 AM to 10:30 AM effective 5/14/2019 until 1/1/2020. (UTC-05:00) Eastern Time (US & Canada) Where: https://attcorp.webex.com/meet/rp2723 *~*~*~*~*~*~*~*~*~* REMINDER –Airship Design Call – New Added Call Based on the doodle votes the meeting length will be 90 mins Join us to continue Airship 2.0 Design discussions Etherpad for the Airship Open Design discussion https://etherpad.openstack.org/p/Airship_OpenDesignDiscussions Storyboard in flight Specs https://storyboard.openstack.org/#!/project/openstack/airship-specs Github Airship Specs https://github.com/openstack/airship-specs/tree/master/specs Inflight/reviewing specs https://review.openstack.org/#/q/status:open+airship-specs __________________________________________ Join by video system i Dial rp2723 at attcorp.webex.com and enter your host PIN 02083790. You can also dial 173.243.2.68 and enter your meeting number. Join by phone 1-844-517-1415 United States Toll Free 1-618-230-6039 United States Toll Access code: 733 333 726 Host PIN: 02083790 _______________________________________________ Airship-discuss mailing list Airship-discuss at lists.airshipit.org http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss -- Jaesuk Ahn, Ph.D. Software R&D Center, SK Telecom -------------- next part -------------- An HTML attachment was scrubbed... URL: From ca846m at att.com Tue May 14 16:56:58 2019 From: ca846m at att.com (ANDERSON, CRAIG) Date: Tue, 14 May 2019 16:56:58 +0000 Subject: [Airship-discuss] Spyglass/Pegleg core reviewer nominations In-Reply-To: <402BC49A-00C1-4AC7-B1FD-F8F502753991@att.com> References: <402BC49A-00C1-4AC7-B1FD-F8F502753991@att.com> Message-ID: +1 -----Original Message----- From: Aaron Sheffield [mailto:ajs at sheffieldfamily.net] Sent: Tuesday, May 14, 2019 6:12 AM To: airship-discuss at lists.airshipit.org Subject: Re: [Airship-discuss] Spyglass/Pegleg core reviewer nominations +1 On 5/13/19, 11:44, "MCEUEN, MATT" wrote: Airship team, In line with our discussion at the PTG, I would like to nominate two project-specific core reviewers: Alex Hughes (alexanderhughes): Spyglass, Pegleg projects Ian Pittwood (ian-pittwood): Spyglass project As discussed, this will seed the core teams for these relatively young projects with folks who are actively focused on implementing them. Existing Airship core reviewers remain grandfathered in, but are encouraged to bow out of coreship if they deem it appropriate. 
Following OpenStack norms, current Airship core reviewers have 7 days (till EOD 5/20) to respond to this email with a +1 or -1 vote. Please consider this my +1. A simple +1/-1 will be interpreted as being for both of the folks/repos above; if you have more specific votes please specify in your response. Thanks, Matt McEuen _______________________________________________ Airship-discuss mailing list Airship-discuss at lists.airshipit.org https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.airshipit.org_cgi-2Dbin_mailman_listinfo_airship-2Ddiscuss&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=aimn2OylFgog_5_aS85wtQ&m=fcPzXjeqq7G4mS5mRx-lB-DZbH0CTzOpooYFFE43cRI&s=f96gV0WtTvv0RRBhoitK6TjeGY2nvG8ZU40GABhVa9o&e= _______________________________________________ Airship-discuss mailing list Airship-discuss at lists.airshipit.org https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.airshipit.org_cgi-2Dbin_mailman_listinfo_airship-2Ddiscuss&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=aimn2OylFgog_5_aS85wtQ&m=fcPzXjeqq7G4mS5mRx-lB-DZbH0CTzOpooYFFE43cRI&s=f96gV0WtTvv0RRBhoitK6TjeGY2nvG8ZU40GABhVa9o&e= From kennelson11 at gmail.com Tue May 14 23:07:32 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 14 May 2019 16:07:32 -0700 Subject: [Airship-discuss] PTG Team Photos! Message-ID: Hey :) Here's a link to the dropbox[1] with the team photos from the PTG. Enjoy! -Kendall [1] https://www.dropbox.com/sh/hbftzdhlaptwet8/AAC8zADDqnO_u3Y0iqk4k8vta?dl=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From allison at openstack.org Wed May 15 15:44:50 2019 From: allison at openstack.org (Allison Price) Date: Wed, 15 May 2019 10:44:50 -0500 Subject: [Airship-discuss] Open Infrastructure Summit CFP Open - Deadline: July 2 Message-ID: <6E334C53-ABCE-44A6-9E16-7C2DF35F2645@openstack.org> Hi Airship community, The Call for Presentations (CFP) [1] for the Open Infrastructure Summit in Shanghai (November 4 - 6, 2019) [2] is open! Review the list of Tracks [3], and submit your presentations, panels, and workshops before July 2, 2019. Sessions will be presented in both Mandarin and English, so you may submit your presentation in either language. The content submission process for the Forum and Project Teams Gathering will be managed separately in the upcoming months. SUBMIT YOUR PRESENTATION [1] - Deadline July 2, 2019 at 11:59pm PT (July 3 at 6:59 UTC) Want to help shape the content for the Summit? The Programming Committee helps select sessions from the CFP for the Summit schedule. Nominate yourself or someone else for the Programming Committee [4] before May 20, 2019. Registration and Sponsorship • Shanghai Summit + PTG registration is available in the following currencies: • Register in USD [5] • Register in RMB (includes fapiao) [6] • Sponsorship opportunities [7] Please email speakersupport at openstack.org with any questions or feedback. Thanks, Allison [1] https://cfp.openstack.org/ [2] https://www.openstack.org/summit/shanghai-2019 [3] https://www.openstack.org/summit/shanghai-2019/summit-categories [4] http://bit.ly/ShanghaiProgrammingCommittee [5] https://app.eventxtra.link/registrations/6640a923-98d7-44c7-a623-1e2c9132b402?locale=en [6] https://app.eventxtra.link/registrations/f564960c-74f6-452d-b0b2-484386d33eb6?locale=en [7] https://www.openstack.org/summit/shanghai-2019/sponsors/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From l.legal.astellia at gmail.com Fri May 17 10:04:55 2019 From: l.legal.astellia at gmail.com (Loic Le Gal) Date: Fri, 17 May 2019 12:04:55 +0200 Subject: [Airship-discuss] CoreDNS problem with liveness/readiness probe Message-ID: Hello Airshipers,

After successfully deploying a SingleNode config, I'm trying to deploy the multi-nodes-gate site of Airship-in-a-bottle. The first steps are OK (Genesis creation and run on node n0), but the undercloud deployment fails while creating the coredns pods in the kube-system namespace (endless CrashLoop). I fixed this problem manually, but I want to fix it automatically before launching the whole Airship deployment (i.e. in an OpenStack stack), because the initial coredns failure leads to side effects that require a new deployment...

After investigation, it turns out that the problem comes from a timing issue (maybe from the infra I use), and I managed to fix it manually by modifying the ConfigMap used by CoreDNS's probe and recreating the pods. The problem was in the probe.sh file: this command sometimes fails:

==> dig +trace +time=2 +tries=1 att.com @127.0.0.1

Increasing the time and retries fixes the problem:

root at n0:/home/ubuntu# kubectl exec coredns-6494d65b66-dk8g9 -n kube-system -- dig +trace +time=2 +tries=1 att.com @127.0.0.1; echo $?
; <<>> DiG 9.11.2-P1 <<>> +trace +time=2 +tries=1 att.com @127.0.0.1
;; global options: +cmd
;; connection timed out; no servers could be reached
command terminated with exit code 9
9

root at n0:/home/ubuntu# kubectl exec coredns-6494d65b66-dk8g9 -n kube-system -- dig +trace +time=10 +tries=5 att.com @127.0.0.1; echo $?
; <<>> DiG 9.11.2-P1 <<>> +trace +time=10 +tries=5 att.com @127.0.0.1
;; global options: +cmd
;; Received 40 bytes from 127.0.0.1#53(127.0.0.1) in 5000 ms
0

(Inception inside!) I managed to patch the Genesis stage so that the genesis.sh script patches the template file /etc/genesis/armada/assets/charts/coredns/templates/bin/_probe.sh.tpl, but unfortunately this file doesn't seem to be the one used to create the pods' ConfigMap.

Can someone tell me the easiest way to patch this readiness probe after a Git clone and before launching the Airship deployment?

BR, Loïc

-------------- next part -------------- An HTML attachment was scrubbed... URL: From MM9745 at att.com Fri May 17 16:02:10 2019 From: MM9745 at att.com (MCEUEN, MATT) Date: Fri, 17 May 2019 16:02:10 +0000 Subject: [Airship-discuss] CoreDNS problem with liveness/readiness probe In-Reply-To: References: Message-ID: <7C64A75C21BB8D43BD75BB18635E4D897094C263@MOSTLS1MSGUSRFF.ITServices.sbc.com> Hey Loic,

Good to hear that your multinode testing is progressing well, and great catch on the coredns probe! I think it’s a good idea to err on the side of caution here. Would you mind submitting a patchset to promenade, making your change here: https://opendev.org/airship/promenade/src/branch/master/charts/coredns/templates/bin/_probe.sh.tpl#L15 That would allow folks to easily weigh in on whether those are the right values?
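To make the discussion concrete, here is a minimal sketch of what the adjusted probe body could look like with your values (a sketch only; the actual _probe.sh.tpl template may contain more than this single check):

#!/bin/sh
# Sketch: a readiness check using the longer timeout/retry values from
# Loic's testing above; not the verbatim upstream template.
exec dig +trace +time=10 +tries=5 att.com @127.0.0.1 >/dev/null

Whether a 10-second timeout with 5 retries is the right trade-off for a readiness probe is exactly the kind of question review can settle.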
The patchset will also allow you to easily pull in the change to an AiaB multi-node deployment – you can make a local edit to the AiaB versions.yaml file: https://opendev.org/airship/in-a-bottle/src/branch/master/deployment_files/global/v1.0demo/software/config/versions.yaml

In versions.yaml, find data.charts.coredns, and make these changes:
Change location to: https://review.opendev.org/airship/promenade
Change reference to: the “refs/changes/x/y/z” reference from your promenade patchset

Thanks for catching this – it would be awesome if you can resolve it for others. Let me know if any questions come up or if you need any pointers around the gerrit submit/review process.

Thanks, Matt

From: Loic Le Gal Sent: Friday, May 17, 2019 5:05 AM To: airship-discuss at lists.airshipit.org Subject: [Airship-discuss] CoreDNS problem with liveness/readiness probe

Hello Airshipers,

After successfully deploying a SingleNode config, I'm trying to deploy the multi-nodes-gate site of Airship-in-a-bottle. The first steps are OK (Genesis creation and run on node n0), but the undercloud deployment fails while creating the coredns pods in the kube-system namespace (endless CrashLoop). I fixed this problem manually, but I want to fix it automatically before launching the whole Airship deployment (i.e. in an OpenStack stack), because the initial coredns failure leads to side effects that require a new deployment...

After investigation, it turns out that the problem comes from a timing issue (maybe from the infra I use), and I managed to fix it manually by modifying the ConfigMap used by CoreDNS's probe and recreating the pods. The problem was in the probe.sh file: this command sometimes fails:

==> dig +trace +time=2 +tries=1 att.com @127.0.0.1

Increasing the time and retries fixes the problem:

root at n0:/home/ubuntu# kubectl exec coredns-6494d65b66-dk8g9 -n kube-system -- dig +trace +time=2 +tries=1 att.com @127.0.0.1; echo $?
; <<>> DiG 9.11.2-P1 <<>> +trace +time=2 +tries=1 att.com @127.0.0.1
;; global options: +cmd
;; connection timed out; no servers could be reached
command terminated with exit code 9
9

root at n0:/home/ubuntu# kubectl exec coredns-6494d65b66-dk8g9 -n kube-system -- dig +trace +time=10 +tries=5 att.com @127.0.0.1; echo $?
; <<>> DiG 9.11.2-P1 <<>> +trace +time=10 +tries=5 att.com @127.0.0.1
;; global options: +cmd
;; Received 40 bytes from 127.0.0.1#53(127.0.0.1) in 5000 ms
0

(Inception inside!) I managed to patch the Genesis stage so that the genesis.sh script patches the template file /etc/genesis/armada/assets/charts/coredns/templates/bin/_probe.sh.tpl, but unfortunately this file doesn't seem to be the one used to create the pods' ConfigMap.

Can someone tell me the easiest way to patch this readiness probe after a Git clone and before launching the Airship deployment?

BR, Loïc

-------------- next part -------------- An HTML attachment was scrubbed... URL: From cheng1.li at intel.com Mon May 20 12:07:25 2019 From: cheng1.li at intel.com (Li, Cheng1) Date: Mon, 20 May 2019 12:07:25 +0000 Subject: [Airship-discuss] Adding collectd for airship monitoring Message-ID: Hi Airshippers,

I notice we have a plan for grafana/monitoring integration in 2.0 [1], and I would like to propose enabling collectd as a data collector (for host resource usage monitoring, among other resource monitoring).

Not sure if you have any concerns about collectd; is there anywhere to discuss the monitoring feature?
[1] https://etherpad.openstack.org/p/airship-ptg-train Thanks, Cheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From paye600 at gmail.com Mon May 20 13:36:02 2019 From: paye600 at gmail.com (Roman Gorshunov) Date: Mon, 20 May 2019 15:36:02 +0200 Subject: [Airship-discuss] airship-in-a-bottle deployment issue In-Reply-To: References: Message-ID: Calvin, No, I don't know why does it fail. `kubectl -n ucp describe pod mariadb-server-0` would probably shed some light. -- Roman On Mon, May 20, 2019 at 9:08 AM calvin whole wrote: > > Hi Roman, > > I tried to mimic your AIAB environment, but the deployment still failed. This time the mariadb failed to become "readiness" state. > Do you have any thoughts on this? > > Which direction should I look into? > > Thanks, > Calvin > From dangtrinhnt at gmail.com Tue May 21 05:36:00 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 21 May 2019 14:36:00 +0900 Subject: [Airship-discuss] OpenInfra Days Vietnam 2019 (Hanoi) - Call For Presentations Message-ID: Hello, Hope you're doing well :) The OpenInfra Days Vietnam 2019 [1] is looking for speakers in many different topics (e.g., container, CI, deployment, edge computing, etc.). If you would love to have a taste of Hanoi, the capital of Vietnam, please join us this one-day event and submit your presentation [2]. *- Date:* 24 AUGUST 2019 *- Location:* INTERCONTINENTAL HANOI LANDMARK72, HANOI, VIETNAM Especially this time, we're honored to have the Upstream Institute Training [3] hosted by the OpenStack Foundation on the next day (25 August 2019). [1] http://day.vietopeninfra.org/ [2] https://forms.gle/iiRBxxyRv1mGFbgi7 [3] https://docs.openstack.org/upstream-training/upstream-training-content.html See you in Hanoi! Bests, On behalf of the VietOpenInfra Group. -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From MM9745 at att.com Tue May 21 17:20:25 2019 From: MM9745 at att.com (MCEUEN, MATT) Date: Tue, 21 May 2019 17:20:25 +0000 Subject: [Airship-discuss] Spyglass/Pegleg core reviewer nominations In-Reply-To: <7C64A75C21BB8D43BD75BB18635E4D897092FA16@MOSTLS1MSGUSRFF.ITServices.sbc.com> References: <7C64A75C21BB8D43BD75BB18635E4D897092FA16@MOSTLS1MSGUSRFF.ITServices.sbc.com> Message-ID: <7C64A75C21BB8D43BD75BB18635E4D8970950FDD@MOSTLS1MSGUSRFF.ITServices.sbc.com> With unanimous vote, I'd like to welcome Alex and Ian to the core reviewer teams of their respective Airship projects! I have put in a patchset to request project-specific ACLs for Spyglass and Pegleg. I'm looking forward to continuing to build Airship alongside you guys!! Matt -----Original Message----- From: MCEUEN, MATT Sent: Monday, May 13, 2019 11:45 AM To: airship-discuss at lists.airshipit.org Subject: Spyglass/Pegleg core reviewer nominations Airship team, In line with our discussion at the PTG, I would like to nominate two project-specific core reviewers: Alex Hughes (alexanderhughes): Spyglass, Pegleg projects Ian Pittwood (ian-pittwood): Spyglass project As discussed, this will seed the core teams for these relatively young projects with folks who are actively focused on implementing them. Existing Airship core reviewers remain grandfathered in, but are encouraged to bow out of coreship if they deem it appropriate. Following OpenStack norms, current Airship core reviewers have 7 days (till EOD 5/20) to respond to this email with a +1 or -1 vote. Please consider this my +1. 
A simple +1/-1 will be interpreted as being for both of the folks/repos above; if you have more specific votes please specify in your response. Thanks, Matt McEuen From rp2723 at att.com Tue May 21 19:35:32 2019 From: rp2723 at att.com (PACHECO, RODOLFO J) Date: Tue, 21 May 2019 19:35:32 +0000 Subject: [Airship-discuss] Airship - Open Design Call - Tuesdays Message-ID: <99088997CCAD0C4BA20008FD094344963C26858E@MISOUT7MSGUSRDI.ITServices.sbc.com> When: Tuesday, May 21, 2019 9:00 AM-10:30 AM. (UTC-05:00) Eastern Time (US & Canada) Where: https://attcorp.webex.com/meet/rp2723 *~*~*~*~*~*~*~*~*~* REMINDER –Airship Design Call – New Added Call Based on the doodle votes the meeting length will be 90 mins Join us to continue Airship 2.0 Design discussions Etherpad for the Airship Open Design discussion https://etherpad.openstack.org/p/Airship_OpenDesignDiscussions Storyboard in flight Specs https://storyboard.openstack.org/#!/project/openstack/airship-specs Github Airship Specs https://github.com/openstack/airship-specs/tree/master/specs Inflight/reviewing specs https://review.openstack.org/#/q/status:open+airship-specs __________________________________________ Join by video system i Dial rp2723 at attcorp.webex.com and enter your host PIN 02083790. You can also dial 173.243.2.68 and enter your meeting number. Join by phone 1-844-517-1415 United States Toll Free 1-618-230-6039 United States Toll Access code: 733 333 726 Host PIN: 02083790 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2671 bytes Desc: not available URL: From rp2723 at att.com Tue May 21 19:35:48 2019 From: rp2723 at att.com (PACHECO, RODOLFO J) Date: Tue, 21 May 2019 19:35:48 +0000 Subject: [Airship-discuss] Airship - Open Design Call - Thursdays Message-ID: <99088997CCAD0C4BA20008FD094344963C2685DA@MISOUT7MSGUSRDI.ITServices.sbc.com> When: Thursday, May 23, 2019 11:00 AM-12:30 PM. (UTC-05:00) Eastern Time (US & Canada) Where: https://attcorp.webex.com/meet/rp2723 *~*~*~*~*~*~*~*~*~* REMINDER –Airship Design Call Based on the doodle votes the meeting length will be 90 mins Join us to continue Airship 2.0 Design discussions Etherpad for the Airship Open Design discussion https://etherpad.openstack.org/p/Airship_OpenDesignDiscussions Storyboard in flight Specs https://storyboard.openstack.org/#!/project/openstack/airship-specs Github Airship Specs https://github.com/openstack/airship-specs/tree/master/specs Inflight/reviewing specs https://review.openstack.org/#/q/status:open+airship-specs __________________________________________ Join by video system i Dial rp2723 at attcorp.webex.com and enter your host PIN 02083790. You can also dial 173.243.2.68 and enter your meeting number. Join by phone 1-844-517-1415 United States Toll Free 1-618-230-6039 United States Toll Access code: 733 333 726 Host PIN: 02083790 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2750 bytes Desc: not available URL: From calvinwhole at gmail.com Wed May 22 08:51:09 2019 From: calvinwhole at gmail.com (calvin whole) Date: Wed, 22 May 2019 16:51:09 +0800 Subject: [Airship-discuss] Airship-in-a-bottle (AIAB) deployment question Message-ID: Hi, We have tried to deploy AIAB by both methods described in the below available info from Airship site. 
Both methods failed to complete the deployment process, and the results also differ.

1. With the "./airship-in-a-bottle.sh" method, it got stuck in the "mariadb not ready" state.

2. With the "vagrant up" method, we tried the libvirt and virtualbox providers. With the virtualbox provider we can "sometimes" reach the "Genesis complete" step, but the Openstack step still failed to complete. With the libvirt provider we always got stuck at "mariadb not ready" or "deckhand-db-init".

So it seems to us that the execution outcome is sensitive to the virtualization environment. For this reason, the documentation below may need to provide more details about the required environment setup.

Can someone provide us more detailed info about the AIAB execution requirements? Or, does Airship have a QA group that we can contact for help?

Many thanks, Calvin

===============================================================================
To get started, run the following in a fresh Ubuntu 16.04 VM (minimum 4vCPU/20GB RAM/32GB disk). This will deploy Airship and Openstack Helm (OSH):

sudo -i
mkdir -p /root/deploy && cd "$_"
git clone https://git.openstack.org/openstack/airship-in-a-bottle
cd /root/deploy/airship-in-a-bottle/manifests/dev_single_node
./airship-in-a-bottle.sh

Or, alternatively, if you have Vagrant installed, just run the following (only libvirt/kvm hypervisor is tested, but vagrant box supports VMware Desktop/Workstation/Fusion, Parallels, and Hyper-V):

curl -O https://git.airshipit.org/cgit/airship-in-a-bottle/plain/Vagrantfile
vagrant up

-------------- next part -------------- An HTML attachment was scrubbed... URL: From MM9745 at att.com Wed May 22 13:52:55 2019 From: MM9745 at att.com (MCEUEN, MATT) Date: Wed, 22 May 2019 13:52:55 +0000 Subject: [Airship-discuss] Airship-in-a-bottle (AIAB) deployment question In-Reply-To: References: Message-ID: <7C64A75C21BB8D43BD75BB18635E4D89709515E0@MOSTLS1MSGUSRFF.ITServices.sbc.com> Hi Calvin,

Have you taken a look at the root cause of why mariadb is not ready, e.g. as Roman suggested, running `kubectl -n ucp describe pod mariadb-server-0` on the genesis node?

Are you running at the minimum 20GB of RAM? I’m wondering whether airship-in-a-bottle has outgrown that. If you’re able to run with additional resources it would be good to see if that changes the result.

We don’t have a different QA mailing list, but you’re welcome to join the #airshipit channel on Freenode IRC as well and share logs / ask questions there!

Thanks, Matt

From: calvin whole Sent: Wednesday, May 22, 2019 3:51 AM To: airship-discuss at lists.airshipit.org; calvin whole ; Roman Gorshunov Subject: [Airship-discuss] Airship-in-a-bottle (AIAB) deployment question

Hi,

We have tried to deploy AIAB by both methods described in the below available info from Airship site. Both methods failed to complete the deployment process, and the results also differ.

1. With the "./airship-in-a-bottle.sh" method, it got stuck in the "mariadb not ready" state.

2. With the "vagrant up" method, we tried the libvirt and virtualbox providers. With the virtualbox provider we can "sometimes" reach the "Genesis complete" step, but the Openstack step still failed to complete. With the libvirt provider we always got stuck at "mariadb not ready" or "deckhand-db-init".

So it seems to us that the execution outcome is sensitive to the virtualization environment. For this reason, the documentation below may need to provide more details about the required environment setup.

Can someone provide us more detailed info about the AIAB execution requirements?
Or, does Airship have a QA group that we can contact for help?

Many thanks, Calvin

===============================================================================
To get started, run the following in a fresh Ubuntu 16.04 VM (minimum 4vCPU/20GB RAM/32GB disk). This will deploy Airship and Openstack Helm (OSH):

sudo -i
mkdir -p /root/deploy && cd "$_"
git clone https://git.openstack.org/openstack/airship-in-a-bottle
cd /root/deploy/airship-in-a-bottle/manifests/dev_single_node
./airship-in-a-bottle.sh

Or, alternatively, if you have Vagrant installed, just run the following (only libvirt/kvm hypervisor is tested, but vagrant box supports VMware Desktop/Workstation/Fusion, Parallels, and Hyper-V):

curl -O https://git.airshipit.org/cgit/airship-in-a-bottle/plain/Vagrantfile
vagrant up

_______________________________________________ Airship-discuss mailing list Airship-discuss at lists.airshipit.org http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From eli at mirantis.com Wed May 22 14:28:41 2019 From: eli at mirantis.com (Evgeny L) Date: Wed, 22 May 2019 07:28:41 -0700 Subject: [Airship-discuss] Adding collectd for airship monitoring In-Reply-To: References: Message-ID: Hi,

For getting host metrics we currently use prometheus' node-exporter [1]; do you have some specific use cases in mind that require collectd instead?

There are a couple of places where this can be discussed: we have an Airship weekly meeting on IRC [2], and there are OpenDesign calls on webex [3] twice a week.

Thanks,

[1] https://github.com/prometheus/node_exporter
[2] http://eavesdrop.openstack.org/#Airship_Team_Meeting
[3] https://etherpad.openstack.org/p/Airship_OpenDesignDiscussions

On Mon, May 20, 2019 at 5:07 AM Li, Cheng1 wrote:
> Hi Airshippers,
>
> I notice we have a plan for grafana/monitoring integration in 2.0 [1], and I
> would like to propose enabling collectd as a data collector (for host
> resource usage monitoring, among other resource monitoring).
>
> Not sure if you have any concerns about collectd; is there anywhere to discuss
> the monitoring feature?
>
> [1] https://etherpad.openstack.org/p/airship-ptg-train
>
> Thanks,
> Cheng
>
> _______________________________________________
> Airship-discuss mailing list
> Airship-discuss at lists.airshipit.org
> http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss

-------------- next part -------------- An HTML attachment was scrubbed... URL: From eli at mirantis.com Wed May 22 16:13:16 2019 From: eli at mirantis.com (Evgeny L) Date: Wed, 22 May 2019 09:13:16 -0700 Subject: [Airship-discuss] Airship-in-a-bottle (AIAB) deployment question In-Reply-To: <7C64A75C21BB8D43BD75BB18635E4D89709515E0@MOSTLS1MSGUSRFF.ITServices.sbc.com> References: <7C64A75C21BB8D43BD75BB18635E4D89709515E0@MOSTLS1MSGUSRFF.ITServices.sbc.com> Message-ID: I have seen other people reporting a similar problem; it looks like an environment-specific problem (we don't see it in our gates), and it may be related [1][2] to the nfs-provisioner that we use for the airship-in-a-bottle installation.

[1] https://review.opendev.org/#/c/660544/
[2] https://review.opendev.org/#/c/659787/

On Wed, May 22, 2019 at 6:53 AM MCEUEN, MATT wrote:
> Hi Calvin,
>
> Have you taken a look at the root cause of why mariadb is not ready, e.g.
> as Roman suggested, running `kubectl -n ucp describe pod mariadb-server-0`
> on the genesis node?
>
> Are you running at the minimum 20GB of RAM? I’m wondering whether
> airship-in-a-bottle has outgrown that. If you’re able to run with
> additional resources it would be good to see if that changes the result.
> > > > We don’t have a different QA mailing list, but you’re welcome to join the > #airshipit channel on Freenode IRC as well and share logs / ask questions > there! > > > > Thanks, > > Matt > > > > *From:* calvin whole > *Sent:* Wednesday, May 22, 2019 3:51 AM > *To:* airship-discuss at lists.airshipit.org; calvin whole < > calvinwhole at gmail.com>; Roman Gorshunov > *Subject:* [Airship-discuss] Airship-in-a-bottle (AIAB) deployment > question > > > > Hi, > > > > We have tried to deploy AIAB by both methods described in the below > available info from Airship site. > > > > Both methods failed to complete the deployment process and the result is > also different. > > 1. With the "./airship-*in*-a-bottle.sh" method, it stuck in "mariadb not > ready" state. > > > > 2. With the "vagrant up" method, we tried the libvirt and virtualbox > providers. > > With virtualbox provider "sometimes" we can reach the "Genesis complete" > step, but the Openstack step still failed to complete. > > With libvirt provider we always stuck in "mariadb not ready" or > "deckhand-db-init". > > > > So it seems to us that the execution output is sensitive to the > virtualization environment. > > For this reason, the documentation below may need to provide more details > about the required environment setup. > > > > Can someone provide us a more detailed info about AIAB execution > requirement ? > > > > Or, do Airship have a QA group that we can contact for help? > > > > Many thanks, > > Calvin > > > > > =============================================================================== > > To get started, run the following in a fresh Ubuntu 16.04 VM (minimum > 4vCPU/20GB RAM/32GB disk). This will deploy Airship and Openstack Helm > (OSH): > > sudo -*i* > > mkdir -*p* /root/deploy && cd "$_" > > git clone https:*//git.openstack.org/openstack/airship-in-a-bottle * > > cd /root/deploy/airship-*in*-a-bottle/manifests/dev_single_node > > ./airship-*in*-a-bottle.sh > > Or, alternatively, if you have Vagrant installed, just run the following > (only libvirt/kvm hypervisor is tested, but vagrant box supports VMware > Desktop/Workstation/Fusion, Parallels, and Hyper-V): > > curl -O https://git.airshipit.org /cgit/airship-*in*-a-bottle/plain/Vagrantfile > > vagrant up > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From calvinwhole at gmail.com Thu May 23 05:33:33 2019 From: calvinwhole at gmail.com (calvin whole) Date: Thu, 23 May 2019 13:33:33 +0800 Subject: [Airship-discuss] Airship-in-a-bottle (AIAB) deployment question In-Reply-To: References: <7C64A75C21BB8D43BD75BB18635E4D89709515E0@MOSTLS1MSGUSRFF.ITServices.sbc.com> Message-ID: Hi Matt and All, The resources are not the problem. My ubuntu 16.04 VM has 8vcpu, 24G memory, and 40G disk. The symptom is that mariadb-server-0 is stuck in "running" state but never getting into the "ready" state. =========== output ===================== ...clip... 
kube-system   nfs-provisioner-7799d64d59-zwbvk             1/1   Running   0   23m
ucp           airship-ucp-rabbitmq-rabbitmq-0              1/1   Running   0   22m
ucp           ingress-86576d6599-gtgmn                     1/1   Running   0   22m
ucp           ingress-error-pages-5c97bb46bb-jm8zz         1/1   Running   0   22m
ucp           mariadb-ingress-55794d94c8-5cb97             0/1   Running   0   9m
ucp           mariadb-ingress-55794d94c8-5mb54             0/1   Running   0   9m
ucp           mariadb-ingress-error-pages-85f96fbd-qzm8g   1/1   Running   0   9m
ucp           mariadb-server-0                             0/1   Running   0   9m   <=================
ucp           postgresql-0                                 1/1   Running   1   22m
... clip ...

... clip ...
2019-05-23 04:12:36.635 8 ERROR armada.handlers.tiller [-] [chart=ucp-mariadb]: Error while installing release airship-ucp-mariadb: grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
        status = StatusCode.UNKNOWN
        details = "release airship-ucp-mariadb failed: timed out waiting for the condition"
        debug_error_string = "{"created":"@1558584756.634751023","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release airship-ucp-mariadb failed: timed out waiting for the condition","grpc_status":2}"
>
2019-05-23 04:12:36.635 8 ERROR armada.handlers.tiller Traceback (most recent call last):
2019-05-23 04:12:36.635 8 ERROR armada.handlers.tiller   File "/usr/local/lib/python3.6/site-packages/armada/handlers/tiller.py", line 460, in install_release
... clip ...

======= root at n0:~/deploy# kubectl describe pod mariadb-server-0 -n ucp =====================
..clip....
Containers:
  mariadb:
    Container ID:  docker://6888ba24aacc946e92f2651540667cb4211c01f0232b66e194746cce1c051e95
    Image:         docker.io/openstackhelm/mariadb:10.2.18
    Image ID:      docker-pullable://openstackhelm/mariadb at sha256:e4fe4898baac370f56d781541456d32aa0f8c740f18d5b13882a86e80a6a67bb
    Ports:         3306/TCP, 4567/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:       /tmp/start.py
    State:         Running
      Started:     Thu, 23 May 2019 05:12:47 +0000
    Ready:         False
    Restart Count: 0
    Readiness:     exec [/tmp/readiness.sh] delay=30s timeout=3s period=30s #success=1 #failure=3
    Environment:
      POD_NAMESPACE:       ucp (v1:metadata.namespace)
      MARIADB_REPLICAS:    1
      POD_NAME_PREFIX:     mariadb-server
      DISCOVERY_DOMAIN:    mariadb-discovery.ucp.svc.cluster.local
      DIRECT_SVC_NAME:     mariadb-server
      WSREP_PORT:          4567
      STATE_CONFIGMAP:     airship-ucp-mariadb-mariadb-state
      MYSQL_ROOT_PASSWORD: <set to a key in secret 'mariadb-db-root-password'> Optional: false
    Mounts:
      /etc/mysql/admin_user.cnf from mariadb-secrets (ro)
      /etc/mysql/conf.d from mycnfd (rw)
      /etc/mysql/conf.d/00-base.cnf from mariadb-etc (ro)
      /etc/mysql/conf.d/20-override.cnf from mariadb-etc (ro)
      /etc/mysql/conf.d/99-force.cnf from mariadb-etc (ro)
      /etc/mysql/my.cnf from mariadb-etc (ro)
      /tmp/readiness.sh from mariadb-bin (ro)
      /tmp/start.py from mariadb-bin (ro)
      /tmp/stop.sh from mariadb-bin (ro)
      /var/lib/mysql from mysql-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from airship-ucp-mariadb-mariadb-token-ml52f (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
... clip ....
Events:
... clip ...
  Warning  Unhealthy  7s (x3 over 1m)  kubelet, n0  Readiness probe failed:   <========

On Thu, May 23, 2019 at 12:13 AM Evgeny L wrote:
> ... clip ...
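Two hedged follow-ups suggested by the describe output above (the ConfigMap and script names are taken verbatim from the Environment block and probe spec):

kubectl -n ucp exec mariadb-server-0 -- /tmp/readiness.sh; echo $?       # run the failing check by hand
kubectl -n ucp get configmap airship-ucp-mariadb-mariadb-state -o yaml   # what state does the chart think the cluster is in?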
From calvinwhole at gmail.com Thu May 23 05:59:35 2019
From: calvinwhole at gmail.com (calvin whole)
Date: Thu, 23 May 2019 13:59:35 +0800
Subject: [Airship-discuss] Airship-in-a-bottle (AIAB) deployment question
In-Reply-To: References: <7C64A75C21BB8D43BD75BB18635E4D89709515E0@MOSTLS1MSGUSRFF.ITServices.sbc.com> Message-ID:

Hi Matt and All,

Continuing my last mail with additional log output. Hope this helps.
Thanks,
Calvin

========== root at n0:~/deploy# kubectl logs -n ucp mariadb-server-0 ==============
2019-05-23 05:35:48,939 - OpenStack-Helm Mariadb - INFO - This instance hostname: mariadb-server-0
2019-05-23 05:35:48,939 - OpenStack-Helm Mariadb - INFO - This instance number: 0
2019-05-23 05:35:49,046 - OpenStack-Helm Mariadb - INFO - Kubernetes API Version: v1.10.2
2019-05-23 05:35:49,050 - OpenStack-Helm Mariadb - INFO - Will use "airship-ucp-mariadb-mariadb-state" configmap for cluster state info
2019-05-23 05:35:49,050 - OpenStack-Helm Mariadb - INFO - Getting cluster state
2019-05-23 05:35:49,065 - OpenStack-Helm Mariadb - INFO - The cluster is currently in "init" state.
2019-05-23 05:35:49,065 - OpenStack-Helm Mariadb - INFO - Getting cluster state
2019-05-23 05:35:49,070 - OpenStack-Helm Mariadb - INFO - The cluster is currently in "init" state.
2019-05-23 05:35:49,070 - OpenStack-Helm Mariadb - INFO - Waiting for cluster to start running ...

=========== root at n0:~/deploy# kubectl logs -n kube-system nfs-provisioner-7799d64d59-zwbvk ==============
I0523 04:02:19.342442 1 main.go:65] Provisioner nfs/airship-nfs-provisioner specified
I0523 04:02:19.342589 1 main.go:85] Starting NFS server!
I0523 04:02:19.770374 1 server.go:139] starting RLIMIT_NOFILE rlimit.Cur 1048576, rlimit.Max 1048576
I0523 04:02:19.770455 1 server.go:150] ending RLIMIT_NOFILE rlimit.Cur 1048576, rlimit.Max 1048576
I0523 04:02:19.870411 1 controller.go:392] Starting provisioner controller 8f995b13-7d0f-11e9-b4cf-fe46167aea67!
I0523 04:02:26.531687 1 controller.go:1052] scheduleOperation[lock-provision-ucp/postgresql-data-postgresql-0[93717e25-7d0f-11e9-8fc6-525400f8f6f1]]
I0523 04:02:27.169375 1 leaderelection.go:154] attempting to acquire leader lease...
I0523 04:02:27.483671 1 controller.go:1052] scheduleOperation[lock-provision-ucp/postgresql-data-postgresql-0[93717e25-7d0f-11e9-8fc6-525400f8f6f1]]
I0523 04:02:27.734209 1 controller.go:1052] scheduleOperation[lock-provision-ucp/postgresql-data-postgresql-0[93717e25-7d0f-11e9-8fc6-525400f8f6f1]]
E0523 04:02:27.734231 1 leaderelection.go:271] Failed to update lock: Operation cannot be fulfilled on persistentvolumeclaims "postgresql-data-postgresql-0": the object has been modified; please apply your changes to the latest version and try again
...
E0523 04:02:40.024023 1 leaderelection.go:271] Failed to update lock: Operation cannot be fulfilled on persistentvolumeclaims "postgresql-data-postgresql-0": the object has been modified; please apply your changes to the latest version and try again
...
E0523 04:02:43.328965 1 leaderelection.go:271] Failed to update lock: Operation cannot be fulfilled on persistentvolumeclaims "mysql-data-mariadb-server-0": the object has been modified; please apply your changes to the latest version and try again

On Thu, May 23, 2019 at 1:33 PM calvin whole wrote:
> ... clip ...
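The leaderelection errors above are against the two claims, so a hedged next step is to inspect them and their volumes directly (the claim names come straight from the log; the pv name differs per run):

kubectl -n ucp describe pvc mysql-data-mariadb-server-0
kubectl -n ucp describe pvc postgresql-data-postgresql-0
kubectl get pv   # then kubectl describe pv <name> for each bound volume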
From l.legal.astellia at gmail.com Thu May 23 14:11:24 2019
From: l.legal.astellia at gmail.com (Loic Le Gal)
Date: Thu, 23 May 2019 16:11:24 +0200
Subject: [Airship-discuss] CoreDNS problem with liveness/readiness probe
In-Reply-To: <7C64A75C21BB8D43BD75BB18635E4D897094C263@MOSTLS1MSGUSRFF.ITServices.sbc.com> References: <7C64A75C21BB8D43BD75BB18635E4D897094C263@MOSTLS1MSGUSRFF.ITServices.sbc.com> Message-ID:

Thanks Matt for the info.

In fact, I found the root cause of this problem, and it was not the probe itself but the DNS server: there was a conflict between CoreDNS and dnsmasq, because I'm trying to remove the virsh layer of the AiaB multinode site to split the deployment across OpenStack VMs. So this is a side effect of my modifications, and as soon as dnsmasq is stopped, CoreDNS starts successfully without modifying the probe.

So I managed to finish the Genesis phase and I have a functional Airflow, but as you can figure out I face other problems... to be described in another thread :)

Loïc

On Fri, May 17, 2019 at 6:02 PM MCEUEN, MATT wrote:
> Hey Loic,
>
> Good to hear that your multinode testing is progressing well, and great catch on the coredns probe!
>
> I think it's a good idea to err on the side of caution with our coredns probe. Would you mind submitting a patchset to promenade, making your change here:
> https://opendev.org/airship/promenade/src/branch/master/charts/coredns/templates/bin/_probe.sh.tpl#L15
>
> That would allow folks to easily weigh in on whether those are the right values?
>
> The patchset will also allow you to easily pull in the change to an AiaB multi-node deployment – you can make a local edit to the AiaB versions.yaml file:
> https://opendev.org/airship/in-a-bottle/src/branch/master/deployment_files/global/v1.0demo/software/config/versions.yaml
>
> In versions.yaml, find data.charts.coredns, and make these changes:
> Change location to: https://review.opendev.org/airship/promenade
> Change reference to: the "refs/changes/x/y/z" reference from your promenade patchset
>
> Thanks for catching this – it would be awesome if you can resolve it for others. Let me know if any questions come up or if you need any pointers around the gerrit submit/review process.
>
> Thanks,
> Matt
>
> From: Loic Le Gal
> Sent: Friday, May 17, 2019 5:05 AM
> To: airship-discuss at lists.airshipit.org
> Subject: [Airship-discuss] CoreDNS problem with liveness/readiness probe
>
> Hello Airshipers,
>
> After succeeding in deploying a SingleNode config, I'm trying to deploy the
> multi-nodes-gate site of Airship-in-a-bottle.
> The first steps are OK (Genesis creation and run on node n0), but the
> Undercloud deployment fails in the creation of coredns pods in the kube-system
> namespace (endless CrashLoop).
>
> I fixed this problem manually but want to fix it automatically before
> launching the whole Airship deployment (i.e. in an OpenStack stack), because
> the initial coredns failure leads to side effects that need a new
> deployment...
>
> After investigation, it turns out that the problem comes from a timing
> issue (maybe from the infra I use), and I managed to fix it manually by
> modifying the ConfigMap used by CoreDNS's probe and recreating the pods. The
> problem was in the probe.sh file:
>
> this command sometimes fails:
> ==> dig +trace +time=2 +tries=1 att.com @127.0.0.1
>
> Increasing the time and retries fixes the problem:
> root at n0:/home/ubuntu# kubectl exec coredns-6494d65b66-dk8g9 -n kube-system -- dig +trace +time=2 +tries=1 att.com @127.0.0.1; echo $?
>
> ; <<>> DiG 9.11.2-P1 <<>> +trace +time=2 +tries=1 att.com @127.0.0.1
> ;; global options: +cmd
> ;; connection timed out; no servers could be reached
> command terminated with exit code 9
> 9
> root at n0:/home/ubuntu# kubectl exec coredns-6494d65b66-dk8g9 -n kube-system -- dig +trace +time=10 +tries=5 att.com @127.0.0.1; echo $?
>
> ; <<>> DiG 9.11.2-P1 <<>> +trace +time=10 +tries=5 att.com @127.0.0.1
> ;; global options: +cmd
> ;; Received 40 bytes from 127.0.0.1#53(127.0.0.1) in 5000 ms
>
> 0
>
> (Inception inside!) I succeeded in patching the Genesis stage to patch the
> genesis.sh script and finally the template file:
> /etc/genesis/armada/assets/charts/coredns/templates/bin/_probe.sh.tpl
>
> Unfortunately this file doesn't seem to be the one used to create the
> Pods' configmap.
>
> Can someone tell me the easiest way to patch this readiness probe
> after a Git clone and before launching the Airship deployment?
>
> BR,
> Loïc
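Matt's versions.yaml pointer above amounts to an override shaped roughly like the sketch below (the nesting is inferred from his "find data.charts.coredns" instruction and may not match the file byte-for-byte; "refs/changes/x/y/z" is the placeholder from his mail):

data:
  charts:
    coredns:
      location: https://review.opendev.org/airship/promenade
      reference: refs/changes/x/y/z  # your promenade patchset ref here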
From l.legal.astellia at gmail.com Thu May 23 15:09:24 2019
From: l.legal.astellia at gmail.com (Loic Le Gal)
Date: Thu, 23 May 2019 17:09:24 +0200
Subject: [Airship-discuss] Aiab multinodes: Replacing virsh by Openstack VMs
Message-ID:

Hello Airshipers,

Still experimenting with Airship: after a successful SingleNode deployment, I'm trying the multinode site. My aim is to build Airship on top of OpenStack VMs, not on real hardware or all-in-one virsh VMs on a single node, so that I can increase VM sizes to install other software at the upper layer and troubleshoot easily what's going on below.

So, I removed the virsh network and VM creation from the first stages (create_VMs, gate_setup) of the gate/multinode_deploy and replaced them with OpenStack VMs of the expected size, with the expected network connections and fixed addresses 172.24.1.xx on an OpenStack network.

With this config, apart from a DNS conflict that I fixed, it works and I get Genesis/Airflow access. (Yeah!)

As you can figure out, things become more Airship-specific later on, because in order to set up the nodes, the workflow task "prepare_and_deploy_nodes" tries to make them boot via PXE, using a libvirt driver.

I've found in deployment_files.yaml the configuration below, which seems to be the blocking point:

data:
  hardware_profile: 'GenericVM'
  primary_network: 'gp'
  oob:
    type: 'libvirt'
    libvirt_uri: 'qemu+ssh://virtmgr at 172.24.1.1/system'

So I think I have the following options, and I would prefer the simplest one:
- either replace the libvirt drydock driver in the site definition with another one (redfish?): but can the redfish driver drive OpenStack VMs (in that case, do I need to add a Redfish server VM)?
- or remove the prepare_and_deploy_nodes task: but is that simple, or does it need Drydock code changes?

Is it really feasible? Any simpler option is welcome...

Thanks in advance,
Loïc
From paye600 at gmail.com Thu May 23 18:02:19 2019
From: paye600 at gmail.com (Roman Gorshunov)
Date: Thu, 23 May 2019 20:02:19 +0200
Subject: [Airship-discuss] Aiab multinodes: Replacing virsh by Openstack VMs
In-Reply-To: References: Message-ID:

Hello Loic,

No, the Redfish driver would not be able to deploy OpenStack VMs. Redfish is just an alternative to IPMI: it can change the power state of a node and set the next boot to PXE. Maybe try the manual driver? But by default it waits for 1 minute only, which might not be enough.

The prepare_and_deploy_nodes workflow task is defined in the Shipyard DAG src/bin/shipyard_airflow/shipyard_airflow/dags/drydock_deploy_site.py.

Best regards,
-- Roman Gorshunov

On Thu, May 23, 2019 at 5:09 PM Loic Le Gal wrote:
> ... clip ...
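If the manual-driver route is tried, the site definition change would presumably be as small as the sketch below (field names copied from Loic's libvirt snippet above; whether Drydock's manual driver takes extra options, and its default 1-minute wait, should be checked against the Drydock docs):

data:
  hardware_profile: 'GenericVM'
  primary_network: 'gp'
  oob:
    type: 'manual'  # operator powers on / PXE-boots the VMs by hand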
From MM9745 at att.com Fri May 24 22:54:42 2019
From: MM9745 at att.com (MCEUEN, MATT)
Date: Fri, 24 May 2019 22:54:42 +0000
Subject: [Airship-discuss] Aiab multinodes: Replacing virsh by Openstack VMs
In-Reply-To: References: Message-ID: <7C64A75C21BB8D43BD75BB18635E4D8970957099@MOSTLS1MSGUSRFF.ITServices.sbc.com>

Hey Loic,

Great to hear that you're getting your environment working!!

The only simpler option I can think of to solve for running in OpenStack VMs might be:
1. deploy a vanilla Kubernetes cluster on your VMs (e.g. standard KubeADM)
2. run the Airskiff scripts (skipping the initial single-node k8s install)

Basically, Airship with a "bring your own infra and kubernetes". This skips the Airship bare metal provisioning (like you want) with the downside of skipping over Airship k8s provisioning. However, it should work well for testing declarative workload (e.g. OpenStack) deployment and lifecycle management inside of your VMs.

HTH, let us know how it goes!
Matt

-----Original Message-----
From: Roman Gorshunov
Sent: Thursday, May 23, 2019 1:02 PM
To: Loic Le Gal
Cc: airship-discuss at lists.airshipit.org
Subject: Re: [Airship-discuss] Aiab multinodes: Replacing virsh by Openstack VMs

... clip ...
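For concreteness, a rough sketch of that "bring your own infra and kubernetes" flow (the kubeadm step is standard upstream Kubernetes, not an Airship recipe; the CIDR and the Airskiff hand-off are assumptions to verify against the Airskiff docs):

# on one OpenStack VM: bootstrap your own cluster
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
# install a CNI plugin, then `kubeadm join` the remaining VMs
# finally, run the Airskiff deployment scripts against this cluster,
# skipping their built-in single-node Kubernetes install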
From cboylan at sapwetik.org Tue May 28 20:52:28 2019
From: cboylan at sapwetik.org (Clark Boylan)
Date: Tue, 28 May 2019 13:52:28 -0700
Subject: [Airship-discuss] Gerrit Downtime May 31, 2019 beginning 1500UTC
Message-ID: <0f37c087-21f3-4136-acc8-3cf7be699714@www.fastmail.com>

Hello everyone,

We'll be taking a short (hopefully no longer than an hour) Gerrit downtime so that we can rename some projects this Friday, May 31, 2019, beginning at 1500 UTC. Some of these renames are cleanups after the great OpenDev git migration and others aren't, but all of them require us to stop Gerrit to update the database and contents on disk.

If you'd like specifics on what repos are being renamed or what the process looks like, you can follow along at:

https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Upcoming_Project_Renames
https://etherpad.openstack.org/p/project-renames-2019-05-31

As always, feel free to bring up questions or concerns and we'll do our best to answer/address them.

Thank you for your patience and sorry for the interruption,
Clark

From cheng1.li at intel.com Thu May 30 02:59:20 2019
From: cheng1.li at intel.com (Li, Cheng1)
Date: Thu, 30 May 2019 02:59:20 +0000
Subject: [Airship-discuss] [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment
In-Reply-To: References: Message-ID:

I have the same question. I haven't seen any docs which guide how to deploy airsloop/air-seaworthy in a virtual env.

I am trying to deploy airsloop in a libvirt/kvm-driven virtual env: two VMs, one for genesis, the other for compute, with virtualbmc for IPMI simulation. The genesis.sh script has run on the genesis node without error, but deploy_site fails at the prepare_and_deploy_nodes task (action 'set_node_boot' timeout). I am still investigating this issue.

It would be great if we had an official document for this scenario.

Thanks,
Cheng

From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com]
Sent: Wednesday, May 29, 2019 3:31 PM
To: airship-discuss at lists.airshipit.org; airship-announce at lists.airshipit.org; openstack-dev at lists.openstack.org; openstack at lists.openstack.org
Subject: [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment

... clip ...
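For anyone reproducing Cheng's two-VM airsloop attempt, the VirtualBMC wiring typically looks like the sketch below (domain names and the port are illustrative; the default vbmc credentials admin/password are assumed, and vbmc must run on the hypervisor that owns the libvirt domains):

vbmc add airsloop-compute --port 6230   # map a libvirt domain to an IPMI endpoint
vbmc start airsloop-compute
vbmc list
# sanity-check from the genesis node:
ipmitool -I lanplus -H <hypervisor-ip> -p 6230 -U admin -P password power status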
From Anirudh.Gupta at hsc.com Wed May 29 07:30:55 2019
From: Anirudh.Gupta at hsc.com (Anirudh Gupta)
Date: Wed, 29 May 2019 07:30:55 -0000
Subject: [Airship-discuss] [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment
Message-ID:

Hi Team,

We want to test production-ready Airship-Seaworthy in our virtual environment. The link followed is https://airship-treasuremap.readthedocs.io/en/latest/seaworthy.html

As per the document, we need 6 DELL R720xd bare-metal servers: 3 control and 3 compute nodes. But we need to deploy our setup in a virtual environment. Does Airship-Seaworthy support installation in a virtual environment?

We have 2 rack servers with dual-CPU Intel® Xeon® E5 26xx, with 16 cores each and 128 GB RAM. Is it possible for us to create virtual machines on them and set up the complete environment? In that case, what infrastructure do we require for the complete setup?

Looking forward to your response.

Regards,
Anirudh Gupta (Senior Engineer)
Hughes Systique Corporation
D-23,24 Infocity II, Sector 33, Gurugram, Haryana 122001

DISCLAIMER: This electronic message and all of its contents contain information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.

From calvinwhole at gmail.com Tue May 14 05:59:26 2019
From: calvinwhole at gmail.com (calvin whole)
Date: Tue, 14 May 2019 05:59:26 -0000
Subject: [Airship-discuss] airship-in-a-bottle deployment issue
In-Reply-To: References: Message-ID:

Hi Roman,

Thanks a lot for looking into this issue. I re-ran the same process again and this time it successfully completed the Genesis phase.
The postgresql-0 and nfs-provisioner logs did not show any apparent errors. So it seems to me there is a consistency issue, because all I did was destroy the n0 VM and "vagrant up" again. It finally succeeded once out of 5 tries.

Btw, my virtualization environment is as below.

vagrant at n0:~$ sudo virt-what
virtualbox
kvm

However, the subsequent ./run_shipyard.sh commit configdocs failed as below. How can this failure be fixed? Since Genesis is complete, can we re-run the script without the Genesis part - i.e., skipping the Genesis part?

Btw, can you describe your environment setup, so we can try to follow your exact execution environment? (My environment: a physical server with Ubuntu 16.04.5 installed, on which I installed VirtualBox and Vagrant and ran "vagrant up".)

Thanks again,
Calvin

=================================================
==> n0: + export max_shipyard_count=60
==> n0: + max_shipyard_count=60
==> n0: + export shipyard_query_time=90
==> n0: + shipyard_query_time=90
==> n0: + bash execute_shipyard_action.sh deploy_site
==> n0: + run_action deploy_site
==> n0: + action=deploy_site
==> n0: + action_args=
==> n0: + NC='\033[0m'
==> n0: + RED='\033[0;31m'
==> n0: + GREEN='\033[0;32m'
==> n0: +++ dirname execute_shipyard_action.sh
==> n0: ++ cd .
==> n0: ++ pwd
==> n0: + DIR=/root/deploy/site
==> n0: + cd /root/deploy/site
==> n0: + source shipyard_docker_base_command.sh
==> n0: ++ NAMESPACE=ucp
==> n0: ++ SHIPYARD_IMAGE=quay.io/airshipit/shipyard:master
==> n0: +++ cat
==> n0: Execute deploy_site Dag...
==> n0: ++ base_docker_command='sudo -E docker run -t --rm --net=host
==> n0: -e http_proxy=
==> n0: -e https_proxy=
==> n0: -e no_proxy=
==> n0: -e OS_AUTH_URL=http://keystone.ucp.svc.cluster.local:80/v3
==> n0: -e OS_USERNAME=shipyard
==> n0: -e OS_USER_DOMAIN_NAME=default
==> n0: -e OS_PASSWORD
==> n0: -e OS_PROJECT_DOMAIN_NAME=default
==> n0: -e OS_PROJECT_NAME=service'
==> n0: + echo -e 'Execute deploy_site Dag...\n'
==> n0: + sudo -E docker run -t --rm --net=host -e http_proxy= -e https_proxy= -e no_proxy= -e OS_AUTH_URL=http://keystone.ucp.svc.cluster.local:80/v3 -e OS_USERNAME=shipyard -e OS_USER_DOMAIN_NAME=default -e OS_PASSWORD -e OS_PROJECT_DOMAIN_NAME=default -e OS_PROJECT_NAME=service quay.io/airshipit/shipyard:master create action deploy_site
==> n0: Error: Unable to complete request to Airflow   <======================== Failed
==> n0: Reason: Airflow could not be contacted properly by Shipyard.
==> n0: - Error:
==> n0:
==> n0: #### Errors: 1, Warnings: 0, Infos: 0, Other: 0 ####
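Before digging further, a hedged first check for the "Unable to complete request to Airflow" error is whether the Airflow pods behind Shipyard ever came up (the grep patterns are guesses based on the component naming used in the ucp namespace):

kubectl -n ucp get pods | grep -i airflow
kubectl -n ucp get pods | grep -i shipyard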
On Mon, May 13, 2019 at 10:13 PM Roman Gorshunov wrote:
> Hello Calvin,
>
> Seems like the PostgreSQL database was not able to properly write data onto
> the disk. PostgreSQL runs as a postgresql-0 pod in the ucp namespace, uses
> a persistent volume claim postgresql-data-postgresql-0, and a persistent
> volume mounted via NFS.
>
> kubectl describe pod postgresql-0 -n ucp
> kubectl logs -n ucp postgresql-0
> kubectl -n ucp describe pvc postgresql-data-postgresql-0
> kubectl describe pv pvc-0382c985-7572-11e9-b431-525400681552  # volume
> name could be different
>
> NFS is provisioned by nfs-provisioner-7799d64d59-ptsgk (the last two parts
> would be different in your case):
> kubectl get pods -n kube-system | grep nfs
> kubectl -n kube-system describe pod nfs-provisioner-7799d64d59-ptsgk
> kubectl -n kube-system logs nfs-provisioner-7799d64d59-ptsgk
>
> Check if there are any problems with it (e.g. unable to mount the NFS
> share, or lack of free storage space - `df -h`).
>
> Also, running kubectl get events --all-namespaces could help to
> understand what went wrong.
>
> I have run an AIAB installation today twice, and it all worked fine. I
> use `vagrant up` and my hypervisor is KVM, if that could help you.
>
> I hope it helps.
>
> Best regards,
> -- Roman Gorshunov
>
> On Fri, May 10, 2019 at 7:01 AM calvin whole wrote:
> >
> > Hi Roman,
> >
> > Not sure if my last email went out properly; its size is too big. Here
> > is a short one. Thanks for responding in advance.
> >
> > I re-ran the "vagrant up" and looked into the logs for
> > "deckhand-db-init-zs499" as shown below.
> > It showed ERROR: checkpoint request failed
> > HINT: Consult recent messages in the server log for details.
> >
> > What is the specific "server" log we should look into for details?
> >
> > Thanks for help.
> >
> > Sincerely,
> > Calvin
> >
> > On Thu, May 9, 2019 at 12:17 PM calvin whole wrote:
> >>
> >> Hi Roman,
> >>
> >> Btw, continuing my last post, the kubectl describe pod
> >> deckhand-db-init-zs499 output is as follows.
> >>
> >> Thanks,
> >> Calvin
> >> =========== kubectl describe pod deckhand-db-init-zs499 =================
> >> root at n0:/home/vagrant# kubectl describe pod deckhand-db-init-zs499 -n ucp
> >> Name:           deckhand-db-init-zs499
> >> Namespace:      ucp
> >> Node:           n0/10.0.2.15
> >> Start Time:     Thu, 09 May 2019 03:48:48 +0000
> >> Labels:         application=deckhand
> >>                 component=db-init
> >>                 controller-uid=59f1bee0-720d-11e9-92ac-080027fc876e
> >>                 job-name=deckhand-db-init
> >>                 release_group=airship-ucp-deckhand
> >> Annotations:
> >> Status:         Running
> >> IP:             10.97.26.50
> >> Controlled By:  Job/deckhand-db-init
> >> Init Containers:
> >>   init:
> >>     Container ID:  docker://b58e8b6b7296df618cb8120b5226370afeba2a4e79dd70ee6894b5afd853c0db
> >>     Image:         quay.io/stackanetes/kubernetes-entrypoint:v0.3.1
> >>     Image ID:      docker-pullable://quay.io/stackanetes/kubernetes-entrypoint at sha256:32b1b657ee4bcc9cc7a1529e31d8e1a06376172373ee020f97f3e78168fde4b6
> >>     Port:
> >>     Host Port:
> >>     Command:
> >>       kubernetes-entrypoint
> >>     State:          Terminated
> >>       Reason:       Completed
> >>       Exit Code:    0
> >>       Started:      Thu, 09 May 2019 03:48:52 +0000
> >>       Finished:     Thu, 09 May 2019 03:48:54 +0000
> >>     Ready:          True
> >>     Restart Count:  0
> >>     Environment:
> >>       POD_NAME:             deckhand-db-init-zs499 (v1:metadata.name)
> >>       NAMESPACE:            ucp (v1:metadata.namespace)
> >>       INTERFACE_NAME:       eth0
> >>       PATH:                 /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/
> >>       DEPENDENCY_SERVICE:   ucp:postgresql
> >>       DEPENDENCY_DAEMONSET:
> >>       DEPENDENCY_CONTAINER:
> >>       DEPENDENCY_POD_JSON:
> >>       COMMAND:              echo done
> >>     Mounts:
> >>       /var/run/secrets/kubernetes.io/serviceaccount from deckhand-db-init-token-gczr5 (ro)
> >> Containers:
> >>   deckhand-db-init:
> >>     Container ID:  docker://5dea2aa975c3718ca298536005b9cc0b21de47e08b2260cc73005e3455bb1350
> >>     Image:         docker.io/postgres:9.5
> >>     Image ID:      docker-pullable://postgres at sha256:0605b4b20a205c09ddd10eeeddd3ed7bf3cc442a8e9896ec34862ca882658be4
> >>     Port:
> >>     Host Port:
> >>     Command:
> >>       /tmp/db-init.sh
> >>     State:          Waiting
> >>       Reason:       CrashLoopBackOff
> >>     Last State:     Terminated
> >>       Reason:       Error   <========
> >>       Exit Code:    1
> >>       Started:      Thu, 09 May 2019 04:10:29 +0000
> >>       Finished:     Thu, 09 May 2019 04:10:30 +0000
> >>     Ready:          False
> >>     Restart Count:  9
> >>     Environment:
> >>       DECKHAND_DB_URL:      <set to a key in secret 'deckhand-db-user'> Optional: false
> >>       DB_NAME:              <set to a key in secret 'deckhand-db-user'> Optional: false
> >>       DB_SERVICE_USER:      <set to a key in secret 'deckhand-db-user'> Optional: false
> >>       DB_SERVICE_PASSWORD:  <set to a key in secret 'deckhand-db-user'> Optional: false
> >>       DB_FQDN:              <set to a key in secret 'deckhand-db-user'> Optional: false
> >>       DB_PORT:              <set to a key in secret 'deckhand-db-user'> Optional: false
> >>       DB_ADMIN_USER:        <set to a key in secret 'deckhand-db-admin'> Optional: false
> >>       PGPASSWORD:           <set to a key in secret 'deckhand-db-admin'> Optional: false
> >>     Mounts:
> >>       /etc/deckhand from etc-deckhand (rw)
> >>       /etc/deckhand/deckhand.conf from deckhand-etc (ro)
> >>       /tmp/db-init.sh from deckhand-bin (ro)
> >>       /var/run/secrets/kubernetes.io/serviceaccount from deckhand-db-init-token-gczr5 (ro)
> >> Conditions:
> >>   Type           Status
> >>   Initialized    True
> >>   Ready          False
> >>   PodScheduled   True
> >> Volumes:
> >>   etc-deckhand:
> >>     Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
> >>     Medium:
> >>   deckhand-etc:
> >>     Type:        Secret (a volume populated by a Secret)
> >>     SecretName:  deckhand-etc
> >>     Optional:    false
> >>   deckhand-bin:
> >>     Type:      ConfigMap (a volume populated by a ConfigMap)
> >>     Name:      deckhand-bin
> >>     Optional:  false
> >>   deckhand-db-init-token-gczr5:
> >>     Type:        Secret (a volume populated by a Secret)
> >>     SecretName:  deckhand-db-init-token-gczr5
> >>     Optional:    false
> >> QoS Class:       BestEffort
> >> Node-Selectors:  ucp-control-plane=enabled
> >> Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
> >>                  node.kubernetes.io/unreachable:NoExecute for 300s
> >> Events:
> >>   Type     Reason                 Age                From               Message
> >>   ----     ------                 ----               ----               -------
> >>   Normal   Scheduled              24m                default-scheduler  Successfully assigned deckhand-db-init-zs499 to n0
> >>   Normal   SuccessfulMountVolume  24m                kubelet, n0        MountVolume.SetUp succeeded for volume "etc-deckhand"
> >>   Normal   SuccessfulMountVolume  24m                kubelet, n0        MountVolume.SetUp succeeded for volume "deckhand-bin"
> >>   Normal   SuccessfulMountVolume  24m                kubelet, n0        MountVolume.SetUp succeeded for volume "deckhand-etc"
> >>   Normal   SuccessfulMountVolume  24m                kubelet, n0        MountVolume.SetUp succeeded for volume "deckhand-db-init-token-gczr5"
> >>   Normal   Pulled                 24m                kubelet, n0        Container image "quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine
> >>   Normal   Created                24m                kubelet, n0        Created container
> >>   Normal   Started                24m                kubelet, n0        Started container
> >>   Normal   Pulled                 23m (x4 over 24m)  kubelet, n0        Container image "docker.io/postgres:9.5" already present on machine
> >>   Normal   Created                23m (x4 over 24m)  kubelet, n0        Created container
> >>   Normal   Started                23m (x4 over 24m)  kubelet, n0        Started container
> >>   Warning  BackOff                4m (x90 over 24m)  kubelet, n0        Back-off restarting failed container
> >> root at n0:/home/vagrant#
> >>
> >> On Thu, May 9, 2019 at 12:08 PM calvin whole wrote:
> >>>
> >>> Hi Roman,
> >>>
> >>> Thanks for looking into this and giving us suggestions.
> >>>
> >>> I re-ran the "vagrant up" and looked into the logs for
> >>> "deckhand-db-init-zs499" as shown below.
> >>> It showed ERROR: checkpoint request failed
> >>> HINT: Consult recent messages in the server log for details.
> >>>
> >>> What is the specific "server" log we should look into for details?
> >>>
> >>> Thanks for help.
> >>>
> >>> Sincerely,
> >>> Calvin
> >>>
> >>> ================== log for deckhand-db-init-zs499 ==================================
> >>> root at n0:/home/vagrant# kubectl logs deckhand-db-init-zs499 -n ucp
> >>> + export HOME=/tmp
> >>> + HOME=/tmp
> >>> + pgsql_superuser_cmd 'SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\'''
> >>> + grep -q 1
> >>> + DB_COMMAND='SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\'''
> >>> + [[ !
-z '' ]] > >>> + psql -h postgresql.ucp.svc.cluster.local -p 5432 -U postgres > '--command=SELECT 1 FROM pg_database WHERE datname = '\''deckhand'\''' > >>> + pgsql_superuser_cmd 'CREATE DATABASE deckhand' > >>> + DB_COMMAND='CREATE DATABASE deckhand' > >>> + [[ ! -z '' ]] > >>> + psql -h postgresql.ucp.svc.cluster.local -p 5432 -U postgres > '--command=CREATE DATABASE deckhand' > >>> ERROR: checkpoint request failed > >>> HINT: Consult recent messages in the server log for details. > >>> > >>> > ===================================================================================== > >>> ==> n0: NAMESPACE NAME > READY STATUS RESTARTS AGE > >>> ==> n0: kube-system auxiliary-etcd-n0 > 3/3 Running 0 49m > >>> ==> n0: kube-system bootstrap-armada-n0 > 4/4 Running 0 49m > >>> ==> n0: kube-system calico-etcd-anchor-ncl2p > 1/1 Running 0 47m > >>> ==> n0: kube-system calico-etcd-n0 > 1/1 Running 0 46m > >>> ==> n0: kube-system calico-kube-controllers-56c54d8cf8-5csnn > 1/1 Running 0 46m > >>> ==> n0: kube-system calico-node-m4rtf > 1/1 Running 0 46m > >>> ==> n0: kube-system calico-settings-tkp6r > 0/1 Completed 0 46m > >>> ==> n0: kube-system coredns-84bdd76f4d-hhbcs > 1/1 Running 0 44m > >>> ==> n0: kube-system coredns-84bdd76f4d-k8tcc > 1/1 Running 0 44m > >>> ==> n0: kube-system coredns-84bdd76f4d-qp2xd > 1/1 Running 0 44m > >>> ==> n0: kube-system haproxy-n0 > 1/1 Running 0 50m > >>> ==> n0: kube-system ingress-error-pages-7c65f766d-dn2tw > 1/1 Running 0 41m > >>> ==> n0: kube-system ingress-gtvp8 > 2/2 Running 0 41m > >>> ==> n0: kube-system kubernetes-apiserver-anchor-99jhn > 1/1 Running 0 42m > >>> ==> n0: kube-system kubernetes-apiserver-n0 > 1/1 Running 0 41m > >>> ==> n0: kube-system kubernetes-controller-manager-anchor-vqddp > 1/1 Running 0 42m > >>> ==> n0: kube-system kubernetes-controller-manager-n0 > 1/1 Running 0 41m > >>> ==> n0: kube-system kubernetes-etcd-anchor-9jcpl > 1/1 Running 0 44m > >>> ==> n0: kube-system kubernetes-etcd-n0 > 1/1 Running 0 42m > >>> ==> n0: kube-system kubernetes-proxy-2m9t2 > 1/1 Running 0 47m > >>> ==> n0: kube-system kubernetes-scheduler-anchor-nl9fb > 1/1 Running 0 42m > >>> ==> n0: kube-system kubernetes-scheduler-n0 > 1/1 Running 0 41m > >>> ==> n0: kube-system nfs-provisioner-7799d64d59-vtkbd > 1/1 Running 0 40m > >>> ==> n0: kube-system tiller-deploy-7d88c6f956-qwfzb > 1/1 Running 0 27m > >>> ==> n0: ucp > airship-ucp-keystone-memcached-memcached-74d79d8896-vfl69 1/1 > Running 0 34m > >>> ==> n0: ucp airship-ucp-rabbitmq-rabbitmq-0 > 1/1 Running 0 39m > >>> ==> n0: ucp armada-api-d5f757d5-6wl98 > 1/1 Running 0 15m > >>> ==> n0: ucp armada-ks-endpoints-vl9rs > 0/3 Completed 0 15m > >>> ==> n0: ucp armada-ks-service-vpcjd > 0/1 Completed 0 15m > >>> ==> n0: ucp armada-ks-user-rv4gs > 0/1 Completed 0 15m > >>> ==> n0: ucp barbican-api-5d7b88d8ff-8dd6w > 1/1 Running 0 13m > >>> ==> n0: ucp barbican-db-init-gqvt4 > 0/1 Completed 0 13m > >>> ==> n0: ucp barbican-db-sync-tqtgq > 0/1 Completed 0 13m > >>> ==> n0: ucp barbican-ks-endpoints-rwtql > 0/3 > >>> ==> n0: Completed 0 13m > >>> ==> n0: ucp barbican-ks-service-l2h6h > 0/1 Completed 0 13m > >>> ==> n0: ucp barbican-ks-user-wwvc7 > 0/1 Completed 0 13m > >>> ==> n0: ucp barbican-rabbit-init-6spq4 > 0/1 Completed 0 13m > >>> ==> n0: ucp deckhand-api-78b9644f96-5686f > 0/1 Running 0 11m > >>> ==> n0: ucp deckhand-db-init-zs499 > 0/1 CrashLoopBackOff 7 11m <=== > >>> ==> n0: ucp deckhand-db-sync-ct7wl > 0/1 Init:0/1 0 11m > >>> ==> n0: ucp deckhand-ks-endpoints-x4hd9 > 0/3 Completed 0 11m > >>> ==> n0: 
ucp deckhand-ks-service-ms6n5 > 0/1 Completed 0 11m > >>> ==> n0: ucp deckhand-ks-user-7fnvt > 0/1 Completed 0 11m > >>> ==> n0: ucp divingbell-apparmor-default-hth8z > 1/1 Running 0 27m > >>> ==> n0: ucp divingbell-apt-default-r965m > 1/1 Running 0 27m > >>> ==> n0: ucp divingbell-ethtool-default-ldcmc > 1/1 Running 0 27m > >>> ==> n0: ucp divingbell-exec-default-f7h7x > 1/1 Running 0 27m > >>> ==> n0: ucp divingbell-limits-default-sp9mj > 1/1 Running 0 27m > >>> ==> n0: ucp divingbell-mounts-default-8f5a00a2-frbl2 > 1/1 Running 0 27m > >>> ==> n0: ucp divingbell-perm-default-d7wxp > 1/1 Running 0 27m > >>> ==> n0: ucp divingbell-sysctl-default-c8pnp > 1/1 Running 0 27m > >>> ==> n0: ucp divingbell-uamlite-default-rfct6 > 1/1 Running 0 27m > >>> ==> n0: ucp ingress-86576d6599-mdgj4 > 1/1 Running 0 39m > >>> ==> n0: ucp ingress-error-pages-5c97bb46bb-7lg5l > 1/1 Running 0 39m > >>> ==> n0: ucp keystone-api-678fc44bdd-594bb > 1/1 Running 0 34m > >>> ==> n0: ucp keystone-bootstrap-rprr6 > 0/1 Completed 0 34m > >>> ==> n0: ucp keystone-credential-setup-zkjgs > 0/1 Completed 0 34m > >>> ==> n0: ucp keystone-db-init-xkgxm > 0/1 Completed 0 34m > >>> ==> n0: ucp keystone-db-sync-lm6xs > 0/1 Completed 0 34m > >>> ==> n0: ucp keystone-domain-manage-9pzjq > 0/1 Completed 0 34m > >>> ==> n0: ucp keystone-fernet-setup-q7t8p > 0/1 Completed 0 34m > >>> ==> n0: ucp keystone-rabbit-init-qpvgt > 0/1 Completed 0 34m > >>> ==> n0: ucp maas-bootstrap-admin-user-8npgw > 0/1 Completed 0 26m > >>> ==> n0: ucp maas-db-init-9z86n > 0/1 Completed 0 26m > >>> ==> n0: ucp maas-db-sync-r7rkg > 0/1 Completed 0 26m > >>> ==> n0: ucp maas-export-api-key-n2gz4 > 0/1 Completed 1 26m > >>> ==> n0: ucp maas-import-resources-prlml > 0/1 Completed 0 26m > >>> ==> n0: ucp maas-ingress-756f6f9d6-h65nj > 2/2 Running 0 26m > >>> ==> n0: ucp maas-ingress-errors-8686d56d98-swfg9 > >>> ==> n0: 1/1 Running 0 > 26m > >>> ==> n0: ucp maas-rack-0 > 1/1 Running 0 26m > >>> ==> n0: ucp maas-region-0 > 1/1 Running 0 26m > >>> ==> n0: ucp mariadb-ingress-55794d94c8-dsw5w > 1/1 Running 0 39m > >>> ==> n0: ucp mariadb-ingress-55794d94c8-jczmh > 1/1 Running 0 39m > >>> ==> n0: ucp mariadb-ingress-error-pages-85f96fbd-jrqsg > 1/1 Running 0 39m > >>> ==> n0: ucp mariadb-server-0 > 1/1 Running 0 39m > >>> ==> n0: ucp postgresql-0 > 1/1 Running 1 39m > >>> > >>> > >>> On Tue, May 7, 2019 at 6:25 PM Roman Gorshunov > wrote: > >>>> > >>>> Hello Calvin, > >>>> > >>>> Try to get some kubectl logs and describe deckhand-db-init-r9jvg pod. > >>>> kubectl describe pod deckhand-db-init-r9jvg -u ucp > >>>> May be it would help to understand what is happening there. > >>>> > >>>> Thank you for trying Airship. > >>>> > >>>> Best regards, > >>>> -- Roman Gorshunov > >>>> > >>>> On Tue, May 7, 2019 at 7:51 AM calvin whole > wrote: > >>>> > > >>>> > Hi, > >>>> > > >>>> > We are trying to deploy AIIB. > >>>> > > >>>> > I have a physical server with Ubuntu 16.04.5 OS, installed > virtualbox and vagrant. > >>>> > The process is straightforward by following > https://opendev.org/airship/in-a-bottle/ > >>>> > We created ~/deploy directory, downloaded Vagrantfile, and do > "vagrant up". > >>>> > > >>>> > However it stuck in the error below: > >>>> > deckhand-db-init-r9jvg 0/1 > CrashLoopBackOff 16 > >>>> > > >>>> > Could anyone help to resolve this? Many thanks in advance. 
On Tue, May 7, 2019 at 7:51 AM calvin whole wrote:

Hi,

We are trying to deploy AIAB (Airship-in-a-Bottle).

I have a physical server running Ubuntu 16.04.5, with VirtualBox and Vagrant installed. The process is straightforward, following https://opendev.org/airship/in-a-bottle/ : we created a ~/deploy directory, downloaded the Vagrantfile, and ran "vagrant up".

However, it gets stuck on the error below:
deckhand-db-init-r9jvg    0/1    CrashLoopBackOff    16

Could anyone help to resolve this? Many thanks in advance.

Sincerely,
Calvin

==> n0: NAMESPACE     NAME                                                         READY   STATUS             RESTARTS   AGE
==> n0: kube-system   auxiliary-etcd-n0                                            3/3     Running            0          1h
==> n0: kube-system   bootstrap-armada-n0                                          4/4     Running            0          1h
==> n0: kube-system   calico-etcd-anchor-5tqhk                                     1/1     Running            0          1h
==> n0: kube-system   calico-etcd-n0                                               1/1     Running            0          1h
==> n0: kube-system   calico-kube-controllers-56c54d8cf8-5ssl6                     1/1     Running            0          1h
==> n0: kube-system   calico-node-pbsxh                                            1/1     Running            0          1h
==> n0: kube-system   calico-settings-lzpk9                                        0/1     Completed          0          1h
==> n0: kube-system   coredns-84bdd76f4d-6cwnl                                     1/1     Running            0          1h
==> n0: kube-system   coredns-84bdd76f4d-d4p8c                                     1/1     Running            0          1h
==> n0: kube-system   coredns-84bdd76f4d-xrknz                                     1/1     Running            0          1h
==> n0: kube-system   haproxy-n0                                                   1/1     Running            0          1h
==> n0: kube-system   ingress-9pkmx                                                2/2     Running            0          1h
==> n0: kube-system   ingress-error-pages-7c65f766d-2pqfx                          1/1     Running            0          1h
==> n0: kube-system   kubernetes-apiserver-anchor-hszbf                            1/1     Running            0          1h
==> n0: kube-system   kubernetes-apiserver-n0                                      1/1     Running            0          1h
==> n0: kube-system   kubernetes-controller-manager-anchor-h49vz                   1/1     Running            0          1h
==> n0: kube-system   kubernetes-controller-manager-n0                             1/1     Running            0          1h
==> n0: kube-system   kubernetes-etcd-anchor-nnjbb                                 1/1     Running            0          1h
==> n0: kube-system   kubernetes-etcd-n0                                           1/1     Running            0          1h
==> n0: kube-system   kubernetes-proxy-vgzjp                                       1/1     Running            0          1h
==> n0: kube-system   kubernetes-scheduler-anchor-bq2gk                            1/1     Running            0          1h
==> n0: kube-system   kubernetes-scheduler-n0                                      1/1     Running            0          1h
==> n0: kube-system   nfs-provisioner-7799d64d59-jx7hq                             1/1     Running            0          1h
==> n0: kube-system   tiller-deploy-7d88c6f956-d9kzg                               1/1     Running            0          1h
==> n0: ucp           airship-ucp-keystone-memcached-memcached-74d79d8896-q9wqx    1/1     Running            0          1h
==> n0: ucp           airship-ucp-rabbitmq-rabbitmq-0                              1/1     Running            0          1h
==> n0: ucp           armada-api-d5f757d5-d9l9h                                    1/1     Running            0          1h
==> n0: ucp           armada-ks-endpoints-qwbtg                                    0/3     Completed          0          1h
==> n0: ucp           armada-ks-service-lg8kq                                      0/1     Completed          0          1h
==> n0: ucp           armada-ks-user-g2j6v                                         0/1     Completed          0          1h
==> n0: ucp           barbican-api-84665dd99d-qv5fz                                1/1     Running            0          1h
==> n0: ucp           barbican-db-init-ndx58                                       0/1     Completed          0          1h
==> n0: ucp           barbican-db-sync-sh7c9                                       0/1     Completed          0          1h
==> n0: ucp           barbican-ks-endpoints-bv7xv                                  0/3     Completed          0          1h
==> n0: ucp           barbican-ks-service-46hjk                                    0/1     Completed          0          1h
==> n0: ucp           barbican-ks-user-6df74                                       0/1     Completed          0          1h
==> n0: ucp           barbican-rabbit-init-gnvfl                                   0/1     Completed          0          1h
==> n0: ucp           deckhand-api-6cd9c4479d-wc5cw                                0/1     Running            0          1h
==> n0: ucp           deckhand-db-init-r9jvg                                       0/1     CrashLoopBackOff   17         1h   <=====
==> n0: ucp           deckhand-db-sync-llstv                                       0/1     Init:0/1           0          1h
==> n0: ucp           deckhand-ks-endpoints-4gqfj                                  0/3     Completed          0          1h
==> n0: ucp           deckhand-ks-service-c6gbq                                    0/1     Completed          0          1h
==> n0: ucp           deckhand-ks-user-5skng                                       0/1     Completed          0          1h
==> n0: ucp           divingbell-apparmor-default-lkcl6                            1/1     Running            0          1h
==> n0: ucp           divingbell-apt-default-7jgtv                                 1/1     Running            0          1h
==> n0: ucp           divingbell-ethtool-default-tm2w4                             1/1     Running            0          1h
==> n0: ucp           divingbell-exec-default-l45m8                                1/1     Running            0          1h
==> n0: ucp           divingbell-limits-default-q84pr                              1/1     Running            0          1h
==> n0: ucp           divingbell-mounts-default-29420945-nrdsz                     1/1     Running            0          1h
==> n0: ucp           divingbell-perm-default-wdgld                                1/1     Running            0          1h
==> n0: ucp           divingbell-sysctl-default-t7f2m                              1/1     Running            0          1h
==> n0: ucp           divingbell-uamlite-default-fc4jx                             1/1     Running            0          1h
==> n0: ucp           ingress-86576d6599-q8ng4                                     1/1     Running            0          1h
==> n0: ucp           ingress-error-pages-5c97bb46bb-pjz9m                         1/1     Running            0          1h
==> n0: ucp           keystone-api-678fc44bdd-ncxc2                                1/1     Running            0          1h
==> n0: ucp           keystone-bootstrap-28l4g                                     0/1     Completed          0          1h
==> n0: ucp           keystone-credential-setup-rq5d4                              0/1     Completed          0          1h
==> n0: ucp           keystone-db-init-z8x4w                                       0/1     Completed          0          1h
==> n0: ucp           keystone-db-sync-9hvb5                                       0/1     Completed          0          1h
==> n0: ucp           keystone-domain-manage-tzcnf                                 0/1     Completed          0          1h
==> n0: ucp           keystone-fernet-setup-bzdpb                                  0/1     Completed          0          1h
==> n0: ucp           keystone-rabbit-init-cxpc6                                   0/1     Completed          0          1h
==> n0: ucp           maas-bootstrap-admin-user-g99rl                              0/1     Completed          0          1h
==> n0: ucp           maas-db-init-h4llm                                           0/1     Completed          0          1h
==> n0: ucp           maas-db-sync-6tsqj                                           0/1     Completed          0          1h
==> n0: ucp           maas-export-api-key-c8rdb                                    0/1     Completed          0          1h
==> n0: ucp           maas-import-resources-hhq7f                                  0/1     Completed          1          1h
==> n0: ucp           maas-ingress-756f6f9d6-dpcp9                                 2/2     Running            0          1h
==> n0: ucp           maas-ingress-errors-8686d56d98-jr6xx                         1/1     Running            0          1h
==> n0: ucp           maas-rack-0                                                  1/1     Running            0          1h
==> n0: ucp           maas-region-0                                                1/1     Running            0          1h
==> n0: ucp           mariadb-ingress-55794d94c8-mhjjf                             1/1     Running            0          1h
==> n0: ucp           mariadb-ingress-55794d94c8-vglbv                             1/1     Running            0          1h
==> n0: ucp           mariadb-ingress-error-pages-85f96fbd-28cdv                   1/1     Running            0          1h
==> n0: ucp           mariadb-server-0                                             1/1     Running            0          1h
==> n0: ucp           postgresql-0                                                 1/1     Running            1          1h
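For reference, the quickstart steps described above boil down to roughly the following sketch. The raw-file URL is an assumption based on the Vagrantfile sitting at the root of the in-a-bottle repository, so verify it against the repo before use:

mkdir -p ~/deploy && cd ~/deploy
# fetch the Vagrantfile from the in-a-bottle repo (URL assumed; adjust if the file has moved)
curl -O https://opendev.org/airship/in-a-bottle/raw/branch/master/Vagrantfile
vagrant up

vagrant up needs the VirtualBox provider installed, and the AIAB VM is resource-hungry; a host that is short on disk can surface exactly the kind of database I/O failure shown elsewhere in this thread.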
_______________________________________________
Airship-discuss mailing list
Airship-discuss at lists.airshipit.org
http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss

From se136c at att.com Mon May 13 20:47:50 2019
From: se136c at att.com (EAGAN, SEAN)
Date: Mon, 13 May 2019 20:47:50 -0000
Subject: [Airship-discuss] Spyglass/Pegleg core reviewer nominations
In-Reply-To:
References: <7C64A75C21BB8D43BD75BB18635E4D897092FA16@MOSTLS1MSGUSRFF.ITServices.sbc.com>
Message-ID: <5B9C5644A0B2414E90A7F40249CF6A955EE7E678@MOSTLS1MSGUSRFA.ITServices.sbc.com>

+1

________________________________
From: Drew Walters [drewwalters96 at gmail.com]
Sent: Monday, May 13, 2019 2:42 PM
To: MCEUEN, MATT
Cc: airship-discuss at lists.airshipit.org
Subject: Re: [Airship-discuss] Spyglass/Pegleg core reviewer nominations

In line with our discussion at the PTG, I would like to nominate two project-specific core reviewers:

Alex Hughes (alexanderhughes): Spyglass and Pegleg projects
Ian Pittwood (ian-pittwood): Spyglass project

A simple +1/-1 will be interpreted as a vote for both of the folks/repos above; if you have more specific votes, please specify them in your response.