In parts 1 through 3, I prepared my hosts and network environment for a smooth deployment. Please see those previous articles if you are using this as a reference.
Now comes the actual deployment.
Since this is my first OpenStack deployment (other than an AiO RDO deploy in a VM), I used the openstack-ansible “test” configuration so I could have a fairly complete, functional setup.
To customize the install, you have to copy the relevant example files to /etc/openstack_deploy/.
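A minimal sketch of that copy step, assuming the repository was cloned to /opt/openstack-ansible (the default location used by the OSA bootstrap scripts — adjust if yours differs):

```shell
# Copy the example deployment configuration into place.
# /opt/openstack-ansible is an assumption: the default checkout path.
cp -a /opt/openstack-ansible/etc/openstack_deploy /etc/openstack_deploy
```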
This file, user_secrets.yml, is essentially all the auth keys for all of the services. OSA includes a key generator:
```shell
~# /etc/openstack-ansible/scripts/pw-token-gen.py \
    --file /etc/openstack_deploy/user_secrets.yml
```
The admin password for the web UI is among the values generated into this file.
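Once the generator has run, you can grep the credential back out of the secrets file. A small simulation with a stand-in file and fabricated values — and note that the variable name keystone_auth_admin_password is my assumption based on the OSA releases I've seen, so verify it against your own user_secrets.yml:

```shell
# Stand-in secrets file with made-up values; the real file lives at
# /etc/openstack_deploy/user_secrets.yml.
cat > /tmp/user_secrets_demo.yml <<'EOF'
glance_service_password: 1111aaaa
keystone_auth_admin_password: 2222bbbb
EOF

# Pull out the web UI admin credential (variable name is an assumption).
grep keystone_auth_admin_password /tmp/user_secrets_demo.yml
```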
You can leave the defaults in user_variables.yml if desired. I only set install_method: source as a precaution, since it's supposed to be the default anyway; the public URI protocol was taken from the test config:

```yaml
---
openstack_service_publicuri_proto: http
install_method: source
debug: false
```
This is essentially an unmodified test config in openstack_user_config.yml. All I changed were the hostnames (infra1 to infra) and the network addresses (172.29.236.0/22 to 10.3.2.0/23, for example):
```yaml
---
cidr_networks:
  container: 10.3.2.0/23
  tunnel: 10.3.4.0/24
  storage: 10.3.5.0/24

used_ips:
  - "10.3.3.1,10.3.3.254"
  - "10.3.4.1,10.3.4.50"
  - "10.3.5.1,10.3.5.50"

global_overrides:
  # The internal and external VIP should be different IPs, however they
  # do not need to be on separate networks.
  external_lb_vip_address: 10.3.3.10
  internal_lb_vip_address: 10.3.3.11
  tunnel_bridge: "br-vxlan"
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true
    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "eth12"
        type: "flat"
        net_name: "flat"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth11"
        type: "vlan"
        range: "101:200,301:400"
        net_name: "vlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-storage"
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "storage"
        type: "raw"
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute

###
### Infrastructure
###

# galera, memcache, rabbitmq, utility
shared-infra_hosts:
  infra:
    ip: 10.3.3.11

# repository (apt cache, python packages, etc)
repo-infra_hosts:
  infra:
    ip: 10.3.3.11

# load balancer
haproxy_hosts:
  infra:
    ip: 10.3.3.11

###
### OpenStack
###

# keystone
identity_hosts:
  infra:
    ip: 10.3.3.11

# cinder api services
storage-infra_hosts:
  infra:
    ip: 10.3.3.11

# glance
image_hosts:
  infra:
    ip: 10.3.3.11

# nova api, conductor, etc services
compute-infra_hosts:
  infra:
    ip: 10.3.3.11

# heat
orchestration_hosts:
  infra:
    ip: 10.3.3.11

# horizon
dashboard_hosts:
  infra:
    ip: 10.3.3.11

# neutron server, agents (L3, etc)
network_hosts:
  infra:
    ip: 10.3.3.11

# nova hypervisors
compute_hosts:
  compute:
    ip: 10.3.3.12

# cinder storage host (LVM-backed)
storage_hosts:
  storage:
    ip: 10.3.3.13
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        lvm:
          volume_group: cinder-volumes
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_backend_name: LVM_iSCSI
          iscsi_ip_address: "10.3.5.13"
```
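After editing a file this size, a quick check that it still parses as YAML can save a failed playbook run. A sketch, assuming PyYAML is available on the deploy host (it's pulled in as an Ansible dependency):

```shell
# Fails loudly if the file has an indentation or syntax error.
# Assumes PyYAML is installed (it comes along with Ansible on a deploy host).
python -c "import yaml; yaml.safe_load(open('/etc/openstack_deploy/openstack_user_config.yml')); print('OK')"
```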
I followed the step-by-step instructions provided in the documentation, and everything went fairly smoothly.
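For reference, the run itself boils down to the standard OSA playbook sequence (paths assume the default /opt/openstack-ansible checkout; check the install guide for your release):

```shell
cd /opt/openstack-ansible/playbooks
# Prepare the hosts: containers, networking, common packages
openstack-ansible setup-hosts.yml
# Shared services: galera, rabbitmq, memcached, repo server, haproxy
openstack-ansible setup-infrastructure.yml
# The OpenStack services themselves: keystone, glance, nova, neutron, etc.
openstack-ansible setup-openstack.yml
```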
However, I did encounter two issues:
First, one of the nodes didn't end up with a required package installed before it was needed. As such, I've updated the node preparation script in part one to include it.
Second, there is an issue in the version of OSA that the scripts currently deploy. The fix has already landed upstream in openstack-ansible-galera_client, but you have to pull it down manually:
```shell
[root@infra ~]# cd /etc/ansible/roles
[root@infra roles]# rm -rf ./galera_client
[root@infra roles]# git clone http://github.com/openstack/openstack-ansible-galera_client galera_client
```
This fixes an issue where yum fails to clean up a temporary version of mariadb due to case-insensitive package name matching; the updated role uses rpm instead.
Downloading the updated role and rerunning the failed install script works.
The system is up and running. Two caveats remain: OpenStack Ansible doesn't provide firewall configuration, so the firewall is temporarily disabled, and I still need to tighten the allowed VLANs on each switch port.
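Since these are RHEL-family hosts (the yum and rpm references above), "temporarily disabled" amounts to something like the following — a hypothetical sketch, assuming firewalld is the active firewall on each node:

```shell
# Stop firewalld now and keep it from starting on boot.
# Temporary measure only, until proper rules are written for OSA's ports.
systemctl disable --now firewalld
```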
But in the meantime, the platform is up and ready for me to build on.