In this tutorial we will install the OpenStack Kilo release from the RDO repository on three nodes (Controller, Network, Compute) running the CentOS 7 operating system, using the packstack automated installer. The installation uses a VLAN-based internal software network infrastructure for communication between instances.
Environment used:
public network (Floating IP network): 192.168.2.0/24
internal network (on each node): no IP space, physical connection only (eth1)
controller node public IP: 192.168.2.12 (eth0)
network node public IP: 192.168.2.13 (eth0)
compute node public IP: 192.168.2.14 (eth0)
OS version (each node): CentOS Linux release 7.2.1511 (Core)
Controller node interfaces configuration before OpenStack installation:
[root@controller ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:00:cb:3f brd ff:ff:ff:ff:ff:ff
inet 192.168.2.12/24 brd 192.168.2.255 scope global dynamic eth0
valid_lft 78230sec preferred_lft 78230sec
inet6 fe80::5054:ff:fe00:cb3f/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:14:1f:a8 brd ff:ff:ff:ff:ff:ff
Network node interfaces configuration before OpenStack installation:
[root@network ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:31:b1:ca brd ff:ff:ff:ff:ff:ff
inet 192.168.2.13/24 brd 192.168.2.255 scope global dynamic eth0
valid_lft 79917sec preferred_lft 79917sec
inet6 fe80::5054:ff:fe31:b1ca/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:00:0c:98 brd ff:ff:ff:ff:ff:ff
Compute node interfaces configuration before OpenStack installation:
[root@compute ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:53:9d:7b brd ff:ff:ff:ff:ff:ff
inet 192.168.2.14/24 brd 192.168.2.255 scope global dynamic eth0
valid_lft 84744sec preferred_lft 84744sec
inet6 fe80::5054:ff:fe53:9d7b/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:e3:b2:d4 brd ff:ff:ff:ff:ff:ff
Steps:
1. Update system on all nodes (Controller, Network, Compute):
[root@controller ~]# yum update
[root@network ~]# yum update
[root@compute ~]# yum update
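Note: if the update pulls in a new kernel, it is worth rebooting each node before proceeding so that the running kernel matches the newly installed one. A quick way to compare them (a suggested check, not part of the original procedure):
[root@controller ~]# uname -r
[root@controller ~]# rpm -q --last kernel | head -1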
2. Install RDO repository (Controller node):
[root@controller ~]# yum install https://repos.fedorapeople.org/repos/openstack/openstack-kilo/rdo-release-kilo-1.noarch.rpm
Verify installed RDO package:
[root@controller ~]# rpm -qa | grep rdo-release
rdo-release-kilo-1.noarch
3. Install packstack automated installer (Controller node):
[root@controller ~]# yum install openstack-packstack
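Optionally, you can confirm that the installer landed on the system before moving on:
[root@controller ~]# rpm -q openstack-packstack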
4. Disable and stop NetworkManager on all nodes (Controller, Network, Compute)
Neutron (as of the OpenStack Kilo release) does not support NetworkManager, so we have to stop and disable it on all nodes:
[root@controller ~]# systemctl stop NetworkManager
[root@controller ~]# systemctl disable NetworkManager
[root@network ~]# systemctl stop NetworkManager
[root@network ~]# systemctl disable NetworkManager
[root@compute ~]# systemctl stop NetworkManager
[root@compute ~]# systemctl disable NetworkManager
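With NetworkManager out of the way, the legacy network service has to bring the interfaces up at boot, so it is worth making sure it is enabled and running on every node (a suggested check, not part of the original procedure):
[root@controller ~]# systemctl enable network
[root@controller ~]# systemctl start network
Repeat the same two commands on the Network and Compute nodes.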
5. Generate answer file for packstack automated installation (Controller node):
[root@controller ~]# packstack --gen-answer-file=/root/answers.txt
Back up the answer file (/root/answers.txt) before we start modifying it:
[root@controller ~]# cp /root/answers.txt /root/answers.txt.backup
Now edit the answer file (/root/answers.txt) and modify the parameters below (Controller node):
CONFIG_NTP_SERVERS=0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org
CONFIG_NAGIOS_INSTALL=n
CONFIG_CONTROLLER_HOST=192.168.2.12
CONFIG_COMPUTE_HOSTS=192.168.2.14
CONFIG_NETWORK_HOSTS=192.168.2.13
CONFIG_USE_EPEL=y
CONFIG_RH_OPTIONAL=n
CONFIG_KEYSTONE_ADMIN_PW=password
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan
CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:1000:2000
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
CONFIG_PROVISION_DEMO=n
Here we attach the answers.txt file used during our 3-node OpenStack Kilo installation.
Note: we left the rest of the parameters with their default values, as they are not critical for the installation to succeed, but of course feel free to modify them according to your needs.
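If you prefer to script these changes instead of editing the file by hand, sed can apply them in place; a minimal sketch for a few of the parameters above (extend the same pattern to the rest, then verify the result with grep):
[root@controller ~]# sed -i 's/^CONFIG_NAGIOS_INSTALL=.*/CONFIG_NAGIOS_INSTALL=n/' /root/answers.txt
[root@controller ~]# sed -i 's/^CONFIG_NEUTRON_ML2_TYPE_DRIVERS=.*/CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan/' /root/answers.txt
[root@controller ~]# sed -i 's|^CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=.*|CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1|' /root/answers.txt
[root@controller ~]# grep -E 'NAGIOS_INSTALL|ML2_TYPE_DRIVERS|OVS_BRIDGE_MAPPINGS' /root/answers.txt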
6. Install OpenStack Kilo using packstack (Controller node)
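Optionally, before launching packstack, you can distribute the controller's SSH key to all three nodes so that the installer does not stop to ask for root passwords (a sketch; packstack will otherwise simply prompt for them, as shown in the output below):
[root@controller ~]# ssh-keygen -t rsa
[root@controller ~]# ssh-copy-id root@192.168.2.12
[root@controller ~]# ssh-copy-id root@192.168.2.13
[root@controller ~]# ssh-copy-id root@192.168.2.14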
Launch packstack automated installation (Controller node):
[root@controller ~]# packstack --answer-file=/root/answers.txt
The installation takes about an hour. Packstack will prompt us for the root password of each node (Controller, Network, Compute) so that it can deploy the OpenStack services on all nodes via Puppet:
Welcome to the Packstack setup utility
The installation log file is available at: /var/tmp/packstack/20160320-230116-mT1aV6/openstack-setup.log
Installing:
Clean Up [ DONE ]
Discovering ip protocol version [ DONE ]
root@192.168.2.12's password:
root@192.168.2.13's password:
root@192.168.2.14's password:
Setting up ssh keys [ DONE ]
Preparing servers [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Adding pre install manifest entries [ DONE ]
Installing time synchronization via NTP [ DONE ]
Setting up CACERT [ DONE ]
Adding AMQP manifest entries [ DONE ]
Adding MariaDB manifest entries [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty[ DONE ]
Adding Keystone manifest entries [ DONE ]
Adding Glance Keystone manifest entries [ DONE ]
Adding Glance manifest entries [ DONE ]
Adding Cinder Keystone manifest entries [ DONE ]
Checking if the Cinder server has a cinder-volumes vg[ DONE ]
Adding Cinder manifest entries [ DONE ]
Adding Nova API manifest entries [ DONE ]
Adding Nova Keystone manifest entries [ DONE ]
Adding Nova Cert manifest entries [ DONE ]
Adding Nova Conductor manifest entries [ DONE ]
Creating ssh keys for Nova migration [ DONE ]
Gathering ssh host keys for Nova migration [ DONE ]
Adding Nova Compute manifest entries [ DONE ]
Adding Nova Scheduler manifest entries [ DONE ]
Adding Nova VNC Proxy manifest entries [ DONE ]
Adding OpenStack Network-related Nova manifest entries[ DONE ]
Adding Nova Common manifest entries [ DONE ]
Adding Neutron FWaaS Agent manifest entries [ DONE ]
Adding Neutron LBaaS Agent manifest entries [ DONE ]
Adding Neutron API manifest entries [ DONE ]
Adding Neutron Keystone manifest entries [ DONE ]
Adding Neutron L3 manifest entries [ DONE ]
Adding Neutron L2 Agent manifest entries [ DONE ]
Adding Neutron DHCP Agent manifest entries [ DONE ]
Adding Neutron Metering Agent manifest entries [ DONE ]
Adding Neutron Metadata Agent manifest entries [ DONE ]
Checking if NetworkManager is enabled and running [ DONE ]
Adding OpenStack Client manifest entries [ DONE ]
Adding Horizon manifest entries [ DONE ]
Adding Swift Keystone manifest entries [ DONE ]
Adding Swift builder manifest entries [ DONE ]
Adding Swift proxy manifest entries [ DONE ]
Adding Swift storage manifest entries [ DONE ]
Adding Swift common manifest entries [ DONE ]
Adding MongoDB manifest entries [ DONE ]
Adding Redis manifest entries [ DONE ]
Adding Ceilometer manifest entries [ DONE ]
Adding Ceilometer Keystone manifest entries [ DONE ]
Adding post install manifest entries [ DONE ]
Copying Puppet modules and manifests [ DONE ]
Applying 192.168.2.12_prescript.pp
Applying 192.168.2.13_prescript.pp
Applying 192.168.2.14_prescript.pp
192.168.2.12_prescript.pp: [ DONE ]
192.168.2.14_prescript.pp: [ DONE ]
192.168.2.13_prescript.pp: [ DONE ]
Applying 192.168.2.12_chrony.pp
Applying 192.168.2.13_chrony.pp
Applying 192.168.2.14_chrony.pp
192.168.2.13_chrony.pp: [ DONE ]
192.168.2.12_chrony.pp: [ DONE ]
192.168.2.14_chrony.pp: [ DONE ]
Applying 192.168.2.12_amqp.pp
Applying 192.168.2.12_mariadb.pp
192.168.2.12_amqp.pp: [ DONE ]
192.168.2.12_mariadb.pp: [ DONE ]
Applying 192.168.2.12_keystone.pp
Applying 192.168.2.12_glance.pp
Applying 192.168.2.12_cinder.pp
192.168.2.12_keystone.pp: [ DONE ]
192.168.2.12_glance.pp: [ DONE ]
192.168.2.12_cinder.pp: [ DONE ]
Applying 192.168.2.12_api_nova.pp
192.168.2.12_api_nova.pp: [ DONE ]
Applying 192.168.2.12_nova.pp
Applying 192.168.2.14_nova.pp
192.168.2.12_nova.pp: [ DONE ]
192.168.2.14_nova.pp: [ DONE ]
Applying 192.168.2.12_neutron.pp
Applying 192.168.2.13_neutron.pp
Applying 192.168.2.14_neutron.pp
192.168.2.14_neutron.pp: [ DONE ]
192.168.2.12_neutron.pp: [ DONE ]
192.168.2.13_neutron.pp: [ DONE ]
Applying 192.168.2.12_osclient.pp
Applying 192.168.2.12_horizon.pp
192.168.2.12_osclient.pp: [ DONE ]
192.168.2.12_horizon.pp: [ DONE ]
Applying 192.168.2.12_ring_swift.pp
192.168.2.12_ring_swift.pp: [ DONE ]
Applying 192.168.2.12_swift.pp
192.168.2.12_swift.pp: [ DONE ]
Applying 192.168.2.12_mongodb.pp
Applying 192.168.2.12_redis.pp
192.168.2.12_mongodb.pp: [ DONE ]
192.168.2.12_redis.pp: [ DONE ]
Applying 192.168.2.12_ceilometer.pp
192.168.2.12_ceilometer.pp: [ DONE ]
Applying 192.168.2.12_postscript.pp
Applying 192.168.2.13_postscript.pp
Applying 192.168.2.14_postscript.pp
192.168.2.13_postscript.pp: [ DONE ]
192.168.2.12_postscript.pp: [ DONE ]
192.168.2.14_postscript.pp: [ DONE ]
Applying Puppet manifests [ DONE ]
Finalizing [ DONE ]
**** Installation completed successfully ******
Additional information:
* File /root/keystonerc_admin has been created on OpenStack client host 192.168.2.12. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://192.168.2.12/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* Because of the kernel update the host 192.168.2.12 requires reboot.
* The installation log file is available at: /var/tmp/packstack/20160320-230116-mT1aV6/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20160320-230116-mT1aV6/manifests
Time to test our installation: to log in to Horizon (the OpenStack Dashboard), type the following in your web browser:
http://192.168.2.12/dashboard
You should see the Dashboard login screen; enter the username and password (in our case: admin/password):
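If the login page does not load, a quick command-line check can tell whether httpd and Horizon are answering at all (a suggested check; a 200, or a 302 redirect to the login page, means the dashboard is up):
[root@controller ~]# curl -s -o /dev/null -w "%{http_code}\n" http://192.168.2.12/dashboard/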
Network interfaces configuration on the Controller node right after the OpenStack installation:
[root@controller ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:00:cb:3f brd ff:ff:ff:ff:ff:ff
inet 192.168.2.12/24 brd 192.168.2.255 scope global dynamic eth0
valid_lft 86123sec preferred_lft 86123sec
inet6 fe80::5054:ff:fe00:cb3f/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc noop state DOWN qlen 1000
link/ether 52:54:00:14:1f:a8 brd ff:ff:ff:ff:ff:ff
Network interfaces configuration on the Network node right after the OpenStack installation:
[root@network ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:31:b1:ca brd ff:ff:ff:ff:ff:ff
inet 192.168.2.13/24 brd 192.168.2.255 scope global dynamic eth0
valid_lft 86119sec preferred_lft 86119sec
inet6 fe80::5054:ff:fe31:b1ca/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc noop state DOWN qlen 1000
link/ether 52:54:00:00:0c:98 brd ff:ff:ff:ff:ff:ff
4: ovs-system: mtu 1500 qdisc noop state DOWN
link/ether 6a:a3:0b:de:21:96 brd ff:ff:ff:ff:ff:ff
5: br-ex: mtu 1500 qdisc noop state DOWN
link/ether 16:72:f3:cc:df:47 brd ff:ff:ff:ff:ff:ff
6: br-int: mtu 1500 qdisc noop state DOWN
link/ether 8a:bf:ea:78:6d:46 brd ff:ff:ff:ff:ff:ff
7: br-eth1: mtu 1500 qdisc noop state DOWN
link/ether a2:19:a2:50:ed:46 brd ff:ff:ff:ff:ff:ff
Network interfaces configuration on the Compute node right after the OpenStack installation:
[root@compute ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:53:9d:7b brd ff:ff:ff:ff:ff:ff
inet 192.168.2.14/24 brd 192.168.2.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe53:9d7b/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
link/ether 52:54:00:e3:b2:d4 brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:fee3:b2d4/64 scope link
valid_lft forever preferred_lft forever
4: ovs-system: mtu 1500 qdisc noop state DOWN
link/ether ea:2e:e8:dd:5b:a7 brd ff:ff:ff:ff:ff:ff
5: br-eth1: mtu 1500 qdisc noop state DOWN
link/ether b2:c8:a3:20:45:4d brd ff:ff:ff:ff:ff:ff
6: br-int: mtu 1500 qdisc noop state DOWN
link/ether 8a:ec:90:42:80:43 brd ff:ff:ff:ff:ff:ff
Services on Controller node:
[root@controller ~]# systemctl list-unit-files | grep openstack
openstack-ceilometer-alarm-evaluator.service enabled
openstack-ceilometer-alarm-notifier.service enabled
openstack-ceilometer-api.service enabled
openstack-ceilometer-central.service enabled
openstack-ceilometer-collector.service enabled
openstack-ceilometer-notification.service enabled
openstack-ceilometer-polling.service disabled
openstack-cinder-api.service enabled
openstack-cinder-backup.service enabled
openstack-cinder-scheduler.service enabled
openstack-cinder-volume.service enabled
openstack-glance-api.service enabled
openstack-glance-registry.service enabled
openstack-glance-scrubber.service disabled
openstack-keystone.service disabled
openstack-losetup.service enabled
openstack-nova-api.service enabled
openstack-nova-cert.service enabled
openstack-nova-conductor.service enabled
openstack-nova-console.service disabled
openstack-nova-consoleauth.service enabled
openstack-nova-metadata-api.service disabled
openstack-nova-novncproxy.service enabled
openstack-nova-scheduler.service enabled
openstack-nova-xvpvncproxy.service disabled
openstack-swift-account-auditor.service enabled
openstack-swift-account-auditor@.service disabled
openstack-swift-account-reaper.service enabled
openstack-swift-account-reaper@.service disabled
openstack-swift-account-replicator.service enabled
openstack-swift-account-replicator@.service disabled
openstack-swift-account.service enabled
openstack-swift-account@.service disabled
openstack-swift-container-auditor.service enabled
openstack-swift-container-auditor@.service disabled
openstack-swift-container-reconciler.service disabled
openstack-swift-container-replicator.service enabled
openstack-swift-container-replicator@.service disabled
openstack-swift-container-updater.service enabled
openstack-swift-container-updater@.service disabled
openstack-swift-container.service enabled
openstack-swift-container@.service disabled
openstack-swift-object-auditor.service enabled
openstack-swift-object-auditor@.service disabled
openstack-swift-object-expirer.service enabled
openstack-swift-object-replicator.service enabled
openstack-swift-object-replicator@.service disabled
openstack-swift-object-updater.service enabled
openstack-swift-object-updater@.service disabled
openstack-swift-object.service enabled
openstack-swift-object@.service disabled
openstack-swift-proxy.service enabled
[root@controller ~]# systemctl list-unit-files | grep neutron
neutron-dhcp-agent.service disabled
neutron-l3-agent.service disabled
neutron-metadata-agent.service disabled
neutron-netns-cleanup.service disabled
neutron-ovs-cleanup.service disabled
neutron-server.service enabled
[root@controller ~]# systemctl list-unit-files | grep ovs
neutron-ovs-cleanup.service disabled
Services on Network node:
[root@network ~]# systemctl list-unit-files | grep openstack
[root@network ~]# systemctl list-unit-files | grep neutron
neutron-dhcp-agent.service enabled
neutron-l3-agent.service enabled
neutron-metadata-agent.service enabled
neutron-netns-cleanup.service disabled
neutron-openvswitch-agent.service enabled
neutron-ovs-cleanup.service enabled
neutron-server.service disabled
[root@network ~]# systemctl list-unit-files | grep ovs
neutron-ovs-cleanup.service enabled
Services on Compute node:
[root@compute ~]# systemctl list-unit-files | grep openstack
openstack-ceilometer-compute.service enabled
openstack-ceilometer-polling.service disabled
openstack-nova-compute.service enabled
[root@compute ~]# systemctl list-unit-files | grep neutron
neutron-dhcp-agent.service disabled
neutron-l3-agent.service disabled
neutron-metadata-agent.service disabled
neutron-netns-cleanup.service disabled
neutron-openvswitch-agent.service enabled
neutron-ovs-cleanup.service enabled
neutron-server.service disabled
[root@compute ~]# systemctl list-unit-files | grep ovs
neutron-ovs-cleanup.service enabled
OVS configuration on Controller node:
[root@controller ~]# ovs-vsctl show
-bash: ovs-vsctl: command not found
OVS configuration on Network node:
[root@network ~]# ovs-vsctl show
b2afe2a2-5573-4108-9c2a-347e8d91183e
Bridge "br-eth1"
Port "br-eth1"
Interface "br-eth1"
type: internal
Port "phy-br-eth1"
Interface "phy-br-eth1"
type: patch
options: {peer="int-br-eth1"}
Bridge br-int
fail_mode: secure
Port "int-br-eth1"
Interface "int-br-eth1"
type: patch
options: {peer="phy-br-eth1"}
Port br-int
Interface br-int
type: internal
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
ovs_version: "2.3.1"
OVS configuration on Compute node:
[root@compute ~]# ovs-vsctl show
413d0132-aff0-4d98-ad6f-50b64b4bb13f
Bridge "br-eth1"
Port "phy-br-eth1"
Interface "phy-br-eth1"
type: patch
options: {peer="int-br-eth1"}
Port "br-eth1"
Interface "br-eth1"
type: internal
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port "int-br-eth1"
Interface "int-br-eth1"
type: patch
options: {peer="phy-br-eth1"}
ovs_version: "2.3.1"
As we can see, the majority of the new network interfaces were created on the Network node, which will now be responsible for handling inbound and outbound traffic. All the critical Neutron services are now installed on the Network node.
Next we will bind the OVS (Open vSwitch) bridges to the physical network interfaces on the OpenStack nodes.
Note: we will not modify the Controller node interfaces, as the Controller does not run any network-related OpenStack services, so it is not involved in any traffic to or from OpenStack instances.
7. Configure network interfaces and Open vSwitch (OVS)
Back up the existing files and create new network interface files on the Network node:
[root@network ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /root/ifcfg-eth0.backup
[root@network ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-br-ex
[root@network ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth1 /root/ifcfg-eth1.backup
Modify the ifcfg-eth0 file on the Network node so it looks like below:
DEVICE=eth0
ONBOOT=yes
Modify the ifcfg-br-ex file on the Network node so it looks like below:
DEVICE=br-ex
TYPE=Ethernet
BOOTPROTO=static
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=192.168.2.13
PREFIX=24
GATEWAY=192.168.2.1
PEERDNS=yes
DNS1=8.8.8.8
DNS2=8.8.4.4
Modify the ifcfg-eth1 file on the Network node so it looks like below:
DEVICE=eth1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
Connect the eth0 interface as a port to the br-ex bridge on the Network node:
Note: the command below triggers a network restart, so you will lose network connectivity for a while! The connection should come back up, provided you modified the ifcfg-eth0 and ifcfg-br-ex files correctly.
[root@network ~]# ovs-vsctl add-port br-ex eth0; systemctl restart network
Now let’s connect the eth1 interface as a port to the br-eth1 bridge on the Network node (this will restart the network too):
[root@network ~]# ovs-vsctl add-port br-eth1 eth1; systemctl restart network
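Note: ovs-vsctl stores the port membership in the OVS database, so it survives reboots on its own. As an alternative, the same bindings can be kept entirely in the ifcfg files, assuming the ifup/ifdown OVS hooks shipped with the openvswitch package are present; a sketch for the eth0/br-ex pair (the eth1/br-eth1 pair is analogous, just without the IP settings):
ifcfg-br-ex:
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.2.13
PREFIX=24
GATEWAY=192.168.2.1
ONBOOT=yes
ifcfg-eth0:
DEVICE=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes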
Verify the new network interface configuration on the Network node after our modifications (the public IP is now assigned to the br-ex interface):
[root@network ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
link/ether 52:54:00:31:b1:ca brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:fe31:b1ca/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
link/ether 52:54:00:00:0c:98 brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:fe00:c98/64 scope link
valid_lft forever preferred_lft forever
4: ovs-system: mtu 1500 qdisc noop state DOWN
link/ether 6a:a3:0b:de:21:96 brd ff:ff:ff:ff:ff:ff
5: br-ex: mtu 1500 qdisc noqueue state UNKNOWN
link/ether 16:72:f3:cc:df:47 brd ff:ff:ff:ff:ff:ff
inet 192.168.2.13/24 brd 192.168.2.255 scope global br-ex
valid_lft forever preferred_lft forever
inet6 fe80::1472:f3ff:fecc:df47/64 scope link
valid_lft forever preferred_lft forever
6: br-int: mtu 1500 qdisc noop state DOWN
link/ether 8a:bf:ea:78:6d:46 brd ff:ff:ff:ff:ff:ff
7: br-eth1: mtu 1500 qdisc noop state DOWN
link/ether a2:19:a2:50:ed:46 brd ff:ff:ff:ff:ff:ff
Verify the OVS configuration on the Network node. Port eth0 should now be assigned to br-ex and port eth1 to br-eth1:
[root@network ~]# ovs-vsctl show
dbc8c4e4-d717-482c-83da-3f3aafe50ed5
Bridge "br-eth1"
Port "eth1"
Interface "eth1"
Port "phy-br-eth1"
Interface "phy-br-eth1"
type: patch
options: {peer="int-br-eth1"}
Port "br-eth1"
Interface "br-eth1"
type: internal
Bridge br-int
fail_mode: secure
Port "int-br-eth1"
Interface "int-br-eth1"
type: patch
options: {peer="phy-br-eth1"}
Port br-int
Interface br-int
type: internal
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port "eth0"
Interface "eth0"
ovs_version: "2.4.0"
Back up the ifcfg-eth1 network interface file on the Compute node:
[root@compute ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth1 /root/ifcfg-eth1.backup
Modify the ifcfg-eth1 file on the Compute node so it looks like below:
DEVICE=eth1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
Connect the eth1 interface as a port to the br-eth1 bridge on the Compute node (this will restart the network):
[root@compute ~]# ovs-vsctl add-port br-eth1 eth1; systemctl restart network
Verify the network interface configuration on the Compute node after our modifications (the eth1 interface should now be UP):
[root@compute ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:53:9d:7b brd ff:ff:ff:ff:ff:ff
inet 192.168.2.14/24 brd 192.168.2.255 scope global dynamic eth0
valid_lft 85399sec preferred_lft 85399sec
inet6 fe80::5054:ff:fe53:9d7b/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
link/ether 52:54:00:e3:b2:d4 brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:fee3:b2d4/64 scope link
valid_lft forever preferred_lft forever
4: ovs-system: mtu 1500 qdisc noop state DOWN
link/ether 1e:45:da:ed:53:db brd ff:ff:ff:ff:ff:ff
5: br-eth1: mtu 1500 qdisc noop state DOWN
link/ether b2:c8:a3:20:45:4d brd ff:ff:ff:ff:ff:ff
6: br-int: mtu 1500 qdisc noop state DOWN
link/ether 8a:ec:90:42:80:43 brd ff:ff:ff:ff:ff:ff
Verify the OVS configuration on the Compute node. Port eth1 should now be assigned to br-eth1:
[root@compute ~]# ovs-vsctl show
413d0132-aff0-4d98-ad6f-50b64b4bb13f
Bridge "br-eth1"
Port "phy-br-eth1"
Interface "phy-br-eth1"
type: patch
options: {peer="int-br-eth1"}
Port "eth1"
Interface "eth1"
Port "br-eth1"
Interface "br-eth1"
type: internal
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port "int-br-eth1"
Interface "int-br-eth1"
type: patch
options: {peer="phy-br-eth1"}
ovs_version: "2.3.1"
On the Compute node, set the VNC proxy client IP address in the /etc/nova/nova.conf file:
vncserver_proxyclient_address=192.168.2.14
Note: the above parameter allows us to connect to an instance's console via VNC from the Dashboard.
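If the crudini utility is available on the Compute node (yum install crudini will pull it in if not), the same change can be made from the command line instead of editing the file by hand; a small sketch, assuming the option sits in the [DEFAULT] section, where packstack places it on Kilo:
[root@compute ~]# crudini --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.2.14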
Restart the openstack-nova-compute service on the Compute node:
[root@compute ~]# systemctl restart openstack-nova-compute
8. Verify OpenStack services
After a packstack-based OpenStack installation, the file /root/keystonerc_admin is created on the Controller node. This file contains the admin credentials and other authentication parameters needed to operate and maintain our cloud. It looks like this:
unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://192.168.2.12:5000/v2.0
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
Let’s source this file to import the OpenStack admin credentials into shell environment variables, so that we are not prompted for a password each time we invoke an OpenStack command:
[root@controller ~]# source /root/keystonerc_admin
[root@controller ~(keystone_admin)]#
Note: after sourcing the file, our prompt should include the keystone_admin phrase.
Verify OpenStack status:
[root@controller ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api: active
openstack-nova-cert: active
openstack-nova-compute: inactive (disabled on boot)
openstack-nova-network: inactive (disabled on boot)
openstack-nova-scheduler: active
openstack-nova-conductor: active
== Glance services ==
openstack-glance-api: active
openstack-glance-registry: active
== Keystone service ==
openstack-keystone: active
== Horizon service ==
openstack-dashboard: active
== neutron services ==
neutron-server: active
neutron-dhcp-agent: inactive (disabled on boot)
neutron-l3-agent: inactive (disabled on boot)
neutron-metadata-agent: inactive (disabled on boot)
== Swift services ==
openstack-swift-proxy: active
openstack-swift-account: active
openstack-swift-container: active
openstack-swift-object: active
== Cinder services ==
openstack-cinder-api: active
openstack-cinder-scheduler: active
openstack-cinder-volume: active
openstack-cinder-backup: active
== Ceilometer services ==
openstack-ceilometer-api: active
openstack-ceilometer-central: active
openstack-ceilometer-compute: inactive (disabled on boot)
openstack-ceilometer-collector: active
openstack-ceilometer-alarm-notifier: active
openstack-ceilometer-alarm-evaluator: active
openstack-ceilometer-notification: active
== Support services ==
mysqld: inactive (disabled on boot)
dbus: active
target: active
rabbitmq-server: active
memcached: active
== Keystone users ==
+----------------------------------+------------+---------+----------------------+
| id | name | enabled | email |
+----------------------------------+------------+---------+----------------------+
| 741c84ddf2e648d0938a8009417707eb | admin | True | root@localhost |
| 72829abede3048deb8f59757d663bd76 | ceilometer | True | ceilometer@localhost |
| 349ec7422da94c61915e0efda2916c38 | cinder | True | cinder@localhost |
| 874ff05c243a465ab8c00d558f390e56 | glance | True | glance@localhost |
| d818b9abe3174c04a2cfb0955d1f0751 | neutron | True | neutron@localhost |
| 4f8f8913302148168b048053f808730b | nova | True | nova@localhost |
| 752db7525a1f474dad33141323297977 | swift | True | swift@localhost |
+----------------------------------+------------+---------+----------------------+
== Glance images ==
+----+------+-------------+------------------+------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+----+------+-------------+------------------+------+--------+
+----+------+-------------+------------------+------+--------+
== Nova managed services ==
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-consoleauth | controller | internal | enabled | up | 2016-03-24T19:41:24.000000 | - |
| 2 | nova-scheduler | controller | internal | enabled | up | 2016-03-24T19:41:24.000000 | - |
| 3 | nova-conductor | controller | internal | enabled | up | 2016-03-24T19:41:24.000000 | - |
| 4 | nova-cert | controller | internal | enabled | up | 2016-03-24T19:41:24.000000 | - |
| 5 | nova-compute | compute | nova | enabled | up | 2016-03-24T19:41:24.000000 | - |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+----+-------+------+
| ID | Label | Cidr |
+----+-------+------+
+----+-------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
Verify Neutron agent list:
[root@controller ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| 38cfed83-711e-47e7-a519-a8121f8e54a9 | Open vSwitch agent | network | ":-)" | True | neutron-openvswitch-agent |
| 5fe9033c-58c2-4998-b65b-19c4e0435c28 | L3 agent | network | ":-)" | True | neutron-l3-agent |
| 78ff9457-a950-4d61-9879-83cbf01cbd5c | Metadata agent | network | ":-)" | True | neutron-metadata-agent |
| 7cbb9bc6-ef97-4e0e-9620-421bc30d390d | DHCP agent | network | ":-)" | True | neutron-dhcp-agent |
| cf34648f-754e-4d2c-984b-27ee7d726a72 | Open vSwitch agent | compute | ":-)" | True | neutron-openvswitch-agent |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
Verify host list:
[root@controller ~(keystone_admin)]# nova-manage host list
host zone
controller internal
compute nova
Verify service list:
[root@controller ~(keystone_admin)]# nova-manage service list
Binary Host Zone Status State Updated_At
nova-consoleauth controller internal enabled ":-)" 2016-03-24 19:49:24
nova-scheduler controller internal enabled ":-)" 2016-03-24 19:49:24
nova-conductor controller internal enabled ":-)" 2016-03-24 19:49:24
nova-cert controller internal enabled ":-)" 2016-03-24 19:49:24
nova-compute compute nova enabled ":-)" 2016-03-24 19:49:24
…and that’s it, we have successfully installed and configured OpenStack Kilo on 3 nodes.
We can now create a project (tenant) and launch OpenStack instances.
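As a preview of those next steps, below is a rough sketch of creating an external network, a tenant network with a router, and a first test instance from the Kilo command line. All names here (ext_net, private_net, router1, test-vm, the CirrOS image file and the allocation pool) are examples only, and the ext_net definition relies on the default external_network_bridge=br-ex setting that packstack configures for the L3 agent; adjust everything to your environment:
[root@controller ~(keystone_admin)]# neutron net-create ext_net --router:external
[root@controller ~(keystone_admin)]# neutron subnet-create ext_net 192.168.2.0/24 --name ext_subnet --disable-dhcp --gateway 192.168.2.1 --allocation-pool start=192.168.2.100,end=192.168.2.200
[root@controller ~(keystone_admin)]# neutron net-create private_net
[root@controller ~(keystone_admin)]# neutron subnet-create private_net 10.0.0.0/24 --name private_subnet --dns-nameserver 8.8.8.8
[root@controller ~(keystone_admin)]# neutron router-create router1
[root@controller ~(keystone_admin)]# neutron router-gateway-set router1 ext_net
[root@controller ~(keystone_admin)]# neutron router-interface-add router1 private_subnet
[root@controller ~(keystone_admin)]# glance image-create --name cirros --disk-format qcow2 --container-format bare --is-public True --file cirros-0.3.4-x86_64-disk.img
[root@controller ~(keystone_admin)]# nova boot --flavor m1.tiny --image cirros --nic net-id=$(neutron net-show private_net -F id -f value) test-vm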
Hello
This is a great article. I have a question… since eth1 does not need an IP address, what is the function of the private network? Thanks for your help.
Hi
eth1 is needed to provide the physical connection for the internal network inside OpenStack, which connects the instances so that they can communicate with each other inside the cloud.
eth0 is needed to provide the external network with Floating IPs (external IPs) for the instances, so they can communicate with the WAN (outside the cloud), and for communication between the OpenStack nodes (Controller, Compute, Network).
Hi Grzegorz,
Thanks for the amazing writeup!
On the bridged interface on the network node, do we need to set eth0 and br-ex to be in promiscuous mode?
Hi Matthew
We don’t need to set eth0 or br-ex to promiscuous mode and to be honest I never did so during my OpenStack installations.
Hi Grzegorz,
It's a great article.
I have some doubts about the installation. Can I use VXLAN instead of VLAN? If the Controller node crashes, will the Network or Compute node take over?
Can I use the same configuration that is listed in this "step by step" with the Mitaka version? Thanks for your help.
Hi Alexandre
Of course you can use VXLANs, which are configured in the answer file by default; it's even easier to install OpenStack from packstack using VXLANs (fewer modifications in the answer file).
If the Compute node fails, you will lose its functionality.
I haven't seen the Mitaka step-by-step docs, but I guess it's about a manual OpenStack installation service by service; it's possible but takes a lot of time.
Hi Grzegorz,
The step by step that I meant was this one, on this page (OpenStack Kilo).
Can I use this configuration on Mitaka?
Now edit answer file (/root/answers.txt) and modify below parameters (Controller node):
CONFIG_NTP_SERVERS=0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org
CONFIG_NAGIOS_INSTALL=n
CONFIG_CONTROLLER_HOST=192.168.2.12
CONFIG_COMPUTE_HOSTS=192.168.2.14
CONFIG_NETWORK_HOSTS=192.168.2.13
CONFIG_USE_EPEL=y
CONFIG_RH_OPTIONAL=n
CONFIG_KEYSTONE_ADMIN_PW=password
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan
CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:1000:2000
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
CONFIG_PROVISION_DEMO=n
Thanks for your help.
Hi Alexandre
Sorry for misunderstanding.
Yes, you can use this configuration on Mitaka for VLAN based installation.
Hi Grzegors,
I'm here again… hehe
Could you clarify two more doubts? If I add a new compute node, do I need another interface on the network server, for example eth2? Or can I use the same eth1 interface?
In my installation I put the Controller and the Network on the same server, and the Compute on the other server. Are all of the machines that I've been creating stored by default inside the cinder-volumes volume group that the installation creates? Default directory /var/lib/cinder/
Thanks for your help.
Hi Alexandre
You should use the eth1 interface.
The cinder-volumes VG is placed on the Controller, but of course it is accessible to all compute nodes, and volumes created in the cinder-volumes group are accessible to instances on any compute node.
Hi Grzegors,
I made a new OpenStack installation and followed the step by step, but the port "int-br-eth1" was not created on my compute node.
Would you help me?
Thank you
[root@compute ~]# ovs-vsctl show
9f0a95c5-d612-4512-a8ed-95e2465251b1
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
ovs_version: "2.5.0"
[root@network ~]# ovs-vsctl show
63c54839-6b1a-4f8c-b57b-fc4b86423632
Bridge br-int
fail_mode: secure
Port "int-br-eth1"
Interface "int-br-eth1"
type: patch
options: {peer="phy-br-eth1"}
Port br-int
Interface br-int
type: internal
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Bridge "br-eth1"
Port "phy-br-eth1"
Interface "phy-br-eth1"
type: patch
options: {peer="int-br-eth1"}
Port "br-eth1"
Interface "br-eth1"
type: internal
ovs_version: "2.5.0"
Hi Alexandre
Are you sure you installed Kilo?
This looks like a Liberty installation, because there was a bug in packstack for Liberty:
https://bugzilla.redhat.com/show_bug.cgi?id=1315725
Hi Grzegors,
What modifications do I need in my answers file if I want to use VXLAN instead of VLAN?
Thanks!
-jlan421
Hi
Below I paste the part of my answer file with the VXLAN configuration that worked for me:
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=
CONFIG_NEUTRON_ML2_VXLAN_GROUP=
CONFIG_NEUTRON_ML2_VNI_RANGES=10:100
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:4000:4094
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1000:1100
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
Thanks Grzegorz! Have you tried this multi-node configuration with Mitaka? I'm interested in doing a multi-node install with VXLAN, but some of the parameters in the answer file you listed don't exist.
Hi
No, it wasn't Mitaka for sure; it was a couple of months ago that I installed with VXLAN, so it could have been even Icehouse, and new parameters could have been added in the meantime. Anyway, you should see some similarities to catch the idea of a VXLAN-based installation. I am planning to make a tutorial about a Mitaka VXLAN-based installation very soon.
I used Kilo and in my case there is also no br-eth1 created. I can see that in your case too, just after the OpenStack install, when you did "ip a", there is no br-eth1:
“Network interfaces configuraton on Compute node right after OpenStack installation:
[root@compute ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:53:9d:7b brd ff:ff:ff:ff:ff:ff
inet 192.168.2.14/24 brd 192.168.2.255 scope global dynamic eth0
valid_lft 86116sec preferred_lft 86116sec
inet6 fe80::5054:ff:fe53:9d7b/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc noop state DOWN qlen 1000
link/ether 52:54:00:e3:b2:d4 brd ff:ff:ff:ff:ff:ff
4: ovs-system: mtu 1500 qdisc noop state DOWN
link/ether d6:30:cb:b1:1b:92 brd ff:ff:ff:ff:ff:ff
5: br-int: mtu 1500 qdisc noop state DOWN
link/ether 46:1d:49:1e:f5:4b brd ff:ff:ff:ff:ff:ff
”
Could you please let us know what I should do if br-eth1 is not created on the compute node after the install?
Hi Shailendra
Of course, there was a mistake for the Compute node!!!
I must have inserted the wrong code block when writing this article, SORRY for that; I have just corrected it and now it's OK (how it's meant to be).
Anyway, my configuration DOES HAVE br-eth1 created after installation, so something must be wrong with your setup.
br-eth1 is mapped in answers.txt in the following parameter:
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
…and this forces packstack to create the br-eth1 bridge upon installation.
To fix the issue you have two options:
1. Reinstall OpenStack using a release other than Liberty (the easiest and fastest way)
2. Manually create the br-eth1 bridge and attach eth1 to it as a port using ovs-vsctl commands, as sketched below (I never had to recreate it manually myself, so you will have to consult the ovs-vsctl manual yourself)
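For reference, a rough sketch of what option 2 could look like (untested here, as noted above; once the bridge exists and the physnet1:br-eth1 mapping is in place, the neutron-openvswitch-agent should recreate the int-br-eth1/phy-br-eth1 patch ports on its own):
[root@compute ~]# ovs-vsctl add-br br-eth1
[root@compute ~]# ovs-vsctl add-port br-eth1 eth1
[root@compute ~]# systemctl restart neutron-openvswitch-agent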
Thanks a lot
Hi Shailendra
I found your problem on Red Hat Support; it looks like on Mitaka you need to configure more parameters in the answer file:
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1
CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-eth1
details under this link:
https://bugzilla.redhat.com/show_bug.cgi?id=1338012
…although I haven't tested it myself on Mitaka yet.
Hi Grzegorz!
First of all, thank you for your post: it was really helpful in the last days while I was trying to deploy an OpenStack environment.
In many trials I have used a physical switch (in bridge mode) to connect the eth1 interfaces between my network node and my compute node. However, I could only get it working when I followed your clear suggestion to use a "physical connection only", i.e., an Ethernet Cat5 cable directly connecting both interfaces.
My question is: how can I expand this scenario if I need to connect another Compute node? Should I have another physical connection on another physical interface? In huge cloud environments (e.g. Amazon EC2), is it done the same way?
I tried to find the answer in similar questions on your blog and in the OpenStack documentation, but it is still unclear to me and I would appreciate any help!
Hi Tiago
The concept of "physical connection only" in all my OpenStack-related posts actually means that you don't need to set IP addresses (although you can) on the eth1 interfaces of the Compute nodes, because OVS handles all the Layer 3 traffic on those interfaces, so you only need to provide a Layer 2 connection (I called it a "physical connection", but in fact it's the Data Link Layer of the OSI model). That means that if you have to connect more than two nodes together (for example 1x Network + 2x Compute), you should use a Layer 2 switch anyway (even if you don't set IPs on those interfaces), so the eth1 interfaces on all involved nodes should be connected to an L2 switch dedicated to the internal network only.
Hi Grzegorz!
Thank you for the help, it was really useful! The problem was that I was attempting to use a VLAN when configuring eth1 for OVS communication, and it started to work when we set up a traditional L2 network, as you said. Actually, we do believe it would be possible when using an L2 switch supporting QinQ encapsulation. As soon as we test it, I will share the results with you.
Now I'm dealing with a new problem: the OpenStack environment worked properly for 20 days. Then, while I was activating a backup routine, a yum update was run by cron and some system packages were updated. Since then, I could never get the OpenStack Kilo environment to work again. I tried another installation, in the same way you explained here, but in every attempt I got an error on Keystone, Horizon or Glance. Considering the update logs, I suspect there are some broken packages after the OpenStack Newton release (it started on 6th Oct). Did you find something similar? Which repositories are you using when installing Kilo on CentOS 7?
Hi Tiago
During OpenStack Kilo installation I used: Kilo Repo and EPEL, exactly as written in the article, nothing more.
Your situation right now seems to be irreversible.
But you can try to rollback the packages, this link should be helpful:
https://access.redhat.com/solutions/29617
Avoid OpenStack automatic updates in the future.
An OpenStack RDO release should be upgraded by upgrading the services one by one on each node in small steps; this is pretty annoying though.
Hi Grzegorz,
I was wondering whether using your same concept, but shrinking it down to 2 nodes, one node (controller+compute) and the other (network+compute), with the same eth1 interlink without an IP, would work. Any thoughts?
Hi
Putting Controller and Compute on one node is possible and it works (I tested it), but you need to be aware that creating new tenants can quickly consume resources, and the server may run out of resources needed for the Controller processes.
Regarding Network and Compute services on the same node: I never installed or tested such a configuration. If you succeed, please let me know, because I'm curious 🙂
Anyway I think there is a small chance that it will work.
Hi Grzegorz,
Quick question. When I disable NetworkManager, the link for eth1 (the physical connection) goes down on both nodes; is that expected, or have I done something wrong?
Hi
It’s pretty common, but you shouldn’t worry about this. Once you install OpenStack and configure eth1 in OVS for internal communication, the interface should be brought up by OpenStack. If it isn’t for some reason, you should bring it up manually:
$ ifup eth1
and add appropriate parameter in /etc/sysconfig/network-scripts/ifcfg-eth1:
ONBOOT=yes
…to let the interface come up after a reboot.
The most common symptom of the internal network interface (eth1) being down is instances not getting an IP address from the internal DHCP.
Hi, I am getting a status like XXX for compute!! But I still didn't create any instance yet!
[root@controller ~]# nova-manage service list
Binary Host Zone Status State Updated_At
nova-consoleauth controller.lan internal enabled XXX 2016-12-05 19:03:10
nova-scheduler controller.lan internal enabled XXX 2016-12-05 19:03:10
nova-conductor controller.lan internal enabled XXX 2016-12-05 19:03:10
nova-cert controller.lan internal enabled XXX 2016-12-05 19:03:10
nova-compute compute.lan nova enabled XXX 2016-12-05 19:03:10
[root@controller ~]#
Hi, I just restarted the services and they came back up and work with the smiley face, but I'm having another issue: once I create an instance, I am getting the following error message:
“Error: The server has either erred or is incapable of performing the requested operation. (HTTP 500)”
I hope you can advise me how to solve it; please note that I have followed your article in everything.
Regards
Hi Yas3r
It looks like you are running out of resources on your OpenStack nodes; when you try to perform some action, like launching an instance, other services like Horizon may be impacted.
I would check resource utilization on all nodes:
for example:
available memory:
[root@node ~]# free -m
check processes:
[root@node ~]# top
check free space on disk:
[root@node ~]# df -hT
By the way, what hardware do you use?
CPU speed?
memory amount?
Can you help me with an answer file for an OpenStack Newton VLAN-based installation on 3 CentOS 7 nodes (one node must be controller+network+compute and the other 2 compute-only)?
Also, how can I configure all internal traffic to use VLAN ID 1 (untagged traffic), and what does physnet1 mean in CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1?
Thx,
mb
Hi
Let’s say this is your environment as you mentioned:
192.168.2.12: controller+network+compute (all-in-one)
192.168.2.13: compute
192.168.2.14: compute
then in answer file it should look like below:
CONFIG_CONTROLLER_HOST=192.168.2.12
CONFIG_COMPUTE_HOSTS=192.168.2.12,192.168.2.13,192.168.2.14
CONFIG_NETWORK_HOSTS=192.168.2.12
This should work for you.
To start from VLAN ID 1, use the following example:
CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:1:20
With the above example OpenStack will create the first VLAN with ID=1 and the last one with ID=20.
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
means that internal traffic (including all those VLANs) is assigned to the br-eth1 bridge, which is based on the eth1 interface.
Very useful post. Thanks a lot Grzegorz Juszczak 🙂
Thank you for feedback 🙂
Hello,
#yum install https://repos.fedorapeople.org/repos/openstack/openstack-kilo/rdo-release-kilo-1.noarch.rpm
Link not found 404.
Hi Aditya
Thanks for the remark.
Nothing lasts forever, even RDO links :).
Looks like the Kilo rpm was moved to the EOL (End Of Life) directory on the RDO repo, since it became obsolete. Here is the link to kilo-2:
https://repos.fedorapeople.org/repos/openstack/EOL/openstack-kilo/rdo-release-kilo-2.noarch.rpm
I don't want to provide an external network with Floating IPs (external IPs) to instances on OpenStack; I only need private IPs.
Can you please suggest how I should proceed with the network configuration?
Hi Karan
If you don't want an external network, then don't create the br-ex bridge during the packstack deployment, and after the deployment don't create ext_net in Horizon.
Hi Grzegorz,
It was such a nice article. Thanks for taking the time to share such a wonderful write-up. It helps people like me who are new to OpenStack!!
I have one issue; I hope I will get your response.
I followed your entire article and all my configurations are almost identical to yours. But I have run into a situation where, when I create an instance, it doesn't get a private IP from DHCP. It boots perfectly fine but no private IP is assigned to it. Looking into the instance logs, I noticed that it is facing issues while connecting to OpenStack's HTTP metadata server.
Error is like:
” util.py[WARNING]: ‘http://169.254.169.254/2009-04-04/meta-data/instance-id’ failed [34/120s]: url error [[Errno 113] No route to host]”
On further investigation, I found that I am unable to telnet to 169.254.169.254 on port 80 from the network server; however, when I try to telnet to 169.254.169.254 on port 80 from the controller server, it does work.
Could you help to understand and resolve this issue please?
Thanks,
Jaspal
Hi Jaspal
How are your Instances connected to the network?
Are you connecting them directly to the external network, or to the internal network and then accessing them via a Floating IP and the qrouter?
I had the same issue as yours on Liberty when I connected instances directly to the external network.
For metadata connectivity the instances must be connected to the qrouter in order to get routing to the metadata services.
Please let me know.
Hi Grzegorz,
Thanks for your reply!!
I am not able to connect to them at all, as no IP is being assigned to them through DHCP. As per the document, these instances should have taken a DHCP IP from the local subnet (DHCP is configured in the local subnet), but it's not happening as expected. I am also allocating floating IPs, which are the external ones, but when I try to ping these floating IPs from the other servers (compute/controller/network), I am unable to ping them because no DHCP IP is assigned to the instances.
Any pointers on where I can look?
Thanks a lot!!
Jaspal
Hi Jaspal
What I would do is reboot the instance and, during the reboot, launch tcpdump/wireshark on ports 67/68 on the integration bridge (br-int) of the Compute host hosting the instance, to check whether any DHCP requests are leaving the compute node towards the network node. If the DHCP request is not leaving the compute node, then you have to check all the interfaces leading from br-int up to the instance, including qbr, qvb and vnet. If DHCP requests are leaving the Compute node but are not reaching the Network node, then you are facing a connectivity problem between the nodes.
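For example, capturing on the eth1 interfaces that carry the internal traffic between the nodes could look like this (a sketch; the interface names follow the layout used in this tutorial):
[root@compute ~]# tcpdump -i eth1 -nn -e port 67 or port 68
[root@network ~]# tcpdump -i eth1 -nn -e port 67 or port 68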
NOTE that if you are running OpenStack on physical hardware, the network switch between the OpenStack nodes must support the 802.1q standard and allow the specific VLAN tags you are using for the local subnet; otherwise connectivity between the nodes will not work and your instance will never receive an IP from the DHCP server.
I hope it can help you.
BR
Grzegorz Juszczak
Hi Grzegorz,
Nice post, it really helps during an OpenStack deployment.
Just a request: could you please write an article on "OpenStack with SR-IOV"?
Thanks
Hi Ankit
Unfortunately, I don't have SR-IOV cards; if I ever purchase some, I will create such an article.
Hi Grzegorz Juszczak
What if you have 2 more compute nodes? How can they be physically connected to the network node?
What is the difference between physnet1 and extnet?
Best regards
Hi
You connect them of course using a switch; if you use VLANs for tenant networks, then you will need a switch that complies with the 802.1q standard. If you use VXLANs for tenant networks, then a simple unmanaged L2 switch will suffice.
A physnet represents a physical network, needed to create VLAN or Flat networks in OpenStack, but it's not needed for VXLANs since they are IP-based. So if you create a VLAN or Flat network, you need to have a physnet defined in the Neutron ML2 plugin and bound to an outgoing bridge interface. The external network connects OpenStack to the outside world; it can be defined in OpenStack using a physnet if it's a VLAN/Flat network, or it can be defined, for example, as a VLAN.
Good afternoon, could you provide information about the installation of OpenStack in its Rocky version on 3 nodes (controller, compute, network)?
Could you provide a written tutorial or a video about the configuration steps of the installation? We have followed this tutorial and the installation fails.
Hi Alexander, I am not planning on testing the Rocky release yet because it's too fresh, and I guess it might be buggy. I usually wait several months in the case of a new release, until it becomes mature enough and some critical bugs are fixed.
Hi Grzegorz
Thank you for an excellent blog re: OpenStack, very helpful.
I wondered if you had any success with the cleaning steps on HP ProLiant servers (specifically Gen9)? Do you have any experience with that at all? I am running through this but hitting an issue when importing the node; the error I seem to hit is "IloClient has no attribute 'get_pending_bios_settings'", however the same query works fine when performed manually via a Python script and proliantutils, which is very strange (proliantutils 2.7).
Thanks
Hi
I deployed RDO OpenStack Pike and Red Hat OpenStack Platform 12 on HP Gen8 blade servers using the TripleO method with success.
If you have problems importing the node for Gen9, the first thing that comes to my mind is that maybe you are using the ipmi driver instead of the pxe_ilo driver, which is valid for Gen8 and Gen9.
Or maybe your instackenv.json file has some errors, or illegal parameters for the chosen type of driver.
Hey Grzegorz,
Are you open to teaching people how to install this live? I am willing to pay to learn from you.
Let me know
Thanks,
I have couple of queries :
Can we install Ceph storage on the controller nodes (3 nodes)?
Can we use the controller and compute nodes in separate networks? (3 controllers / 30 computes)
Which network type is better, VLAN or VXLAN? (we need 15-20 VLANs)
Appreciate your help. Thank you for sharing this blog, really helpful.
Hi
Yes, you can install Ceph storage on the controllers and computes; this configuration is called hyper-converged.
Can you explain more precisely where the separate network would be?
If you have a switch that handles 802.1q, then choose VLANs; VXLANs usually require special equipment like VTEPs to spread them outside your local lab.
Last but not least: if you are planning a production OpenStack deployment, then Packstack is not a good choice.
Hey Grzegorz,
We want to use a separate network interface for storage on the controller nodes (Controller+Ceph): eth0 for management, eth1 for provider/external and eth2 for storage.
So my question is, what changes are needed for the storage network? Obviously it should be configured like the other interfaces, but how will the storage traffic be segregated?
Thank you,
Vineet
Hi Vineet
If you use a separate VLAN for storage traffic, then it will be separated by the VLAN tagging, so you don’t need any other separation.
By the way, if you need help with setting up a production-ready OpenStack cloud, then our team of engineers at CinderCloud.com can walk you through the deployment procedure.
Regards
Grzegorz Juszczak