Create tenant in OpenStack Newton using command line interface

Dec 28, 2016 Bash, Cloud Computing

OpenStack comes out of the box with its dashboard, Horizon. Horizon provides a GUI that lets us manage our OpenStack environment in a fairly easy and intuitive way. However, basic tasks like tenant creation or instance commissioning can be time consuming when performed in Horizon. Using the command line interface with previously prepared command templates can be faster and more efficient.

In this tutorial we present how to create a project tenant in OpenStack Newton using the command line interface and launch Cirros-based instances inside the tenant.

Some time ago the OpenStack community introduced a new tool called OpenStackClient (OSC) with its openstack command utility to unify OpenStack management; it encompasses the Compute, Identity, Image, Object Storage and Block Storage APIs. The keystone command utility has already been deprecated and withdrawn from OpenStack, replaced by the openstack command utility. In this tutorial for the Newton release we are going to use openstack commands where possible to become familiar with the OpenStackClient CLI.
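
As a quick orientation, a few of the old keystone commands and their rough OpenStackClient equivalents are shown below (the exact options for any command can be checked with openstack help <command>):

keystone tenant-list    ->  openstack project list
keystone tenant-create  ->  openstack project create
keystone user-list      ->  openstack user list
keystone user-create    ->  openstack user create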

Steps:

1. Source keystone file with admin credentials

After a packstack installation the Controller / All-in-one node contains the file /root/keystonerc_admin. We need to source this file in order to load the admin credentials into system environment variables, so that we are not prompted for the admin password every time we execute an OpenStack CLI related command:

[root@allinone ~]# source keystonerc_admin 
[root@allinone ~(keystone_admin)]#
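
For reference, a keystonerc_admin file generated by packstack looks roughly like the one below (the password here is only a placeholder for the value packstack generated, and the IP is the Keystone endpoint of our lab node):

unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD=<admin password generated by packstack>
export OS_AUTH_URL=http://192.168.2.26:5000/v2.0
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne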

2. Create project tenant

Let’s create a project tenant named tuxfixer:

[root@allinone ~(keystone_admin)]# openstack project create --description "project owned by tuxfixer" tuxfixer
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | project owned by tuxfixer        |
| enabled     | True                             |
| id          | dfff1754be584c7bae5342302140d7a7 |
| name        | tuxfixer                         |
+-------------+----------------------------------+
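
To confirm the project exists, list the projects; the new tuxfixer entry should show up next to the default ones:

[root@allinone ~(keystone_admin)]# openstack project list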

3. Create user for new project

Now create a user named tuxfixer with the password tuxpassword and assign it to the tuxfixer project:

[root@allinone ~(keystone_admin)]# openstack user create --project tuxfixer --email admin@tuxfixer.com --password tuxpassword tuxfixer
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| email      | admin@tuxfixer.com               |
| enabled    | True                             |
| id         | 8825001d51cc40d3b07ba047652a1780 |
| name       | tuxfixer                         |
| project_id | dfff1754be584c7bae5342302140d7a7 |
| username   | tuxfixer                         |
+------------+----------------------------------+
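
You can double check the new account with openstack user show, which should print essentially the same fields as above:

[root@allinone ~(keystone_admin)]# openstack user show tuxfixer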

We also need to create tuxfixer’s keystone file /root/keystonerc_tuxfixer in order to be able to execute commands as the tuxfixer user inside the tuxfixer project tenant.

Copy the keystonerc_admin file to keystonerc_tuxfixer:

[root@allinone ~(keystone_admin)]# cp /root/keystonerc_admin /root/keystonerc_tuxfixer

Next, modify the keystonerc_tuxfixer file contents to look like below:

unset OS_SERVICE_TOKEN
export OS_USERNAME=tuxfixer
export OS_PASSWORD=tuxpassword
export OS_AUTH_URL=http://192.168.2.26:5000/v2.0
export PS1='[\u@\h \W(keystone_tuxfixer)]\$ '
export OS_TENANT_NAME=tuxfixer
export OS_REGION_NAME=RegionOne

We will need to source the keystonerc_tuxfixer file later in this tutorial.
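
If you want to verify the new credentials right away, you can temporarily source the file, request a token and switch back to admin (an optional sanity check, not required for the next steps):

[root@allinone ~(keystone_admin)]# source keystonerc_tuxfixer
[root@allinone ~(keystone_tuxfixer)]# openstack token issue
[root@allinone ~(keystone_tuxfixer)]# source keystonerc_admin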

4. Create public network

Create a public / provider network named pub_net (hence the external flag), available to all tenants including admin (shared network):

[root@allinone ~(keystone_admin)]# openstack network create --external --share pub_net
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2016-12-30T23:51:29Z                 |
| description               |                                      |
| headers                   |                                      |
| id                        | 81eed501-4b95-4c0d-9f56-367aef8f88e5 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | False                                |
| mtu                       | 1450                                 |
| name                      | pub_net                              |
| project_id                | 1dd3e1af05b840f8b6ef2e795fe0f749     |
| project_id                | 1dd3e1af05b840f8b6ef2e795fe0f749     |
| provider:network_type     | vxlan                                |
| provider:physical_network | None                                 |
| provider:segmentation_id  | 57                                   |
| revision_number           | 2                                    |
| router:external           | External                             |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      | []                                   |
| updated_at                | 2016-12-30T23:51:30Z                 |
+---------------------------+--------------------------------------+

Create a subnet named pub_subnet on the public / provider network with the specified CIDR, gateway and IP allocation pool range:

[root@allinone ~(keystone_admin)]# openstack subnet create --subnet-range 192.168.2.0/24 --no-dhcp --gateway 192.168.2.1 --network pub_net --allocation-pool start=192.168.2.70,end=192.168.2.80 pub_subnet
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| allocation_pools  | 192.168.2.70-192.168.2.80            |
| cidr              | 192.168.2.0/24                       |
| created_at        | 2016-12-30T23:54:52Z                 |
| description       |                                      |
| dns_nameservers   |                                      |
| enable_dhcp       | False                                |
| gateway_ip        | 192.168.2.1                          |
| headers           |                                      |
| host_routes       |                                      |
| id                | 61ad0154-e4a3-4ef1-b63a-151ccf97cbb6 |
| ip_version        | 4                                    |
| ipv6_address_mode | None                                 |
| ipv6_ra_mode      | None                                 |
| name              | pub_subnet                           |
| network_id        | 81eed501-4b95-4c0d-9f56-367aef8f88e5 |
| project_id        | 1dd3e1af05b840f8b6ef2e795fe0f749     |
| project_id        | 1dd3e1af05b840f8b6ef2e795fe0f749     |
| revision_number   | 2                                    |
| service_types     | []                                   |
| subnetpool_id     | None                                 |
| updated_at        | 2016-12-30T23:54:52Z                 |
+-------------------+--------------------------------------+

Note: we specified an allocation pool of OpenStack IP addresses (192.168.2.70 - 192.168.2.80) for the public network, because we can’t use the whole IP range: there are other devices/servers connected to that network in our lab environment. Limiting the IP range used by a particular OpenStack cloud is a pretty common practice.

5. Create tenant network

Create a private / tenant network named tux_net for the tuxfixer project:

[root@allinone ~(keystone_admin)]# openstack network create --project tuxfixer --internal tux_net
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2017-01-06T12:55:42Z                 |
| description               |                                      |
| headers                   |                                      |
| id                        | afa6327d-8f6c-445a-b79a-e18ea17b9d44 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| mtu                       | 1450                                 |
| name                      | tux_net                              |
| project_id                | dfff1754be584c7bae5342302140d7a7     |
| project_id                | dfff1754be584c7bae5342302140d7a7     |
| provider:network_type     | vxlan                                |
| provider:physical_network | None                                 |
| provider:segmentation_id  | 13                                   |
| revision_number           | 2                                    |
| router:external           | Internal                             |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      | []                                   |
| updated_at                | 2017-01-06T12:55:42Z                 |
+---------------------------+--------------------------------------+

Create a subnet named tux_subnet on the tenant network for the tuxfixer project, with the specified CIDR, gateway and DHCP enabled:

[root@allinone ~(keystone_admin)]# openstack subnet create --project tuxfixer --subnet-range 192.168.20.0/24 --gateway 192.168.20.1 --network tux_net --dhcp tux_subnet
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| allocation_pools  | 192.168.20.2-192.168.20.254          |
| cidr              | 192.168.20.0/24                      |
| created_at        | 2017-01-06T13:02:03Z                 |
| description       |                                      |
| dns_nameservers   |                                      |
| enable_dhcp       | True                                 |
| gateway_ip        | 192.168.20.1                         |
| headers           |                                      |
| host_routes       |                                      |
| id                | 12054a30-e00a-42e7-9f8c-e19fd47e0fdd |
| ip_version        | 4                                    |
| ipv6_address_mode | None                                 |
| ipv6_ra_mode      | None                                 |
| name              | tux_subnet                           |
| network_id        | afa6327d-8f6c-445a-b79a-e18ea17b9d44 |
| project_id        | dfff1754be584c7bae5342302140d7a7     |
| project_id        | dfff1754be584c7bae5342302140d7a7     |
| revision_number   | 2                                    |
| service_types     | []                                   |
| subnetpool_id     | None                                 |
| updated_at        | 2017-01-06T13:02:03Z                 |
+-------------------+--------------------------------------+
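
At this point both networks and their subnets should be visible in the cloud; a quick listing helps to verify that before we connect them with a router:

[root@allinone ~(keystone_admin)]# openstack network list
[root@allinone ~(keystone_admin)]# openstack subnet list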

6. Create router to connect networks

Now we need to create a router named tux_router to connect the tenant network with the public network:

[root@allinone ~(keystone_admin)]# openstack router create --project tuxfixer tux_router
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| admin_state_up          | UP                                   |
| availability_zone_hints |                                      |
| availability_zones      |                                      |
| created_at              | 2017-01-06T15:08:49Z                 |
| description             |                                      |
| distributed             | False                                |
| external_gateway_info   | null                                 |
| flavor_id               | None                                 |
| ha                      | False                                |
| headers                 |                                      |
| id                      | 4fe0e757-114a-44d6-811f-0ceb3f5dcd23 |
| name                    | tux_router                           |
| project_id              | dfff1754be584c7bae5342302140d7a7     |
| project_id              | dfff1754be584c7bae5342302140d7a7     |
| revision_number         | 2                                    |
| routes                  |                                      |
| status                  | ACTIVE                               |
| updated_at              | 2017-01-06T15:08:49Z                 |
+-------------------------+--------------------------------------+

Now set the gateway for tux_router to our public / provider network pub_net (connect tux_router to pub_net):

[root@allinone ~(keystone_admin)]# openstack router set tux_router --external-gateway pub_net

Note: if you have problems with the above command (openstack router set: error: unrecognized arguments: --external-gateway), use the neutron command instead:

[root@allinone ~(keystone_admin)]# neutron router-gateway-set tux_router pub_net
Set gateway for router tux_router

Next, link tux_router to tux_subnet (connect tux_subnet to tux_router):

[root@allinone ~(keystone_admin)]# openstack router add subnet tux_router tux_subnet
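
To verify the router configuration, display it with openstack router show; the external_gateway_info field should now point to the pub_net network and contain an IP from the pub_subnet allocation pool:

[root@allinone ~(keystone_admin)]# openstack router show tux_router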

7. Create custom flavor

OpenStack by default comes with a couple of predefined flavors for use with newly created instances:

[root@allinone ~(keystone_admin)]# openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
| 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
| 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
| 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
| 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
+----+-----------+-------+------+-----------+-------+-----------+

In many cases these flavors are sufficient, but we will create our own ultra-small flavor named m2.tiny (1 vCPU, 128 MB RAM, 1 GB disk) for use with Cirros images:

[root@allinone ~(keystone_admin)]# openstack flavor create --public --vcpus 1 --ram 128 --disk 1 --id 6 m2.tiny
+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| disk                       | 1       |
| id                         | 6       |
| name                       | m2.tiny |
| os-flavor-access:is_public | True    |
| properties                 |         |
| ram                        | 128     |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+

Verify the flavor list once again:

[root@allinone ~(keystone_admin)]# openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
| 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
| 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
| 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
| 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
| 6  | m2.tiny   |   128 |    1 |         0 |     1 | True      |
+----+-----------+-------+------+-----------+-------+-----------+

8. Create OpenStack image

For the purpose of this article we will create a Cirros image in our OpenStack cloud.

Let’s download the Cirros image to our Controller node:

[root@allinone ~(keystone_admin)]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

Create a new image from the downloaded file:

[root@allinone ~(keystone_admin)]# openstack image create --container-format bare --file cirros-0.3.4-x86_64-disk.img --public cirros_0.3.4 
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                     |
| container_format | bare                                                 |
| created_at       | 2017-01-06T16:24:02Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/121dad69-d043-4044-a13e-3a16124e6620/file |
| id               | 121dad69-d043-4044-a13e-3a16124e6620                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros_0.3.4                                         |
| owner            | 1dd3e1af05b840f8b6ef2e795fe0f749                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13287936                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2017-01-06T16:24:04Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
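
Note: we did not pass --disk-format, so the image was registered with the raw format (see the disk_format field above). The Cirros disk file downloaded here is actually a qcow2 image, so if you prefer the image metadata to match the file format, you can create the image with the format specified explicitly, for example:

[root@allinone ~(keystone_admin)]# openstack image create --container-format bare --disk-format qcow2 --file cirros-0.3.4-x86_64-disk.img --public cirros_0.3.4

Either way, openstack image list should now show the image in active state.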

9. Create dedicated security group

Security groups control network access to and from instances inside the tenant.
Let’s create a new security group named tux_sec for the tuxfixer project:

[root@allinone ~(keystone_admin)]# openstack security group create --project tuxfixer tux_sec
+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field           | Value                                                                                                                                                          |
+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at      | 2017-01-06T18:31:27Z                                                                                                                                           |
| description     | tux_sec                                                                                                                                                        |
| headers         |                                                                                                                                                                |
| id              | dd845701-e3db-44cc-90e6-fe182fdc685e                                                                                                                           |
| name            | tux_sec                                                                                                                                                        |
| project_id      | dfff1754be584c7bae5342302140d7a7                                                                                                                               |
| project_id      | dfff1754be584c7bae5342302140d7a7                                                                                                                               |
| revision_number | 1                                                                                                                                                              |
| rules           | created_at='2017-01-06T18:31:27Z', direction='egress', ethertype='IPv4', id='5f45933a-8017-46e6-a733-b5cd99079842',                                            |
|                 | project_id='dfff1754be584c7bae5342302140d7a7', revision_number='1', updated_at='2017-01-06T18:31:27Z'                                                          |
|                 | created_at='2017-01-06T18:31:27Z', direction='egress', ethertype='IPv6', id='4b034498-b4b2-4ce7-9d34-bd56579a283d',                                            |
|                 | project_id='dfff1754be584c7bae5342302140d7a7', revision_number='1', updated_at='2017-01-06T18:31:27Z'                                                          |
| updated_at      | 2017-01-06T18:31:27Z                                                                                                                                           |
+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+

Add a rule to the tux_sec group to permit incoming ICMP echo (ping):

[root@allinone ~(keystone_admin)]# openstack security group rule create --protocol icmp --ingress --project tuxfixer tux_sec
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| created_at        | 2017-01-06T20:02:17Z                 |
| description       |                                      |
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| headers           |                                      |
| id                | cd631dd2-1bf9-4b75-9b71-9ebff7814ffe |
| port_range_max    | None                                 |
| port_range_min    | None                                 |
| project_id        | dfff1754be584c7bae5342302140d7a7     |
| project_id        | dfff1754be584c7bae5342302140d7a7     |
| protocol          | icmp                                 |
| remote_group_id   | None                                 |
| remote_ip_prefix  | 0.0.0.0/0                            |
| revision_number   | 1                                    |
| security_group_id | dd845701-e3db-44cc-90e6-fe182fdc685e |
| updated_at        | 2017-01-06T20:02:17Z                 |
+-------------------+--------------------------------------+

Add a rule to the tux_sec group to permit incoming SSH access:

[root@allinone ~(keystone_admin)]# openstack security group rule create --protocol tcp --dst-port 22 --ingress --project tuxfixer tux_sec
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| created_at        | 2017-01-06T20:04:12Z                 |
| description       |                                      |
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| headers           |                                      |
| id                | bb530270-7827-41bc-b7ef-8bba7f49d7fd |
| port_range_max    | 22                                   |
| port_range_min    | 22                                   |
| project_id        | dfff1754be584c7bae5342302140d7a7     |
| project_id        | dfff1754be584c7bae5342302140d7a7     |
| protocol          | tcp                                  |
| remote_group_id   | None                                 |
| remote_ip_prefix  | 0.0.0.0/0                            |
| revision_number   | 1                                    |
| security_group_id | dd845701-e3db-44cc-90e6-fe182fdc685e |
| updated_at        | 2017-01-06T20:04:12Z                 |
+-------------------+--------------------------------------+
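
The rules defined for the group can be reviewed at any time:

[root@allinone ~(keystone_admin)]# openstack security group rule list tux_sec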

10. Assign floating IPs

We need to assign floating IPs so that the new instances are accessible from the public / external network.

Unlike the previous commands, which we were able to execute as the admin user, to assign floating IPs to the tuxfixer project we need to source the keystonerc_tuxfixer file:

[root@allinone ~(keystone_admin)]# source keystonerc_tuxfixer
[root@allinone ~(keystone_tuxfixer)]#

Create / assign two floating IPs for the tuxfixer project:

[root@allinone ~(keystone_tuxfixer)]# openstack floating ip create pub_net
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2017-01-06T19:46:29Z                 |
| description         |                                      |
| fixed_ip_address    | None                                 |
| floating_ip_address | 192.168.2.78                         |
| floating_network_id | 81eed501-4b95-4c0d-9f56-367aef8f88e5 |
| headers             |                                      |
| id                  | 16aca4b8-59ff-41ea-ada3-a82ac8508bb2 |
| port_id             | None                                 |
| project_id          | dfff1754be584c7bae5342302140d7a7     |
| project_id          | dfff1754be584c7bae5342302140d7a7     |
| revision_number     | 1                                    |
| router_id           | None                                 |
| status              | DOWN                                 |
| updated_at          | 2017-01-06T19:46:29Z                 |
+---------------------+--------------------------------------+
[root@allinone ~(keystone_tuxfixer)]# openstack floating ip create pub_net
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2017-01-06T19:47:05Z                 |
| description         |                                      |
| fixed_ip_address    | None                                 |
| floating_ip_address | 192.168.2.71                         |
| floating_network_id | 81eed501-4b95-4c0d-9f56-367aef8f88e5 |
| headers             |                                      |
| id                  | 5dcc1879-f63b-4995-b898-ddd342ffdd9e |
| port_id             | None                                 |
| project_id          | dfff1754be584c7bae5342302140d7a7     |
| project_id          | dfff1754be584c7bae5342302140d7a7     |
| revision_number     | 1                                    |
| router_id           | None                                 |
| status              | DOWN                                 |
| updated_at          | 2017-01-06T19:47:05Z                 |
+---------------------+--------------------------------------+
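
Both addresses should now be listed for the project as unassociated (status DOWN, no port_id yet):

[root@allinone ~(keystone_tuxfixer)]# openstack floating ip list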

11. Launch instances

We now have everything needed to launch two instances (cirros_inst_1, cirros_inst_2) based on the Cirros image and the m2.tiny flavor inside the tuxfixer project:

[root@allinone ~(keystone_tuxfixer)]# openstack server create --flavor m2.tiny --image cirros_0.3.4 --nic net-id=tux_net --security-group tux_sec cirros_inst_1
+--------------------------------------+-----------------------------------------------------+
| Field                                | Value                                               |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                              |
| OS-EXT-AZ:availability_zone          |                                                     |
| OS-EXT-STS:power_state               | NOSTATE                                             |
| OS-EXT-STS:task_state                | scheduling                                          |
| OS-EXT-STS:vm_state                  | building                                            |
| OS-SRV-USG:launched_at               | None                                                |
| OS-SRV-USG:terminated_at             | None                                                |
| accessIPv4                           |                                                     |
| accessIPv6                           |                                                     |
| addresses                            |                                                     |
| adminPass                            | gc8G5La3NRH5                                        |
| config_drive                         |                                                     |
| created                              | 2017-01-06T20:11:07Z                                |
| flavor                               | m2.tiny (6)                                         |
| hostId                               |                                                     |
| id                                   | 39247992-8315-4895-8de9-08195e65624d                |
| image                                | cirros_0.3.4 (121dad69-d043-4044-a13e-3a16124e6620) |
| key_name                             | None                                                |
| name                                 | cirros_inst_1                                       |
| os-extended-volumes:volumes_attached | []                                                  |
| progress                             | 0                                                   |
| project_id                           | dfff1754be584c7bae5342302140d7a7                    |
| properties                           |                                                     |
| security_groups                      | [{u'name': u'tux_sec'}]                             |
| status                               | BUILD                                               |
| updated                              | 2017-01-06T20:11:09Z                                |
| user_id                              | 8825001d51cc40d3b07ba047652a1780                    |
+--------------------------------------+-----------------------------------------------------+
[root@allinone ~(keystone_tuxfixer)]# openstack server create --flavor m2.tiny --image cirros_0.3.4 --nic net-id=tux_net --security-group tux_sec cirros_inst_2
+--------------------------------------+-----------------------------------------------------+
| Field                                | Value                                               |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                              |
| OS-EXT-AZ:availability_zone          |                                                     |
| OS-EXT-STS:power_state               | NOSTATE                                             |
| OS-EXT-STS:task_state                | scheduling                                          |
| OS-EXT-STS:vm_state                  | building                                            |
| OS-SRV-USG:launched_at               | None                                                |
| OS-SRV-USG:terminated_at             | None                                                |
| accessIPv4                           |                                                     |
| accessIPv6                           |                                                     |
| addresses                            |                                                     |
| adminPass                            | GDoiy8QAQsHP                                        |
| config_drive                         |                                                     |
| created                              | 2017-01-06T21:36:49Z                                |
| flavor                               | m2.tiny (6)                                         |
| hostId                               |                                                     |
| id                                   | 7eae5912-cafc-4832-9e89-9a0fa3cb334d                |
| image                                | cirros_0.3.4 (121dad69-d043-4044-a13e-3a16124e6620) |
| key_name                             | None                                                |
| name                                 | cirros_inst_2                                       |
| os-extended-volumes:volumes_attached | []                                                  |
| progress                             | 0                                                   |
| project_id                           | dfff1754be584c7bae5342302140d7a7                    |
| properties                           |                                                     |
| security_groups                      | [{u'name': u'tux_sec'}]                             |
| status                               | BUILD                                               |
| updated                              | 2017-01-06T21:36:51Z                                |
| user_id                              | 8825001d51cc40d3b07ba047652a1780                    |
+--------------------------------------+-----------------------------------------------------+
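
Before attaching the floating IPs, it is worth waiting until both instances leave the BUILD state and become ACTIVE, which can be checked with:

[root@allinone ~(keystone_tuxfixer)]# openstack server list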

Assign floating IPs to the instances:

[root@allinone ~(keystone_tuxfixer)]# openstack server add floating ip cirros_inst_1 192.168.2.71
[root@allinone ~(keystone_tuxfixer)]# openstack server add floating ip cirros_inst_2 192.168.2.78

12. Test instances connectivity

Now it’s time to test our instances. We need to connect to both of them from the public / external network (i.e. from some machine in the external network) to test their connectivity via the floating IPs.

Ping the floating IPs of both instances:

[gjuszczak@fixxxer ~]$ ping 192.168.2.71
PING 192.168.2.71 (192.168.2.71) 56(84) bytes of data.
64 bytes from 192.168.2.71: icmp_seq=1 ttl=63 time=13.2 ms
64 bytes from 192.168.2.71: icmp_seq=2 ttl=63 time=6.57 ms
64 bytes from 192.168.2.71: icmp_seq=3 ttl=63 time=3.11 ms
...
[gjuszczak@fixxxer ~]$ ping 192.168.2.78
PING 192.168.2.78 (192.168.2.78) 56(84) bytes of data.
64 bytes from 192.168.2.78: icmp_seq=1 ttl=63 time=174 ms
64 bytes from 192.168.2.78: icmp_seq=2 ttl=63 time=9.36 ms
64 bytes from 192.168.2.78: icmp_seq=3 ttl=63 time=0.736 ms
...

Connect to the floating IPs of both instances (192.168.2.71 and 192.168.2.78) from a computer in the public network:

[gjuszczak@fixxxer ~]$ ssh cirros@192.168.2.71
The authenticity of host '192.168.2.71 (192.168.2.71)' can't be established.
RSA key fingerprint is SHA256:HUFOaBp4XPU2B4XwwxoY5zUX5xLikR/TH8VvSRiaN6A.
RSA key fingerprint is MD5:1f:bd:3a:8d:58:b1:6b:3a:0d:2c:23:b4:72:4e:00:db.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.71' (RSA) to the list of known hosts.
cirros@192.168.2.71's password: 
$ hostname
cirros-inst-1
$ exit
Connection to 192.168.2.71 closed.
[gjuszczak@fixxxer ~]$ ssh cirros@192.168.2.78
The authenticity of host '192.168.2.78 (192.168.2.78)' can't be established.
RSA key fingerprint is SHA256:VG6dKa2fPLPlU4egbDwsD773IU1O50ep08ECu8djngQ.
RSA key fingerprint is MD5:ba:3f:15:49:5c:8a:09:d6:d4:b7:b0:62:02:2a:66:e9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.78' (RSA) to the list of known hosts.
cirros@192.168.2.78's password: 
$ hostname
cirros-inst-2
$ exit
Connection to 192.168.2.78 closed.

Credentials for both instances: cirros/cubswin:)

At the very end, check the internal connectivity between the instances. Once again log in to the first instance, cirros_inst_1, from the public / external network via its floating IP (192.168.2.71), and from this instance ping and connect to cirros_inst_2 over the internal network via its internal IP (192.168.20.11):

[gjuszczak@fixxxer ~]$ ssh cirros@192.168.2.71
cirros@192.168.2.71's password: 
$ hostname
cirros-inst-1
$ ip a
1: lo:  mtu 16436 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1450 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:9a:a6:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.10/24 brd 192.168.20.255 scope global eth0
    inet6 fe80::f816:3eff:fe9a:a6f5/64 scope link 
       valid_lft forever preferred_lft forever
$ ping 192.168.20.11
PING 192.168.20.11 (192.168.20.11): 56 data bytes
64 bytes from 192.168.20.11: seq=0 ttl=64 time=18.977 ms
64 bytes from 192.168.20.11: seq=1 ttl=64 time=1.370 ms
64 bytes from 192.168.20.11: seq=2 ttl=64 time=1.004 ms
^C
--- 192.168.20.11 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 1.004/7.117/18.977 ms
$ ssh cirros@192.168.20.11

Host '192.168.20.11' is not in the trusted hosts file.
(fingerprint md5 ba:3f:15:49:5c:8a:09:d6:d4:b7:b0:62:02:2a:66:e9)
Do you want to continue connecting? (y/n) y
cirros@192.168.20.11's password: 
$ hostname
cirros-inst-2
$ ip a
1: lo:  mtu 16436 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1450 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:94:ec:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.11/24 brd 192.168.20.255 scope global eth0
    inet6 fe80::f816:3eff:fe94:ece5/64 scope link 
       valid_lft forever preferred_lft forever
$ exit
$ exit
Connection to 192.168.2.71 closed.

Now once again log in to the second instance, cirros_inst_2, from the public / external network via its floating IP (192.168.2.78), and from this instance ping and connect to cirros_inst_1 over the internal network via its internal IP (192.168.20.10):

[gjuszczak@fixxxer ~]$ ssh cirros@192.168.2.78
cirros@192.168.2.78's password: 
$ hostname
cirros-inst-2
$ ip a
1: lo:  mtu 16436 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1450 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:94:ec:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.11/24 brd 192.168.20.255 scope global eth0
    inet6 fe80::f816:3eff:fe94:ece5/64 scope link 
       valid_lft forever preferred_lft forever
$ ssh cirros@192.168.20.10

Host '192.168.20.10' is not in the trusted hosts file.
(fingerprint md5 1f:bd:3a:8d:58:b1:6b:3a:0d:2c:23:b4:72:4e:00:db)
Do you want to continue connecting? (y/n) y
cirros@192.168.20.10's password: 
$ hostname
cirros-inst-1
$ ip a
1: lo:  mtu 16436 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1450 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:9a:a6:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.10/24 brd 192.168.20.255 scope global eth0
    inet6 fe80::f816:3eff:fe9a:a6f5/64 scope link 
       valid_lft forever preferred_lft forever
$ exit
$ exit
Connection to 192.168.2.78 closed.

Note: as you have probably already noticed, both instances have internal IPs (assigned by the internal Neutron DHCP server to their eth0 interfaces) for internal communication and floating IPs (mapped to their internal IPs) for external connectivity from the public / external network.
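
If you are curious how this mapping works under the hood, in a non-distributed setup like this one (distributed = False in the router output from step 6) the floating IPs are implemented as NAT rules inside the qrouter network namespace on the node running the L3 agent. A rough way to inspect them on the All-in-one node (the namespace name contains the router ID from step 6):

[root@allinone ~(keystone_tuxfixer)]# ip netns list
[root@allinone ~(keystone_tuxfixer)]# ip netns exec qrouter-4fe0e757-114a-44d6-811f-0ceb3f5dcd23 iptables -t nat -S | grep 192.168.2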

