Heat is the OpenStack Orchestration service. It implements an orchestration engine that launches composite cloud applications from templates, which are text files that can be treated like code. The Heat service reads YAML (.yaml, .yml) files and performs the tasks they describe inside the OpenStack environment. Using Heat orchestration we can create instances, networks or even whole tenants with a single mouse click in the OpenStack dashboard (Horizon), provided we have previously prepared a YAML file with the Heat instructions to be performed in the OpenStack cloud.
In this tutorial we will create example .yaml files for Heat orchestration containing the instructions and components needed to deploy a project tenant in OpenStack and launch instances inside the tenant. Next, we will create our stacks on a single OpenStack all-in-one node based on the CentOS 7.3 operating system.
This tutorial assumes that you already have a working OpenStack environment including the Heat orchestration service. Find out how to Install OpenStack Newton All In One with Heat Service on CentOS 7.
Environment used:
OpenStack node: all-in-one, CentOS 7.3 x86_64
public network: 192.168.2.0/24
public IP pool reservation: 192.168.2.230-239
Prerequisites:
Create public shared network
A fresh OpenStack installation may not yet include a public shared network to connect tenant networks to. Since we don’t want public shared network creation to become part of the per-tenant deployment .yaml files, we will create the public shared network now on the command line.
Source the admin keystone file in order to get access to OpenStack admin shell commands:
[root@allinone ~]# source keystonerc_admin
Create a shared pub_net with the external flag:
[root@allinone ~(keystone_admin)]# openstack network create --external --share pub_net
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2017-02-11T17:12:09Z |
| description | |
| headers | |
| id | fcc7a53d-c873-4d81-862c-07c7ef20ca6b |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| mtu | 1450 |
| name | pub_net |
| project_id                | 1dd3e1af05b840f8b6ef2e795fe0f749     |
| provider:network_type | vxlan |
| provider:physical_network | None |
| provider:segmentation_id | 58 |
| revision_number | 2 |
| router:external | External |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | [] |
| updated_at | 2017-02-11T17:12:09Z |
+---------------------------+--------------------------------------+
Create pub_subnet with the specified CIDR, gateway and IP pool range:
[root@allinone ~(keystone_admin)]# openstack subnet create --subnet-range 192.168.2.0/24 --no-dhcp --gateway 192.168.2.1 --network pub_net --allocation-pool start=192.168.2.230,end=192.168.2.239 pub_subnet
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 192.168.2.230-192.168.2.239 |
| cidr | 192.168.2.0/24 |
| created_at | 2017-02-11T17:20:08Z |
| description | |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 192.168.2.1 |
| headers | |
| host_routes | |
| id | e453d55b-4136-42e5-a0ec-86838cbcb25c |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | pub_subnet |
| network_id | fcc7a53d-c873-4d81-862c-07c7ef20ca6b |
| project_id        | 1dd3e1af05b840f8b6ef2e795fe0f749     |
| revision_number | 2 |
| service_types | [] |
| subnetpool_id | None |
| updated_at | 2017-02-11T17:20:09Z |
+-------------------+--------------------------------------+
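Optionally, double-check the result; the show commands below should confirm the external / shared flags on the network and the floating IP allocation pool on the subnet (a quick sanity check, not required for the rest of the tutorial):
[root@allinone ~(keystone_admin)]# openstack network show pub_net
[root@allinone ~(keystone_admin)]# openstack subnet show pub_subnet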
Upload an OpenStack image
For the purpose of this article we will download a Cirros image to be used by our stack.
Let’s download the Cirros image to our OpenStack all-in-one node:
[root@allinone ~(keystone_admin)]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Now create a new image from the downloaded file (note: we let disk_format default to raw here; the CirrOS file is actually in qcow2 format, so if your instances later fail to boot, recreate the image with --disk-format qcow2):
[root@allinone ~(keystone_admin)]# openstack image create --container-format bare --file cirros-0.3.4-x86_64-disk.img --public cirros_0.3.4
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2017-01-06T16:24:02Z |
| disk_format | raw |
| file | /v2/images/121dad69-d043-4044-a13e-3a16124e6620/file |
| id | 121dad69-d043-4044-a13e-3a16124e6620 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros_0.3.4 |
| owner | 1dd3e1af05b840f8b6ef2e795fe0f749 |
| protected | False |
| schema | /v2/schemas/image |
| size | 13287936 |
| status | active |
| tags | |
| updated_at | 2017-01-06T16:24:04Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
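Optionally verify that the image is active and publicly visible before referencing it from the templates:
[root@allinone ~(keystone_admin)]# openstack image list
[root@allinone ~(keystone_admin)]# openstack image show cirros_0.3.4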
Steps:
1. Create Heat stack .yaml template files
According to the OpenStack documentation for resource type OS::Nova::Server, there is no way to define in a .yaml template file which tenant the new instances are supposed to belong to. On the other hand, for resource type OS::Neutron::Port we are not able to define a tenant_id property (the encountered error: property doesn’t exist). That’s why we will break the project tenant creation process down into two .yaml templates.
First template, create_tenant_and_user.yaml, executed as admin, creates:
- new project tenant
- tenant user / member
Second template, deploy_tenant.yaml, executed as the tenant user, creates:
- tenant net / subnet
- tenant router
- tenant security group
- subnet ports for instances
- instances
- floating IPs for instances
First template file create_tenant_and_user.yaml:
heat_template_version: newton
description: tuxfixer.com - create project tenant and assign user / member
parameters:
  tenant_name:
    type: string
    description: tenant name
  tenant_user_role:
    type: string
    description: tenant user role
    default: heat_stack_owner
  tenant_user:
    type: string
    description: tenant user
  tenant_user_password:
    type: string
    description: tenant user password
resources:
  tenant:
    type: OS::Keystone::Project
    properties:
      name: { get_param: tenant_name }
      domain: default
      description: Example project tenant
  user:
    type: OS::Keystone::User
    properties:
      name: { get_param: tenant_user }
      domain: default
      description: Example project tenant user
      default_project: { get_resource: tenant }
      password: { get_param: tenant_user_password }
      roles:
        - role: { get_param: tenant_user_role }
          project: { get_resource: tenant }
The above template includes project tenant creation, new tenant user / member creation, and role assignment for the user. All the mentioned resources are parametrized and the parameters can be passed to the template upon its execution. Moreover, the tenant_user_role parameter has a default value of heat_stack_owner, so if this parameter is not passed to the template, the default value will be taken into consideration.
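Since YAML is indentation sensitive, it can be worth letting Heat parse the template before launching a stack from it. A quick check, assuming the python-heatclient OSC plugin is installed (it ships with the Heat service):
[root@allinone ~(keystone_admin)]# openstack orchestration template validate --template create_tenant_and_user.yaml
On success the command echoes back the parsed description and parameters; on failure it points at the offending line.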
Second template file deploy_tenant.yaml:
heat_template_version: newton
description: tuxfixer.com - deploy project tenant and launch instances
parameters:
  tenant_id:
    type: string
    description: tenant id
  public_net_name:
    type: string
    description: public/external net name
    default: pub_net
  tenant_net_name:
    type: string
    description: tenant internal net name
    default: int_net
  tenant_subnet_name:
    type: string
    description: tenant internal subnet name
    default: int_subnet
  tenant_subnet_cidr:
    type: string
    description: CIDR of tenant internal subnet
    default: 192.168.3.0/24
  tenant_subnet_gateway:
    type: string
    description: gateway for tenant internal subnet
    default: 192.168.3.1
  tenant_router_name:
    type: string
    description: tenant router name
    default: tenant_router
  sec_group_name:
    type: string
    description: security group name
    default: sec_group
  flavor_name:
    type: string
    description: flavor name
    default: m1.tiny
  image_name:
    type: string
    description: image name
    default: cirros_0.3.4
  instance1_name:
    type: string
    description: first instance name
    default: instance1
  instance1_port_name:
    type: string
    description: first instance port name
    default: inst1_port
  instance1_port_ip:
    type: string
    description: first instance IP address
    default: 192.168.3.10
  instance2_name:
    type: string
    description: second instance name
    default: instance2
  instance2_port_name:
    type: string
    description: second instance port name
    default: inst2_port
  instance2_port_ip:
    type: string
    description: second instance IP address
    default: 192.168.3.11
resources:
  tenant_net:
    type: OS::Neutron::Net
    properties:
      name: { get_param: tenant_net_name }
      tenant_id: { get_param: tenant_id }
  tenant_subnet:
    type: OS::Neutron::Subnet
    properties:
      name: { get_param: tenant_subnet_name }
      network_id: { get_resource: tenant_net }
      cidr: { get_param: tenant_subnet_cidr }
      gateway_ip: { get_param: tenant_subnet_gateway }
      enable_dhcp: True
      tenant_id: { get_param: tenant_id }
  tenant_router:
    type: OS::Neutron::Router
    properties:
      name: { get_param: tenant_router_name }
      external_gateway_info:
        network: { get_param: public_net_name }
      value_specs:
        tenant_id: { get_param: tenant_id }
  tenant_router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: tenant_router }
      subnet_id: { get_resource: tenant_subnet }
  security_group:
    type: OS::Neutron::SecurityGroup
    properties:
      name: { get_param: sec_group_name }
      description: ICMP and all IP ports
      rules:
        - protocol: icmp
        - protocol: tcp
          port_range_min: 1
          port_range_max: 65535
  instance1_port:
    type: OS::Neutron::Port
    properties:
      name: { get_param: instance1_port_name }
      network_id: { get_resource: tenant_net }
      fixed_ips:
        - subnet_id: { get_resource: tenant_subnet }
          ip_address: { get_param: instance1_port_ip }
      security_groups: [{ get_resource: security_group }]
  instance1:
    type: OS::Nova::Server
    properties:
      name: { get_param: instance1_name }
      image: { get_param: image_name }
      flavor: { get_param: flavor_name }
      networks:
        - port: { get_resource: instance1_port }
          subnet: { get_resource: tenant_subnet }
  instance1_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: { get_param: public_net_name }
  instance1_floating_ip_associate:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: instance1_floating_ip }
      port_id: { get_resource: instance1_port }
  instance2_port:
    type: OS::Neutron::Port
    properties:
      name: { get_param: instance2_port_name }
      network_id: { get_resource: tenant_net }
      fixed_ips:
        - subnet_id: { get_resource: tenant_subnet }
          ip_address: { get_param: instance2_port_ip }
      security_groups: [{ get_resource: security_group }]
  instance2:
    type: OS::Nova::Server
    properties:
      name: { get_param: instance2_name }
      image: { get_param: image_name }
      flavor: { get_param: flavor_name }
      networks:
        - port: { get_resource: instance2_port }
          subnet: { get_resource: tenant_subnet }
  instance2_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: { get_param: public_net_name }
  instance2_floating_ip_associate:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: instance2_floating_ip }
      port_id: { get_resource: instance2_port }
The second template is executed as the tenant user to let the instances be created inside the particular tenant, and it encompasses networking, security group, floating IP and instance deployment. Almost all parameters include default values, so we don’t need to pass them all upon template execution. The only exception is tenant_id, which must be given in the stack creation command.
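Note that the template defines no outputs section (the stack details shown later report outputs as an empty list). If you would rather not dig the floating IPs out of the server list, an optional outputs block along these lines could be appended to deploy_tenant.yaml; the floating_ip_address attribute comes from the OS::Neutron::FloatingIP resource:
outputs:
  instance1_floating_ip:
    description: floating IP assigned to instance1
    value: { get_attr: [instance1_floating_ip, floating_ip_address] }
  instance2_floating_ip:
    description: floating IP assigned to instance2
    value: { get_attr: [instance2_floating_ip, floating_ip_address] }
The values would then be readable after stack creation with openstack stack output list / openstack stack output show.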
2. Create Heat stacks from .yaml template files
First, create the project tenant and assign the tenant member from the create_tenant_and_user.yaml file as the admin user.
Source the admin keystone file to import admin credentials:
[root@allinone ~]# source keystonerc_admin
Launch the first stack, named create_tenant_stack, as admin:
[root@allinone ~(keystone_admin)]# openstack stack create -t create_tenant_and_user.yaml --parameter tenant_name=tuxfixer --parameter tenant_user=tuxfixer --parameter tenant_user_password=tux_password create_tenant_stack
+---------------------+---------------------------------------------------------------+
| Field | Value |
+---------------------+---------------------------------------------------------------+
| id | aa3a02d1-6755-4d3b-bb20-d9bf05ad052d |
| stack_name | create_tenant_stack |
| description | tuxfixer.com - create project tenant and assign user / member |
| creation_time | 2017-02-13T20:51:47Z |
| updated_time | None |
| stack_status | CREATE_IN_PROGRESS |
| stack_status_reason | Stack CREATE started |
+---------------------+---------------------------------------------------------------+
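The command returns while the stack is still in CREATE_IN_PROGRESS state. To follow the creation, re-run openstack stack list or inspect the stack events:
[root@allinone ~(keystone_admin)]# openstack stack event list create_tenant_stack
Wait until the stack reaches CREATE_COMPLETE before proceeding.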
Verify the tenant ID of the newly created tenant tuxfixer:
[root@allinone ~(keystone_admin)]# openstack project list
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 1dd3e1af05b840f8b6ef2e795fe0f749 | admin |
| 6226efa7728f48b8bf598a34d4d0b29e | services |
| 689e3771001f42c4a8867e84241af891 | tuxfixer |
+----------------------------------+----------+
[root@allinone ~(keystone_admin)]# openstack project show tuxfixer
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Example project tenant |
| enabled | True |
| id | 689e3771001f42c4a8867e84241af891 |
| name | tuxfixer |
| properties | |
+-------------+----------------------------------+
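If you prefer not to copy the ID by hand, it can be captured into a shell variable using the standard output formatting options (a convenience for the stack creation command later on):
[root@allinone ~(keystone_admin)]# TENANT_ID=$(openstack project show -f value -c id tuxfixer)
[root@allinone ~(keystone_admin)]# echo $TENANT_ID
689e3771001f42c4a8867e84241af891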
Since we need to execute the second stack as the tenant user, we need to prepare a keystone file for the tenant user and source it in the console.
Prepare the tuxfixer tenant user keystone file:
[root@allinone ~(keystone_admin)]# cp keystonerc_admin keystonerc_tuxfixer
Modify the file to look like below:
unset OS_SERVICE_TOKEN
export OS_USERNAME=tuxfixer
export OS_PASSWORD=tux_password
export OS_AUTH_URL=http://192.168.2.26:5000/v2.0
export PS1='[\u@\h \W(keystone_tuxfixer)]\$ '
export OS_TENANT_NAME=tuxfixer
export OS_REGION_NAME=RegionOne
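The file above targets the Keystone v2.0 endpoint, matching the keystonerc_admin generated by this all-in-one deployment. If your environment only exposes the Identity v3 API, a v3-style variant would look roughly like this (domain names are assumed to be Default):
unset OS_SERVICE_TOKEN
export OS_USERNAME=tuxfixer
export OS_PASSWORD=tux_password
export OS_AUTH_URL=http://192.168.2.26:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_NAME=tuxfixer
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(keystone_tuxfixer)]\$ '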
Source the tuxfixer keystone file to import the tuxfixer user credentials:
[root@allinone ~(keystone_admin)]# source keystonerc_tuxfixer
Launch the second stack, named deploy_tenant_stack, as tuxfixer using the deploy_tenant.yaml template file, passing the newly created tenant ID as a parameter:
[root@allinone ~(keystone_tuxfixer)]# openstack stack create -t deploy_tenant.yaml --parameter tenant_id=689e3771001f42c4a8867e84241af891 deploy_tenant_stack
+---------------------+-----------------------------------------------------------+
| Field | Value |
+---------------------+-----------------------------------------------------------+
| id | a4f6f984-4ff6-4e44-b47a-c65969f3285a |
| stack_name | deploy_tenant_stack |
| description | tuxfixer.com - deploy project tenant and launch instances |
| creation_time | 2017-02-13T21:14:54Z |
| updated_time | None |
| stack_status | CREATE_IN_PROGRESS |
| stack_status_reason | Stack CREATE started |
+---------------------+-----------------------------------------------------------+
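This stack creates a dozen resources, so it takes a while longer. The per-resource status can be watched while it converges:
[root@allinone ~(keystone_tuxfixer)]# openstack stack resource list deploy_tenant_stack
Every resource should eventually reach CREATE_COMPLETE state.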
3. Test newly created Heat stacks
Let’s briefly verify our new stacks to see if all stack components were created successfully.
Source the admin credentials again:
[root@allinone ~(keystone_tuxfixer)]# source keystonerc_admin
List the created stacks:
[root@allinone ~(keystone_admin)]# openstack stack list
+--------------------------------------+---------------------+-----------------+----------------------+--------------+
| ID | Stack Name | Stack Status | Creation Time | Updated Time |
+--------------------------------------+---------------------+-----------------+----------------------+--------------+
| a4f6f984-4ff6-4e44-b47a-c65969f3285a | deploy_tenant_stack | CREATE_COMPLETE | 2017-02-13T21:14:54Z | None |
| aa3a02d1-6755-4d3b-bb20-d9bf05ad052d | create_tenant_stack | CREATE_COMPLETE | 2017-02-13T20:51:47Z | None |
+--------------------------------------+---------------------+-----------------+----------------------+--------------+
Display the first stack details:
[root@allinone ~(keystone_admin)]# openstack stack show create_tenant_stack
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| id | aa3a02d1-6755-4d3b-bb20-d9bf05ad052d |
| stack_name | create_tenant_stack |
| description | tuxfixer.com - create project tenant and assign user / member |
| creation_time | 2017-02-13T20:51:47Z |
| updated_time | None |
| stack_status | CREATE_COMPLETE |
| stack_status_reason | Stack CREATE completed successfully |
| parameters | OS::project_id: 1dd3e1af05b840f8b6ef2e795fe0f749 |
| | OS::stack_id: aa3a02d1-6755-4d3b-bb20-d9bf05ad052d |
| | OS::stack_name: create_tenant_stack |
| | tenant_name: tuxfixer |
| | tenant_user: tuxfixer |
| | tenant_user_password: tux_password |
| | tenant_user_role: heat_stack_owner |
| | |
| outputs | [] |
| | |
| links | - href: http://192.168.2.26:8004/v1/1dd3e1af05b840f8b6ef2e795fe0f749/stacks/create_tenant_stack/aa3a02d1-6755-4d3b-bb20-d9bf05ad052d |
| | rel: self |
| | |
| parent | None |
| disable_rollback | True |
| deletion_time | None |
| stack_user_project_id | 8956feb7c77447368aaac06ac538386f |
| capabilities | [] |
| notification_topics | [] |
| stack_owner | None |
| timeout_mins | None |
| tags | null |
| | ... |
| | |
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+
Source the tuxfixer credentials again:
[root@allinone ~(keystone_admin)]# source keystonerc_tuxfixer
Display the second stack details:
[root@allinone ~(keystone_tuxfixer)]# openstack stack show deploy_tenant_stack
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| id | a4f6f984-4ff6-4e44-b47a-c65969f3285a |
| stack_name | deploy_tenant_stack |
| description | tuxfixer.com - deploy project tenant and launch instances |
| creation_time | 2017-02-13T21:14:54Z |
| updated_time | None |
| stack_status | CREATE_COMPLETE |
| stack_status_reason | Stack CREATE completed successfully |
| parameters | OS::project_id: 689e3771001f42c4a8867e84241af891 |
| | OS::stack_id: a4f6f984-4ff6-4e44-b47a-c65969f3285a |
| | OS::stack_name: deploy_tenant_stack |
| | flavor_name: m1.tiny |
| | image_name: cirros_0.3.4 |
| | instance1_name: instance1 |
| | instance1_port_ip: 192.168.3.10 |
| | instance1_port_name: inst1_port |
| | instance2_name: instance2 |
| | instance2_port_ip: 192.168.3.11 |
| | instance2_port_name: inst2_port |
| | public_net_name: pub_net |
| | sec_group_name: sec_group |
| | tenant_id: 689e3771001f42c4a8867e84241af891 |
| | tenant_net_name: int_net |
| | tenant_router_name: tenant_router |
| | tenant_subnet_cidr: 192.168.3.0/24 |
| | tenant_subnet_gateway: 192.168.3.1 |
| | tenant_subnet_name: int_subnet |
| | |
| outputs | [] |
| | |
| links | - href: http://192.168.2.26:8004/v1/689e3771001f42c4a8867e84241af891/stacks/deploy_tenant_stack/a4f6f984-4ff6-4e44-b47a-c65969f3285a |
| | rel: self |
| | |
| parent | None |
| disable_rollback | True |
| deletion_time | None |
| stack_user_project_id | 5e4cb7c7847e4d1ea6ad198115626037 |
| capabilities | [] |
| notification_topics | [] |
| stack_owner | None |
| timeout_mins | None |
| tags | null |
| | ... |
| | |
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+
Now test connectivity to the newly created instances (instance1, instance2).
Verify the instance list in the tuxfixer tenant:
[root@allinone ~(keystone_tuxfixer)]# openstack server list
+--------------------------------------+-----------+--------+-------------------------------------+--------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+-----------+--------+-------------------------------------+--------------+
| a02dd428-5183-4aff-973f-4c5f4c351442 | instance2 | ACTIVE | int_net=192.168.3.11, 192.168.2.237 | cirros_0.3.4 |
| 825564fc-fd54-4ed0-a60a-bf44b596eda2 | instance1 | ACTIVE | int_net=192.168.3.10, 192.168.2.232 | cirros_0.3.4 |
+--------------------------------------+-----------+--------+-------------------------------------+--------------+
Note: SSH credentials for Cirros based instances: cirros / cubswin:)
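Before trying SSH it is worth confirming that the floating IPs answer ping; ICMP is allowed by the sec_group rules defined in the template:
[gjuszczak@tuxfixer ~]$ ping -c 3 192.168.2.232
[gjuszczak@tuxfixer ~]$ ping -c 3 192.168.2.237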
Check connectivity to instance1 from a machine in the public network:
[gjuszczak@tuxfixer ~]$ ssh cirros@192.168.2.232
The authenticity of host '192.168.2.232 (192.168.2.232)' can't be established.
RSA key fingerprint is SHA256:tKZ7fdIHvES+GU5Zh/XGUg6HyxPeFFYcRcFhwoW/qHg.
RSA key fingerprint is MD5:63:93:7c:eb:7d:c8:57:8c:3e:62:f4:30:86:e4:bc:46.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.232' (RSA) to the list of known hosts.
cirros@192.168.2.232's password:
$ hostname
instance1
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:18:55:27 brd ff:ff:ff:ff:ff:ff
inet 192.168.3.10/24 brd 192.168.3.255 scope global eth0
inet6 fe80::f816:3eff:fe18:5527/64 scope link
valid_lft forever preferred_lft forever
$ exit
Connection to 192.168.2.232 closed.
Check connectivity to instance2 from a machine in the public network:
[gjuszczak@tuxfixer ~]$ ssh cirros@192.168.2.237
The authenticity of host '192.168.2.237 (192.168.2.237)' can't be established.
RSA key fingerprint is SHA256:2w8Yndo9ipMYPXHmw/cqxUgn96qm2fU9sjGdSLtyLDc.
RSA key fingerprint is MD5:aa:26:39:89:21:8a:63:11:64:4c:a7:14:19:5e:c1:78.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.237' (RSA) to the list of known hosts.
cirros@192.168.2.237's password:
$ hostname
instance2
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:0d:fe:61 brd ff:ff:ff:ff:ff:ff
inet 192.168.3.11/24 brd 192.168.3.255 scope global eth0
inet6 fe80::f816:3eff:fe0d:fe61/64 scope link
valid_lft forever preferred_lft forever
$ exit
Connection to 192.168.2.237 closed.
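When you are done experimenting, the stacks can be deleted in reverse order; removing a stack removes all resources it created. Delete the tenant deployment as tuxfixer, then source the admin credentials and delete the tenant stack itself (recent heatclient versions ask for confirmation; the --yes flag skips the prompt):
[root@allinone ~(keystone_tuxfixer)]# openstack stack delete deploy_tenant_stack
[root@allinone ~(keystone_tuxfixer)]# source keystonerc_admin
[root@allinone ~(keystone_admin)]# openstack stack delete create_tenant_stack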