Installation and Configuration of OpenStack and AWS

Dr. Aly, O.
Computer Science

Abstract

The purpose of this project was to articulate all of the steps for the installation and configuration of OpenStack and Amazon Web Services (AWS).  The project begins with an overview of OpenStack and is divided into three main phases.  Phase 1 discusses and analyzes the differences between the networking techniques in AWS and OpenStack.  Phase 2 discusses the configurations required to deploy the OpenStack Controller, and also discusses and analyzes the expansion of OpenStack to include an additional node as the Compute Node.  Phase 3 discusses the issues encountered during the installation and configuration of OpenStack and AWS services.  A virtual bridge for the provider network was configured so that all VM traffic reaches the Internet through the external bridge.  Floating IPs also had to be disabled to prevent packets from being dropped when they reach AWS.  In this project, OpenStack with a Controller Node and an additional Compute Node is deployed and accessed successfully using the Horizon dashboard.  An Elastic Compute Cloud (EC2) instance is also installed and configured successfully using the default VPC, the default Security Group, and Access Control List.

Keywords: OpenStack, Amazon Web Services (AWS).

Introduction

            OpenStack is the result of initiatives from Rackspace and NASA in 2010, after NASA could not store its data in the public cloud for security reasons.  OpenStack is an open-source project which can be utilized by leading vendors to bring AWS-like ability and agility to the private cloud.  OpenStack has been growing since its inception in 2010 to include 500 member companies as part of the OpenStack Foundation, with platinum and gold members from the largest IT vendors globally.  Examples of these platinum members include Red Hat, SUSE, IBM, Hewlett Packard Enterprise, Canonical (Ubuntu), AT&T, and Rackspace (Armstrong, 2016).

            OpenStack primarily provides an Infrastructure-as-a-Service (IaaS) function within the private cloud, where it makes centralized storage, commodity compute, and networking features available to end users to self-service their needs through the Horizon dashboard or a set of common APIs.  Many organizations are deploying OpenStack in-house to develop their own data centers.  An OpenStack implementation is less likely to fail when it utilizes professional service support from established vendors, and it can serve as an alternative to Microsoft Azure and AWS.  Examples of these professional service vendors include Red Hat, SUSE, HP, Canonical, and Mirantis, each of which provides its own method of installing the platform (Armstrong, 2016).

            OpenStack follows a six-month release cycle, during which an upstream release is created.  The OpenStack Foundation creates the upstream release and governs it.  Examples of public cloud deployments of OpenStack include AT&T, Rackspace, and GoDaddy; thus, OpenStack is not used exclusively for private cloud.  However, OpenStack has become increasingly popular as a private cloud alternative to the AWS public cloud, and it is now widely used for Network Function Virtualization (NFV) (Armstrong, 2016).

OpenStack and AWS utilize different approaches to Networking.  This section begins with AWS Networking, followed by OpenStack Networking.

Phase 1:  OpenStack Networking vs. AWS Networking

1.1       AWS Networking

Virtual Private Cloud (VPC) is a hybrid cloud comprising public and private clouds.  The VPC is the default setting for new AWS users.  The VPC can also be connected to the user's network or to the organization's private data center.  The underlying concept of connecting the VPC to the organization's private data center is the use of a customer gateway and a virtual private gateway (VPG).  The VPG consists of two redundant VPN tunnels, which are instantiated from the private network of the user or organization.  The organization's gateway exposes a set of external static addresses from the organization's site, which use Network Address Translation-Traversal (NAT-T) to hide the addresses.  The organization can use one gateway device to access multiple VPCs.  The VPC provides an isolated view of all provisioned instances.  Identity and Access Management (IAM) of AWS is used to set up user accounts to access the VPC.  Figure 1 illustrates an example of an AWS VPC with virtual machines, or instances, mapped to one or more security groups and connected to different subnets attached to the VPC router (Armstrong, 2016; AWS, 2017).

Figure 1.  VPC of AWS showing multiple instances using Security Group.

            The VPC simplifies networking in software, allowing users and organizations to perform a set of networking operations such as subnet mapping, Domain Name System (DNS) configuration, public and private IP address assignment, and security group and access control list application.  When an organization creates a virtual machine or instance, a default VPC is assigned to it automatically.  Every VPC comes with a default router, which can be given additional custom routes and routing priorities to forward traffic to specific subnets based on the requirements of the organization and its users.  Figure 2 illustrates a VPC using private IPs, public IPs, and the Main Route Table, adapted from (Armstrong, 2016; AWS, 2017).

Figure 2.  AWS VPC Configuration Example (AWS, 2017).
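As a rough illustration of the operations described above, the following AWS CLI sketch creates a custom VPC, a subnet, an Internet gateway, and a default route.  The CIDR blocks and the resource identifiers (vpc-, subnet-, igw-, rtb-) are placeholders; the default VPC used later in this project does not require these steps.

    # Minimal sketch, assuming the AWS CLI is installed and configured with valid credentials.
    # All resource IDs below are placeholders returned by the preceding commands.
    $ aws ec2 create-vpc --cidr-block 10.0.0.0/16
    $ aws ec2 create-subnet --vpc-id vpc-11111111 --cidr-block 10.0.1.0/24
    $ aws ec2 create-internet-gateway
    $ aws ec2 attach-internet-gateway --internet-gateway-id igw-22222222 --vpc-id vpc-11111111
    $ aws ec2 create-route-table --vpc-id vpc-11111111
    $ aws ec2 create-route --route-table-id rtb-33333333 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-22222222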

            With respect to IP addressing in AWS, a mandatory private IP address is assigned automatically to every virtual machine or instance, along with a public IP address and DNS entry unless the instance is a dedicated instance.  The private IP is used to route traffic between instances when a virtual machine needs to communicate with another virtual machine nearby on the same subnet.  Public IPs, on the other hand, are accessible through the Internet.  If a persistent public IP address is needed for a virtual machine, AWS provides the Elastic IP address feature, which is limited to five addresses per account.  When using Elastic IP addresses, the IP address can be remapped quickly to another instance in case of an instance failure.  When using AWS, it can take up to 24 hours for the DNS Time To Live (TTL) of a public IP address to propagate.  Moreover, AWS supports a Maximum Transmission Unit (MTU) of 1,500 bytes for traffic passed to an instance, which organizations must consider when evaluating application performance (Armstrong, 2016; AWS, 2017).
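As a hedged example of the Elastic IP feature, the following AWS CLI commands allocate an Elastic IP in a VPC and associate it with an instance; the allocation and instance identifiers are placeholders.

    # Sketch only; IDs are placeholders for values returned by AWS.
    $ aws ec2 allocate-address --domain vpc
    $ aws ec2 associate-address --allocation-id eipalloc-44444444 --instance-id i-0123456789abcdef0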

            AWS uses Security Groups and Access Control Lists.  The SG in AWS is used to group a collection of access control rules with implicit denies.  The SG can be associated with one or more network interfaces of instances and acts as the firewall for those instances.  There is a default SG which is applied automatically if no other security group is specified when an instance is instantiated.  The default SG allows all outbound traffic and allows inbound traffic only from other instances within the same VPC; the default SG cannot be deleted.  With a custom SG, no inbound traffic is allowed by default, but all outbound traffic is allowed.  The user can add Access Control List (ACL) rules associated with the SG to govern inbound traffic using the AWS console (Armstrong, 2016; AWS, 2017).
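For example, an inbound SSH rule can be added to a custom SG from the AWS CLI as sketched below; the group ID and the source CIDR are placeholders chosen for illustration.

    # Allow inbound SSH (TCP 22) from a specific address range; sketch with placeholder values.
    $ aws ec2 authorize-security-group-ingress --group-id sg-55555555 --protocol tcp --port 22 --cidr 203.0.113.0/24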

            The VPC in AWS has access to different regions and availability zones of shared compute, which dictate the data center in which an instance or virtual machine will be deployed.  An availability zone (AZ) is an isolated location residing within a region, which is a geographic area isolated by design; thus, an AZ is a subset of a region.  Organizations and users can place resources in different locations for redundancy and recovery considerations.  AWS supports the use of more than one AZ when deploying production workloads.  Moreover, organizations and users can replicate instances and data across regions (Armstrong, 2016; AWS, 2017).

            The Elastic Load Balancing (ELB) feature is also offered by AWS and can be configured within a VPC.  The ELB can be external or internal.  When the ELB is external, it allows the creation of an Internet-facing entry point into the VPC using an associated DNS entry and balances load among the instances in the VPC.  An SG is assigned to the ELB to control access to the ports which need to be used (Armstrong, 2016; AWS, 2017).
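A hedged sketch of creating an Internet-facing classic ELB inside a VPC and registering an instance with it is shown below; the load balancer name, subnet, security group, and instance IDs are placeholders for illustration.

    # Classic ELB sketch; all identifiers are placeholders.
    $ aws elb create-load-balancer --load-balancer-name web-lb \
          --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
          --subnets subnet-66666666 --security-groups sg-55555555
    $ aws elb register-instances-with-load-balancer --load-balancer-name web-lb --instances i-0123456789abcdef0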

1.2       OpenStack Networking

            OpenStack is deployed in a data center on multiple controllers, which contain all of the OpenStack services.  These controllers can be installed on virtual machines, bare-metal physical servers, or containers.  When the controllers are deployed in a production environment, they host all OpenStack services on a highly available and redundant platform.  Different OpenStack vendors offer different installers.  Examples of these installers include Red Hat Director, Mirantis Fuel, HPE's installer, and Canonical's Juju.  All of these installers install controllers and are also used to scale out compute nodes in the OpenStack cloud (Armstrong, 2016; OpenStack, 2018b).

            With respect to the services of OpenStack, there are eleven core services which are installed on the OpenStack controllers.  These core services are Keystone, Heat, Glance, Cinder, Nova, Horizon, RabbitMQ, Galera, Swift, Ironic, and Neutron.  Figure 3 summarizes each core service of OpenStack (OpenStack, 2018a).  The Neutron networking service is similar in its constructs to AWS networking (Armstrong, 2016; OpenStack, 2018b).

Figure 3.  Summary of OpenStack Core Services (OpenStack, 2018a)

In OpenStack, a Project (also referred to as a Tenant) provides an isolated view of everything which a team has provisioned in the OpenStack cloud.  Using the Keystone identity service, different users can be set up for a Project (Tenant).  These accounts can be integrated with LDAP directories such as Active Directory to support a customizable permission model (Armstrong, 2016; OpenStack, 2018b).
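As an illustrative sketch, a Project (Tenant) and a user can be created with the OpenStack command-line client as follows; the project, user, and role names are assumptions for illustration, and the role name may differ between releases (for example, _member_ on older releases).

    # Sketch using the python-openstackclient; names are illustrative.
    $ openstack project create --description "Demo project" demo
    $ openstack user create --project demo --password-prompt demo-user
    $ openstack role add --project demo --user demo-user member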

The Neutron service of OpenStack performs all networking-related tasks and functions, which include seven major steps.  The first step is the creation of instances or virtual machines mapped to networks.  The second step is the assignment of IP addresses using the built-in DHCP service.  The third step is the application of DNS entries to instances from named servers.  The fourth step is the assignment of private and floating IP addresses.  The fifth step is the creation or association of the network subnets.  The sixth step is the creation of routers.  The last step is the application of the Security Groups (Armstrong, 2016; OpenStack, 2018b).

The compute nodes of OpenStack are deployed using a hypervisor which uses Open vSwitch.  Most vendor distributions of OpenStack provide the KVM hypervisor by default, which is deployed and configured on each compute node by the OpenStack installer.  The compute nodes in OpenStack are connected to the access layer of the STP 3-tier model; in modern networks, they are connected to the leaf switches, with VLANs connected to each compute node in the OpenStack cloud.  Tenant networks are used to provide isolation among tenants and use VXLAN or GRE tunneling to connect the layer 2 networks (Armstrong, 2016; OpenStack, 2018b).

The configuration and setup of simple networking using Neutron in a Project (Tenant) requires two different networks: an internal network and an external network.  The internal network is used for traffic among instances in the Project, where the subnet name and range are specified in the subnet.  The external network is used to make the internal network accessible from outside of OpenStack.  A router is also used in OpenStack to route packets between the networks with which it is associated; the external network is set as the router's gateway, and the last step in the network configuration connects the router to the internal and external networks.  Instances are provisioned in OpenStack onto the internal private network by selecting the private network NIC during the deployment of the instance.  OpenStack assigns pools of public IPs, known as floating IP addresses, from an external network for instances which need to be externally routable outside of OpenStack (Armstrong, 2016; OpenStack, 2018b).
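The sequence described above can be expressed with the OpenStack client roughly as follows.  The network, subnet, and router names, the subnet range, and the external network name (public) are assumptions for illustration and depend on the deployment.

    # Sketch of a simple Project network; names and ranges are illustrative.
    $ openstack network create private
    $ openstack subnet create --network private --subnet-range 192.168.100.0/24 private-subnet
    $ openstack router create router1
    $ openstack router set --external-gateway public router1
    $ openstack router add subnet router1 private-subnet
    # Allocate a floating IP from the external network for an externally routable instance.
    $ openstack floating ip create public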

OpenStack uses SGs like AWS to set up firewall rules between instances.  However, unlike AWS, which allows all outbound communications, OpenStack supports both ingress and egress ACL rules.  SSH access must be configured as an ACL rule against the parent SG in OpenStack, which is pushed down through Open vSwitch into kernel space on each hypervisor.  When the internal and external networks are set up and configured for the Project (Tenant), instances are ready to be launched on the private network.  Users can access the instances from the Horizon dashboard (Armstrong, 2016; OpenStack, 2018b).
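For example, SSH and ICMP ingress rules can be added to the default SG with the OpenStack client as sketched below; the use of the default group and the open 0.0.0.0/0 source range are assumptions suitable only for a lab environment.

    # Sketch only; adjust the group and remote IP range for production use.
    $ openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 default
    $ openstack security group rule create --protocol icmp --remote-ip 0.0.0.0/0 default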

With respect to regions and availability zones, OpenStack, like AWS, uses regions and AZs.  The compute nodes (hypervisors) in OpenStack can be assigned to different AZs, which are virtual separations of computing resources.  An AZ in OpenStack can be segmented into host aggregates.  However, a compute node can be assigned to only one AZ in OpenStack, while it can be a part of multiple host aggregates in the same AZ (Armstrong, 2016; OpenStack, 2018b).
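A hedged sketch of creating a host aggregate tied to an AZ and adding a compute node to it is shown below; the aggregate, zone, and host names are illustrative.

    # Sketch; names are illustrative.
    $ openstack aggregate create --zone az1 rack1-aggregate
    $ openstack aggregate add host rack1-aggregate compute1
    $ openstack availability zone list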

OpenStack offers Load-Balancer-as-a-Service (LBaaS), which allows incoming requests to be distributed evenly among the designated instances using a Virtual IP (VIP).  Examples of popular LBaaS plugins in OpenStack include Citrix NetScaler, F5, HAProxy, and Avi Networks.  The underlying concept of LBaaS in OpenStack is to allow organizations and users to use LBaaS as a broker to the load balancing solutions, using the OpenStack APIs or the Horizon dashboard to configure the load balancer (Armstrong, 2016; OpenStack, 2018b).
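Assuming the Octavia-based LBaaS plugin and its command-line client are installed, a load balancer with a VIP on the Project subnet could be created roughly as follows; the names and subnet are illustrative.

    # Sketch; requires the LBaaS (Octavia) service and client, names are illustrative.
    $ openstack loadbalancer create --name web-lb --vip-subnet-id private-subnet
    $ openstack loadbalancer listener create --name http-listener --protocol HTTP --protocol-port 80 web-lb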

Phase 2:  AWS and OpenStack Setup and Configuration

            This project deployed OpenStack on AWS, initially limited to the configuration of the Controller Node.  The OpenStack cloud is then expanded to add a Compute Node.  The topology for this project is illustrated in Figure 4.  Port 9000 is configured to be accessed from the browser on the client.  The Compute Node VM uses a different IP address from that of the OpenStack Controller Node.  A private network is configured using the Vagrant software.  A NAT interface is configured and mapped to the Compute Node and the OpenStack Controller Node, as illustrated in Figure 4.

Figure 4.  This Project’s Topology.

The Controller Node is configured with one processor, 4 GB of memory, and 5 GB of storage.  The Compute Node is configured with one processor, 2 GB of memory, and 10 GB of storage.  The installation must be performed on a 64-bit distribution on each node.  VirtualBox and the Vagrant software are used in this project.  Sublime Text is also installed to edit the Vagrantfile and avoid control characters at the end of lines, which can cause problems.  The project uses the Pike release of OpenStack.
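A minimal Vagrantfile sketch matching these specifications is shown below.  The base box name, the private IP addresses, and the forwarded port are assumptions for illustration (the actual values come from the Vagrantfile and topology used in this project); disk size is not set here because resizing the VirtualBox disk requires a plugin or a custom box.

    # Vagrantfile sketch; box name, IPs, and forwarded port are assumptions for illustration.
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/xenial64"                                 # assumed 64-bit base box

      config.vm.define "controller" do |controller|
        controller.vm.hostname = "controller"
        controller.vm.network "private_network", ip: "192.168.56.10"    # assumed private IP
        controller.vm.network "forwarded_port", guest: 80, host: 9000   # Horizon reached on host port 9000
        controller.vm.provider "virtualbox" do |vb|
          vb.cpus   = 1
          vb.memory = 4096                                               # 4 GB for the Controller Node
        end
      end

      config.vm.define "node1" do |node|
        node.vm.hostname = "node1"
        node.vm.network "private_network", ip: "192.168.56.11"           # assumed private IP
        node.vm.provider "virtualbox" do |vb|
          vb.cpus   = 1
          vb.memory = 2048                                               # 2 GB for the Compute Node
        end
      end
    end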

2.1 Amazon Machine Image (AMI) and Elastic Compute Cloud (EC2) Configuration

The project requires an AWS account in order to select the image used for the OpenStack deployment.  Multi-Factor Authentication is implemented to access the account.  An Amazon Machine Image (AMI) for Elastic Compute Cloud (EC2) is selected from the pool of available AMIs for this project.  The Free Tier EC2 instance is configured with the default Security Group (SG) and Access Control List (ACL) rules, as discussed earlier.  An EC2 AMI is a template which contains the software configuration, such as the operating system, application server, and applications, required to launch and instantiate an instance.  The EC2 instance is configured to use the default VPC.
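A hedged AWS CLI equivalent of launching the Free Tier instance into the default VPC with the default SG is sketched below; the AMI ID and key pair name are placeholders for the values selected in the console.

    # Sketch; the AMI ID and key pair are placeholders.  Omitting the subnet and security group
    # options causes the instance to use the default VPC and the default SG.
    $ aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro --key-name my-key-pair --count 1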

2.2 OpenStack Controller Node Configuration

The Controller Node is configured first to use the IP address identified in the topology.  This configuration is implemented using the Vagrant software and the Vagrantfile.

  • Connect to the controller using the Vagrant software.  To start the Controller from Vagrant, execute:
    • $vagrant up controller
  • Verify the Controller is running successfully.
    • $vagrant status
  • Verify the NAT address using eth0. 
    • $ifconfig -a
  • Verify the Private IP Address using eth1.  The IP address should match the one configured in the Vagrantfile.

Access the Controller Node of OpenStack from the browser using port 9000.

  • Verify the Hypervisors from the Horizon interface.

2.3 OpenStack Compute Node Configuration

The OpenStack Cloud is expanded by adding a Compute Node.  The configuration of the compute node is performed using the Vagrant file.

  • Connect to the Compute Node using the Vagrant command.  The Compute Node uses node1 as the hostname.  To start the Compute Node from Vagrant, execute the following command:
    • $vagrant up node1
  • Verify the Compute Node is running successfully.
    • $vagrant status
  • Access node1 using SSH ($vagrant ssh node1).
  • Check the OpenStack services (a hedged DevStack local.conf sketch for this node follows this list):
    • $sudo systemctl list-units devstack@*
  • Verify the NAT address using eth0. 
    • $ifconfig -a
  • Verify the Private IP Address using eth1.  The IP address should match the one configured in the Vagrantfile.
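Because the services listed above are DevStack units, the Compute Node is assumed to be provisioned with a DevStack local.conf similar to the following sketch; the IP addresses and passwords are placeholders, the exact service list depends on the release, and the actual values come from the Vagrant provisioning used in this project.

    [[local|localrc]]
    # Sketch of a DevStack compute-node local.conf; IPs and passwords are placeholders.
    HOST_IP=192.168.56.11          # this Compute Node (node1)
    SERVICE_HOST=192.168.56.10     # the Controller Node
    MYSQL_HOST=$SERVICE_HOST
    RABBIT_HOST=$SERVICE_HOST
    GLANCE_HOSTPORT=$SERVICE_HOST:9292
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=secret
    RABBIT_PASSWORD=secret
    SERVICE_PASSWORD=secret
    # Run only the compute and networking agent services on this node.
    ENABLED_SERVICES=n-cpu,q-agt,placement-client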

Access the Controller Node of OpenStack from the browser using port 9000, and verify the Hypervisors from the Horizon interface.

Phase 3:  Issues Deploying OpenStack on AWS

Several issues were encountered during the deployment of OpenStack on AWS.  The issue which impacted the EC2 instance involved the MAC address, which must be registered in the AWS network environment.  Moreover, the MAC address and the IP address must be mapped together, because packets are not allowed to flow if the MAC and IP addresses do not match the registered pair.

3.1 Neutron Networking

            During the configuration of OpenStack Neutron networking, a virtual bridge for the provider network is configured so that all VM traffic reaches the Internet through the external bridge, which is backed by the actual physical NIC, eth1.  Thus, a NIC with a special type of configuration is set up as the external interface, as shown in the topology for this project (Figure 4).
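A hedged sketch of this provider bridge configuration with Open vSwitch is shown below; the bridge name br-ex and the physical network label provider are common conventions and are assumptions here, while the interface eth1 follows the topology in Figure 4.

    # Create the external bridge and attach the physical NIC (eth1) to it; sketch only.
    $ sudo ovs-vsctl add-br br-ex
    $ sudo ovs-vsctl add-port br-ex eth1

    # Map the provider physical network to the bridge in the Open vSwitch agent configuration
    # (for example, /etc/neutron/plugins/ml2/openvswitch_agent.ini):
    #   [ovs]
    #   bridge_mappings = provider:br-ex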

 3.2 Disable Floating IP

            The floating IP must be disabled because it would send packets through the router's gateway with the floating IP as the source address, and these packets would be dropped once they reach AWS because they would arrive at the switch with an IP and MAC address that are not registered.  In this project, NAT is configured instead to access the public address externally, as shown in the topology in Figure 4.

Conclusion

The purpose of this project was to articulate all of the steps for the installation and configuration of OpenStack and Amazon Web Services.  The project began with an overview of OpenStack and was divided into three main phases.  Phase 1 discussed and analyzed the differences between the networking techniques in AWS and OpenStack.  Phase 2 discussed the configurations required to deploy the OpenStack Controller, and also discussed and analyzed the expansion of OpenStack to include an additional node as the Compute Node.  Phase 3 discussed the issues encountered during the installation and configuration of OpenStack and AWS services.  A virtual bridge for the provider network was configured so that all VM traffic reaches the Internet through the external bridge.  Floating IPs also had to be disabled to prevent packets from being dropped when they reach AWS.  In this project, OpenStack with a Controller Node and an additional Compute Node was deployed and accessed successfully using the Horizon dashboard.  An Elastic Compute Cloud (EC2) instance was also installed and configured successfully using the default VPC, the default Security Group, and Access Control List.

References

Armstrong, S. (2016). DevOps for Networking. Packt Publishing Ltd.

AWS. (2017). Virtual Private Cloud:  User Guide. Retrieved from: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-ug.pdf.

OpenStack. (2018a). Introduction to OpenStack. Retrieved from https://docs.openstack.org/security-guide/introduction/introduction-to-openstack.html.

OpenStack. (2018b). OpenStack Overview. Retrieved from https://docs.openstack.org/install-guide/overview.html.