Ubuntu OpenStack

This document outlines the steps to create a simple OpenStack cloud.

1) Network Architecture

The following is the target initial network design. The high-powered workstations will be pulled into the system as compute and storage nodes. A redundant controller will need to be created, along with replication of storage to a data center.
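
A rough sketch of that layout, based on the addresses used later in this document:

Internet
    |
Router (192.168.69.1)
    |
Local Area Network - 192.168.69.0/24 (eth0)
    |
cloud-ctrlr - 192.168.69.30 (eth0), 10.0.0.30 (eth1)
    |
Cloud Area Network - 10.0.0.0/24 (eth1)
    |
cloud-one through cloud-nine - 10.0.0.31-39 (the workstations, as compute/storage)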

2) Controller Server Configuration

Our first server will have two 500GB hard drives in a RAID1 mirror. This is where Ubuntu and the OpenStack ‘Controller’ will be placed. This server will also have two 1TB hard drives for storage.

2.1) Load Ubuntu 12.04 Server

Make sure to install SSH. This makes installation much easier, as you can SSH into the server. Make sure the hostname is ‘cloud-ctrlr’ and the IP address is ‘192.168.69.30’.

2.2) Dell 2950 Specifics for Ubuntu

Please refer to the following URL: http://linux.dell.com/repo/community/ubuntu/

cd /home/administrator
echo 'deb http://linux.dell.com/repo/community/ubuntu precise openmanage' | sudo tee -a /etc/apt/sources.list.d/linux.dell.com.sources.list
echo 'deb http://linux.dell.com/repo/community/ubuntu precise openmanage/730' | sudo tee -a /etc/apt/sources.list.d/linux.dell.com.sources.list
gpg --keyserver pool.sks-keyservers.net --recv-key 1285491434D8786F 
gpg -a --export 1285491434D8786F | sudo apt-key add - 
apt-get update
apt-get install srvadmin-all srvadmin-storageservices
vi /opt/dell/srvadmin/etc/omarolemap
administrator     *     Administrator
service dsm_om_connsvc start
update-rc.d dsm_om_connsvc defaults

Go to https://192.168.69.30:1311/ (the server’s address) in your browser to access OMSA.
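
OMSA’s omreport CLI (installed with srvadmin) can also verify things from the shell; the controller number below is an assumption and may differ on your hardware:

omreport system summary
omreport storage vdisk controller=0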

You must visit the following URL in a browser from the same IP address as the server you’re setting up and accept the terms; otherwise the download below will fail.

wget http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/6.602.03.00_MR%20Linux_Driver.tgz
mkdir MegaRAID
cd MegaRAID
tar -vxzf ../6.602.03.00_MR\ Linux_Driver.tgz
cd ubuntu/rpms-2
dpkg -i megaraid_sas_06.602.03.00-2-ubuntu12.04_x86_64.deb
wget http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/8.07.14_MegaCLI.zip 
apt-get install alien dpkg-dev debhelper build-essential unzip
mkdir MegaCLI
cd MegaCLI
unzip ../8.07.14_MegaCLI.zip
cd Linux
alien --script MegaCli-8.07.14-1.noarch.rpm
dpkg -i megacli_8.07.14-2_all.deb
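
With MegaCLI installed, the RAID controller can be inspected from the command line. The path below is the usual install location for the alien-converted package; adjust if yours differs:

/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL   # adapter summary
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL # logical (RAID) drive status
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL       # physical drive status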

2.3) Basic OS Configuration

vi /etc/network/interfaces
# Local Area Network
auto eth0
iface eth0 inet static
	address 192.168.69.30
	netmask 255.255.255.0
	gateway 192.168.69.1
	dns-nameservers 192.168.69.11 8.8.8.8

# Cloud Area Network
auto eth1
iface eth1 inet static
	address 10.0.0.30
	netmask 255.255.255.0
service networking restart
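
After restarting networking, it is worth a quick sanity check (not part of the original procedure) that both interfaces came up with the expected addresses:

ip addr show eth0
ip addr show eth1
ping -c 3 192.168.69.1
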
vi /etc/hosts
# Cloud Hosts
10.0.0.30	cloud-ctrlr
10.0.0.31	cloud-one
10.0.0.32	cloud-two
10.0.0.33	cloud-three
10.0.0.34	cloud-four
10.0.0.35	cloud-five
10.0.0.36	cloud-six
10.0.0.37	cloud-seven
10.0.0.38	cloud-eight
10.0.0.39	cloud-nine
apt-get install ntp
openssl rand -hex 8

This command can be used to generate passwords; a loop that produces a value for each name is shown after the list below. We will use the following for passwords:

UBUNTU_PASS Ubuntu Password
DB_PASS Root password for the database
RABBIT_PASS Password of user guest of RabbitMQ
KEYSTONE_DBPASS Database password of Identity service
ADMIN_PASS Password of user admin
GLANCE_DBPASS Database password for Image Service
GLANCE_PASS Password of Image Service user glance
NOVA_DBPASS Database password for Compute service
NOVA_PASS Password of Compute service user nova
DASH_DBPASS Database password for the dashboard
CINDER_DBPASS Database password for the Block Storage Service
CINDER_PASS Password of Block Storage Service user cinder
NEUTRON_DBPASS Database password for the Networking service
NEUTRON_PASS Password of Networking service user neutron
HEAT_DBPASS Database password for the Orchestration service
HEAT_PASS Password of Orchestration service user heat
CEILOMETER_DBPASS Database password for the Telemetry service
CEILOMETER_PASS Password of Telemetry service user ceilometer
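
As a convenience, a small shell loop (a sketch using the same openssl command as above) can generate a value for each name; record the output somewhere safe:

for NAME in UBUNTU_PASS DB_PASS RABBIT_PASS KEYSTONE_DBPASS ADMIN_PASS \
    GLANCE_DBPASS GLANCE_PASS NOVA_DBPASS NOVA_PASS DASH_DBPASS \
    CINDER_DBPASS CINDER_PASS NEUTRON_DBPASS NEUTRON_PASS \
    HEAT_DBPASS HEAT_PASS CEILOMETER_DBPASS CEILOMETER_PASS; do
	echo "${NAME}=$(openssl rand -hex 8)"
done
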
apt-get install python-mysqldb mysql-server python-software-properties
vi /etc/mysql/my.cnf
[mysqld]
...
# bind to specific adapter
bind-address = 10.0.0.30
# use for all adapters
# bind-address = 0.0.0.0
mysql_install_db
mysql_secure_installation
add-apt-repository cloud-archive:havana
apt-get update
apt-get dist-upgrade
reboot
apt-get install rabbitmq-server
rabbitmqctl change_password guest RABBIT_PASS
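
To confirm the broker is running and accepted the change:

rabbitmqctl status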

2.4) Identity Service (keystone)

apt-get install keystone
vi /etc/keystone/keystone.conf
[DEFAULT]
# A "shared secret" between keystone and other openstack services
admin_token = ADMIN_TOKEN
...
[sql]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://keystone:KEYSTONE_DBPASS@cloud-ctrlr/keystone
...
mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
quit
keystone-manage db_sync
service keystone restart
export OS_SERVICE_TOKEN=ADMIN_TOKEN
export OS_SERVICE_ENDPOINT=http://cloud-ctrlr:35357/v2.0
keystone tenant-create --name=admin --description="Admin Tenant"
keystone tenant-create --name=service --description="Service Tenant"
keystone user-create --name=admin --pass=ADMIN_PASS --email=shawn@internetworkconsulting.net
keystone role-create --name=admin
keystone user-role-add --user=admin --tenant=admin --role=admin
keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"

Please note the service ID from above and use it below in the ABOVE_SERVICE_ID field.

keystone endpoint-create --service-id=ABOVE_SERVICE_ID --publicurl=http://cloud-ctrlr:5000/v2.0 --internalurl=http://cloud-ctrlr:5000/v2.0 --adminurl=http://cloud-ctrlr:35357/v2.0
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
keystone --os-username=admin --os-password=ADMIN_PASS --os-auth-url=http://cloud-ctrlr:35357/v2.0 token-get
vi /home/administrator/keystonerc
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://cloud-ctrlr:35357/v2.0
source /home/administrator/keystonerc
keystone token-get
keystone user-list

2.5) Image Service (glance)

apt-get install glance python-glanceclient
vi /etc/glance/glance-api.conf
vi /etc/glance/glance-registry.conf

Use the following for both files listed above.

...
[DEFAULT]
...
sql_connection = mysql://glance:GLANCE_DBPASS@cloud-ctrlr/glance
...
[keystone_authtoken]
...
auth_uri = http://cloud-ctrlr:5000
auth_host = cloud-ctrlr
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS
...
[paste_deploy]
...
flavor = keystone
mv /var/lib/glance/glance.sqlite /var/lib/glance/glance.sqlite-ORIGINAL
mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
quit
glance-manage db_sync
keystone user-create --name=glance --pass=GLANCE_PASS --email=shawn@internetworkconsulting.net
keystone user-role-add --user=glance --tenant=service --role=admin
vi /etc/glance/glance-api-paste.ini
vi /etc/glance/glance-registry-paste.ini

Use the following for both files listed above.

[filter:authtoken]
paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
auth_host=cloud-ctrlr
admin_user=glance
admin_tenant_name=service
admin_password=GLANCE_PASS
keystone service-create --name=glance --type=image --description="Glance Image Service"

Please note the service ID from above and use it below in the ABOVE_SERVICE_ID field.

keystone endpoint-create --service-id=ABOVE_SERVICE_ID --publicurl=http://cloud-ctrlr:9292 --internalurl=http://cloud-ctrlr:9292 --adminurl=http://cloud-ctrlr:9292
service glance-registry restart
service glance-api restart
wget http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
glance image-create --name="CirrOS 0.3.1" --disk-format=qcow2 --container-format=bare --is-public=true < cirros-0.3.1-x86_64-disk.img
glance image-list

2.6) Compute Services (nova)

apt-get install nova-novncproxy novnc nova-api nova-ajax-console-proxy nova-cert nova-conductor nova-consoleauth nova-doc nova-scheduler python-novaclient nova-compute-kvm python-guestfs nova-network

When prompted to create a supermin appliance, respond yes.

apt-get install nova-api-metadata
vi /etc/nova/nova.conf
[DEFAULT]
...
auth_strategy=keystone
glance_host=cloud-ctrlr
my_ip=10.0.0.30
novncproxy_base_url=http://cloud-ctrlr:6080/vnc_auto.html
rabbit_host = cloud-ctrlr
rabbit_password = RABBIT_PASS
rpc_backend = nova.rpc.impl_kombu
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.0.0.30

network_manager=nova.network.manager.FlatDHCPManager
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
network_size=254
allow_same_net_traffic=False
multi_host=True
send_arp_for_ha=True
share_dhcp_address=True
force_dhcp_release=True
flat_network_bridge=br100
flat_interface=eth1
public_interface=eth1

[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://nova:NOVA_DBPASS@cloud-ctrlr/nova

[keystone_authtoken]
auth_host = cloud-ctrlr
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS
mv /var/lib/nova/nova.sqlite /var/lib/nova/nova.sqlite-ORIGINAL
mysql -u root -p
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
quit
nova-manage db sync
keystone user-create --name=nova --pass=NOVA_PASS --email=shawn@internetworkconsulting.net
keystone user-role-add --user=nova --tenant=service --role=admin
vi /etc/nova/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = cloud-ctrlr
auth_port = 35357
auth_protocol = http
auth_uri = http://cloud-ctrlr:5000/v2.0
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS
keystone service-create --name=nova --type=compute --description="Nova Compute service"

Please use the ID from above in the ABOVE_ID variable below.

keystone endpoint-create --service-id=ABOVE_ID --publicurl=http://cloud-ctrlr:8774/v2/%\(tenant_id\)s --internalurl=http://cloud-ctrlr:8774/v2/%\(tenant_id\)s --adminurl=http://cloud-ctrlr:8774/v2/%\(tenant_id\)s
service nova-api restart
service nova-cert restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
nova image-list
dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r)
vi /etc/kernel/postinst.d/statoverride
#!/bin/sh
version="$1"
# passing the kernel version is required
[ -z "${version}" ] && exit 0
dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}
chmod +x /etc/kernel/postinst.d/statoverride
service nova-compute restart
service nova-network restart
source keystonerc
service nova-api status
nova network-create vmnet --fixed-range-v4=10.0.0.0/24 --bridge-interface=br100 --multi-host=T
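
At this point the compute services and the new network should be visible (output formats vary slightly between releases):

nova-manage service list
nova network-list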

2.7) Dashboard (horizon)

apt-get install memcached libapache2-mod-wsgi openstack-dashboard
apt-get remove --purge openstack-dashboard-ubuntu-theme
vi /etc/openstack-dashboard/local_settings.py

Check the following:

CACHES = {
   'default': {
       'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
       'LOCATION' : '127.0.0.1:11211',
   }
}

You may set:

ALLOWED_HOSTS = ['localhost', 'my-desktop']

Make sure to set:

OPENSTACK_HOST = "cloud-ctrlr"
service apache2 restart
service memcached restart

Now we can use the dashboard: http://cloud-ctrlr/horizon

2.8) Block Storage (cinder)

Cinder provides block storage volumes that can be attached to running VMs. Do not confuse it with Swift (section 2.9): Swift is a distributed storage system with many purposes, but it DOES NOT ALLOW ACTIVE VM STORAGE and would instead be used for cloud storage, backups, cloud backups, etc.

apt-get install cinder-api cinder-scheduler lvm2 cinder-volume
vi /etc/cinder/cinder.conf
[DEFAULT]
...
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = cloud-ctrlr
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = RABBIT_PASS

[database]
connection = mysql://cinder:CINDER_DBPASS@cloud-ctrlr/cinder
vi /etc/cinder/api-paste.ini    	
[filter:authtoken]
paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
auth_host=cloud-ctrlr
auth_port = 35357
auth_protocol = http
admin_tenant_name=service
admin_user=cinder
admin_password=CINDER_PASS
mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
quit
cinder-manage db sync
keystone user-create --name=cinder --pass=CINDER_PASS --email=shawn@internetworkconsulting.net
keystone user-role-add --user=cinder --tenant=service --role=admin

Note the ID as it will be used in the next command.

keystone service-create --name=cinder --type=volume --description="Cinder Volume Service"

Use the ID from above in the ID_FROM_ABOVE variable.

keystone endpoint-create --service-id=ID_FROM_ABOVE --publicurl=http://cloud-ctrlr:8776/v1/%\(tenant_id\)s --internalurl=http://cloud-ctrlr:8776/v1/%\(tenant_id\)s --adminurl=http://cloud-ctrlr:8776/v1/%\(tenant_id\)s

Note the ID as it will be used in the next command.

keystone service-create --name=cinderv2 --type=volumev2 --description="Cinder Volume Service V2"

Use the ID from above in the ID_FROM_ABOVE variable.

keystone endpoint-create --service-id=ID_FROM_ABOVE --publicurl=http://cloud-ctrlr:8776/v2/%\(tenant_id\)s --internalurl=http://cloud-ctrlr:8776/v2/%\(tenant_id\)s --adminurl=http://cloud-ctrlr:8776/v2/%\(tenant_id\)s
service cinder-scheduler restart
service cinder-api restart
pvcreate /dev/sda3
vgcreate cinder-volumes /dev/sda3
vi /etc/lvm/lvm.conf
devices {
...
# accept the OS partition and the cinder-volumes PV (sda3); reject everything else
filter = [ "a/sda1/", "a/sda3/", "r/.*/" ]
...
}
service cinder-volume restart
service tgt restart
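
To verify Block Storage end to end, create, list, and then delete a small test volume (the name test-volume is arbitrary):

cinder create --display-name test-volume 1
cinder list
cinder delete test-volume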

2.9) Object Storage (swift) [skip]

The system was set up with a RAID1 array over two 500GB drives (/dev/sda), a CDROM (/dev/sr0), and two single-drive RAID0 arrays of one 1TB drive each (/dev/sdb, /dev/sdc). These last two are the object drives.

apt-get install swift openssh-server rsync memcached python-netifaces python-xattr python-memcache
keystone user-create --name=swift --pass=SWIFT_PASS --email=shawn@internetworkconsulting.net
keystone user-role-add --user=swift --tenant=service --role=admin
keystone service-create --name=swift --type=object-store --description="Object Storage Service"

Make sure to use the above service ID in the ABOVE_SERVICE_ID variable.

keystone endpoint-create --service-id=ABOVE_SERVICE_ID --publicurl='http://cloud-ctrlr:8080/v1/AUTH_%(tenant_id)s' --internalurl='http://cloud-ctrlr:8080/v1/AUTH_%(tenant_id)s' --adminurl=http://cloud-ctrlr:8080
mkdir -p /etc/swift
chown -R swift:swift /etc/swift/
vi /etc/swift/swift.conf
[swift-hash]
# random unique string that can never change (DO NOT LOSE) - !this authenticates the storage ring!
swift_hash_path_suffix = fLIbertYgibbitZ
apt-get install swift-account swift-container swift-object xfsprogs

For each object drive, repeat the following commands, replacing sdX with the actual device name. We will create a single partition per drive with fdisk. (A scripted alternative for both drives follows these steps.)

fdisk /dev/sdX
mkfs.xfs /dev/sdX1
echo "/dev/sdX1 /srv/node/sdX1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
mkdir -p /srv/node/sdX1
mount /srv/node/sdX1
chown -R swift:swift /srv/node
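
For this system’s two object drives (sdb and sdc), the steps above can be scripted. This is a sketch that uses parted instead of interactive fdisk to create the single partition:

for DISK in sdb sdc; do
	parted -s /dev/${DISK} mklabel gpt
	parted -s /dev/${DISK} mkpart primary xfs 0% 100%
	mkfs.xfs /dev/${DISK}1
	echo "/dev/${DISK}1 /srv/node/${DISK}1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
	mkdir -p /srv/node/${DISK}1
	mount /srv/node/${DISK}1
done
chown -R swift:swift /srv/node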

The drives are now prepared.

vi /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 10.0.0.30

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
vi /etc/default/rsync
RSYNC_ENABLE=true
service rsync start
mkdir -p /var/swift/recon
chown -R swift:swift /var/swift/recon
apt-get install swift-proxy memcached python-keystoneclient python-swiftclient python-webob
cd /etc/swift
openssl req -new -x509 -nodes -out cert.crt -keyout cert.key
vi /etc/memcached.conf

Change

-l 127.0.0.1

To

-l 10.0.0.30
service memcached restart
apt-get install git
git clone https://github.com/openstack/swift.git
cd swift
apt-get install python-pip
python setup.py install
vi /etc/swift/proxy-server.conf
[DEFAULT]
bind_port = 8080
user = swift

[pipeline:main]
pipeline = healthcheck cache authtoken keystoneauth proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = Member,admin,swiftoperator

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
# Delaying the auth decision is required to support token-less
# usage for anonymous referrers ('.r:*').
delay_auth_decision = true
# cache directory for signing certificate
signing_dir = /home/swift/keystone-signing
# auth_* settings refer to the Keystone server
auth_protocol = http
auth_host = cloud-ctrlr
auth_port = 35357
# the same admin_token as provided in keystone.conf
admin_token = ADMIN_TOKEN
# the service tenant and swift userid and password created in Keystone
admin_tenant_name = service
admin_user = swift
admin_password = SWIFT_PASS

[filter:cache]
use = egg:swift#memcache

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:healthcheck]
use = egg:swift#healthcheck
mkdir -p /home/swift/keystone-signing
chown -R swift:swift /home/swift/keystone-signing
cd /etc/swift

The following commands create the three Swift rings: account, container, and object. The 18 is the partition power (the ring holds 2^18 partitions, addressed by MD5 hash), the 3 is the number of copies (redundancy), and the 1 is the minimum number of hours between moves of any one partition.

swift-ring-builder account.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder object.builder create 18 3 1
swift-ring-builder account.builder add z1-10.0.0.30:6002/sdb1 100
swift-ring-builder container.builder add z1-10.0.0.30:6001/sdb1 100
swift-ring-builder object.builder add z1-10.0.0.30:6000/sdb1 100
swift-ring-builder account.builder add z2-10.0.0.30:6002/sdc1 100
swift-ring-builder container.builder add z2-10.0.0.30:6001/sdc1 100
swift-ring-builder object.builder add z2-10.0.0.30:6000/sdc1 100
swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
swift-init proxy start

Run the following:

service swift-object start
service swift-object-replicator start
service swift-object-updater start
service swift-object-auditor start
service swift-container start
service swift-container-replicator start
service swift-container-updater start
service swift-container-auditor start
service swift-account start
service swift-account-replicator start
service swift-account-reaper start
service swift-account-auditor start

Or

swift-init main start
service rsyslog restart
service memcached restart
swift -V 2.0 -A http://cloud-ctrlr:5000/v2.0 -U admin:admin -K ADMIN_PASS stat
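
Beyond stat, a quick round trip through the proxy confirms uploads work (test-container and the file name are arbitrary):

echo "hello swift" > testfile.txt
swift -V 2.0 -A http://cloud-ctrlr:5000/v2.0 -U admin:admin -K ADMIN_PASS upload test-container testfile.txt
swift -V 2.0 -A http://cloud-ctrlr:5000/v2.0 -U admin:admin -K ADMIN_PASS list test-container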

3) Administration

3.1) Performance Monitoring

Please refer to the following article for more information: http://www.tecmint.com/command-line-tools-to-monitor-linux-performance/

apt-get install sysstat

List of Processes

top

List of Open Files

lsof

List of IO Processes

iostat
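
With sysstat installed, iostat’s extended flag gives per-device utilization, which helps spot a saturated disk (here, a report every 5 seconds, three times):

iostat -x 5 3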

3.2) dd Bottleneck on Volume Delete

By default, deleted volumes are zeroed with dd, which can take a very long time on large volumes. Setting volume_clear to none skips the wipe (at the cost of not scrubbing old data):

vi /etc/cinder/cinder.conf
[DEFAULT]
…
volume_clear = none
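
For the change to take effect, restart the volume service:

service cinder-volume restart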

3.3) Large Host Error on Block Device

When launching larger images, the system only waits 60 times for 1 second each before timing out. This poses a big problem for larger images (100+ GB). We need to edit the following file, around lines 879-880, to lengthen the wait:

vi /usr/lib/python2.7/dist-packages/nova/compute/manager.py
    def _await_block_device_map_created(self, context, vol_id, max_tries=60, wait_between=5):
        ...
        max_tries = 60
        wait_between = 5

        attempts = 0
        start = time.time()
        ...
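
After editing the file, restart the compute service so the new values are loaded:

service nova-compute restart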

3.4) Create New Host

Upload the image to Horizon (see section 3.5).

Log in with SSH and ‘sudo’ to root.

qemu-img create -f qcow2 Windows7Pro.qcow2 120G
glance image-download Windows7Pro > Windows7Pro.iso
virt-install --name Win7Pro --ram 4096 --cdrom=/home/administrator/Windows7Pro.iso --disk=/home/administrator/Windows7Pro.qcow2,format=qcow2 --boot=cdrom --vcpus=4 --graphics vnc,listen=0.0.0.0,port=5900 --noautoconsole

Connect to the host with a VNC viewer.

glance image-create --name="Windows7Pro" --disk-format=qcow2 --container-format=bare --is-public=true < Windows7Pro.qcow2

3.5) Image Management

glance image-create --name="CirrOS 0.3.1" --disk-format=qcow2 --container-format=bare --is-public=true < cirros-0.3.1-x86_64-disk.img
glance image-list

3.6) Run an Image

virt-install --name Win7Pro --ram 4096 --cdrom=/home/administrator/Windows7Pro.iso --disk=/home/administrator/SzDesktop.qcow2,format=qcow2 --boot=cdrom --vcpus=4 --graphics vnc,listen=0.0.0.0,port=5900 --noautoconsole