Red Hat OpenStack PackStack Install Guide on CentOS

A guide to installing OpenStack with PackStack

1. Introduction

  • This document is a guide to installing PackStack, the one-shot installer for Red Hat OpenStack (RDO).
  • It also covers adding a Compute Node after the initial PackStack installation.

1.1. Prepare

Server: Cloud Controller node (runs network, volume, API, scheduler and image services)
  • Recommended hardware
    • Processor: 64-bit x86
    • Memory: 12 GB RAM
    • Disk space: 30 GB (SATA, SAS or SSD)
    • Volume storage: two 2 TB disks (SATA) for volumes attached to the compute nodes
    • Network: one 1 Gbps Network Interface Card (NIC)
  • Notes
    • Two NICs are recommended but not required. A quad-core server with 12 GB RAM would be more than sufficient for a cloud controller node.
Server: Compute nodes (run virtual instances)
  • Recommended hardware
    • Processor: 64-bit x86
    • Memory: 32 GB RAM
    • Disk space: 30 GB (SATA)
    • Network: two 1 Gbps NICs
  • Notes
    • With 2 GB RAM you can run one m1.small instance or three m1.tiny instances on a node without memory swapping, so 2 GB RAM is the minimum for a test-environment compute node. As an example, Rackspace Cloud Builders use 96 GB RAM for compute nodes in OpenStack deployments.
Specifically, for virtualization on certain hypervisors on the node or nodes running nova-compute, you need an x86 machine with an AMD processor with SVM extensions (also called AMD-V) or an Intel processor with VT (Virtualization Technology) extensions.
For XenServer and XCP, refer to the XenServer installation guide and the XenServer hardware compatibility list.
For LXC, the VT extensions are not required.

2. System Settings On Both Controller Node and Compute Node

2.1. Configure the network

  • The current network configuration
    • # ifconfig
  • Assigning Fully Qualified Domain Names
    • To avoid hostname-resolution problems, set the HOSTNAME in the /etc/sysconfig/network file on each system that will host an OpenStack API endpoint before registering with Red Hat Network (a consolidated example follows this list)
      • HOSTNAME=myhost.parentdomain
    • Edit /etc/hosts
      • 127.0.0.1 … myhost.parentdomain
      • {controller node ip} myhost.parentdomain
    • Set the hostname
      • # hostname myhost.parentdomain
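  • A minimal end-to-end sketch, assuming the hypothetical name controller.example.com and IP 10.0.0.76:
# Persist the hostname across reboots (EL6 style)
sed -i 's/^HOSTNAME=.*/HOSTNAME=controller.example.com/' /etc/sysconfig/network
# Map the name locally so the API endpoint resolves
echo "10.0.0.76 controller.example.com controller" >> /etc/hosts
# Apply immediately without a reboot
hostname controller.example.com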

2.2. Update yum

  • # yum update -y

2.3. Install the tgtd service, wget, and ntpd

  • install tgtd
    • # yum install -y scsi-target-utils
    • # service tgtd start
    • # service tgtd status
    • # chkconfig tgtd on
    • Check
      • # chkconfig --list tgtd
  • install wget
    • # yum install -y wget
  • install ntpd
    • # yum install -y ntp
    • # chkconfig ntpd on
    • # service ntpd start
    • # ntpdate -s time.bora.net
    • Check
      • # chkconfig --list ntpd

2.4. Disable SELinux, Stop iptables, and Set Up the Bridge Module

  • SELinux
    • Run
      • # getenforce
    • The output should be Permissive or Disabled. If it is Enforcing, run
      • # setenforce 0
      • # vi /etc/selinux/config
        • SELINUX=disabled
  • IPtables
    • # service iptables stop
    • # chkconfig iptables off
  • Modprobe bridge
    • sysctl check
      • If the bridge module is not loaded, errors like the following occur (see the sysctl.conf reference after this list)
        • The problem appears only while packstack is being installed
# sysctl -p
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
    • load bridge module
      • Add bridge module
# modprobe bridge
      • Check bridge module
# lsmod | grep bridge
bridge              83689  0
stp                  2218  1 bridge
llc                  5546  2 bridge,stp
      • Load the module at system boot
        • Edit /etc/rc.local
# vi /etc/rc.local
# load the bridge module at boot
modprobe bridge
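  • For reference, the keys from the sysctl errors above normally live in /etc/sysctl.conf (the values shown are the CentOS 6 defaults); once the bridge module is loaded, sysctl -p applies them without the "unknown key" errors:
# /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0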
  • Reboot
    • # reboot

2.5. Add Red Hat RDO source to repository
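  • The exact commands for this step were not captured; a common sketch from the Icehouse era (the URL is an assumption; check the RDO site for the current release RPM):
# Install the RDO release repository package
# yum install -y http://rdo.fedorapeople.org/rdo-release.rpm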

2.6. Install Packstack Installer

  • # yum install -y openstack-packstack

2.7. SSH key

  • ssh-keygen
    • Generate the key that packstack will use (a note on pre-distributing the key follows the transcript)
# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):Enter
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):Enter
Enter same passphrase again:Enter
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
94:fb:c6:70:eb:60:a8:db:b0:f6:ff:2a:d4:f2:e4:89 root@desktop8.example.com
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|      .          |
|     o           |
|    . .          |
|    .S .         |
|   o.o= .        |
| ...*o.=         |
| .=E.++          |
|   .+oooooo      |
+-----------------+
#
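  • If compute nodes will be added later, the controller's public key can be pre-distributed so that packstack can reach them without a password prompt (the host below is a placeholder):
# ssh-copy-id root@{compute_node_ip}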

2.8. RDO OpenStack Installation Only On Controller Node

  • Generate an answer file for Packstack configuration
    • # packstack --gen-answer-file=/root/packstack_answer.cfg
  • Update necessary parameters in the answer file
    • Parameters that typically need to be edited in the answer file
# vi /root/packstack_answer.cfg
##############################################
# Edit
##############################################

# To set up Horizon communication over https set this to 'y'
CONFIG_HORIZON_SSL=y

# The password to use for the Keystone admin user
CONFIG_KEYSTONE_ADMIN_PW=openstack

# Whether to provision for demo usage and testing. Note that
# provisioning is only supported for all-in-one installations.
CONFIG_PROVISION_DEMO=n

# Cinder's volumes group size. Note that actual volume size will be
# extended with 3% more space for VG metadata.
CONFIG_CINDER_VOLUMES_SIZE=4G

#    > The default cinder volume size is 40G; reduce it here
#    >> cinder initially allocates the volume group from local disk via /dev/loop0
#    >> Problems occur if it exceeds the / (root disk) capacity

# Comma separated list of NTP servers. Leave plain if Packstack
# should not install ntpd on instances.
CONFIG_NTP_SERVERS=time.bora.net
    • Example of editing the host IPs
      • Check the IP information
      • Check and update the interfaces used for internal and external traffic
        • In RDO, the DB HOST entries must be set to the local IP
##############################################
# Host info
##############################################
# 10.0.2.15 -> 10.0.0.76
# %s/10.0.2.15/10.0.0.76/g
##############################################

# The IP address of the server on which to install OpenStack services
# specific to controller role such as API servers, Horizon, etc.
CONFIG_CONTROLLER_HOST=10.0.0.76

# The list of IP addresses of the server on which to install the Nova
# compute service
CONFIG_COMPUTE_HOSTS=10.0.0.76

# The list of IP addresses of the server on which to install the
# network service such as Nova network or Neutron
CONFIG_NETWORK_HOSTS=10.0.0.76

# The IP address of the server on which to install the AMQP service
CONFIG_AMQP_HOST=10.0.0.76

##############################################
# DB IP info
##############################################

# The IP address of the server on which to install MySQL or IP
# address of DB server to use if MySQL installation was not selected
CONFIG_MYSQL_HOST=127.0.0.1

# The IP address of the server on which to install MongoDB
CONFIG_MONGODB_HOST=127.0.0.1

##############################################
# Network config info
##############################################

# Private interface for Flat DHCP on the Nova compute servers
CONFIG_NOVA_COMPUTE_PRIVIF=eth5

# Public interface on the Nova network server
CONFIG_NOVA_NETWORK_PUBIF=eth3

# Private interface for network manager on the Nova network server
CONFIG_NOVA_NETWORK_PRIVIF=eth5
    • Example using the sed command (a quick verification grep follows the block)
sed -i -e "s/CONFIG_CEILOMETER_INSTALL=y/CONFIG_CEILOMETER_INSTALL=n/g; \
s/CONFIG_CINDER_VOLUMES_SIZE=20G/CONFIG_CINDER_VOLUMES_SIZE=1G/g; \
s/CONFIG_NOVA_COMPUTE_HOSTS=(CONTROLL_IP)/CONFIG_NOVA_COMPUTE_HOSTS={put_COMPUTER_eth0_IP_here}/g; \
s/CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local/CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=gre/g; \
s/CONFIG_NEUTRON_OVS_TUNNEL_RANGES=/CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1:1000/g; \
s/CONFIG_NEUTRON_OVS_TUNNEL_IF=/CONFIG_NEUTRON_OVS_TUNNEL_IF=eth2/g; \
s/CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=local/CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre/g" \
/root/packstack_answer.cfg
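  • One way to confirm the edits took effect:
# grep -E "^CONFIG_(CONTROLLER_HOST|COMPUTE_HOSTS|NETWORK_HOSTS|MYSQL_HOST|MONGODB_HOST)=" /root/packstack_answer.cfg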
  • Network setting detail info
    • Change the CONTROLLER, COMPUTE, and NETWORK host IPs
# vi /root/packstack_answer.cfg

# specific to controller role such as API servers, Horizon, etc.
CONFIG_CONTROLLER_HOST={put_eth2_ip_here}

# The list of IP addresses of the server on which to install the Nova
# compute service
CONFIG_COMPUTE_HOSTS={put_eth2_ip_here}

# The list of IP addresses of the server on which to install the
# network service such as Nova network or Neutron
CONFIG_NETWORK_HOSTS={put_eth2_ip_here}

...
# address of DB server to use if MySQL installation was not selected
CONFIG_MYSQL_HOST={put_eth2_ip_here}

...
# The IP address of the server on which to install MongoDB
CONFIG_MONGODB_HOST={put_eth2_ip_here}
    • Change the public and private network interfaces
      • public
        • Set the interface to be used for the public network
# vi /root/packstack_answer.cfg
...
# Public interface on the Nova network server
CONFIG_NOVA_NETWORK_PUBIF=eth2
      • private
        • Set the interface to be used for the private network
# vi /root/packstack_answer.cfg
...
# Private interface for Flat DHCP on the Nova compute servers
CONFIG_NOVA_COMPUTE_PRIVIF=eth3
# Private interface for network manager on the Nova network server
CONFIG_NOVA_NETWORK_PRIVIF=eth3

2.9. Run Packstack to install OpenStack


2.9.1. install packstack

  • # packstack --answer-file=/root/packstack_answer.cfg
    • The installation may fail because of unstable package downloads; if it does, run the command again

2.9.2. Success Log

  • Output from a successful installation
[root@controller ~]# packstack --answer-file=/root/packstack_answer.cfg

Welcome to Installer setup utility

Installing:
Clean Up                                             [ DONE ]
Setting up ssh keys                                  [ DONE ]
Discovering hosts' details                           [ DONE ]
Adding pre install manifest entries                  [ DONE ]
Installing time synchronization via NTP              [ DONE ]


Preparing servers                                    [ DONE ]
Adding AMQP manifest entries                         [ DONE ]
Adding MySQL manifest entries                        [ DONE ]
Adding Keystone manifest entries                     [ DONE ]
Adding Glance Keystone manifest entries              [ DONE ]
Adding Glance manifest entries                       [ DONE ]
Adding Cinder Keystone manifest entries              [ DONE ]
Adding Cinder manifest entries                       [ DONE ]
Checking if the Cinder server has a cinder-volumes vg[ DONE ]
Adding Nova API manifest entries                     [ DONE ]
Adding Nova Keystone manifest entries                [ DONE ]
Adding Nova Cert manifest entries                    [ DONE ]
Adding Nova Conductor manifest entries               [ DONE ]
Creating ssh keys for Nova migration                 [ DONE ]
Gathering ssh host keys for Nova migration           [ DONE ]
Adding Nova Compute manifest entries                 [ DONE ]
Adding Nova Scheduler manifest entries               [ DONE ]
Adding Nova VNC Proxy manifest entries               [ DONE ]
Adding Openstack Network-related Nova manifest entries[ DONE ]
Adding Nova Common manifest entries                  [ DONE ]
Adding Neutron API manifest entries                  [ DONE ]
Adding Neutron Keystone manifest entries             [ DONE ]
Adding Neutron L3 manifest entries                   [ DONE ]
Adding Neutron L2 Agent manifest entries             [ DONE ]
Adding Neutron DHCP Agent manifest entries           [ DONE ]
Adding Neutron LBaaS Agent manifest entries          [ DONE ]
Adding Neutron Metering Agent manifest entries       [ DONE ]
Adding Neutron Metadata Agent manifest entries       [ DONE ]
Checking if NetworkManager is enabled and running    [ DONE ]
Adding OpenStack Client manifest entries             [ DONE ]
Adding Horizon manifest entries                      [ DONE ]
Adding Swift Keystone manifest entries               [ DONE ]
Adding Swift builder manifest entries                [ DONE ]
Adding Swift proxy manifest entries                  [ DONE ]
Adding Swift storage manifest entries                [ DONE ]
Adding Swift common manifest entries                 [ DONE ]
Adding MongoDB manifest entries                      [ DONE ]
Adding Ceilometer manifest entries                   [ DONE ]
Adding Ceilometer Keystone manifest entries          [ DONE ]
Adding Nagios server manifest entries                [ DONE ]
Adding Nagios host manifest entries                  [ DONE ]
Adding post install manifest entries                 [ DONE ]
Installing Dependencies                              [ DONE ]
Copying Puppet modules and manifests                 [ DONE ]
Applying 115.144.181.59_prescript.pp
Applying 127.0.0.1_prescript.pp
115.144.181.59_prescript.pp:                         [ DONE ]          
127.0.0.1_prescript.pp:                              [ DONE ]          
Applying 115.144.181.59_ntpd.pp
Applying 127.0.0.1_ntpd.pp
115.144.181.59_ntpd.pp:                              [ DONE ]     
127.0.0.1_ntpd.pp:                                   [ DONE ]     
Applying 115.144.181.59_amqp.pp
Applying 127.0.0.1_mysql.pp
115.144.181.59_amqp.pp:                              [ DONE ]     
127.0.0.1_mysql.pp:                                  [ DONE ]     
Applying 115.144.181.59_keystone.pp
Applying 115.144.181.59_glance.pp
Applying 115.144.181.59_cinder.pp
115.144.181.59_keystone.pp:                          [ DONE ]         
115.144.181.59_cinder.pp:                            [ DONE ]         
115.144.181.59_glance.pp:                            [ DONE ]         
Applying 115.144.181.59_api_nova.pp
115.144.181.59_api_nova.pp:                          [ DONE ]         
Applying 115.144.181.59_nova.pp
115.144.181.59_nova.pp:                              [ DONE ]     
Applying 115.144.181.59_neutron.pp
115.144.181.59_neutron.pp:                           [ DONE ]        
Applying 115.144.181.59_neutron_fwaas.pp
Applying 115.144.181.59_osclient.pp
Applying 115.144.181.59_horizon.pp
115.144.181.59_neutron_fwaas.pp:                     [ DONE ]              
115.144.181.59_osclient.pp:                          [ DONE ]              
115.144.181.59_horizon.pp:                           [ DONE ]              
Applying 115.144.181.59_ring_swift.pp
115.144.181.59_ring_swift.pp:                        [ DONE ]           
Applying 115.144.181.59_swift.pp
115.144.181.59_swift.pp:                             [ DONE ]      
Applying 127.0.0.1_mongodb.pp
127.0.0.1_mongodb.pp:                                [ DONE ]   
Applying 115.144.181.59_ceilometer.pp
Applying 115.144.181.59_nagios.pp
Applying 115.144.181.59_nagios_nrpe.pp
Applying 127.0.0.1_nagios_nrpe.pp
127.0.0.1_nagios_nrpe.pp:                            [ DONE ]            
115.144.181.59_ceilometer.pp:                        [ DONE ]            
115.144.181.59_nagios.pp:                            [ DONE ]            
115.144.181.59_nagios_nrpe.pp:                       [ DONE ]            
Applying 115.144.181.59_postscript.pp
Applying 127.0.0.1_postscript.pp
115.144.181.59_postscript.pp:                        [ DONE ]           
127.0.0.1_postscript.pp:                             [ DONE ]           
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

**** Installation completed successfully ******


Additional information:
* File /root/keystonerc_admin has been created on OpenStack client host 115.144.181.59. To use the command line tools you need to source the file.
* NOTE : A certificate was generated to be used for ssl, You should change the ssl certificate configured in /etc/httpd/conf.d/ssl.conf on 115.144.181.59 to use a CA signed cert.
* To access the OpenStack Dashboard browse to https://115.144.181.59/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* To use Nagios, browse to http://115.144.181.59/nagios username: nagiosadmin, password: 30206e40606c4cbe
* The installation log file is available at: /var/tmp/packstack/20140916-175432-yUGY9O/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20140916-175432-yUGY9O/manifests

2.9.3. Error

.....
--> Running transaction check
---> Package augeas-libs.x86_64 0:0.9.0-4.el6 will be installed
---> Package rubygem-json.x86_64 0:1.5.5-1.el6 will be installed
--> Processing Dependency: rubygems for package: rubygem-json-1.5.5-1.el6.x86_64
--> Finished Dependency Resolution
Error: Package: rubygem-json-1.5.5-1.el6.x86_64 (puppetlabs-deps)
          Requires: rubygems
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
    • Resolution
# rpm -Uvh http://rbel.frameos.org/rbel6
# yum install rubygems

Install again
# packstack --answer-file=/root/packstack_answer.cfg


3. RDO OpenStack Configuration On both Controller Node and Compute Node

3.1. Configure Network for Neutron

3.1.1. Source environment variables for OpenStack

  • # source /root/keystonerc_admin

3.1.2. Bind eth2 to the external bridge

  • Configure the external bridge so that traffic from the VMs can reach the external network
  • # ovs-vsctl show
af3cdde0-f551-4c4b-b566-7726cb0b4dba
Bridge br-int
    Port "qr-5ac662f8-c5"
        tag: 1
        Interface "qr-5ac662f8-c5"
            type: internal
    Port br-int
        Interface br-int
            type: internal
Bridge br-eth2
    Port br-eth2
        Interface br-eth2
            type: internal
    Port "qg-58e9d5c9-a0"
        Interface "qg-58e9d5c9-a0"
            type: internal
ovs_version: "1.11.0"
  • Attach the Ethernet interface to br-ex (a verification follows)
    • # ovs-vsctl add-port br-ex eth2
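  • To confirm the binding, list the ports on br-ex; eth2 should now appear:
# ovs-vsctl list-ports br-ex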

3.1.3. Update the external bridge configuration

  • # vi /etc/sysconfig/network-scripts/ifcfg-eth2
# Modify the corresponding configuration
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth2
ONBOOT=yes
PROMISC=yes
    • PROMISC
      • Promiscuous mode makes the interface accept every packet on the network, not just packets addressed to it
      • It is usually enabled to analyze or sniff network traffic
  • # vi /etc/sysconfig/network-scripts/ifcfg-br-ex
# Modify the corresponding configuration
TYPE=Ethernet
BOOTPROTO=static
DEVICE=br-ex
ONBOOT=yes
IPADDR={put_eth2_ip_here}
NETMASK=255.255.255.0
PROMISC=yes
  • Restart network
    • # service network restart
  • If ping fails after the network changes are applied, run ovs-vsctl show to check that the bridges are configured correctly

3.2. Restart OpenStack Services

3.2.1. Compute Node Restart OpenStack Service

  • Restart Neutron Service
    • # service neutron-openvswitch-agent restart
  • Restart Nova Service
    • # service openstack-nova-compute restart

3.2.2. Controller Node Restart OpenStack Service

  • Restart Neutron Service
    • # for svc in dhcp-agent l3-agent metadata-agent openvswitch-agent ovs-cleanup server; do service neutron-$svc restart; done
    • # source ~/keystonerc_admin
    • # neutron agent-list
      • Check the output; each agent should show a smiley :-).
      • (If some agents are not up yet, wait a while and run "neutron agent-list" again.)
  • Restart Nova Services
    • # for svc in api cert compute conductor consoleauth novncproxy scheduler; do service openstack-nova-$svc restart; done
    • # source ~/keystonerc_admin
    • # nova-manage service list
      • Check the output; each service should show a smiley :-).

4. Configure Tenant Networks and Start a VM

  • The rest is the same as a native installation. http://wiki.stackinsider.com/index.php/Native_Stack_-_Single_Node_using_Neutron_GRE_-_Havana#Prepare_Tenant_Network

4.1. Login Horizon

4.1.1. Modify Horizon allowed access and restart the http service

  • Note
    • In this version it is already configured
  • Configure which hosts may access the dashboard (see the note below on restricting access)
    • # sed -i "s/ALLOWED_HOSTS.*$/ALLOWED_HOSTS = ['*', ]/g" /etc/openstack-dashboard/local_settings
      • OR edit /etc/openstack-dashboard/local_settings
        • ALLOWED_HOSTS = ['*', ]
    • # service httpd restart
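  • Note: '*' accepts any Host header; on an exposed deployment it is safer to list the actual hostname instead (the name below is an assumption):
# /etc/openstack-dashboard/local_settings
ALLOWED_HOSTS = ['myhost.parentdomain', ]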

4.1.2. Login username and password

  • You can log in to Horizon with the username and password below:
    • username:admin
    • password:{you can find the password in the /root/keystonerc_admin file with the OS_PASSWORD option}
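  • For example, the password can be read straight from the rc file (the value shown matches CONFIG_KEYSTONE_ADMIN_PW set earlier; yours may differ):
# grep OS_PASSWORD /root/keystonerc_admin
export OS_PASSWORD=openstack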


5. Implementing an additional Nova compute node

  • A guide for adding a Compute Node after a standard RDO PackStack installation

5.1. Prepare

5.1.1. OpenStack packages

  • The EPEL package includes GPG keys for package signing and repository information. This should only be installed on Red Hat Enterprise Linux and CentOS, not Fedora. Install the latest epel-release package (see http://download.fedoraproject.org/pub/epel/6/x86_64/repoview/epel-release.html). For example:
    • # yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
  • The openstack-utils package contains utility programs that make installation and configuration easier. These programs are used throughout this guide. Install openstack-utils. This also verifies that you can access the RDO repository:
    • # yum install openstack-utils
  • The openstack-selinux package includes the policy files that are required to configure SELinux during OpenStack installation on RHEL and CentOS. This step is not required during OpenStack installation on Fedora. Install openstack-selinux:
    • # yum install openstack-selinux
  • Upgrade your system packages:
    • # yum upgrade
  • If the upgrade included a new kernel package, reboot the system to ensure the new kernel is running:
    • # reboot

5.1.2. Database Node setup

  • On all nodes other than the controller node, install the MySQL Python library:
    • # yum install MySQL-python


5.2. Compute node Setting

  • Notes on the setup
    • In the prompts below, server8 is the Controller Node
    • desktop8 is the Compute Node being added
  • Overview of adding a Compute Node
    • Install nova-compute on the new node to provide the compute service
    • Install the neutron-openvswitch agent on the new node so that it can work with the Neutron service running on the control node
      • The network server remains the neutron-server installed on the control node

5.2.1. Managing Nova compute nodes

  • Disable debugging in nova
    • [root@server8 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT debug False
    • [root@server8 ~]# for i in /etc/init.d/openstack-nova* ; do $i condrestart ; done
  • Check the cloud status
    • Verify after authenticating as admin
[root@server8 ~]# source /root/keystonerc_admin
[root@server8 ~(keystone_admin)]# nova-manage host list
host                 zone      
server8.example.com internal  
[root@server8 ~(keystone_admin)]# nova-manage service list
Binary        Host                              Zone          Status State Updated_At
nova-conductor   server8.example.com               internal      enabled :-)   2014-07-30 06:45:31
nova-consoleauth server8.example.com               internal      enabled :-)   2014-07-30 06:45:31
nova-scheduler   server8.example.com               internal      enabled :-)   2014-07-30 06:45:31
nova-compute server8.example.com               nova          enabled :-)   2014-07-30 06:45:32
nova-cert     server8.example.com               internal      enabled :-)   2014-07-30 06:45:39

  • On desktop8 , install the nova package
    • [root@desktop8 ~]# yum install -y openstack-nova-compute
  • Back up config file and copy from server8 nova.conf
    • [root@desktop8 ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.orig
    • [root@desktop8 ~]# scp server8:/etc/nova/nova.conf /etc/nova/
    • [root@desktop8 ~]# chown root.nova /etc/nova/nova.conf
      • If this file's ownership is wrong, the compute node may not run properly
  • libvirtd daemon on
    • [root@desktop8 ~]# service libvirtd start
    • [root@desktop8 ~]# chkconfig libvirtd on
  • Change config nova.conf
    • [root@desktop8 ~]# vi /etc/nova/nova.conf
my_ip=192.168.0.8
# my_ip: a compute node IP that the control node can reach
libvirt_type=qemu
...
vncserver_listen=$my_ip
...
vncserver_proxyclient_address=$my_ip
#cinder_cross_az_attach=true
sql_connection=mysql://nova:73d2435774754c78@server8:3306/nova
# verify the control node host in the mysql connection string
    • Set libvirt_type to kvm when the Compute Node is a physical (non-virtualized) server
      • In Icehouse this option is named virt_type
    • Check the firewall (see the sketch after this list)
      • On the control node, check the iptables rules for MySQL and open the required ports
        • iptables open
          • [root@server8 ~]# vi /etc/sysconfig/iptables
            • Ports 5671, 5672 (AMQP) and 3306 (MySQL)
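  • A sketch of matching rules on the control node (placement within the existing chain is left to you; standard iptables syntax):
# /etc/sysconfig/iptables: allow AMQP (5671, 5672) and MySQL (3306) from the compute nodes
-A INPUT -p tcp -m multiport --dports 5671,5672,3306 -j ACCEPT

[root@server8 ~]# service iptables restart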
  • Start and enable nova compute service
[root@desktop8 ~]# service openstack-nova-compute start
Starting openstack-nova-compute:                        [  OK  ]

[root@desktop8 ~]# grep ERROR /var/log/nova/compute.log

[root@desktop8 ~]# chkconfig openstack-nova-compute on

  • Verify the hosts and services
[root@desktop8 ~]# nova-manage host list
host                 zone      
server8.example.com internal  
desktop8.example.com nova      
[root@desktop8 ~]# nova-manage service list
Binary        Host                              Zone          Status State Updated_At
nova-conductor   server8.example.com               internal      enabled :-)   2014-07-30 06:56:01
nova-consoleauth server8.example.com               internal      enabled :-)   2014-07-30 06:56:01
nova-scheduler   server8.example.com               internal      enabled :-)   2014-07-30 06:56:01
nova-compute server8.example.com               nova          enabled :-)   2014-07-30 06:56:02
nova-cert     server8.example.com               internal      enabled :-)   2014-07-30 06:55:59
nova-compute desktop8.example.com              nova          enabled :-)   2014-07-30 06:55:58
[root@desktop8 ~]#

5.2.2. Install OpenStack networking and copy configuration on the Nova compute node

  • On desktop8, install neutron-openvswitch
    • [root@desktop8 ~]# yum install -y openstack-neutron-openvswitch
  • Copy config file from server8 to desktop8
    • backup
      • [root@desktop8 ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.orig
      • [root@desktop8 ~]# cp /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.orig
    • copy
      • [root@desktop8 ~]# scp server8:/etc/neutron/neutron.conf /etc/neutron/
      • [root@desktop8 ~]# scp server8:/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugins/openvswitch/
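  • When GRE tunnels are in use, the copied ovs_neutron_plugin.ini still carries server8's tunnel IP; it usually needs to be changed to desktop8's own IP (key name per the Havana-era OVS plugin; the value is a placeholder):
# /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini on desktop8
[OVS]
local_ip = {desktop8_tunnel_ip}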

5.2.3. Configuring OpenStack networking on the Nova compute node

5.2.3.1. Source environment variables for OpenStack

  • # source /root/keystonerc_admin

5.2.3.2. Start and enable the openvswitch

  • [root@desktop8 ~]# service openvswitch start
  • [root@desktop8 ~]# tail /var/log/openvswitch/ovs*
  • [root@desktop8 ~]# chkconfig openvswitch on

5.2.3.3. Create the bridges

  • Add the bridges and connect the devices
[root@desktop8 ~]# ovs-vsctl add-br br-int
[root@desktop8 ~]# ovs-vsctl add-br br-ex

5.2.3.4. Bind eth2 to the external bridge

  • Configure the external bridge so that traffic from the VMs can reach the external network
  • [root@desktop8 ~]# ovs-vsctl show
af3cdde0-f551-4c4b-b566-7726cb0b4dba
Bridge br-int
    Port "qr-5ac662f8-c5"
        tag: 1
        Interface "qr-5ac662f8-c5"
            type: internal
    Port br-int
        Interface br-int
            type: internal
Bridge br-eth2
    Port br-eth2
        Interface br-eth2
            type: internal
    Port "qg-58e9d5c9-a0"
        Interface "qg-58e9d5c9-a0"
            type: internal
ovs_version: "1.11.0"
  • Attach the Ethernet interface to br-ex
    • Check which Ethernet interface serves the public network and attach it to br-ex
    • # ovs-vsctl add-port br-ex eth2

5.2.3.5. Update the external bridge configuration

  • # vi /etc/sysconfig/network-scripts/ifcfg-eth2
# Modify the corresponding configuration
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth2
ONBOOT=yes
PROMISC=yes
    • PROMISC
      • Promiscuous mode makes the interface accept every packet on the network, not just packets addressed to it
      • It is usually enabled to analyze or sniff network traffic
  • # vi /etc/sysconfig/network-scripts/ifcfg-br-ex
# Modify the corresponding configuration
TYPE=Ethernet
BOOTPROTO=static
DEVICE=br-ex
ONBOOT=yes
IPADDR={put_eth2_ip_here}
NETMASK=255.255.255.0
PROMISC=yes
  • Restart network
    • # service network restart
  • If ping fails after the network changes are applied, run ovs-vsctl show to check that the bridges are configured correctly

5.2.3.6. Start and enable the neutron-openvswitch-agent

  • Run
[root@desktop8 ~]# service neutron-openvswitch-agent start
Starting neutron-openvswitch-agent:                     [  OK  ]
[root@desktop8 ~]# tail /var/log/neutron/openvswitch-agent.log
[root@desktop8 ~]# chkconfig neutron-openvswitch-agent on
[root@desktop8 ~]# chkconfig neutron-ovs-cleanup on
      • neutron-ovs-cleanup runs at boot so that the Linux network startup does not disturb the Open vSwitch bridges
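  • Back on server8, the new agent should now be visible (the same check as in 3.2.2):
[root@server8 ~]# source /root/keystonerc_admin
[root@server8 ~(keystone_admin)]# neutron agent-list
# an Open vSwitch agent row for desktop8 should show a :-) status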



X. Error

X.1. Install

X.1.1. Error: Package: rubygem-json

  • See the resolution in section 2.9.3 above

X.1.2. Error keystone-manage db_sync returned

  • Error info
xx.xx.xx.xx_keystone.pp:                         [ ERROR ]         
Applying Puppet manifests                         [ ERROR ]

ERROR : Error appeared during Puppet run: xx.xx.xx.xx_keystone.pp
Error: /Stage[main]/Keystone::Db::Sync/Exec[keystone-manage db_sync]: Failed to call refresh: keystone-manage db_sync returned 1 instead of one of [0]
You will find full trace in log /var/tmp/packstack/20140519-103001-nPPXFU/manifests/xx.xx.xx.xx_keystone.pp.log
Please check log file /var/tmp/packstack/20140519-103001-nPPXFU/openstack-setup.log for more information

X.1.3. An unexpected error prevented the server from fulfilling your request.

  • Error info
115.144.181.59_keystone.pp:                       [ ERROR ]           
Applying Puppet manifests                         [ ERROR ]

ERROR : Error appeared during Puppet run: 115.144.181.59_keystone.pp
Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[_member_]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ role-list' returned 1: An unexpected error prevented the server from fulfilling your request. (HTTP 500)
You will find full trace in log /var/tmp/packstack/20140916-172703-erQsGD/manifests/115.144.181.59_keystone.pp.log
Please check log file /var/tmp/packstack/20140916-172703-erQsGD/openstack-setup.log for more informatio
    • Resolution
      • Set the DB HOST entries to the local IP
# vi /root/packstack_answer.cfg
CONFIG_MYSQL_HOST=127.0.0.1
CONFIG_MONGODB_HOST=127.0.0.1

X.2. Horizon

X.2.1. Error on tenant administration on Horizon dashboard

  • PackStack 일반 설치 후 프로젝트가 생성되지 않는 경우
  • Check /etc/openstack-dashboard/local_settings for OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member" it should be there. The error happens because the role does not exist in keystone.
  • Check for "Member" on Keystone: keystone role-list. If Member is not there, create it.
  • Create Member role in Keystone: keystone role-create --name Member
    • # source keystonerc_admin
    • # keystone role-create --name Member

X.2.2. "Not found" when accessing the dashboard

  • When the dashboard is accessed by IP, the browser cannot resolve the domain name and fails to load the page
  • Add the domain and IP to the hosts file
# vi /etc/hosts
{control node ip} domain_name
115.144.181.100 opControl.localhost

X.3. Add Compute Node

X.3.1. libvirtd daemon

  • Error Info
2014-09-17 16:56:53.435 2624 TRACE nova.virt.libvirt.driver libvirtError: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
2014-09-17 16:56:53.435 2624 TRACE nova.virt.libvirt.driver
2014-09-17 16:56:53.627 2624 WARNING nova.virt.libvirt.driver [-] Cannot update service status on host: opCompute,since it is not registered.
  • Resolution
    • Start the libvirtd daemon (see 5.2.1)
      • # service libvirtd start


Y. References

Y.1. URL

Y.1.1. Red Hat RDO - Dual Node using Neutron GRE - Havana


Y.1.2. All-In-One Openstack using RDO Packstack with External Public IP’s



Y.2. On Red Hat Enterprise Linux

  • When installing on RHEL
    • Ceilometer may fail because MongoDB does not get installed
    • Problems may occur because the rubygem packages cannot be found

Y.3. Removing PackStack Deployments

  • Completely removing OpenStack, application data and all packages
    • Create the following script and run it
# Warning! Dangerous step! Destroys VMs
for x in $(virsh list --all | grep instance- | awk '{print $2}') ; do
   virsh destroy $x ;
   virsh undefine $x ;
done ;

# Warning! Dangerous step! Removes lots of packages
yum remove -y nrpe "*nagios*" puppet "*ntp*" "*openstack*" \
"*nova*" "*keystone*" "*glance*" "*cinder*" "*swift*" \
mysql mysql-server httpd "*memcache*" scsi-target-utils \
iscsi-initiator-utils perl-DBI perl-DBD-MySQL ;

# Warning! Dangerous step! Deletes local application data
rm -rf /etc/nagios /etc/yum.repos.d/packstack_* /root/.my.cnf \
/var/lib/mysql/ /var/lib/glance /var/lib/nova /etc/nova /etc/swift \
/srv/node/device*/* /var/lib/cinder/ /etc/rsync.d/frag* \
/var/cache/swift /var/log/keystone /var/log/cinder/ /var/log/nova/ \
/var/log/httpd /var/log/glance/ /var/log/nagios/ /var/log/quantum/ ;

umount /srv/node/device* ;
killall -9 dnsmasq tgtd httpd ;

vgremove -f cinder-volumes ;
losetup -a | sed -e 's/:.*//g' | xargs losetup -d ;
find /etc/pki/tls -name "ssl_ps*" | xargs rm -rf ;
for x in $(df | grep "/lib/" | sed -e 's/.* //g') ; do
   umount $x ;
done
    • Status
      • The packages were removed, but problems occurred on reinstallation

Y.4. cinder-volumes extend

  • Extend the cinder-volumes volume group used by the existing cinder service

Y.4.1. vgextend

  • pvs
    • Check the current LVM physical volumes
[root@opControl ~]# pvs
 PV         VG             Fmt  Attr PSize   PFree  
 /dev/loop0 cinder-volumes lvm2 a--    4.12g   4.12g
 /dev/sdb1                 lvm2 a--  931.51g 931.51g
[root@opControl ~]#
  • vgextend
    • Extend the volume group
    • Remove the PV that mapped the local disk through loop0 and rebuild the LVM layout on the newly added disk
[root@opControl ~]# vgextend cinder-volumes /dev/sdb1
 Volume group "cinder-volumes" successfully extended

[root@opControl ~]# vgreduce cinder-volumes /dev/loop0
 Removed "/dev/loop0" from volume group "cinder-volumes"

[root@opControl ~]# pvs
 PV         VG             Fmt  Attr PSize   PFree  
 /dev/loop0                lvm2 a--    4.12g   4.12g
 /dev/sdb1  cinder-volumes lvm2 a--  931.51g 931.51g
[root@opControl ~]# vgs
 VG             #PV #LV #SN Attr   VSize   VFree  
 cinder-volumes   1   0   0 wz--n- 931.51g 931.51g
[root@opControl ~]#
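  • If a newly added disk has not been initialized for LVM yet, make it a physical volume before running vgextend (device name assumed):
[root@opControl ~]# pvcreate /dev/sdb1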

Y.5. Example Architectures





Written with reference to various materials from the Internet.

https://docs.google.com/document/d/1huwLCwDp1D05rHpj14ehL5U8aPt4cXvAHXUiwfqWVRg/edit?usp=sharing
