Lab 4: OpenStack





Lab Description

OpenStack is cloud-computing software for managing the resources of a cluster.

For resource management, OpenStack divides resources into three types, compute, network, and storage, and manages each with different components.


In this lab we walk you step by step through installing a single-node OpenStack cloud management platform,

hoping to give you an overview of each service component and of the overall architecture.

We will also have you try deploying your own cloud-computing environment quickly from a template.




Updates

2015/01/04 19:00 update:
  • Added the homework check page

2015/01/04 update:
  • Fixed the sample output shown after running nova service-list (removed nova-network)
  • When your homework is done, e-mail the TA; once it has been checked and you receive a reply, the machine is yours to play with. Everyone gets 3 machine resets (in case you break it and want to start over). After the 3 resets are used up you may still request one, but other students' resets will be handled first.

2014/12/26 update:
  • Added the homework description
  • Updated the Heat Orchestration Template (HOT) example
  • Homework is due 2015/01/09
  • Added instructions for connecting to the guest machines via VNC (in Step11)
  • Added OpenStack references



  • Lab environment

    Room 1002, Science Building
    Cloud-A01 ~ Cloud-D12
    CPU AMD Phenom™ II X6 1065T Processor
    Memory 8G
    Disk spaces 500G, 500G
    O.S. Debian Wheezy

    Virtual machine environment
    Location server room 821
    CPU vcpu*2
    Memory 8G
    Disk spaces 80G (QCOW2 Format)
    O.S. Debian Jessie

    Server room 821, Science Building
    CSIE-Cloud01 ~ CSIE-Cloud06
    CPU AMD Opteron™ Processor 6128 * 2
    (total 16 cpu cores)
    Memory 64G
    Disk spaces 500G, 500G, 1T
    O.S. Debian Wheezy

    CSIE-Cloud07, CSIE-Cloud08
    CPU AMD Opteron™ Processor 6234 * 2
    (total 24 cpu cores)
    Memory 128G, 160G
    Disk spaces 500G, 500G, 1T
    O.S. Debian Wheezy



    Environment parameters

    To avoid typing mistakes and to make it clear which environment each IP belongs to, the buttons below substitute for the various IPs.

    Your machine parameters:

    Username : [USERNAME]

    Code number : [code-num]

    Host IP : [Host-IP]

    Gateway : [Gateway-IP]

    Passwords:

    Password :



    Quick OpenStack install: DevStack

    The official OpenStack website provides a scripted quick-install option: devstack




    Installing OpenStack

    Step0 : Login Server

      If you are using MS Windows, see the guide Login Server From Windows.

      Open a terminal emulator and type the following command.

      ssh [USERNAME]@cloud.cs.nchu.edu.tw -X -p [PortNUM]



    Step1 : Preparing the OpenStack environment

    1. Install the Ubuntu-Cloud repository key

      First, update the package lists and upgrade installed packages

      sudo apt-get update; sudo apt-get dist-upgrade

      Install the key

      sudo apt-get install ubuntu-cloud-keyring

      Add the package source

      sudo cp /etc/apt/sources.list /etc/apt/sources.list.orig
      sudo nano /etc/apt/sources.list
      	# append this as the last line
      deb http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-updates/juno main

      Update the package lists

      sudo apt-get update
    2. Install KVM and Libvirt

      Check that the CPU supports hardware virtualization

      egrep '(vmx|svm)' --color=always /proc/cpuinfo
      	flags		: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge 
      mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx lm nopl 
      pni cx16 popcnt hypervisor lahf_lm svm abm sse4a
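
      The egrep check above can also be wrapped in a tiny counting helper; a minimal sketch (the function name is ours, not part of the lab, and you can pass an alternative cpuinfo file for testing):

      ```shell
      #!/bin/sh
      # Sketch: count CPU entries whose flags advertise hardware
      # virtualization (vmx = Intel VT-x, svm = AMD-V). A count of 0
      # means KVM full virtualization will not work on this host.
      count_virt_cpus() {
          grep -cE '(vmx|svm)' "${1:-/proc/cpuinfo}"
      }

      count_virt_cpus
      ```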

      Check whether the KVM modules are loaded

      lsmod | grep kvm
      	kvm_amd                60026  0 
      kvm                   455835  1 kvm_amd

      If you do not see output like the above, try running

      sudo modprobe kvm_amd
      lsmod | grep kvm
      	kvm_amd                60026  0 
      kvm                   455835  1 kvm_amd

      Once that checks out, install the kvm and libvirt packages with apt-get

      sudo apt-get install kvm libvirt-bin pm-utils

      In this lab the network is managed by OpenStack, so first delete Libvirt's default bridge

      sudo virsh net-destroy default
      	Network default destroyed
      sudo virsh net-undefine default
      	Network default has been undefined

      Restart the Libvirt service

      sudo service dbus restart
      sudo service libvirt-bin restart



    Step2 : Install MySQL

    The database stores the state of the OpenStack services. Since all of the OpenStack components installed later keep their data in the database, for convenience we create every component's database up front.

    1. Install MySQL with apt-get

      sudo apt-get install mysql-server python-mysqldb

      You will be prompted to set the MySQL root password (entered twice)

    2. Adjust the database configuration

      Back up the configuration file first

      sudo cp /etc/mysql/my.cnf /etc/mysql/my.cnf.bak

      By default MySQL only accepts connections from localhost; to simplify the installation, we change the bind address so it accepts connections from anywhere

      sudo sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf

      Restart the MySQL service

      sudo service mysql restart
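
      The sed one-liner above rewrites the bind-address value in place. A minimal sketch of the same substitution on a scratch file (the sample config content is fabricated; the variant shown anchors the change to the bind-address line so other occurrences of 127.0.0.1 in the file stay untouched):

      ```shell
      #!/bin/sh
      # Demonstrate the substitution on a scratch file rather than the
      # real /etc/mysql/my.cnf; the two config lines below are fabricated.
      cfg=$(mktemp)
      printf '[mysqld]\nbind-address = 127.0.0.1\n' > "$cfg"

      # Stricter than the lab's global s///g: only touch the bind-address line.
      sed -i '/^bind-address/ s/127\.0\.0\.1/0.0.0.0/' "$cfg"

      grep '^bind-address' "$cfg"
      rm -f "$cfg"
      ```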
    3. Create a database for each OpenStack service component

      Log in to the database, entering the password when prompted

      mysql -u root -p
      	Enter password: 

      Now create each component's database, starting with keystone

      CREATE DATABASE keystone;
      GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '';
      GRANT ALL ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '';

      Next, glance

      CREATE DATABASE glance;
      GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY '';
      GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '';

      Then nova

      CREATE DATABASE nova;
      GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY '';
      GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '';

      Next, cinder

      CREATE DATABASE cinder;
      GRANT ALL ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '';
      GRANT ALL ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '';

      Next, neutron

      CREATE DATABASE neutron;
      GRANT ALL ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '';
      GRANT ALL ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '';

      Then heat

      CREATE DATABASE heat;
      GRANT ALL ON heat.* TO 'heat'@'%' IDENTIFIED BY '';
      GRANT ALL ON heat.* TO 'heat'@'localhost' IDENTIFIED BY '';

      Finally, ceilometer

      CREATE DATABASE ceilometer;
      GRANT ALL ON ceilometer.* TO 'ceilometer'@'%' IDENTIFIED BY '';
      GRANT ALL ON ceilometer.* TO 'ceilometer'@'localhost' IDENTIFIED BY '';

      Verify that everything was created

      SHOW DATABASES;
      	+--------------------+
      | Database           |
      +--------------------+
      | information_schema |
      | ceilometer         |
      | cinder             |
      | glance             |
      | heat               |
      | keystone           |
      | mysql              |
      | neutron            |
      | nova               |
      | performance_schema |
      +--------------------+
      10 rows in set (0.00 sec)
      

      Once creation is complete, you can leave the prompt

      quit;
      	Bye
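
      The seven near-identical CREATE/GRANT stanzas above can also be generated with a loop; a sketch that only prints the SQL (pipe it into mysql -u root -p if you prefer this route; the empty IDENTIFIED BY password is kept elided exactly as in the lab text):

      ```shell
      #!/bin/sh
      # Generate the CREATE DATABASE / GRANT statements for every
      # OpenStack component in one loop. The password after IDENTIFIED BY
      # is left empty here because the lab elides it; fill in your own.
      gen_openstack_sql() {
          for svc in keystone glance nova cinder neutron heat ceilometer; do
              printf "CREATE DATABASE %s;\n" "$svc"
              printf "GRANT ALL ON %s.* TO '%s'@'%%' IDENTIFIED BY '';\n" "$svc" "$svc"
              printf "GRANT ALL ON %s.* TO '%s'@'localhost' IDENTIFIED BY '';\n" "$svc" "$svc"
          done
      }

      gen_openstack_sql
      ```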



    Step3 : Install RabbitMQ

    OpenStack services exchange messages through a messaging server; in this lab we use RabbitMQ.

      sudo apt-get install rabbitmq-server

      Set the RabbitMQ password

      sudo rabbitmqctl change_password guest 

      That wraps up the preliminary setup. Next we start installing the OpenStack service components!




    Step4 : Install Keystone

    Keystone is the component that handles authentication and authorization; during installation, every other service component must register with Keystone.

    1. Install the keystone packages with apt-get

      sudo apt-get install keystone python-keystoneclient
    2. Modify the keystone configuration

      Back up the original configuration file first

      sudo cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak

      Open the configuration file

      sudo nano /etc/keystone/keystone.conf -c
      	[database]
      ...
      # edit line 632
      connection = mysql://keystone:@[Host-IP]/keystone
      

      Save, exit, and check the result of the change

      sudo diff /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
      	632c632
      < connection = mysql://keystone:@[Host-IP]/keystone
      ---
      > connection = sqlite:////var/lib/keystone/keystone.db
    3. Sync the settings into the database

      sudo keystone-manage db_sync

      Restart the Keystone service

      sudo service keystone restart
    4. Ubuntu creates an SQLite database by default; since we do not use it, it can be deleted

      sudo rm -f /var/lib/keystone/keystone.db
    5. Use the official scripts to create credentials for the other components

      First create two empty script files

      touch keystone_basic.sh keystone_endpoints_basic.sh

      Make them executable

      chmod +x keystone_basic.sh keystone_endpoints_basic.sh

      Open the first script, paste in the following, and save

      nano keystone_basic.sh
      	#!/bin/sh
      #
      # Keystone basic configuration 
      
      # Mainly inspired by https://github.com/openstack/keystone/blob/master/tools/sample_data.sh
      
      # Modified by Bilel Msekni / Institut Telecom
      #
      # Support: openstack@lists.launchpad.net
      # License: Apache Software License (ASL) 2.0
      #
      HOST_IP=[Host-IP]
      ADMIN_PASSWORD=${ADMIN_PASSWORD:-}
      SERVICE_PASSWORD=${SERVICE_PASSWORD:-}
      export SERVICE_TOKEN="ADMIN"
      export SERVICE_ENDPOINT="http://${HOST_IP}:35357/v2.0"
      SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}
      
      get_id () {
          echo `$@ | awk '/ id / { print $4 }'`
      }
      
      # Tenants
      ADMIN_TENANT=$(get_id keystone tenant-create --name=admin)
      SERVICE_TENANT=$(get_id keystone tenant-create --name=$SERVICE_TENANT_NAME)
      
      
      # Users
      ADMIN_USER=$(get_id keystone user-create --name=admin --pass="$ADMIN_PASSWORD" --email=admin@domain.com)
      
      
      # Roles
      ADMIN_ROLE=$(get_id keystone role-create --name=admin)
      KEYSTONEADMIN_ROLE=$(get_id keystone role-create --name=KeystoneAdmin)
      KEYSTONESERVICE_ROLE=$(get_id keystone role-create --name=KeystoneServiceAdmin)
      
      # Add Roles to Users in Tenants
      keystone user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $ADMIN_TENANT
      keystone user-role-add --user-id $ADMIN_USER --role-id $KEYSTONEADMIN_ROLE --tenant-id $ADMIN_TENANT
      keystone user-role-add --user-id $ADMIN_USER --role-id $KEYSTONESERVICE_ROLE --tenant-id $ADMIN_TENANT
      
      # The Member role is used by Horizon and Swift
      MEMBER_ROLE=$(get_id keystone role-create --name=Member)
      
      # Configure service users/roles
      NOVA_USER=$(get_id keystone user-create --name=nova --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=nova@domain.com)
      keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $ADMIN_ROLE
      
      GLANCE_USER=$(get_id keystone user-create --name=glance --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=glance@domain.com)
      keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $GLANCE_USER --role-id $ADMIN_ROLE
      
      #QUANTUM_USER=$(get_id keystone user-create --name=quantum --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=quantum@domain.com)
      #keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $QUANTUM_USER --role-id $ADMIN_ROLE
      
      CINDER_USER=$(get_id keystone user-create --name=cinder --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=cinder@domain.com)
      keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $CINDER_USER --role-id $ADMIN_ROLE
      
      HEAT_USER=$(get_id keystone user-create --name=heat --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=heat@domain.com)
      keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $HEAT_USER --role-id $ADMIN_ROLE
      
      CEILOMETER_USER=$(get_id keystone user-create --name=ceilometer --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=ceilometer@domain.com)
      keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $CEILOMETER_USER --role-id $ADMIN_ROLE
      
      NEUTRON_USER=$(get_id keystone user-create --name=neutron --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=neutron@domain.com)
      keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $NEUTRON_USER --role-id $ADMIN_ROLE
      
      # HEAT
      keystone role-create --name heat_stack_user; keystone role-create --name heat_stack_owner
      
      
      # END
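      
      The script above leans on its get_id helper, which scrapes the id column out of the table the keystone CLI prints. A sketch of just that parsing step, run against a fabricated sample table instead of a real keystone command:

      ```shell
      #!/bin/sh
      # Sketch of what get_id in keystone_basic.sh does: run a command
      # and pull the value out of the " id " row of the table it prints.
      # fake_keystone_output and its rows are fabricated for illustration.
      get_id () {
          echo `$@ | awk '/ id / { print $4 }'`
      }

      fake_keystone_output() {
          printf '| %-8s | %-34s |\n' "name" "demo"
          printf '| %-8s | %-34s |\n' "id" "45f98a968ef6427b852756a4c5a0d5b3"
      }

      get_id fake_keystone_output
      ```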
      
      

      Open the second file, paste in the following, and save

      nano keystone_endpoints_basic.sh
      	#!/bin/sh
      #
      # Keystone basic Endpoints
      
      # Mainly inspired by https://github.com/openstack/keystone/blob/master/tools/sample_data.sh
      
      # Modified by Bilel Msekni / Institut Telecom
      #
      # Support: openstack@lists.launchpad.net
      # License: Apache Software License (ASL) 2.0
      #
      
      # Host address
      HOST_IP=[Host-IP]
      EXT_HOST_IP=[Host-IP]
      
      # MySQL definitions
      MYSQL_USER=keystone
      MYSQL_DATABASE=keystone
      MYSQL_HOST=$HOST_IP
      MYSQL_PASSWORD=
      
      # Keystone definitions
      KEYSTONE_REGION=RegionOne
      export SERVICE_TOKEN=ADMIN
      export SERVICE_ENDPOINT="http://${HOST_IP}:35357/v2.0"
      
      while getopts "u:D:p:m:K:R:E:T:vh" opt; do
        case $opt in
          u)
            MYSQL_USER=$OPTARG
            ;;
          D)
            MYSQL_DATABASE=$OPTARG
            ;;
          p)
            MYSQL_PASSWORD=$OPTARG
            ;;
          m)
            MYSQL_HOST=$OPTARG
            ;;
          K)
            MASTER=$OPTARG
            ;;
          R)
            KEYSTONE_REGION=$OPTARG
            ;;
          E)
            export SERVICE_ENDPOINT=$OPTARG
            ;;
          T)
            export SERVICE_TOKEN=$OPTARG
            ;;
          v)
            set -x
            ;;
          h)
            echo "Usage: $0 [-u user] [-D database] [-p password] [-m mysql_host] [-K master] [-R region] [-E endpoint] [-T token] [-v]" >&2
            exit 0
            ;;
          :)
            echo "Option -$OPTARG requires an argument" >&2
            exit 1
            ;;
        esac
      done  
      
      if [ -z "$KEYSTONE_REGION" ]; then
        echo "Keystone region not set. Please set with -R option or set KEYSTONE_REGION variable." >&2
        missing_args="true"
      fi
      
      if [ -z "$SERVICE_TOKEN" ]; then
        echo "Keystone service token not set. Please set with -T option or set SERVICE_TOKEN variable." >&2
        missing_args="true"
      fi
      
      if [ -z "$SERVICE_ENDPOINT" ]; then
        echo "Keystone service endpoint not set. Please set with -E option or set SERVICE_ENDPOINT variable." >&2
        missing_args="true"
      fi
      
      if [ -z "$MYSQL_PASSWORD" ]; then
        echo "MySQL password not set. Please set with -p option or set MYSQL_PASSWORD variable." >&2
        missing_args="true"
      fi
      
      if [ -n "$missing_args" ]; then
        exit 1
      fi
       
      keystone service-create --name nova --type compute --description 'OpenStack Compute Service'
      keystone service-create --name cinder --type volume --description 'OpenStack Volume Service'
      keystone service-create --name glance --type image --description 'OpenStack Image Service'
      keystone service-create --name keystone --type identity --description 'OpenStack Identity'
      keystone service-create --name heat --type orchestration --description "Orchestration"
      keystone service-create --name heat-cfn --type cloudformation --description "Orchestration"
      keystone service-create --name ceilometer --type metering --description "Telemetry"
      keystone service-create --name neutron --type network --description "OpenStack Networking"
      #keystone service-create --name ec2 --type ec2 --description 'OpenStack EC2 service'
      #keystone service-create --name quantum --type network --description 'OpenStack Networking service'
      
      create_endpoint () {
        case $1 in
          compute)
          keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8774/v2/$(tenant_id)s' --adminurl 'http://'"$HOST_IP"':8774/v2/$(tenant_id)s' --internalurl 'http://'"$HOST_IP"':8774/v2/$(tenant_id)s'
          ;;
          volume)
          keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8776/v1/$(tenant_id)s' --adminurl 'http://'"$HOST_IP"':8776/v1/$(tenant_id)s' --internalurl 'http://'"$HOST_IP"':8776/v1/$(tenant_id)s'
          ;;
          image)
          keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':9292/' --adminurl 'http://'"$HOST_IP"':9292/' --internalurl 'http://'"$HOST_IP"':9292/'
          ;;
          identity)
          keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':5000/v2.0' --adminurl 'http://'"$HOST_IP"':35357/v2.0' --internalurl 'http://'"$HOST_IP"':5000/v2.0'
          ;;
          orchestration)
          keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8004/v1/$(tenant_id)s' --adminurl 'http://'"$HOST_IP"':8004/v1/$(tenant_id)s' --internalurl 'http://'"$HOST_IP"':8004/v1/$(tenant_id)s'
          ;;
          cloudformation)
          keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8000/v1/' --adminurl 'http://'"$HOST_IP"':8000/v1/' --internalurl 'http://'"$HOST_IP"':8000/v1/'
          ;;
          metering)
          keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8777/' --adminurl 'http://'"$HOST_IP"':8777/' --internalurl 'http://'"$HOST_IP"':8777/'
          ;;
          network)
          keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':9696/' --adminurl 'http://'"$HOST_IP"':9696/' --internalurl 'http://'"$HOST_IP"':9696/'
          ;;
        esac
      }
      
      for i in compute volume image identity orchestration cloudformation metering network; do
        id=`mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE" -ss -e "SELECT id FROM service WHERE type='"$i"';"` || exit 1
        create_endpoint $i $id
      done
      
      # End
      
    6. Next, set some environment variables that OpenStack needs

      Create a file and paste in the following

      nano ~/stackrc
      	export OS_USERNAME=admin
      export OS_PASSWORD=
      export OS_TENANT_NAME=admin
      export OS_AUTH_URL=http://[Host-IP]:5000/v2.0/

      Load the environment variables

      source ~/stackrc
    7. Run the two scripts you just created

      sudo ./keystone_basic.sh
      sudo ./keystone_endpoints_basic.sh
    8. Now test it by listing the keystone users

      keystone user-list
      	+----------------------------------+------------+---------+-----------------------+
      |                id                |    name    | enabled |         email         |
      +----------------------------------+------------+---------+-----------------------+
      | 45f98a968ef6427b852756a4c5a0d5b3 |   admin    |   True  |    admin@domain.com   |
      | e375a322fbb7417ba60141674ff17b07 | ceilometer |   True  | ceilometer@domain.com |
      | 02e09a4e1fff4c959073c72b332ec2b8 |   cinder   |   True  |   cinder@domain.com   |
      | 3ac79cb22c634e5c8a8aa7e887345fe9 |   glance   |   True  |   glance@domain.com   |
      | 190f7ff5e4934635b2421bcc80c6444e |    heat    |   True  |    heat@domain.com    |
      | bccfec2324164aa08f03e7501c2174c5 |    nova    |   True  |    nova@domain.com    |
      +----------------------------------+------------+---------+-----------------------+
      



    Step5 : Install Glance

    Glance provides image management; users upload images and use them to launch different virtual machines.

    1. First, install the glance packages with apt-get

      sudo apt-get install glance python-glanceclient
    2. After installation, configure glance

      Get into the good habit of backing up the original configuration file first

      sudo cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak

      Open the configuration file and make the following changes

      sudo nano /etc/glance/glance-api.conf -c
      	[DEFAULT]
      ...
      # change line 210 to
      notification_driver = messaging
      ...
      # change line 216 to
      rpc_backend = rabbit
      ...
      # change line 220 to
      rabbit_host = [Host-IP]
      ...
      # change line 224 to
      rabbit_password = 
      
      [database]
      ...
      # change line 303 to
      connection = mysql://glance:@[Host-IP]/glance
      
      
      [keystone_authtoken]
      ...
      # line 384
      identity_uri = http://[Host-IP]:35357
      admin_tenant_name = service
      admin_user = glance
      admin_password = 
      # then add this line
      auth_uri = http://[Host-IP]:5000/v2.0
      
      
      [paste_deploy]
      ...
      # line 399
      flavor = keystone
      
      
    3. Next, edit a second configuration file

      Remember to back it up first

      sudo cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
      sudo nano /etc/glance/glance-registry.conf -c
      	[database]
      ...
      # line 143
      connection = mysql://glance:@[Host-IP]/glance
      
      
      [keystone_authtoken]
      ...
      # line 224
      identity_uri = http://[Host-IP]:35357
      admin_tenant_name = service
      admin_user = glance
      admin_password = 
      # then add this line
      auth_uri = http://[Host-IP]:5000/v2.0
      
      
      [paste_deploy]
      ...
      # line 238
      flavor = keystone
      
      
    4. Sync the modified settings into the database

      sudo glance-manage db_sync
    5. Restart the glance services

      sudo service glance-registry restart; sudo service glance-api restart
    6. Ubuntu creates an SQLite database by default; since we do not use it, it can be deleted

      sudo rm -f /var/lib/glance/glance.sqlite
    7. Test that Glance is installed and working correctly

      Download the CirrOS test image

      wget http://172.16.1.60/cirros-0.3.3-x86_64-disk.img

      Upload the image to Glance

      glance image-create --name "cirros-0.3.3-x86_64" --file cirros-0.3.3-x86_64-disk.img \
        --disk-format qcow2 --container-format bare --is-public True --progress
      	[=============================>] 100%
      +------------------+--------------------------------------+
      | Property         | Value                                |
      +------------------+--------------------------------------+
      | checksum         | 133eae9fb1c98f45894a4e60d8736619     |
      | container_format | bare                                 |
      | created_at       | 2014-12-17T15:26:19                  |
      | deleted          | False                                |
      | deleted_at       | None                                 |
      | disk_format      | qcow2                                |
      | id               | d6c9448e-d5ed-42a4-9b62-89013061b5e8 |
      | is_public        | True                                 |
      | min_disk         | 0                                    |
      | min_ram          | 0                                    |
      | name             | cirros-0.3.3-x86_64                  |
      | owner            | f390fd8d7cc040f482c526cb0b51902a     |
      | protected        | False                                |
      | size             | 13200896                             |
      | status           | active                               |
      | updated_at       | 2014-12-17T15:26:20                  |
      | virtual_size     | None                                 |
      +------------------+--------------------------------------+
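
      The checksum field in the output above is the MD5 of the uploaded file, so it can be compared against the local image as a sanity check; a sketch (the expected hash is only the one from this sample output, substitute the value your own upload printed):

      ```shell
      #!/bin/sh
      # Compare a local image's MD5 with the checksum Glance reported.
      # The expected hash is taken from the sample output above.
      expected="133eae9fb1c98f45894a4e60d8736619"
      actual=$(md5sum cirros-0.3.3-x86_64-disk.img | awk '{print $1}')
      if [ "$actual" = "$expected" ]; then
          echo "checksum OK"
      else
          echo "checksum mismatch: got '$actual'" >&2
      fi
      ```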
      

      List the registered images

      glance image-list
      	+--------------------------------------+---------------------+-------------+------------------+----------+--------+
      | ID                                   | Name                | Disk Format | Container Format | Size     | Status |
      +--------------------------------------+---------------------+-------------+------------------+----------+--------+
      | d6c9448e-d5ed-42a4-9b62-89013061b5e8 | cirros-0.3.3-x86_64 | qcow2       | bare             | 13200896 | active |
      +--------------------------------------+---------------------+-------------+------------------+----------+--------+
      

      If you see output like the above, the upload succeeded! Finally, remember to delete the local copy of the test image

      sudo rm cirros-0.3.3-x86_64-disk.img



    Step6 : Install Nova

    Nova is the component that provides compute services in OpenStack; in this lab, instance networking is handled by Neutron, which we configure in Step8.

    1. Install the Nova packages with apt-get

      sudo apt-get install nova-api nova-cert nova-conductor nova-consoleauth \
       nova-novncproxy nova-scheduler python-novaclient nova-compute sysfsutils
    2. Modify the Nova configuration file

      Back up the original configuration file first

      sudo cp /etc/nova/nova.conf /etc/nova/nova.conf.bak

      Since we will write a new configuration file from scratch, remove the original

      sudo rm /etc/nova/nova.conf

      Create the new configuration file

      sudo nano /etc/nova/nova.conf
      	# paste in the following
      [DEFAULT]
      verbose=True
      logdir=/var/log/nova
      state_path=/var/lib/nova
      lock_path=/var/lock/nova
      libvirt_use_virtio_for_bridges=True
      auth_strategy = keystone
      my_ip = [Host-IP]
      instance_usage_audit = True
      instance_usage_audit_period = hour
      notify_on_state_change = vm_and_task_state
      notification_driver = nova.openstack.common.notifier.rpc_notifier
      notification_driver = ceilometer.compute.nova_notifier
      
      # VNC
      vnc_enabled = True
      vncserver_listen = 0.0.0.0
      vncserver_proxyclient_address = [Host-IP]
      novncproxy_base_url = http://[Host-IP]:6080/vnc_auto.html
      
      # RabbitMQ
      rpc_backend = rabbit
      rabbit_host = [Host-IP]
      rabbit_password = 
      
      # API
      ec2_private_dns_show_ip=True
      api_paste_config=/etc/nova/api-paste.ini
      enabled_apis=ec2,osapi_compute,metadata
      
      # Networking
      network_api_class = nova.network.neutronv2.api.API
      security_group_api = neutron
      linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
      firewall_driver = nova.virt.firewall.NoopFirewallDriver
      
      [neutron]
      url = http://[Host-IP]:9696
      auth_strategy = keystone
      admin_auth_url = http://[Host-IP]:35357/v2.0
      admin_tenant_name = service
      admin_username = neutron
      admin_password = 
      service_metadata_proxy = True
      metadata_proxy_shared_secret = 
      
      [database]
      connection = mysql://nova:@[Host-IP]/nova
      
      [keystone_authtoken]
      auth_uri = http://[Host-IP]:5000/v2.0
      identity_uri = http://[Host-IP]:35357
      admin_tenant_name = service
      admin_user = nova
      admin_password = 
      
      [glance]
      host = [Host-IP]
      
      

      Sync the settings into the database

      sudo nova-manage db sync
    3. Use a script to restart all of the services

      nano ~/restart.sh
      	# paste in the following
      #!/bin/bash
      
      for a in rabbitmq-server libvirt-bin nova-cert nova-compute nova-api nova-conductor nova-scheduler nova-consoleauth nova-novncproxy; do sudo service "$a" stop; done
      for a in rabbitmq-server libvirt-bin nova-cert nova-compute nova-api nova-conductor nova-scheduler nova-consoleauth nova-novncproxy; do sudo service "$a" start; done
      
      # End
      

      Make it executable

      chmod a+x ~/restart.sh

      Run the script

      ~/restart.sh
    4. Check that the Nova services are up and running (this may take a moment)

      nova service-list
      	+----+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
      | Id | Binary           | Host         | Zone     | Status  | State | Updated_at                 | Disabled Reason |
      +----+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
      | 1  | nova-consoleauth | Simon-Ubuntu | internal | enabled | up    | 2014-12-17T17:44:33.000000 | -               |
      | 2  | nova-scheduler   | Simon-Ubuntu | internal | enabled | up    | 2014-12-17T17:44:33.000000 | -               |
      | 3  | nova-cert        | Simon-Ubuntu | internal | enabled | up    | 2014-12-17T17:44:34.000000 | -               |
      | 4  | nova-conductor   | Simon-Ubuntu | internal | enabled | up    | 2014-12-17T17:44:34.000000 | -               |
      | 5  | nova-compute     | Simon-Ubuntu | nova     | enabled | up    | 2014-12-17T17:44:32.000000 | -               |
      +----+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
      
      nova image-list
      	+--------------------------------------+---------------------+--------+--------+
      | ID                                   | Name                | Status | Server |
      +--------------------------------------+---------------------+--------+--------+
      | d6c9448e-d5ed-42a4-9b62-89013061b5e8 | cirros-0.3.3-x86_64 | ACTIVE |        |
      +--------------------------------------+---------------------+--------+--------+
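      
      An aside on restart.sh above: its service list is duplicated in the stop and start loops, so an edit to one loop can drift from the other. A variant sketch with the list factored out (cycle_services echo does a dry run that just prints the actions; pass "sudo service" on the lab machine):

      ```shell
      #!/bin/sh
      # Variant of restart.sh: the service list lives in one variable so
      # the stop and start loops cannot drift out of sync. cycle_services
      # takes the command used to act on each service.
      SERVICES="rabbitmq-server libvirt-bin nova-cert nova-compute nova-api \
      nova-conductor nova-scheduler nova-consoleauth nova-novncproxy"

      cycle_services() {
          for a in $SERVICES; do "$@" "$a" stop; done
          for a in $SERVICES; do "$@" "$a" start; done
      }

      cycle_services echo
      ```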
      
    5. Ubuntu creates an SQLite database by default; since we do not use it, it can be deleted

      sudo rm -f /var/lib/nova/nova.sqlite



    Step7 : Install Cinder

    Cinder is the component that manages block storage in OpenStack. OpenStack also offers the Swift object store; interested students are encouraged to try it.

    1. Install the Cinder packages with apt-get

      sudo apt-get install cinder-api cinder-scheduler python-cinderclient \
       cinder-volume python-mysqldb iscsitarget open-iscsi iscsitarget-dkms
    2. Modify the cinder configuration file

      As always, back up the original configuration file first

      sudo cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak

      Since we will recreate the configuration file, delete the original

      sudo rm /etc/cinder/cinder.conf

      Create the new configuration file

      sudo nano /etc/cinder/cinder.conf
      	# paste in the following
      [DEFAULT]
      rootwrap_config = /etc/cinder/rootwrap.conf
      api_paste_config = /etc/cinder/api-paste.ini
      iscsi_helper = tgtadm
      volume_name_template = volume-%s
      volume_group = cinder-volumes
      verbose = True
      auth_strategy = keystone
      state_path = /var/lib/cinder
      lock_path = /var/lock/cinder
      volumes_dir = /var/lib/cinder/volumes
      rpc_backend = rabbit
      rabbit_host = [Host-IP]
      rabbit_password = 
      my_ip = [Host-IP]
      glance_host = [Host-IP]
      control_exchange = cinder
      notification_driver = cinder.openstack.common.notifier.rpc_notifier
      
      [database]
      connection = mysql://cinder:@[Host-IP]/cinder
      
      [keystone_authtoken]
      auth_uri = http://[Host-IP]:5000/v2.0
      identity_uri = http://[Host-IP]:35357
      admin_tenant_name = service
      admin_user = cinder
      admin_password = 
      
      
    3. Sync the settings into the database

      sudo cinder-manage db sync
    4. Ubuntu creates an SQLite database by default; since we do not use it, it can be deleted

      sudo rm -f /var/lib/cinder/cinder.sqlite
    5. Restart the Cinder services

      sudo service cinder-api restart
      sudo service cinder-scheduler restart
      sudo service cinder-volume restart
    6. Create the Cinder volume group

      First, edit the iSCSI configuration

      sudo sed -i 's/false/true/g' /etc/default/iscsitarget

      Start the iSCSI services

      sudo service iscsitarget start
      sudo service open-iscsi start

      Use dd to allocate a sparse backing file for the volumes

      sudo dd if=/dev/zero of=/root/cinder-volumes bs=1M seek=20000 count=0

      Use losetup to attach the file as a loopback device (pseudo-device)

      sudo losetup /dev/loop0 /root/cinder-volumes

      Check the state of the attached pseudo-device

      sudo losetup -a
      	/dev/loop0: [fd01]:1450930 (/root/cinder-volumes)

      Partition loop0 with fdisk

      sudo fdisk /dev/loop0
      	Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
      Building a new DOS disklabel with disk identifier 0x39df5d1f.
      Changes will remain in memory only, until you decide to write them.
      After that, of course, the previous content won't be recoverable.
      
      Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
      
      Command (m for help): n
      Partition type:
         p   primary (0 primary, 0 extended, 4 free)
         e   extended
      Select (default p): p
      Partition number (1-4, default 1): 1
      First sector (2048-40959999, default 2048): [Enter]
      Using default value 2048
      Last sector, +sectors or +size{K,M,G} (2048-40959999, default 40959999): [Enter]
      Using default value 40959999
      
      Command (m for help): t
      Selected partition 1
      Hex code (type L to list codes): 8e
      Changed system type of partition 1 to 8e (Linux LVM)
      
      Command (m for help): w
      The partition table has been altered!
      
      Calling ioctl() to re-read partition table.
      
      WARNING: Re-reading the partition table failed with error 22: Invalid argument.
      The kernel still uses the old table. The new table will be used at
      the next reboot or after you run partprobe(8) or kpartx(8)
      Syncing disks.

      Create the LVM physical volume

      sudo pvcreate /dev/loop0
      	  Physical volume "/dev/loop0" successfully created

      Create the volume group

      sudo vgcreate cinder-volumes /dev/loop0
      	  Volume group "cinder-volumes" successfully created

      Check the state of the volume group

      sudo vgdisplay
      	
        --- Volume group ---
        VG Name               cinder-volumes
        System ID             
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  15
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                0
        Open LV               0
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               19.53 GiB
        PE Size               4.00 MiB
        Total PE              4999
        Alloc PE / Size       0 / 0   
        Free  PE / Size       4999 / 19.53 GiB
        VG UUID               ayKxWx-IeTA-369h-uotP-lxtO-yKIu-uREEoR
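
      One caveat about the setup above: a mapping made with losetup does not survive a reboot, so the cinder-volumes group would disappear on restart. A minimal boot-time fragment to re-attach it (the paths follow the lab's steps; hooking it into /etc/rc.local or your init system is an assumption, adapt as needed):

      ```shell
      #!/bin/sh
      # Sketch: re-attach the backing file and reactivate the volume
      # group at boot, since losetup mappings are lost on reboot.
      losetup /dev/loop0 /root/cinder-volumes
      vgchange -ay cinder-volumes
      ```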
      
      



    Step8 : Install Neutron

    Neutron provides the network environment for OpenStack.

    1. 使用 apt-get 安裝 Neutron 相關的套件

      sudo apt-get install neutron-server neutron-plugin-ml2 python-neutronclient neutron-plugin-openvswitch-agent \
      	neutron-l3-agent neutron-dhcp-agent
    2. Set up IP forwarding

      sudo sysctl net.ipv4.ip_forward=1
      sudo sysctl net.ipv4.conf.all.rp_filter=0
      sudo sysctl net.ipv4.conf.default.rp_filter=0
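
      These sysctl settings do not survive a reboot. As an optional extra (not part of the original lab steps), you can make them persistent by appending the same keys to /etc/sysctl.conf:

      ```
      # /etc/sysctl.conf — append these lines so the settings are restored at boot
      net.ipv4.ip_forward=1
      net.ipv4.conf.all.rp_filter=0
      net.ipv4.conf.default.rp_filter=0
      ```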
    3. Create the bridge needed for the external network

      sudo ovs-vsctl add-br brWAN
    4. Edit the Neutron configuration

      Remember to back up the configuration file first

      sudo cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak

      Since we will rebuild the configuration file from scratch, delete the original

      sudo rm /etc/neutron/neutron.conf

      Create the new configuration file

      sudo nano /etc/neutron/neutron.conf
      	# Paste the following
      [DEFAULT]
      lock_path = $state_path/lock
      core_plugin = ml2
      rpc_backend = rabbit
      rabbit_host = [Host-IP]
      rabbit_password = 
      auth_strategy = keystone
      service_plugins = router
      allow_overlapping_ips = True
      verbose = True
      
      notify_nova_on_port_status_changes = True
      notify_nova_on_port_data_changes = True
      nova_url = http://[Host-IP]:8774/v2
      nova_admin_auth_url = http://[Host-IP]:35357/v2.0
      nova_region_name = regionOne
      nova_admin_username = nova
      nova_admin_tenant_id = SERVICE_TENANT_ID
      nova_admin_password = 
      
      [matchmaker_redis]
      ###
      
      [matchmaker_ring]
      ###
      
      [quotas]
      ###
      
      [agent]
      root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
      
      [keystone_authtoken]
      auth_uri = http://[Host-IP]:5000/v2.0
      identity_uri = http://[Host-IP]:35357
      admin_tenant_name = service
      admin_user = neutron
      admin_password = 
      #auth_host = 127.0.0.1
      #auth_port = 35357
      #auth_protocol = http
      
      [database]
      connection = mysql://neutron:@[Host-IP]/neutron
      
      [service_providers]
      service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
      service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
      
      # End
      
      

      Obtain the SERVICE_TENANT_ID with the following command, and substitute the id value into the nova_admin_tenant_id line above

      keystone tenant-get service
      	+-------------+----------------------------------+
      |   Property  |              Value               |
      +-------------+----------------------------------+
      | description |                                  |
      |   enabled   |               True               |
      |      id     | 821b061099944233ade2c94dd5980303 |
      |     name    |             service              |
      +-------------+----------------------------------+
      

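      If you prefer to substitute the id non-interactively, the value can be pulled out of the table with awk and patched into neutron.conf with sed. This is a convenience sketch we suggest, assuming the table layout shown above:

      ```shell
      # Extract the "id" row from the keystone table (column 4 after splitting
      # on whitespace) and substitute it for the SERVICE_TENANT_ID placeholder.
      SERVICE_TENANT_ID=$(keystone tenant-get service | awk '$2 == "id" {print $4}')
      sudo sed -i "s/SERVICE_TENANT_ID/${SERVICE_TENANT_ID}/" /etc/neutron/neutron.conf
      ```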
      Next is another configuration file, ml2_conf.ini. Remember to back it up first

      sudo cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak

      Since we will rebuild the configuration file from scratch, delete the original

      sudo rm /etc/neutron/plugins/ml2/ml2_conf.ini

      Create the new configuration file

      sudo nano /etc/neutron/plugins/ml2/ml2_conf.ini
      	# Paste the following
      [ml2]
      type_drivers = flat,gre
      tenant_network_types = gre
      mechanism_drivers = openvswitch
      
      [ml2_type_flat]
      flat_networks = external
      
      [ml2_type_vlan]
      #
      
      [ml2_type_gre]
      tunnel_id_ranges = 1:1000
      
      [ml2_type_vxlan]
      #
      
      [securitygroup]
      enable_security_group = True
      enable_ipset = True
      firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
      
      [ovs]
      local_ip = [Host-IP]
      enable_tunneling = True
      bridge_mappings = external:brWAN
      
      [agent]
      tunnel_types = gre
      

      Next is another configuration file, l3_agent.ini. Remember to back it up first

      sudo cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak

      Open the configuration file

      sudo nano /etc/neutron/l3_agent.ini
      	# Append the following at the bottom
      interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
      external_network_bridge = brWAN
      use_namespaces = True
      verbose = True
      

      Next is another configuration file, dhcp_agent.ini. Remember to back it up first

      sudo cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak

      Open the configuration file

      sudo nano /etc/neutron/dhcp_agent.ini
      	# Append the following at the bottom
      interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
      dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
      use_namespaces = True
      verbose = True
      

      Next is another configuration file, metadata_agent.ini. Remember to back it up first

      sudo cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak

      Since we will rebuild the configuration file from scratch, delete the original

      sudo rm /etc/neutron/metadata_agent.ini

      Create the new configuration file

      sudo nano /etc/neutron/metadata_agent.ini
      	# Paste the following
      [DEFAULT]
      auth_url = http://[Host-IP]:5000/v2.0
      auth_region = regionOne
      admin_tenant_name = service
      admin_user = neutron
      admin_password = 
      nova_metadata_ip = [Host-IP]
      metadata_proxy_shared_secret = 
      verbose = True
      
    5. Restart all the Neutron services with a script

      nano ~/neutron_restart.sh
      	# Paste the following
      #!/bin/bash
      
      for a in neutron-server neutron-plugin-openvswitch-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent; do sudo service "$a" stop; done
      for a in neutron-server neutron-plugin-openvswitch-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent; do sudo service "$a" start; done
      
      # End
      

      Make the script executable

      chmod a+x ~/neutron_restart.sh

      Run the scripts

      ~/neutron_restart.sh
      ~/restart.sh
    6. Write the settings into the database

      sudo su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron



    Step8 : Install Heat

    Heat lets users quickly deploy machine configurations and network environments from templates, and can further define the services to be provided.

    1. Install the Heat-related packages with apt-get

      sudo apt-get install heat-api heat-api-cfn heat-engine python-heatclient
    2. Edit the Heat configuration

      Remember to back up the configuration file first

      sudo cp /etc/heat/heat.conf /etc/heat/heat.conf.bak

      Delete the configuration file

      sudo rm /etc/heat/heat.conf

      Create a new configuration file

      sudo nano /etc/heat/heat.conf -c
      	# Paste the following
      [DEFAULT]
      heat_metadata_server_url = http://[Host-IP]:8000
      heat_waitcondition_server_url = http://[Host-IP]:8000/v1/waitcondition
      rabbit_host = [Host-IP]
      rabbit_password = 
      rpc_backend = rabbit
      verbose = True
      
      [database]
      connection = mysql://heat:@[Host-IP]/heat
      
      [ec2authtoken]
      auth_uri = http://[Host-IP]:5000/v2.0
      
      [keystone_authtoken]
      auth_uri = http://[Host-IP]:5000/v2.0
      identity_uri = http://[Host-IP]:35357
      admin_user = heat
      admin_password = 
      admin_tenant_name = service
      
      
    3. Write the settings into the database

      sudo heat-manage db_sync
    4. Ubuntu creates an SQLite database by default; since we do not use it, it can be deleted

      sudo rm -f /var/lib/heat/heat.sqlite
    5. Restart the Heat services

      sudo service heat-api restart
      sudo service heat-api-cfn restart
      sudo service heat-engine restart



    Step9 : Install Ceilometer

    Ceilometer is the service component that meters resource usage in OpenStack.

    1. Install the Ceilometer-related packages with apt-get

      sudo apt-get install ceilometer-api ceilometer-collector ceilometer-agent-central \
        ceilometer-agent-notification ceilometer-alarm-evaluator ceilometer-alarm-notifier \
        python-ceilometerclient ceilometer-agent-compute
    2. Edit the Ceilometer configuration

      Remember to back up the configuration file first

      sudo cp /etc/ceilometer/ceilometer.conf /etc/ceilometer/ceilometer.conf.bak

      Delete the configuration file

      sudo rm /etc/ceilometer/ceilometer.conf

      Create a new configuration file

      sudo nano /etc/ceilometer/ceilometer.conf -c
      	# Paste the following
      [DEFAULT]
      rabbit_host = [Host-IP]
      rabbit_password = 
      rpc_backend = rabbit
      auth_strategy = keystone
      log_dir=/var/log/ceilometer
      
      [database]
      connection = mysql://ceilometer:@[Host-IP]/ceilometer?charset=utf8
      
      [keystone_authtoken]
      auth_uri = http://[Host-IP]:5000/v2.0
      identity_uri = http://[Host-IP]:35357
      admin_tenant_name = service
      admin_user = ceilometer
      admin_password = 
      
      [publisher]
      metering_secret=change this or be hacked
      
      [service_credentials]
      os_auth_url = http://[Host-IP]:5000/v2.0
      os_username = ceilometer
      os_tenant_name = service
      os_password = 
      os_endpoint_type = internalURL
      
      
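      The metering_secret above should be replaced with a random string. One way to generate one (our suggestion; any hard-to-guess value works):

      ```shell
      # Print 10 random bytes as 20 hex characters, suitable for metering_secret
      openssl rand -hex 10
      ```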
    3. Restart all the Ceilometer services with a script

      nano ~/ceilometer_restart.sh
      	# Paste the following
      #!/bin/bash
      
      for a in ceilometer-agent-central ceilometer-agent-notification ceilometer-api ceilometer-collector ceilometer-alarm-evaluator ceilometer-alarm-notifier ceilometer-agent-compute; do sudo service "$a" stop; done
      for a in ceilometer-agent-central ceilometer-agent-notification ceilometer-api ceilometer-collector ceilometer-alarm-evaluator ceilometer-alarm-notifier ceilometer-agent-compute; do sudo service "$a" start; done
      
      # End
      

      設定可以執行的權限

      chmod a+x ~/ceilometer_restart.sh

      Run the script

      ~/ceilometer_restart.sh

      Write the settings into the database

      sudo ceilometer-dbsync



    Step10 : Install the Dashboard

    This final section shows how to install the web interface; afterwards you can manage OpenStack through a graphical interface

    1. Install the Dashboard-related packages with apt-get

      sudo apt-get install openstack-dashboard apache2 libapache2-mod-wsgi \
       memcached python-memcache
    2. Edit the Dashboard configuration

      Remember to back up the configuration file first

      sudo cp /etc/openstack-dashboard/local_settings.py \
       /etc/openstack-dashboard/local_settings.py.bak

      Open the configuration file

      sudo nano /etc/openstack-dashboard/local_settings.py -c
      	# Edit line 133
      OPENSTACK_HOST = "[Host-IP]"
      
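      Alternatively, the same one-line change can be made non-interactively with sed (equivalent to the manual edit above; back up first as usual):

      ```shell
      # Rewrite the OPENSTACK_HOST line in local_settings.py in place
      sudo sed -i 's/^OPENSTACK_HOST = .*/OPENSTACK_HOST = "[Host-IP]"/' \
        /etc/openstack-dashboard/local_settings.py
      ```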
    3. Restart the services

      sudo service apache2 restart; sudo service memcached restart
    4. Log in to OpenStack from the web

      http://cloud.cs.nchu.edu.tw:2[code-num]80/horizon

      Account : admin

      Password :




    Step11 : Create a Stack




    In this step we try deploying our virtual machine environment with a Heat template

    1. The provided example

      {
        "AWSTemplateFormatVersion" : "2010-09-09",
        "Description" : "Sample Heat template that spins up multiple instances and a private network (JSON)",
        "Resources" : {
          "cloud_network_01" : {
            "Type" : "OS::Neutron::Net",
            "Properties" : {
              "name" : "cloud-network-01"
            }
          },
       
          "cloud_subnet_01" : {
            "Type" : "OS::Neutron::Subnet",
            "Properties" : {
              "name" : "cloud-subnet-01",
              "cidr" : "10.0.0.0/24",
              "dns_nameservers" : ["8.8.4.4", "8.8.8.8"],
              "enable_dhcp" : "True",
              "gateway_ip" : "10.0.0.254",
              "network_id" : { "Ref" : "cloud_network_01" }
            }
          },
       
          "cloud_router_01" : {
            "Type" : "OS::Neutron::Router",
            "Properties" : {
              "admin_state_up" : "True",
              "name" : "cloud-router-01"
            }
          },
        
          "cloud_router_int0" : {
            "Type" : "OS::Neutron::RouterInterface",
            "Properties" : {
              "router_id" : { "Ref" : "cloud_router_01" },
              "subnet_id" : { "Ref" : "cloud_subnet_01" }
            }
          },
       
          "VM01_port0" : {
            "Type" : "OS::Neutron::Port",
            "Properties" : {
              "admin_state_up" : "True",
              "network_id" : { "Ref" : "cloud_network_01" }
            }
          },
       
          "VM02_port0" : {
            "Type" : "OS::Neutron::Port",
            "Properties" : {
              "admin_state_up" : "True",
              "network_id" : { "Ref" : "cloud_network_01" }
            }
          },
       
          "VM01" : {
            "Type" : "OS::Nova::Server",
            "Properties" : {
              "name" : "cloud-VM01",
              "image" : "cirros-0.3.3-x86_64",
              "flavor": "m1.tiny",
              "networks" : [{
                "port" : { "Ref" : "VM01_port0" }
              }]
            }
          },
       
          "VM02" : {
            "Type" : "OS::Nova::Server",
            "Properties" : {
              "name" : "cloud-VM02",
              "image" : "cirros-0.3.3-x86_64",
              "flavor": "m1.tiny",
              "networks" : [{
                "port" : { "Ref" : "VM02_port0" }
              }]
            }
          }
        }
      }
      

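      The template above uses the older AWS CloudFormation-compatible JSON syntax. The same resources can also be written in the native HOT YAML syntax described in the HOT Guide linked in the references; a minimal sketch of one network, one subnet, and one server (not required for the assignment):

      ```yaml
      heat_template_version: 2013-05-23
      description: Minimal HOT sketch of one network, one subnet, and one server

      resources:
        cloud_network_01:
          type: OS::Neutron::Net
          properties:
            name: cloud-network-01

        cloud_subnet_01:
          type: OS::Neutron::Subnet
          properties:
            name: cloud-subnet-01
            cidr: 10.0.0.0/24
            network_id: { get_resource: cloud_network_01 }

        VM01:
          type: OS::Nova::Server
          properties:
            name: cloud-VM01
            image: cirros-0.3.3-x86_64
            flavor: m1.tiny
            networks:
              - network: { get_resource: cloud_network_01 }
      ```

      Either format can be launched from the command line with, for example, heat stack-create my-stack -f template.yaml, or uploaded through the Orchestration page in the dashboard.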

    2. If you want to log in to the virtual machines underneath, you can connect through the VNC console

      a. Go to the Instances page and select the machine you want to log in to




      b. Copy the address of the link "Click here to show only console"


      c. Modify the address.

      For example, the address user A00 copies is :

       http://172.16.1.10:6080/vnc_auto.html?token=11b34000-c80b-.... (truncated)

      Change the front part to :

       http://cloud.cs.nchu.edu.tw:31080/vnc_auto.html?token=11b34000-c80b-.... (truncated)

      To spell it out, the address you need to substitute is :

      cloud.cs.nchu.edu.tw:3[code-num]80
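
      The substitution can also be scripted. A small helper sketch (CODE and the token in VNC_URL are examples; use your own machine code number and the URL you copied):

      ```shell
      # Rewrite the internal console address to the externally reachable one.
      CODE=10   # machine code number, e.g. 10 for user A00
      VNC_URL='http://172.16.1.10:6080/vnc_auto.html?token=11b34000-c80b-example'
      echo "$VNC_URL" | sed "s#^http://172\.16\.1\.10:6080#http://cloud.cs.nchu.edu.tw:3${CODE}80#"
      # → http://cloud.cs.nchu.edu.tw:31080/vnc_auto.html?token=11b34000-c80b-example
      ```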



    References

    1. OpenStack official website :

      http://www.openstack.org/
    2. OpenStack official documentation :

      http://docs.openstack.org/
    3. OpenStack Installation Guide for Ubuntu 14.04 :

      http://docs.openstack.org/juno/install-guide/install/apt/content/index.html
    4. Heat Orchestration Template (HOT) Guide :

      http://docs.openstack.org/developer/heat/template_guide/hot_guide.html
    5. Ceilometer developer documentation :

      http://docs.openstack.org/developer/ceilometer/



    Assignment #4 - Deadline 2015/01/09

    Assignment check page : check

    1. First, create a new user cloud, and create a new project cloud[code-num] for that user.
      For example, student A00 creates a cloud user (with the password used in class) and a new project cloud10 for that user.


    2. Next, as admin, add a new flavor named m1.cloud with the following specification :

      VCPUs : 1
      RAM : 256 MB
      Root disk : 1 GB
      Ephemeral disk : 0 GB
      Swap disk : 512 MB
      Flavor access is granted only to cloud[code-num]


    3. Finally, log in as the cloud user and create a stack through a Heat Orchestration Template (HOT), with the conditions :

      a. There are three VMs (instances), all connected to a Neutron router
      b. The VMs use the m1.cloud flavor you just created
      c. Use the cirros-0.3.3-x86_64 image uploaded in class.

      Hint : modifying the provided example template is enough.

    作業繳交方式 :

    1. 提前做完可以寄信給助教,助教進網頁檢查OK後就可以使用機器測試期末project
    2. Deadline後助教會進其餘同學的網頁檢查
    3. 助教的信箱

    請隨時注意Lab4網頁最上方的更新事項


    Example of the assignment result :