Configuring OpenStack Ocata on CentOS 7: A Detailed Guide

       I previously wrote a post, "OpenStack Mitaka Configuration in Detail", but recently found that the Aliyun mirrors no longer provide the Mitaka packages, so I have since moved on to the Ocata release. This document summarizes the full installation.

Official OpenStack Ocata documentation: https://docs.openstack.org/ocata/install-guide-rdo/environment.html

If you would rather not install everything step by step, you can run the install script instead: http://www.cnblogs.com/yaohong/p/7251852.html

1: Environment

1.1 Host networking

  • OS: CentOS 7

  • Controller node: 1 CPU, 4 GB RAM, 5 GB storage

  • Compute node: 1 CPU, 2 GB RAM, 10 GB storage

   Notes:

  1: Install two machines from a CentOS 7 image (see http://www.cnblogs.com/yaohong/p/7240387.html for details), and remember to configure two NICs and enough memory on both machines.

  2: Set the hostnames to controller and compute1 respectively:

             #hostnamectl set-hostname hostname 

  3: Edit the /etc/hosts file on both controller and compute1:

             #vi /etc/hosts

  4: Verify by pinging the two nodes from each other and pinging an outside site such as baidu.com.
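The /etc/hosts entries can be sketched as below. The IP addresses are this guide's examples (192.168.1.73 for the controller; the compute node's address is assumed) and must be replaced with your own management-network IPs; a temp file stands in for /etc/hosts so the sketch is safe to run anywhere:

```shell
# Sketch only: example management-network IPs; on the real nodes the
# target file is /etc/hosts, not a temp copy.
hosts=$(mktemp)
cat >> "$hosts" <<'EOF'
192.168.1.73    controller
192.168.1.146   compute1
EOF
# Both hostnames should now resolve through this file.
grep -c 'controller\|compute1' "$hosts"
```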

1.2 Network Time Protocol (NTP)

[Install NTP on the controller node]   

NTP keeps the clocks in sync; if the nodes' clocks drift apart, you may be unable to launch instances.

#yum install chrony              (install the package)

#vi /etc/chrony.conf              add:

server NTP_SERVER iburst

allow <your subnet>             (optional: lets hosts in your subnet query this NTP server)

#systemctl enable chronyd.service     (enable at system boot) 

#systemctl start chronyd.service       (start the NTP service)

[Install NTP on the compute node]

# yum install chrony             

#vi /etc/chrony.conf             Remove everything except the "server" lines, and change them to reference the controller node:

server controller iburst

# systemctl enable chronyd.service     (enable at system boot)

# systemctl start chronyd.service       (start the NTP service)

[Verify NTP]

On both the controller and compute nodes, run #chronyc sources; the configured time sources should appear in the output.

1.3 OpenStack packages

[Install the OpenStack packages on both the controller and compute nodes]
  Install the latest OpenStack release repository:
  #yum install centos-release-openstack-ocata
  #yum install https://rdoproject.org/repos/rdo-release.rpm 

  #yum upgrade                          (upgrade the packages on the host)
  #yum install python-openstackclient         (install the required OpenStack client)
  #yum install openstack-selinux

1.4 SQL database

    Installed on the controller node. The guide uses MariaDB or MySQL depending on the distribution; OpenStack services also support other SQL databases.
    #yum install mariadb mariadb-server python2-PyMySQL

    #vi /etc/my.cnf.d/mariadb_openstack.cnf       

    add:
        [mysqld]
      bind-address = 192.168.1.73                        (the IP address of the machine running MySQL, here the controller)
      default-storage-engine = innodb
      innodb_file_per_table
      collation-server = utf8_general_ci
      character-set-server = utf8
    
    #systemctl enable mariadb.service    (enable the database service at boot)
    #systemctl start mariadb.service         (start the database service)
    Secure the MySQL installation:
    #mysql_secure_installation (see pitfall #1 at http://www.cnblogs.com/yaohong/p/7352386.html)
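The bind-address change above can also be made with sed; a small sketch using a temp copy (the IP is this guide's controller address, and the real file lives wherever your distribution keeps MariaDB drop-in configs):

```shell
# Sketch: set bind-address in a copy of the MariaDB overrides file
# (192.168.1.73 is this guide's example controller IP).
cnf=$(mktemp)
printf '[mysqld]\nbind-address = 127.0.0.1\n' > "$cnf"
# Point the database at the controller's management IP instead.
sed -i 's/^bind-address = .*/bind-address = 192.168.1.73/' "$cnf"
grep '^bind-address' "$cnf"
```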

1.5 Message queue

    The message queue acts as the traffic hub of the whole OpenStack architecture. Precisely because an OpenStack deployment is so flexible, with loosely coupled modules and a flat architecture, it depends heavily on the message queue (not necessarily RabbitMQ;

    other message queue products work too), so the queue's messaging performance and HA capability directly affect OpenStack's performance. If rabbitmq is not running, your whole OpenStack platform is unusable. rabbitmq uses port 5672.
    #yum install rabbitmq-server
    #systemctl enable rabbitmq-server.service (enable at boot)
    #systemctl start rabbitmq-server.service (start)
    #rabbitmqctl add_user openstack RABBIT_PASS (add the user openstack; replace RABBIT_PASS with your own password)
    #rabbitmqctl set_permissions openstack ".*" ".*" ".*" (grant the new user permissions; a user without permissions cannot receive or deliver messages)

1.6Memcached

memcached is an optional component. It uses port 11211.

[Controller node] 
  #yum install memcached python-memcached

Change OPTIONS in /etc/sysconfig/memcached to:

OPTIONS="-l 127.0.0.1,::1,controller"

  #systemctl enable memcached.service     

  #systemctl start memcached.service
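The OPTIONS edit is a one-line substitution; a sketch against a temp copy of the sysconfig file (the stock value shown is an assumption about your starting file):

```shell
# Sketch: extend memcached's listen list in a copy of the sysconfig file.
f=$(mktemp)
echo 'OPTIONS="-l 127.0.0.1,::1"' > "$f"
# Append the controller hostname so other nodes can reach memcached.
sed -i 's/::1"/::1,controller"/' "$f"
cat "$f"
```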

2: Identity service

2.1 Install and configure

Log in to the database and create the keystone database.

[Deploy on the controller node only]
  #mysql -u root -p
  #CREATE DATABASE keystone;
Grant the keystone user access with a password:
  #GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
  #GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
Install and configure the components:
#yum install openstack-keystone httpd mod_wsgi
#vi /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet

Populate the Identity service database:

# su -s /bin/sh -c "keystone-manage db_sync" keystone (be sure to check that the tables were created in the database)
  Initialize the Fernet keys:
  #keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
  Bootstrap the Identity service:

keystone-manage bootstrap --bootstrap-password ADMIN_PASS \

  --bootstrap-admin-url http://controller:35357/v3/ \

  --bootstrap-internal-url http://controller:5000/v3/ \

  --bootstrap-public-url http://controller:5000/v3/ \

  --bootstrap-region-id RegionOne

 

 

 

Configure Apache:
  #vi  /etc/httpd/conf/httpd.conf 

ServerName controller (set ServerName to the hostname, to avoid errors at startup)

 

 

Create a link to the /usr/share/keystone/wsgi-keystone.conf file:

#ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Start httpd:
  #systemctl enable httpd.service
  #systemctl start httpd.service  

Configure the administrative account

#vi admin and add:

export OS_USERNAME=admin

export OS_PASSWORD=123456

export OS_PROJECT_NAME=admin

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_DOMAIN_NAME=Default

export OS_AUTH_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3    

2.2 Create domains, projects, users, and roles

Create the Service project:
  #openstack project create --domain default \
--description "Service Project" service
  Create the Demo project:
  #openstack project create --domain default \
--description "Demo Project" demo

Create the demo user:
  #openstack user create --domain default \
  --password-prompt demo
  Create the user role:
  #openstack role create user
  Add the user role to the demo project and user:
  #openstack role add --project demo --user demo user

2.3 Verification

vi /etc/keystone/keystone-paste.ini

Remove "admin_token_auth" from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.

Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:

unset OS_AUTH_URL OS_PASSWORD
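The keystone-paste.ini edit is a plain text substitution, so it can be scripted; a sketch against a temp copy (the single pipeline line below is illustrative, not the full file):

```shell
# Sketch: strip admin_token_auth from the pipeline lines of a copy of
# keystone-paste.ini (illustrative contents; real file is under /etc/keystone/).
ini=$(mktemp)
cat > "$ini" <<'EOF'
[pipeline:public_api]
pipeline = cors request_id admin_token_auth token_auth public_service
EOF
sed -i 's/ admin_token_auth//g' "$ini"
grep -q admin_token_auth "$ini" || echo "admin_token_auth removed"
```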

As the admin user, request an authentication token:
  #openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue

You may hit an HTTP error here.

Since it is an HTTP error, go back to the Apache HTTP service configuration, restart the Apache service, and set the admin account variables again:

  # systemctl restart httpd.service

  $ export OS_USERNAME=admin

  $ export OS_PASSWORD=ADMIN_PASS

  $ export OS_PROJECT_NAME=admin

  $ export OS_USER_DOMAIN_NAME=Default

  $ export OS_PROJECT_DOMAIN_NAME=Default

  $ export OS_AUTH_URL=http://controller:35357/v3

  $ export OS_IDENTITY_API_VERSION=3

When done, run the token request again:

#openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue

 Enter the password; correct output means the configuration is good.

                                   Figure 2.4: admin authentication verification 

As the demo user, request an authentication token: 

#openstack --os-auth-url http://controller:5000/v3 \ 

--os-project-domain-name default --os-user-domain-name default \ 

--os-project-name demo --os-username demo token issue

 

2.4 Create OpenStack client environment scripts

The environment variables can be kept in scripts:
  #vi admin-openrc and add:

export OS_PROJECT_DOMAIN_NAME=default
  export OS_USER_DOMAIN_NAME=default
  export OS_PROJECT_NAME=admin
  export OS_USERNAME=admin
  export OS_PASSWORD=123456 (the password you set for admin)
  export OS_AUTH_URL=http://controller:35357/v3
  export OS_IDENTITY_API_VERSION=3
  export OS_IMAGE_API_VERSION=2

#vi demo-openrc and add:

 export OS_PROJECT_DOMAIN_NAME=default
   export OS_USER_DOMAIN_NAME=default
   export OS_PROJECT_NAME=demo
   export OS_USERNAME=demo
   export OS_PASSWORD=123456 (the password you set for demo)
   export OS_AUTH_URL=http://controller:35357/v3
   export OS_IDENTITY_API_VERSION=3
   export OS_IMAGE_API_VERSION=2 
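Sourcing one of these files exposes the variables to every subsequent client command; the round trip can be sketched as follows (a temp file stands in for admin-openrc, with a subset of this guide's example values):

```shell
# Sketch: a minimal openrc written to a temp file and loaded the same
# way ". admin-openrc" loads the real one.
rc=$(mktemp)
cat > "$rc" <<'EOF'
export OS_USERNAME=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
EOF
. "$rc"
# Every later openstack command inherits these settings.
echo "$OS_USERNAME -> $OS_AUTH_URL"
```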


#. admin-openrc (load the admin-openrc file to set the Identity service endpoint and the admin project and user credentials)
   #openstack token issue (request an authentication token)

                                  Figure 2.6: Requesting an authentication token

3: Image service

3.1 Install and configure

Create the glance database:
  Log in to mysql:
  #mysql -u root -p (connect to the database server as root with the database client)
  #CREATE DATABASE glance; (create the glance database)
  Grant privileges:
   GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS'; (grant appropriate privileges on the glance database)
   GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS'; (grant appropriate privileges on the glance database)
  Load the environment variables:
  #. admin-openrc
  Create the glance user:
  #openstack user create --domain default --password-prompt glance

Add the admin role to the glance user and service project:
    #openstack role add --project service --user glance admin
  Create the glance service entity:
  #openstack service create --name glance \
 --description "OpenStack Image" image

                          Figure 3.1: Creating the glance service entity

 

Create the Image service API endpoints:
  #openstack endpoint create --region RegionOne \
         image public http://controller:9292

                        Figure 3.2: Creating the Image service API endpoint

#openstack endpoint create --region RegionOne \
image internal http://controller:9292

                     Figure 3.3: Creating the Image service API endpoint

  #openstack endpoint create --region RegionOne \
image admin http://controller:9292

                                      Figure 3.4: Creating the Image service API endpoint

  Install:
  #yum install openstack-glance
  #vi  /etc/glance/glance-api.conf and configure:

[database]  

connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
  [keystone_authtoken] (configure authentication)
  add:
   auth_uri = http://controller:5000
   auth_url = http://controller:35357
   memcached_servers = controller:11211
   auth_type = password
   project_domain_name = default
   user_domain_name = default
   project_name = service
   username = glance
   password = GLANCE_PASS
   [paste_deploy]
   flavor = keystone
  [glance_store] 
   stores = file,http
   default_store = file
   filesystem_store_datadir = /var/lib/glance/images/
#vi /etc/glance/glance-registry.conf

[database]
   connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
   [keystone_authtoken] (configure authentication)
   add:
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      project_name = service
      username = glance
      password = GLANCE_PASS
  [paste_deploy]
      flavor = keystone

 Sync the database:
      #su -s /bin/sh -c "glance-manage db_sync" glance
    Start glance:
      #systemctl enable openstack-glance-api.service \
openstack-glance-registry.service
      # systemctl start openstack-glance-api.service \
openstack-glance-registry.service

3.2 Verification

Load the environment variables:
  #. admin-openrc
  Download a fairly small image:
  #wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

If wget is not installed, the command fails. Fix:

yum -y install wget

   then run again:

wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img


Upload the image:
  #openstack image create "cirros" \

--file cirros-0.3.5-x86_64-disk.img \

--disk-format qcow2 --container-format bare \

--public

   Figure 3.5: Uploading the image

  Check:
   #openstack image list

                               Figure 3.6: Confirming the image upload

Output here means glance is configured correctly.

4: Compute service

4.1 Install and configure the controller node

Create the nova databases:
  #mysql -u root -p (connect to the database server as root with the database client)
  #CREATE DATABASE nova_api;
  #CREATE DATABASE nova; (create the nova_api and nova databases)

#CREATE DATABASE nova_cell0;

  Grant proper access to the databases:
  #GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
  #GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
  #GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
  #GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';

#GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';

#GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';

Load the environment variables:
  #. admin-openrc
  Create the nova user:
  #openstack user create --domain default \
 --password-prompt nova
  #openstack role add --project service --user nova admin
  Create the nova service entity:
  #openstack service create --name nova \
--description "OpenStack Compute" compute
  Create the Compute service API endpoints:
  #openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2.1

#openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2.1

#openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2.1

#openstack user create --domain default --password-prompt placement

#openstack role add --project service --user placement admin

#openstack service create --name placement --description "Placement API" placement

#openstack endpoint create --region RegionOne placement public http://controller:8778

# openstack endpoint create --region RegionOne placement internal http://controller:8778

#openstack endpoint create --region RegionOne placement admin http://controller:8778

Install:
  # yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler openstack-nova-placement-api
  #vi /etc/nova/nova.conf 

[DEFAULT]

enabled_apis = osapi_compute,metadata

transport_url = rabbit://openstack:RABBIT_PASS@controller

my_ip = 10.0.0.11 (the controller's management IP)

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]

connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]

connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[api]

auth_strategy = keystone

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = NOVA_PASS

[vnc]

enabled = true

vncserver_listen = $my_ip

vncserver_proxyclient_address = $my_ip

[glance]

api_servers = http://controller:9292

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[placement]

os_region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://controller:35357/v3

username = placement

password = PLACEMENT_PASS

#vi  /etc/httpd/conf.d/00-nova-placement-api.conf

add:

<Directory /usr/bin>

   <IfVersion >= 2.4>

      Require all granted

   </IfVersion>

   <IfVersion < 2.4>

      Order allow,deny

      Allow from all

   </IfVersion>

</Directory>


Restart the httpd service:

#systemctl restart httpd

Populate the nova-api database:

#su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database:

 #su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell:

#su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

Populate the nova database:

su -s /bin/sh -c "nova-manage db sync" nova

Verify that cell0 and cell1 are registered correctly:

nova-manage cell_v2 list_cells

#systemctl enable openstack-nova-api.service \

openstack-nova-consoleauth.service openstack-nova-scheduler.service \

openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service \

openstack-nova-consoleauth.service openstack-nova-scheduler.service \

openstack-nova-conductor.service openstack-nova-novncproxy.service

4.2 Install and configure the compute node

#yum install openstack-nova-compute

Edit:

#vi /etc/nova/nova.conf 

 

 

[DEFAULT]

enabled_apis = osapi_compute,metadata

transport_url = rabbit://openstack:RABBIT_PASS@controller

my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS (the compute node's IP address)

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]

auth_strategy = keystone

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = NOVA_PASS

[vnc]

enabled = True

vncserver_listen = 0.0.0.0

vncserver_proxyclient_address = $my_ip

novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]

api_servers = http://controller:9292

 

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[placement]

os_region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://controller:35357/v3

username = placement

password = PLACEMENT_PASS


#egrep -c '(vmx|svm)' /proc/cpuinfo (determine whether your compute node supports hardware acceleration for virtual machines)

  If this returns 0, edit #vi /etc/nova/nova.conf:

[libvirt]
  virt_type = qemu
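The check-and-decide step above can be scripted; this sketch prints the virt_type value to put in nova.conf:

```shell
# Count CPU flags for Intel VT-x (vmx) or AMD-V (svm); grep -c exits
# non-zero when the count is 0, hence the || true.
count=$(egrep -c '(vmx|svm)' /proc/cpuinfo || true)
# Without hardware acceleration, nova must fall back to plain QEMU.
if [ "$count" -eq 0 ]; then
  echo "virt_type = qemu"
else
  echo "virt_type = kvm"
fi
```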


Start the Compute service and its dependencies, and configure them to start automatically with the system:
Start:
 #systemctl enable libvirtd.service openstack-nova-compute.service
 #systemctl start libvirtd.service openstack-nova-compute.service
Add the compute node to the cell database.

Run these on the controller node:

#. admin-openrc

# openstack hypervisor list

#su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

To discover new compute hosts automatically, set the interval:

vi /etc/nova/nova.conf

  [scheduler]

  discover_hosts_in_cells_interval = 300

4.3 Verification

Verify on the controller node:
  Load the environment variables:
#. admin-openrc
#openstack compute service list
 Normal output here means the configuration is correct.

#openstack catalog list

#openstack image list

#nova-status upgrade check

5: Networking service

5.1 Install and configure the controller node

Create the neutron database:
  #mysql -u root -p
  #CREATE DATABASE neutron;

Grant proper access to the neutron database, replacing NEUTRON_DBPASS with a suitable password:
  #GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
 IDENTIFIED BY 'NEUTRON_DBPASS';
  #GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
  Load the environment variables:
  #. admin-openrc
  Create the neutron user:
  #openstack user create --domain default --password-prompt neutron
  Add the admin role to the neutron user:
  #openstack role add --project service --user neutron admin
  Create the neutron service entity:
  #openstack service create --name neutron \
--description "OpenStack Networking" network
  Create the Networking service API endpoints:

#openstack endpoint create --region RegionOne \
network public http://controller:9696
  #openstack endpoint create --region RegionOne \
 network internal http://controller:9696
  #openstack endpoint create --region RegionOne \
network admin http://controller:9696
  Set up VXLAN networking:
  #yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
  #vi /etc/neutron/neutron.conf 

 

 

[DEFAULT]

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = true

transport_url = rabbit://openstack:RABBIT_PASS@controller

auth_strategy = keystone

notify_nova_on_port_status_changes = true

notify_nova_on_port_data_changes = true

 

[database]

connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

 

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = NEUTRON_PASS

 

[nova]

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = NOVA_PASS

 

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

Configure the ML2 plug-in:
  #vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

mechanism_drivers = linuxbridge,l2population

extension_drivers = port_security

 

[ml2_type_flat]

flat_networks = provider

 

[ml2_type_vxlan]

vni_ranges = 1:1000

 

[securitygroup]

enable_ipset = true


Configure the Linux bridge agent:

#vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[DEFAULT]

[agent]

[linux_bridge]

physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME (the name of the second NIC)

[securitygroup]

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]

enable_vxlan = true

local_ip = 192.168.1.146 (this node's overlay-network IP)

l2_population = true


Configure the layer-3 agent:
  #vi /etc/neutron/l3_agent.ini 

[DEFAULT]
  interface_driver = linuxbridge

 

 

 

 

Configure the DHCP agent:
  #vi /etc/neutron/dhcp_agent.ini 

[DEFAULT]

interface_driver = linuxbridge

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = true

 

 

 

 

 

 

Configure the metadata agent:
 #vi /etc/neutron/metadata_agent.ini 

[DEFAULT]
  nova_metadata_ip = controller
  metadata_proxy_shared_secret = METADATA_SECRET

 

 

 

 

 

Configure the Compute service to use the Networking service:

#vi /etc/nova/nova.conf

[neutron]
      url = http://controller:9696
      auth_url = http://controller:35357
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      region_name = RegionOne
      project_name = service
      username = neutron
      password = NEUTRON_PASS
      service_metadata_proxy = True
      metadata_proxy_shared_secret = METADATA_SECRET

 

 

 

 

 

 

 

 

 

Create a symbolic link for the plug-in:
   #ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    Sync the database:

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron 

Restart the Compute API service:
   #systemctl restart openstack-nova-api.service
   #systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
 neutron-metadata-agent.service
   #systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

Enable the layer-3 service and set it to start with the system:
    # systemctl enable neutron-l3-agent.service
   #systemctl start neutron-l3-agent.service

5.2 Install and configure the compute node

#yum install openstack-neutron-linuxbridge ebtables ipset
   #vi  /etc/neutron/neutron.conf 

[DEFAULT]

transport_url = rabbit://openstack:RABBIT_PASS@controller

auth_strategy = keystone

 

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = NEUTRON_PASS

 

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp


Configure VXLAN:
  #vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
  physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME (the name of the second NIC)
  [vxlan]
  enable_vxlan = True
  local_ip = OVERLAY_INTERFACE_IP_ADDRESS (this node's overlay-network IP)
  l2_population = True
  [securitygroup]
  enable_security_group = True
  firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

 

 

 

 

 

 

 

 

#vi /etc/nova/nova.conf

[neutron]
      url = http://controller:9696
      auth_url = http://controller:35357
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      region_name = RegionOne
      project_name = service
      username = neutron
      password = NEUTRON_PASS

 

 

 

 

 

 

 

 

Restart the Compute service and bring up the Linux bridge agent:
  #systemctl restart openstack-nova-compute.service
  #systemctl enable neutron-linuxbridge-agent.service
  #systemctl start neutron-linuxbridge-agent.service

5.3 Verification

Load the environment variables:
  #. admin-openrc

#openstack extension list --network

#openstack network agent list

6: Dashboard

6.1 Configure

#yum install openstack-dashboard
  #vi /etc/openstack-dashboard/local_settings

 OPENSTACK_HOST = "controller"
     ALLOWED_HOSTS = ['one.example.com', 'two.example.com']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}

TIME_ZONE = "TIME_ZONE" (replace with your time zone identifier, e.g. "Asia/Shanghai")


Start:
  #systemctl restart httpd.service memcached.service

6.2 Log in

In a browser, open http://<controller IP>/dashboard/auth/login

Domain: default

User name: admin or demo

Password: the one you set

                                              Figure 6.1: Login page
