Chapter 1: OpenStack Introduction and Environment Deployment
Managing a fleet of KVM hosts by hand quickly becomes difficult. Typical questions that are hard to answer at scale include:
- How many KVM virtual machines run on each host?
- What are the resource usage figures for each host and for each KVM guest?
- What is the configuration of each host and of each KVM guest?
- What is the IP address of each host and of each KVM guest?
OpenStack is a KVM management platform with metering/billing support: it automates IaaS-level management of the KVM hypervisor hosts and supports customized operations on cloud instances.
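Without a platform, answering even the first of those questions means scripting around virsh on every host. A minimal sketch (it runs here against sample captured output; in practice you would feed it `ssh "$host" virsh list --name` per host):

```shell
# Count KVM guests per host from `virsh list --name`-style output.
# The guest list below is sample data standing in for a real host;
# replace it with: guests=$(ssh "$host" virsh list --name)
guests="web01
db01
cache01"
count=$(printf '%s\n' "$guests" | sed '/^$/d' | wc -l)
echo "guests: $count"
```

Multiply this by dozens of hosts and per-guest resource queries, and the case for a central management platform such as OpenStack is clear.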
OpenStack Overview
OpenStack is an open-source virtualization orchestration platform that delivers Infrastructure as a Service (IaaS), letting service providers and enterprises build cloud infrastructure similar to Amazon EC2 or Alibaba Cloud ECS.
OpenStack Core Components
- **Keystone (identity service):** provides authentication for the other OpenStack components when they communicate;
- **Horizon (web UI):** the web management portal for the OpenStack services, simplifying user operations such as launching instances, assigning IP addresses, and configuring access control;
- **Nova (compute service):** creates, boots, shuts down, suspends, pauses, resizes, migrates, reboots, and destroys virtual machines;
- **Glance (image service):** manages virtual machine images, e.g. uploading images, deleting images, and editing image metadata;
- **Neutron (networking):** provides network virtualization, network connectivity for the other OpenStack services, and network interfaces for users;
- **Cinder (block storage):** provides additional volume storage; optional in a practice environment;
- **Swift (object storage):** stores and retrieves arbitrary unstructured data objects through an HTTP-based RESTful API;
Deployment documentation: https://docs.openstack.org/zh_CN/install-guide/
Official Chinese documentation is currently available only for the Mitaka (M) and Liberty (L) releases.
Base Environment
controller: the control node;
compute: the compute node;
| Hostname | IP address | Network mode | Hardware |
|---|---|---|---|
| controller | ens32: 192.168.0.50 | Management network: NAT | 2 vCPU, 6 GB RAM, 50 GB disk |
| controller | ens34: no IP needed | Provider network: NAT | |
| compute01 | ens32: 192.168.0.51 | Management network: NAT | 2 vCPU, 4 GB RAM, 50 GB disk |
| compute01 | ens34: no IP needed | Provider network: NAT | |
The environment above follows the minimum configuration suggested by the official OpenStack documentation; the two machines have different roles.
- controller: runs most of the software and services, including the database, so give it more memory and storage;
- compute01: provides virtualized hardware resources for running instances; it only runs the nova and neutron agents, so it does not need as much;
- Per the official documentation, all nodes need Internet access. The management network is used to reach the outside world for package installation; the provider network needs no IP address and is used by the neutron service.
In VMware, CPU virtualization must be enabled in each virtual machine's settings.
controller and compute node environment configuration
Configure hostnames and local name resolution on both the controller and compute nodes
# cat /etc/hosts
192.168.0.50 controller
192.168.0.51 compute01
Time synchronization
OpenStack nodes must keep their clocks synchronized. Both machines here have Internet access, so each could sync directly against a public time source; instead, to match the reference architecture, the controller syncs with the public source and compute01 syncs with the controller.
Controller node configuration:
vim /etc/chrony.conf
#...
allow 192.168.0.0/24    #clients allowed to query this NTP server
local stratum 10        #serve local time as stratum 10 when upstream is unreachable
Restart the service and check time synchronization:
systemctl restart chronyd
chronyc sources -v
Compute node configuration:
On compute01, point the time source at the controller node
vim /etc/chrony.conf
...
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server controller iburst
Restart the service and check time synchronization:
systemctl restart chronyd
chronyc sources -v
Configure the Aliyun Base repository
OpenStack needs only the Base repository. If the EPEL repository is configured, remove it first: per the official guidance, package versions from EPEL can break OpenStack compatibility and cause version conflicts.
Configure on both controller and compute
rm -rf /etc/yum.repos.d/epel.repo
Install the OpenStack repository
The OpenStack releases currently available include P, Q, R, S, T, U, V, W, X, and Y; the Ussuri (U) release and anything later require CentOS/RHEL 8 or newer.
This lab deploys the Train (T) release, the newest release that CentOS/RHEL 7 can run, so later version upgrades need not be considered.
Reference: https://docs.openstack.org/zh_CN/install-guide/environment-packages-rdo.html
Install on both controller and compute
#install the Train release OpenStack repository
yum -y install centos-release-openstack-train
#install the RDO repository RPM to enable the OpenStack repository
yum -y install https://rdoproject.org/repos/rdo-release.rpm
#install the client tools
yum -y install python-openstackclient
Install the SQL database (MariaDB)
The OpenStack services store their base data in a SQL database. MySQL has gone commercial and is absent from the default repositories, so MariaDB is used instead.
参考地址:https://docs.openstack.org/zh_CN/install-guide/environment-sql-database-rdo.html
Install on the controller node
yum -y install mariadb mariadb-server python2-PyMySQL
Create a configuration file and set bind-address to this host's management-network IP
vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.0.50
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Enable at boot, start the service, and check its status
systemctl enable mariadb.service && systemctl start mariadb.service
#initialize and harden the database
mysql_secure_installation
Enter current password for root: -> [Enter]
Set root password? -> [y]
New password: -> [123456]
Remove anonymous users? -> [y]
Disallow root login remotely? -> [y]
Remove test database and access to it? -> [y]
Reload privilege tables now? -> [y]
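The interactive prompts above can also be applied non-interactively. A sketch of the equivalent SQL for a fresh CentOS 7 MariaDB install (assumption: root still has an empty password; the statements mirror what mysql_secure_installation runs):

```shell
# Scripted equivalent of mysql_secure_installation (MariaDB 5.5-era schema)
mysql -u root <<'SQL'
UPDATE mysql.user SET Password=PASSWORD('123456') WHERE User='root';
DELETE FROM mysql.user WHERE User='';
DELETE FROM mysql.user WHERE User='root'
  AND Host NOT IN ('localhost','127.0.0.1','::1');
DROP DATABASE IF EXISTS test;
FLUSH PRIVILEGES;
SQL
```

This is a configuration fragment for the lab setup, not a hardening guide; for anything beyond a practice environment, stick with the interactive tool.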
Install the RabbitMQ message queue
In OpenStack the controller, compute, and storage nodes must communicate with each other, and they do so through a messaging service. RabbitMQ provides it: a node publishes a message to RabbitMQ, and other nodes fetch the message from there.
When commands inexplicably stop working, restarting the RabbitMQ service is sometimes an effective first step.
Reference: https://docs.openstack.org/zh_CN/install-guide/environment-messaging-rdo.html
Install on the controller node
yum -y install rabbitmq-server
Enable at boot and start the service
systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service
RabbitMQ configuration amounts to creating an openstack user and granting it permissions. For memorability the password here is 123; all OpenStack service accounts created later also use the password 123.
rabbitmqctl add_user openstack 123
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
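To confirm the account took effect, RabbitMQ's own CLI can list users and their permissions (commands to run on the controller; exact output formatting varies by version):

```shell
# Verify the openstack user exists and has full permissions on vhost /
rabbitmqctl list_users                # should include: openstack
rabbitmqctl list_permissions -p /    # openstack  .*  .*  .*
```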
Install memcached
In OpenStack, the tokens issued by the keystone service are cached with memcached; that is, it caches users' authentication information.
Reference: https://docs.openstack.org/zh_CN/install-guide/environment-memcached-rdo.html
Install on the controller node
yum -y install memcached python-memcached
Edit the configuration file to set the IP addresses and hostname to listen on
vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,controller"
Enable at boot and start the service
systemctl enable memcached.service && systemctl start memcached.service
As the configuration shows, it listens on port 11211
netstat -ntlp | grep 11211
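Beyond the port check, memcached answers a simple plain-text protocol, so a quick functional check is possible (nc availability is an assumption; any TCP client works):

```shell
# Ask memcached for its counters; a healthy server replies with
# STAT lines (uptime, curr_connections, ...) followed by END
printf 'stats\r\nquit\r\n' | nc controller 11211 | head -n 5
```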
Install the etcd database
OpenStack uses etcd for configuration management and distributed locking, i.e. configuration sharing and service discovery.
An in-depth etcd walkthrough (in Chinese): http://www.sel.zju.edu.cn/blog/2015/02/01/etcd从应用场景到实现原理的全方位解读/
Reference: https://docs.openstack.org/zh_CN/install-guide/environment-etcd-rdo.html
Install on the controller node
yum -y install etcd
Edit /etc/etcd/etcd.conf, replacing every default IP in the file with this host's management-network IP so other services can reach etcd
vi /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.0.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.0.0.11:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
ETCD_INITIAL_CLUSTER="controller=http://10.0.0.11:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
#replace the placeholder IP with the management-network IP
sed -i 's/10.0.0.11/192.168.0.50/g' /etc/etcd/etcd.conf
Enable at boot and start the service
systemctl enable etcd && systemctl start etcd
Verify that etcd's ports 2379 and 2380 are open
netstat -ntlp | egrep '2379|2380'
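A put/get round trip confirms etcd actually serves requests, not just that the ports are open (sketch; the CentOS 7 etcdctl defaults to the v2 API, so exporting ETCDCTL_API=3 is assumed here):

```shell
# Round-trip test against the local etcd member
export ETCDCTL_API=3
etcdctl --endpoints=http://192.168.0.50:2379 put ping pong
etcdctl --endpoints=http://192.168.0.50:2379 get ping
```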
The base environment is now fully installed; check all the services installed so far
systemctl is-active chronyd mariadb.service rabbitmq-server.service memcached.service etcd.service
Chapter 2: Deploying the Keystone Service
In the OpenStack architecture, Keystone acts like a service bus, providing identity management for OpenStack,
including user authentication, service authentication, and credential validation. The other services register their endpoints with Keystone, every call to a service must pass Keystone authentication, and the caller obtains the endpoint through which to access the service.
Deployment order
First, keystone: it is the first component to install, because the other components authenticate their communication through keystone;
next, glance: manages images, serves them when instances launch, and can store images for different operating systems;
next, placement: monitors and tracks resources on behalf of nova;
next, nova: manages instances, i.e. creation, deletion, and other lifecycle operations;
then neutron: provides layer-2 and layer-3 networking, connecting instances through Linux bridges;
these five components are enough for a minimal OpenStack environment: with them an instance can be created;
optionally, dashboard: a web UI for launching instances, creating networks, and so on;
finally, cinder: provides extra volume storage that can be attached to an instance at launch; for a practice setup it is optional;
Keystone database configuration
Reference: https://docs.openstack.org/keystone/train/install/keystone-install-rdo.html
controller node
Before installing and configuring the keystone identity service, create its database to store the service's data,
then grant the keystone user both local and remote access.
All accounts created here use the password 123: this is a private lab environment, so keep things simple rather than inviting trouble!
mysql -u root -p123456
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY '123';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY '123';
Keystone service installation and configuration
Install keystone, httpd, and mod_wsgi (mod_wsgi exposes the external API endpoint that receives requests)
yum -y install openstack-keystone httpd mod_wsgi
Edit the keystone.conf configuration file
#check file attributes
ll /etc/keystone/keystone.conf
-rw-r----- 1 root keystone 106413 Jun  8  2021 /etc/keystone/keystone.conf
#back up the configuration file first
cp /etc/keystone/keystone.conf{,.bak}
#regenerate the configuration file without comments and blank lines
egrep -v '^#|^$' /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf
#check file attributes again (ownership must be unchanged)
ll /etc/keystone/keystone.conf
-rw-r----- 1 root keystone 601 Dec  8 20:56 /etc/keystone/keystone.conf
#edit the file
vim /etc/keystone/keystone.conf
[database]
#password keystone uses to access its database
connection = mysql+pymysql://keystone:123@controller/keystone
[token]
#use Fernet tokens so other programs can authenticate against keystone with tokens
provider = fernet
Populate the database; keystone-manage imports the table schema
su -s /bin/sh -c "keystone-manage db_sync" keystone
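The connection value configured above follows SQLAlchemy URL syntax: driver, then user:password@host/database. A tiny sketch that assembles it from parts (variable names are illustrative; the same pattern repeats for every service configured later):

```shell
# Build the SQLAlchemy connection URL keystone uses, piece by piece
DB_USER=keystone
DB_PASS=123          # the lab password used throughout this guide
DB_HOST=controller
DB_NAME=keystone
conn="mysql+pymysql://${DB_USER}:${DB_PASS}@${DB_HOST}/${DB_NAME}"
echo "$conn"         # prints mysql+pymysql://keystone:123@controller/keystone
```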
Verify in the database
mysql -u keystone -p123
use keystone
show tables;
Initialize the Fernet key repositories
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the identity service so that keystone runs on port 5000 on all interfaces; ADMIN_PASS (the administrator password) here is set to 123
keystone-manage bootstrap --bootstrap-password 123 \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
Configure the HTTP service
Set the ServerName option to the controller node's hostname
vim /etc/httpd/conf/httpd.conf
...
ServerName controller
Create a symlink to the config file that keystone generated; it serves keystone on port 5000
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Enable at boot and start the httpd service
systemctl enable httpd.service && systemctl start httpd.service
httpd started correctly and keystone's port 5000 is now open
netstat -tunpl | grep httpd
Configure the admin environment variables, with the admin password defined as 123
export OS_USERNAME=admin
export OS_PASSWORD=123
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
Creating authentication accounts
Keystone provides authentication for OpenStack using a combination of domains, projects, users, and roles.
- User: a person or program that can access OpenStack through keystone;
- Role: a set of resource permissions; by being granted a role, a user acquires that role's permissions;
- Project: a collection of accessible resources in the various services, e.g. images, storage, and networks;
- Domain: domains implement true multi-tenancy; a cloud customer owns a domain and can create multiple projects, users, groups, and roles within it;
参考地址:https://docs.openstack.org/keystone/train/install/keystone-users-rdo.html
controller node
First create a domain named example
openstack domain create --description "An Example Domain" example
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | An Example Domain |
| enabled | True |
| id | 34643814121f41ffb961d791e5354140 |
| name | example |
| options | {} |
| tags | [] |
+-------------+----------------------------------+
#list domains
openstack domain list
+----------------------------------+---------+---------+--------------------+
| ID | Name | Enabled | Description |
+----------------------------------+---------+---------+--------------------+
| 34643814121f41ffb961d791e5354140 | example | True | An Example Domain |
| default | Default | True | The default domain |
+----------------------------------+---------+---------+--------------------+
#show a domain's details
openstack domain show default
+-------------+--------------------+
| Field | Value |
+-------------+--------------------+
| description | The default domain |
| enabled | True |
| id | default |
| name | Default |
| options | {} |
| tags | [] |
+-------------+--------------------+
Next create a project named service, placed in the default domain
openstack project create --domain default \
--description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | a099fd0b56634363ac56f80ff276649c |
| is_domain | False |
| name | service |
| options | {} |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
#list projects
openstack project list
+----------------------------------+---------+
| ID | Name |
+----------------------------------+---------+
| 96b65f7df63b42fb99cffaa2ef5a0d11 | admin |
| a099fd0b56634363ac56f80ff276649c | service |
+----------------------------------+---------+
#show a project's details
openstack project show service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | a099fd0b56634363ac56f80ff276649c |
| is_domain | False |
| name | service |
| options | {} |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
Create a project named myproject, placed in the default domain
openstack project create --domain default \
--description "Demo Project" myproject
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | default |
| enabled | True |
| id | 9c11574d501d4a399760940e2e62f245 |
| is_domain | False |
| name | myproject |
| options | {} |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
Create an unprivileged user named myuser, with password 123
openstack user create --domain default \
--password-prompt myuser
User Password:[123]
Repeat User Password:[123]
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | b4806aece59c4b7a9ca92ff092b6a2be |
| name | myuser |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
#list users
openstack user list
+----------------------------------+--------+
| ID | Name |
+----------------------------------+--------+
| 927d0cf675e64ef6be2ff9486b295cd7 | admin |
| b4806aece59c4b7a9ca92ff092b6a2be | myuser |
+----------------------------------+--------+
--note: admin is the built-in administrator user
#show a user's details
openstack user show myuser
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | b4806aece59c4b7a9ca92ff092b6a2be |
| name | myuser |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
Create a role named myrole
openstack role create myrole
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | None |
| domain_id | None |
| id | f42877fa854146a488ea12bffcbf0013 |
| name | myrole |
| options | {} |
+-------------+----------------------------------+
#show a role's details
openstack role show myrole
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | None |
| domain_id | None |
| id | f42877fa854146a488ea12bffcbf0013 |
| name | myrole |
| options | {} |
+-------------+----------------------------------+
Role assignment: bind the myrole role to the myuser user on the myproject project
openstack role add --project myproject --user myuser myrole
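The assignment can be confirmed immediately (standard openstack CLI; run with the admin variables still exported):

```shell
# Show role assignments by name instead of ID; expect a row
# linking myuser -> myrole -> myproject
openstack role assignment list --names --user myuser
```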
Verification
Reference: https://docs.openstack.org/keystone/train/install/keystone-verify-rdo.html
Unset the environment variables
unset OS_AUTH_URL OS_PASSWORD
Request a token from keystone; if one is issued, the keystone service is working
#request a token as the admin user
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
Password:[123]
Password:[123]
+------------+-----------------------------------------------------------------
| Field | Value
+------------+-----------------------------------------------------------------
| expires | 2022-12-02T10:33:53+0000
| id | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv
| | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4
| | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws
| project_id | 343d245e850143a096806dfaefa9afdc
| user_id | ac3377633149401296f6c0d92d79dc16
+------------+-----------------------------------------------------------------
#request a token as the myuser user
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
Password:[123]
Password:[123]
+------------+-----------------------------------------------------------------
| Field | Value
+------------+-----------------------------------------------------------------
| expires | 2022-12-02T10:37:21+0000
| id | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW
| | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ
| | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U
| project_id | ed0b60bf607743088218b0a533d5943f
| user_id | 58126687cbcc4888bfa9ab73a2256f27
+------------+-----------------------------------------------------------------
Create environment variable scripts
Every command in an OpenStack environment must authenticate a user identity, and identities are defined and switched through environment variables. The variables are numerous and long, and retyping them for every command is impractical.
To simplify this, write the variables into scripts; sourcing the appropriate script then switches the user identity.
#admin user script (user password: 123)
vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
#myuser user script (user password: 123)
vim demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=123
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Switching identities is then just a matter of sourcing a script
source admin-openrc
#show your own token
openstack token issue
source demo-openrc
#show your own token
openstack token issue
Summary
The keystone service is now deployed. Keystone is arguably the most important service in the OpenStack environment: its main job is to issue tokens to users and to provide token validation.
Chapter 3: Deploying the Glance Service
With the identity service in place, deploy the glance image service, which helps users discover, register, and retrieve virtual machine images; in other words, the images that instances boot from are stored here.
The default image store directory is /var/lib/glance/images/
Glance database configuration
Reference: https://docs.openstack.org/glance/train/install/install-rdo.html
controller node
Before installing and configuring the glance service, create its database to store the service's data,
then grant the glance user both local and remote access.
mysql -u root -p123456
#create the database
CREATE DATABASE glance;
#grant local login and set the password (here: 123)
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY '123';
#grant remote login and set the password (here: 123)
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY '123';
Creating the authentication account
Create a user for glance, grant it privileges, create the glance service, and create its URL endpoints, so that OpenStack can recognize glance's identity
Switch to the admin identity and create the glance user (password: 123)
source admin-openrc
openstack user create --domain default --password-prompt glance
User Password:[123]
Repeat User Password:[123]
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 7db95c3bbfa849e9ab90e434512974d5 |
| name | glance |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
List users
openstack user list
+----------------------------------+--------+
| ID | Name |
+----------------------------------+--------+
| 927d0cf675e64ef6be2ff9486b295cd7 | admin |
| b4806aece59c4b7a9ca92ff092b6a2be | myuser |
| 7db95c3bbfa849e9ab90e434512974d5 | glance |
+----------------------------------+--------+
Grant the glance user the admin role on the service project
openstack role add --project service --user glance admin
Create a service entry (for other services to access) named glance, of type image
openstack service create --name glance \
--description "OpenStack Image" image
List services with openstack service list
openstack service list
+----------------------------------+----------+----------+
| ID | Name | Type |
+----------------------------------+----------+----------+
| 077fc2589677488b9a190a7a351f1be9 | glance | image |
| 5e23f560a82c4c87828b9595d5540bb4 | keystone | identity |
+----------------------------------+----------+----------+
Creating glance API endpoints
An API endpoint is the entry point through which clients and other core services interact with a service. Glance's endpoints accept requests and respond to image query, retrieval, and storage calls. OpenStack uses three endpoint interfaces per service: admin, internal, and public.
- admin: the endpoint for administrators
- internal: the endpoint for internal service-to-service access
- public: the endpoint accessible to all projects
#create the public endpoint
openstack endpoint create --region RegionOne \
image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 949765ca993449c3b4132fc947f01ab2 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 077fc2589677488b9a190a7a351f1be9 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
#create the internal endpoint
openstack endpoint create --region RegionOne \
image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 36a65de1f994489c97fab3d298f35a1b |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 077fc2589677488b9a190a7a351f1be9 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
#create the admin endpoint
openstack endpoint create --region RegionOne \
image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 6d94a033a7714b49af6f3b2f594c61cc |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 077fc2589677488b9a190a7a351f1be9 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
List endpoints with openstack endpoint list
openstack endpoint list
Glance service installation and configuration
Install the glance package
yum -y install openstack-glance
Edit the glance configuration to connect it to MySQL and keystone; file: /etc/glance/glance-api.conf
#check file attributes
ll /etc/glance/glance-api.conf
-rw-r----- 1 root glance 192260 Aug 12  2020 /etc/glance/glance-api.conf
#back up the configuration file first
cp /etc/glance/glance-api.conf{,.bak}
#regenerate the configuration file without comments and blank lines
egrep -v '^#|^$' /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf
#check file attributes again
ll /etc/glance/glance-api.conf
-rw-r----- 1 root glance 476 Dec  8 21:45 /etc/glance/glance-api.conf
#edit the file
vim /etc/glance/glance-api.conf
#user and password (123) glance uses to access its database
[database]
connection = mysql+pymysql://glance:123@controller/glance
#glance must authenticate against keystone; configure the keystone credentials
[keystone_authtoken]
#keystone API URL to authenticate against
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
#memcached address
memcached_servers = controller:11211
#authenticate with the password method
auth_type = password
#domain the project belongs to
project_domain_name = Default
#domain the user belongs to
user_domain_name = Default
#project name
project_name = service
#user glance authenticates as
username = glance
#that user's password
password = 123
#use keystone as the authentication flavor
[paste_deploy]
flavor = keystone
#storage configuration
[glance_store]
#file: local filesystem; http: API-based access to images kept on other storage
stores = file,http
#default backend
default_store = file
#image store directory
filesystem_store_datadir = /var/lib/glance/images/
Populate the glance database with its table schema (this produces a lot of output)
su -s /bin/sh -c "glance-manage db_sync" glance
Inspect the resulting tables
mysql -u glance -p123
use glance;
show tables;
Start the glance service (this creates the image directory /var/lib/glance/images)
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
Check the port
netstat -natp | grep 9292
Service verification
Reference: https://docs.openstack.org/glance/train/install/verify.html
Verify the Image service using CirrOS, a small (~13 MB) Linux image useful for testing OpenStack deployments
source admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
Upload the image to the glance service
glance image-create --name "cirros" \
--file cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--visibility public
Equivalently, with the unified client (option meanings follow):
openstack image create "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 \
  --container-format bare \
  --public
- --file: the image file, in the current directory or given with its path
- --disk-format qcow2: disk format
- --container-format bare: accepted container formats include ami, ari, aki, bare, and ovf
- --public: share the image so it is visible to all users
List images
openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 240c1c8a-656a-40cd-9023-2bf4712db89c | cirros | active |
+--------------------------------------+--------+--------+
Summary
The glance service is now deployed and verified. Glance has relatively few commands; in day-to-day work you mostly use three kinds: upload, list, and delete.
Once the OpenStack environment is up, prepare images as required and upload them to glance. Note that an ISO image uploaded as-is cannot be used directly: convert the ISO into a qcow2 disk file first, upload that disk file, and then cloud instances can be created from it.
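A sketch of that ISO-to-disk workflow (file and VM names are placeholders; the usual approach is to install the OS into an empty qcow2 under KVM, then upload the resulting disk, not the ISO):

```shell
# 1) create an empty qcow2 disk and install the OS into it from the ISO
qemu-img create -f qcow2 centos7.qcow2 20G
virt-install --name tpl --ram 2048 \
  --disk path=centos7.qcow2,format=qcow2 \
  --cdrom CentOS-7-x86_64-Minimal.iso --graphics vnc
# 2) after installation completes, upload the disk file
openstack image create "centos7" --file centos7.qcow2 \
  --disk-format qcow2 --container-format bare --public
```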
Chapter 4: Deploying the Placement Service
Placement is a component that was split out of the nova service, and it should be installed before Nova;
it tracks resource usage across providers (compute nodes, storage pools, network pools, and so on), supports custom resources, and serves resource-allocation requests.
Placement database configuration
Reference: https://docs.openstack.org/placement/ussuri/install/install-rdo.html
controller node
Before installing and configuring the placement service, create its database to store the service's data,
then grant the placement user both local and remote access.
mysql -u root -p123456
#create the database
CREATE DATABASE placement;
#grant local login and set the password (here: 123)
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY '123';
#grant remote login and set the password (here: 123)
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY '123';
Creating the authentication account
Create a user for placement, grant it privileges, create the placement service, and create its URL endpoints, so that keystone can recognize placement's identity
Switch to the admin identity and create the placement user (password: 123)
source admin-openrc
openstack user create --domain default --password-prompt placement
User Password:[123]
Repeat User Password:[123]
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 4dd6627cc62c4bec8cdbb5c68529685d |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
List users
openstack user list
+----------------------------------+-----------+
| ID | Name |
+----------------------------------+-----------+
| 927d0cf675e64ef6be2ff9486b295cd7 | admin |
| b4806aece59c4b7a9ca92ff092b6a2be | myuser |
| 7db95c3bbfa849e9ab90e434512974d5 | glance |
| 4dd6627cc62c4bec8cdbb5c68529685d | placement |
+----------------------------------+-----------+
Grant the placement user the admin role on the service project
openstack role add --project service --user placement admin
Create a service entry (for other services to access) named placement, of type placement
openstack service create --name placement \
--description "Placement API" placement
Creating placement API endpoints
- admin: the endpoint for administrators
- internal: the endpoint for internal service-to-service access
- public: the endpoint accessible to all projects
#create the public endpoint
openstack endpoint create --region RegionOne \
placement public http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 7be70b0b31ad460ba0219ce9fc50d6bd |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 11fde07a3ff94ae4a57c9ca9c267ceaa |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
#create the internal endpoint
openstack endpoint create --region RegionOne \
placement internal http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 2cb17a6adab541679bf19468816995ea |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 11fde07a3ff94ae4a57c9ca9c267ceaa |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
#create the admin endpoint
openstack endpoint create --region RegionOne \
placement admin http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 137b21db414b4798942b38d9fa024995 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 11fde07a3ff94ae4a57c9ca9c267ceaa |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
Placement service installation and configuration
Install the placement package
yum -y install openstack-placement-api
Edit the placement configuration to connect it to MySQL and keystone; file: /etc/placement/placement.conf
#check file attributes
ll /etc/placement/placement.conf
-rw-r----- 1 root placement 25512 Feb 17  2021 /etc/placement/placement.conf
#back up the configuration file first
cp /etc/placement/placement.conf{,.bak}
#regenerate the configuration file without comments and blank lines
egrep -v '^#|^$' /etc/placement/placement.conf.bak > /etc/placement/placement.conf
#check file attributes again
ll /etc/placement/placement.conf
-rw-r----- 1 root placement 102 Dec  8 22:18 /etc/placement/placement.conf
#edit the file
vim /etc/placement/placement.conf
#user and password (123) placement uses to access its database
[placement_database]
connection = mysql+pymysql://placement:123@controller/placement
#authenticate via keystone
[api]
auth_strategy = keystone
#configure the keystone credentials
[keystone_authtoken]
#keystone API URL to authenticate against
auth_url = http://controller:5000/v3
#memcached address
memcached_servers = controller:11211
#authenticate with the password method
auth_type = password
#domain the project belongs to
project_domain_name = Default
#domain the user belongs to
user_domain_name = Default
#project name
project_name = service
#user placement authenticates as
username = placement
#that user's password (123)
password = 123
Populate the placement database with its table schema (a Warning appears; it can be ignored)
su -s /bin/sh -c "placement-manage db sync" placement
Inspect the resulting tables
mysql -uplacement -p123
show databases;
use placement;
show tables;
Modify the Apache configuration
Allow Apache to access /usr/bin; otherwise /usr/bin/placement-api cannot be reached
vim /etc/httpd/conf.d/00-placement-api.conf
...
#append the following at the end of the file
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
Restart the httpd service
systemctl restart httpd
Check the port
netstat -ntlp | grep 8778
Service verification
Reference: https://docs.openstack.org/placement/train/install/verify.html
Switch to the admin identity for verification
source admin-openrc
Run the upgrade status check
placement-status upgrade check
+----------------------------------+
| Upgrade Check Results |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success |
| Details: None |
+----------------------------------+
| Check: Incomplete Consumers |
| Result: Success |
| Details: None |
+----------------------------------+
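A last sanity check is to hit the placement API over HTTP; any HTTP status in reply (even a 401 without a token) shows the WSGI application is wired up behind Apache (curl usage sketch):

```shell
# The service should answer on port 8778 behind Apache
curl -s -o /dev/null -w '%{http_code}\n' http://controller:8778/
```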
Chapter 5: Deploying the Nova Service on the Controller
Nova is OpenStack's compute service. It maintains and manages the cloud's compute resources: receiving client requests for compute resources, deciding which physical host an instance is created on, and launching instances through virtualization.
Nova database configuration
Reference: https://docs.openstack.org/nova/train/install/controller-install-rdo.html
controller node
Before installing and configuring the nova service, create its databases to store the service's data,
then grant the nova user both local and remote access.
mysql -u root -p123456
#three databases are needed
#nova_api stores global data such as instance types, instance groups, and quotas
CREATE DATABASE nova_api;
#nova stores nova's own metadata
CREATE DATABASE nova;
#nova_cell0 has the same schema as nova; it stores instances whose scheduling failed
CREATE DATABASE nova_cell0;
#grant local login on all three databases and set the password (here: 123)
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY '123';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY '123';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY '123';
#grant remote login on all three databases and set the password (here: 123)
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY '123';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY '123';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY '123';
Creating the authentication account
Switch to the admin identity and create the nova user (password: 123)
source admin-openrc
openstack user create --domain default --password-prompt nova
User Password:[123]
Repeat User Password:[123]
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 031c3ec94dc24289a8fb3c906a7fe01f |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
List users
openstack user list
+----------------------------------+-----------+
| ID | Name |
+----------------------------------+-----------+
| 927d0cf675e64ef6be2ff9486b295cd7 | admin |
| b4806aece59c4b7a9ca92ff092b6a2be | myuser |
| 7db95c3bbfa849e9ab90e434512974d5 | glance |
| 4dd6627cc62c4bec8cdbb5c68529685d | placement |
| 031c3ec94dc24289a8fb3c906a7fe01f | nova |
+----------------------------------+-----------+
Grant the nova user the admin role on the service project
openstack role add --project service --user nova admin
Create a service entry (for other services to access) named nova, of type compute
openstack service create --name nova \
--description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | bba99b2825e44bf188a95ce0c7a94c13 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
Creating nova API endpoints
- admin: the endpoint for administrators
- internal: the endpoint for internal service-to-service access
- public: the endpoint accessible to all projects
#create the public endpoint
openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | f095616aeece40e6b8f213035041af1e |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | bba99b2825e44bf188a95ce0c7a94c13 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
#create the internal endpoint
openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | e98fc95e187b427aa3dea15119e2da92 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | bba99b2825e44bf188a95ce0c7a94c13 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
#创建admin端点
openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 7857c8a1a26141378c8e28ef2fbdafcb |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | bba99b2825e44bf188a95ce0c7a94c13 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
nova服务安装和配置
安装nova软件包:
- nova-api:用于响应客户的调度;
- nova-conductor:用于和数据库交互;
- nova-novncproxy:用于通过vnc方式连接实例;
- nova-scheduler:用于调度虚拟机实例在哪台计算节点运行;
yum -y install openstack-nova-api openstack-nova-conductor \
openstack-nova-novncproxy openstack-nova-scheduler
修改nova配置文件,配置文件:/etc/nova/nova.conf
#查看文件属性
ll /etc/nova/nova.conf
-rw-r----- 1 root nova 220499 3月 16 2021 /etc/nova/nova.conf
#提前备份配置文件
cp /etc/nova/nova.conf{,.bak}
#重新生成配置文件
egrep -v '^#|^$' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
#查看文件属性
ll /etc/nova/nova.conf
-rw-r----- 1 root nova 736 12月 12 16:51 /etc/nova/nova.conf
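这一套"备份并去注释"的步骤在 nova、neutron 等配置文件上会反复用到,可以封装成一个小函数(示意,函数名 strip_comments 为自拟):

```shell
# 示意:备份原文件为 .bak,再生成只保留有效配置的新文件
strip_comments() {
  local f=$1
  cp "$f" "$f.bak"                 # 先备份
  egrep -v '^#|^$' "$f.bak" > "$f" # 去掉注释行与空行
}
# 用法:strip_comments /etc/nova/nova.conf
```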
#修改文件内容
vim /etc/nova/nova.conf
...
#指定nova支持的api类型、指定连接的rabbitmq的用户密码123、通过neutron获取虚拟机实例IP地址、禁用nova服务的防火墙驱动、否则会与网络服务neutron防火墙驱动冲突
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123@controller:5672/
#定义本机IP,别复制我的IP,切记!
my_ip = 192.168.0.50
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
#认证服务为keystone
[api]
auth_strategy = keystone
#访问nova_api数据库使用的用户及密码:123
[api_database]
connection = mysql+pymysql://nova:123@controller/nova_api
#访问nova数据库使用的用户及密码:123
[database]
connection = mysql+pymysql://nova:123@controller/nova
#指定glance的api地址,nova启动实例需要找glance要镜像
[glance]
api_servers = http://controller:9292
#配置keystone认证信息,密码123
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123
#启动vnc、指定vnc的监听地址为本机IP、server客户端地址为本机地址,此地址是管理网的地址
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
#配置lock锁文件的临时存放目录;锁用于保证创建虚拟机等操作按步骤串行执行,避免并发冲突
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
#nova需要访问placement获取计算节点的资源使用情况,注意这里的placement密码是:123
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123
初始化nova_api数据库
su -s /bin/sh -c "nova-manage api_db sync" nova
注册cell0数据库
一般情况下,nova依赖一个逻辑数据库和消息队列完成组件之间的交互,这给整个系统的扩容和灾难迁移带来困难,数据库和消息队列正在成为OpenStack扩展的瓶颈。尤其是消息队列,随着集群规模扩大,性能下降明显:当集群扩展到约200个节点时,一条消息可能要十几秒后才会响应,集群整体性能大幅下降。针对nova的这个问题,提出了nova-cell的概念,即把计算节点划分成多个更小的单元,每一个单元都有自己的数据库和消息队列。
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
创建cell1单元格
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
初始化nova数据库(出现警告可以忽略)
su -s /bin/sh -c "nova-manage db sync" nova
验证cell0和cell1是否注册成功
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
查看下数据库中情况
mysql -u nova -p123
show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| nova |
| nova_api |
| nova_cell0 |
+--------------------+
use nova
show tables;
use nova_api
show tables;
use nova_cell0
show tables;
启动Nova服务
#设置服务自启动
systemctl enable \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
#启动服务
systemctl start \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
#查看服务状态
systemctl is-active \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
#提示:每次系统关机重启后,需要再次确认nova服务的状态,如果有相关服务没有启动,则重启nova服务
systemctl restart \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
上面已经将controller节点的nova安装好了,接着我们安装compute节点的nova服务;
第六章:compute部署nova服务
Nova-compute在计算节点上运行,负责管理节点上的实例,例如:创建、关闭、重启、挂起、恢复、中止、调整大小、迁移、快照等操作都是通过nova-compute实现的。
通常一个主机运行一个nova-compute服务,一个实例部署在哪个可用主机上取决于调度算法;OpenStack对实例的操作,最终都是提交给nova-compute来完成。
安装nova-compute组件
参考地址:https://docs.openstack.org/nova/train/install/compute-install-rdo.html
compute节点
yum -y install openstack-nova-compute
修改配置文件
修改配置文件:/etc/nova/nova.conf
#查看文件属性
ll /etc/nova/nova.conf
-rw-r----- 1 root nova 220499 3月 16 2021 /etc/nova/nova.conf
#备份配置文件
cp /etc/nova/nova.conf{,.bak}
#重新生成配置文件
egrep -v '^#|^$' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
#查看文件属性
ll /etc/nova/nova.conf
-rw-r----- 1 root nova 736 12月 12 17:05 /etc/nova/nova.conf
#修改文件内容
vi /etc/nova/nova.conf
[DEFAULT]
#指定nova支持的api类型
enabled_apis = osapi_compute,metadata
#指定连接的rabbitmq的用户密码:123
transport_url = rabbit://openstack:123@controller
#定义本机IP,别复制我的IP,切记!
my_ip = 192.168.0.51
#通过neutron获取虚拟机实例IP地址
use_neutron = true
#禁用nova服务的防火墙驱动,否则会与网络服务neutron防火墙驱动冲突
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
#指定使用keystone认证
auth_strategy = keystone
#配置keystone认证信息,密码:123
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123
[vnc]
#启动vnc
enabled = true
#指定vnc的监听地址
server_listen = 0.0.0.0
#server客户端地址为本机IP
server_proxyclient_address = $my_ip
#浏览器访问vnc控制台时使用的novncproxy地址,指向controller节点
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
#指定glance的api地址,nova启动实例需要找glance要镜像
api_servers = http://controller:9292
[oslo_concurrency]
#配置lock锁文件的临时存放目录;锁用于保证创建虚拟机等操作按步骤串行执行,避免并发冲突
lock_path = /var/lib/nova/tmp
#nova需要访问placement获取计算节点的资源使用情况,注意这里的placement密码是:123
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123
检查CPU虚拟化功能是否开启,compute节点负责启动虚机,所以需要开启cpu虚拟化
egrep -c '(vmx|svm)' /proc/cpuinfo
2 #返回值大于0表示已开启(数值为带虚拟化标志的CPU核心数),0表示未开启
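也可以把这个检查封装成函数,让输出直接说明开启与否(示意,函数名 check_virt 为自拟;默认读取 /proc/cpuinfo,也可传入其他文件便于测试):

```shell
# 示意:统计 cpuinfo 中带 vmx(Intel)/svm(AMD) 虚拟化标志的行数
check_virt() {
  local f=${1:-/proc/cpuinfo}
  local count
  count=$(egrep -c '(vmx|svm)' "$f" || true)
  if [ "$count" -gt 0 ]; then
    echo "enabled:$count"   # 已开启,count 为带虚拟化标志的核心数
  else
    echo "disabled"         # 未开启,需在 BIOS/VMware 中启用
  fi
}
```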
设置开机自启并启动服务
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
systemctl is-active libvirtd.service openstack-nova-compute.service
注册compute节点
接下来在controller节点查看compute节点是否注册到controller上
controller节点
扫描当前openstack中有哪些计算节点可用
source admin-openrc
openstack compute service list --service nova-compute
+----+--------------+-----------+------+---------+-------+---------------------
| ID | Binary | Host | Zone | Status | State | Updated At
+----+--------------+-----------+------+---------+-------+---------------------
| 6 | nova-compute | compute01 | nova | enabled | up | ...
+----+--------------+-----------+------+---------+-------+---------------------
将新的计算节点添加到openstack集群
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
#会有如下输出:
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 24509e4d-be39-48b6-a740-bb4b82bfa273
Checking host mapping for compute host 'compute01': c65aa14f-1cac-4504-9060-8397bd2fa18e
Creating host mapping for compute host 'compute01': c65aa14f-1cac-4504-9060-8397bd2fa18e
Found 1 unmapped computes in cell: 24509e4d-be39-48b6-a740-bb4b82bfa273
定义Nova自动发现新主机的时间间隔
controller节点
vim /etc/nova/nova.conf
...
[scheduler]
#每隔300秒自动发现并注册新的计算节点
discover_hosts_in_cells_interval = 300
服务验证
参考地址:https://docs.openstack.org/nova/train/install/verify.html
controller节点
检查 nova 的各个服务是否都是正常,以及 compute 服务是否注册成功
source admin-openrc
openstack compute service list
+----+----------------+------------+----------+---------+-------+--------------
| ID | Binary | Host | Zone | Status | State | Updated At
+----+----------------+------------+----------+---------+-------+--------------
| 3 | nova-conductor | controller | internal | enabled | up | ...
| 4 | nova-scheduler | controller | internal | enabled | up | ...
| 6 | nova-compute | compute01 | nova | enabled | up | ...
+----+----------------+------------+----------+---------+-------+-------------
查看各个组件的 api 是否正常
openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+-----------------------------------------+
| glance | image | RegionOne |
| | | internal: http://controller:9292 |
| | | RegionOne |
| | | admin: http://controller:9292 |
| | | RegionOne |
| | | public: http://controller:9292 |
| | | |
| placement | placement | RegionOne |
| | | admin: http://controller:8778 |
| | | RegionOne |
| | | internal: http://controller:8778 |
| | | RegionOne |
| | | public: http://controller:8778 |
| | | |
| keystone | identity | RegionOne |
| | | admin: http://controller:5000/v3/ |
| | | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | |
| nova | compute | RegionOne |
| | | admin: http://controller:8774/v2.1 |
| | | RegionOne |
| | | internal: http://controller:8774/v2.1 |
| | | RegionOne |
| | | public: http://controller:8774/v2.1 |
| | | |
+-----------+-----------+-----------------------------------------+
查看是否能够获取镜像
openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 240c1c8a-656a-40cd-9023-2bf4712db89c | cirros | active |
+--------------------------------------+--------+--------+
查看cell的api和placement的api是否正常,只要其中一个有误,后期无法创建虚拟机
nova-status upgrade check
+--------------------------------+
| Upgrade Check Results |
+--------------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Cinder API |
| Result: Success |
| Details: None |
+--------------------------------+
第七章:controller部署neutron服务
Neutron为整个openstack提供虚拟化的网络支持,主要功能包括二层交换、三层路由、防火墙、VPN,以及负载均衡等。
neutron数据库配置
参考地址:https://docs.openstack.org/neutron/train/install/controller-install-rdo.html
controller节点
在安装和配置neutron服务之前,必须创建服务对应的数据库用于存储相关数据
然后授权neutron用户本地访问和远程访问两种访问权限
mysql -u root -p123456
#创建库
CREATE DATABASE neutron;
#授权用户本地登录并设置密码(这里的密码设置:123)
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY '123';
#授权用户远程登录并设置密码(这里的密码设置:123)
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY '123';
创建认证账号
切换到admin用户,创建neutron用户(密码:123)
source admin-openrc
openstack user create --domain default --password-prompt neutron
User Password:[123]
Repeat User Password:[123]
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fe86bae82c0448b9931f378a7de7d088 |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
将neutron用户添加到service项目中拥有管理员权限
openstack role add --project service --user neutron admin
创建一个service服务(供其他服务访问)名称为neutron,类型为network
openstack service create --name neutron \
--description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | d5772d09b488443aadb3c48449c122fb |
| name | neutron |
| type | network |
+-------------+----------------------------------+
创建neutron服务API端点
- admin:管理员访问的API端点
- internal:内部服务访问的API端点
- public: 可以被所有项目访问的API端点
#创建public端点
openstack endpoint create --region RegionOne \
network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | d4e6ef35b9ea4ec4a82373cdf9ba7543 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d5772d09b488443aadb3c48449c122fb |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
#创建internal端点
openstack endpoint create --region RegionOne \
network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 209a1e6f56384731bed52f2d283fd749 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d5772d09b488443aadb3c48449c122fb |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
#创建admin端点
openstack endpoint create --region RegionOne \
network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 9b0e30a45c9d4b148e495277ee87207a |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d5772d09b488443aadb3c48449c122fb |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
创建提供商网络(桥接)
根据官方文档,网络模式一共有两种,分别是provider(提供商网络)与self-service(自服务网络)。
**provider提供商网络:**通过我们最初在本机添加的那块物理网卡(ens34)与虚拟机实例通信。如果该网卡的网络模式是NAT,那么使用provider网络创建出来的虚拟机实例会与该网卡桥接,从而可以连通外网;
**self-service自服务网络:**本质上是内部网络,就像IPv4地址中的私有网段,可以创建网络,但仅限于内部实例之间通信,无法直接连接外网;
如果想通过self-service网络连接外网,需要先配置好provider网络并创建一个provider类型的网络,然后创建路由绑定到self-service网络上,这样self-service网络才能访问外网。
controller节点
参考地址:https://docs.openstack.org/neutron/train/install/controller-install-option1-rdo.html
安装相关的软件包
yum -y install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
修改neutron配置文件:/etc/neutron/neutron.conf
#查看文件属性
ll /etc/neutron/neutron.conf
-rw-r----- 1 root neutron 39708 5月 11 2021 /etc/neutron/neutron.conf
#备份配置文件
cp /etc/neutron/neutron.conf{,.bak}
#重新生成配置文件
egrep -v '^#|^$' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
#查看文件属性
ll /etc/neutron/neutron.conf
-rw-r----- 1 root neutron 216 12月 12 17:13 /etc/neutron/neutron.conf
#修改文件内容
vi /etc/neutron/neutron.conf
[DEFAULT]
#启用二层网络插件
core_plugin = ml2
#service_plugins 默认为空,表示仅使用提供商网络;值为 router 时启用三层路由服务(self-service网络如vxlan需要)
service_plugins =
#指定连接的rabbitmq的用户密码:123
transport_url = rabbit://openstack:123@controller
#指定使用keystone认证
auth_strategy = keystone
#当网络接口发生变化时,通知给nova
notify_nova_on_port_status_changes = true
#当端口数据发生变化,通知给nova
notify_nova_on_port_data_changes = true
[database]
#访问neutron数据库使用的用户及密码:123
connection = mysql+pymysql://neutron:123@controller/neutron
#配置keystone认证信息,注意将用户neutron密码改为:123
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123
#neutron需要给nova通知计算节点网络拓扑变化,指定nova的用户信息,注意将nova用户密码改为:123
#默认配置文件没有提供该模块,在文件最后添加即可
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123
[oslo_concurrency]
#配置锁路径
lock_path = /var/lib/neutron/tmp
修改ML2(二层网络)插件配置文件:/etc/neutron/plugins/ml2/ml2_conf.ini
#查看文件属性
ll /etc/neutron/plugins/ml2/ml2_conf.ini
-rw-r----- 1 root neutron 6524 5月 11 2021 /etc/neutron/plugins/ml2/ml2_conf.ini
#备份配置文件
cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
#重新生成配置文件
egrep -v '^#|^$' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
#查看文件属性
ll /etc/neutron/plugins/ml2/ml2_conf.ini
-rw-r----- 1 root neutron 10 12月 12 17:18 /etc/neutron/plugins/ml2/ml2_conf.ini
#修改文件内容
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
#配置类型驱动:让二层网络支持桥接,支持基于vlan做子网划分
type_drivers = flat,vlan
#租户网络类型,留空表示不启用自服务(租户)网络,如vxlan
tenant_network_types =
#指定机制驱动为linuxbridge网桥
mechanism_drivers = linuxbridge
#启用端口安全扩展驱动程序,基于iptables实现访问控制;但配置了扩展安全组会导致一些端口限制,造成一些服务无法启动
extension_drivers = port_security
[ml2_type_flat]
#将provider(提供商网络)设置为flat(桥接)类型
flat_networks = provider
[securitygroup]
#启用 ipset 增加安全组的安全性
enable_ipset = true
修改linuxbridge(网桥)插件配置文件:/etc/neutron/plugins/ml2/linuxbridge_agent.ini
#查看文件属性
ll /etc/neutron/plugins/ml2/linuxbridge_agent.ini
-rw-r----- 1 root neutron 6524 5月 11 2021 /etc/neutron/plugins/ml2/linuxbridge_agent.ini
#备份配置文件
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
#重新生成配置文件
egrep -v '^#|^$' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
#查看文件属性
ll /etc/neutron/plugins/ml2/linuxbridge_agent.ini
-rw-r----- 1 root neutron 10 12月 12 17:19 /etc/neutron/plugins/ml2/linuxbridge_agent.ini
#修改文件内容
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
#指定上个文件中的桥接网络provider与本机ens34物理网卡做关联,后期给虚拟机分配external(外部)网络地址,然后虚拟机就可以通过ens34上外网;桥接的物理网卡名也可能是bond0、br0等
physical_interface_mappings = provider:ens34
[vxlan]
#不启用vxlan
enable_vxlan = false
[securitygroup]
#启用安全组并配置 Linux 桥接 iptables 防火墙驱动
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
确保系统内核支持网桥过滤器
#加载modprobe br_netfilter网桥过滤器模块
modprobe br_netfilter && lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 151336 1 br_netfilter
#修改内核配置文件/etc/sysctl.conf,开启ipv4与ipv6的网络过滤功能
vim /etc/sysctl.conf
...
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
#重新加载配置文件
sysctl -p
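注意 modprobe 加载的模块在重启后会丢失,可以把模块名写入 /etc/modules-load.d/ 让系统开机自动加载(示意,文件名 neutron.conf 为自拟):

```shell
# 将 br_netfilter 写入 modules-load.d,实现开机自动加载(文件名自拟)
mkdir -p /etc/modules-load.d
echo br_netfilter > /etc/modules-load.d/neutron.conf
cat /etc/modules-load.d/neutron.conf
```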
修改dhcp_agent( 为虚拟网络提供 DHCP 服务)插件配置文件: /etc/neutron/dhcp_agent.ini
#查看文件属性
ll /etc/neutron/dhcp_agent.ini
-rw-r----- 1 root neutron 6524 5月 11 2021 /etc/neutron/dhcp_agent.ini
#备份配置文件
cp /etc/neutron/dhcp_agent.ini{,.bak}
#重新生成配置文件
egrep -v '^#|^$' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
#查看文件属性
ll /etc/neutron/dhcp_agent.ini
-rw-r----- 1 root neutron 10 12月 12 17:21 /etc/neutron/dhcp_agent.ini
#修改文件内容
vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
#指定默认接口驱动为linux网桥
interface_driver = linuxbridge
#指定DHCP驱动
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
#开启隔离(isolated)网络的元数据支持,使没有路由器的网络中的实例也能获取元数据
enable_isolated_metadata = true
配置元数据代理:元数据代理负责向实例提供配置信息(例如登录凭据)
参考地址:https://docs.openstack.org/neutron/train/install/controller-install-rdo.html
修改配置文件:/etc/neutron/metadata_agent.ini
#查看文件属性
ll /etc/neutron/metadata_agent.ini
-rw-r----- 1 root neutron 11011 5月 11 2021 /etc/neutron/metadata_agent.ini
#备份配置文件
cp /etc/neutron/metadata_agent.ini{,.bak}
#重新生成配置文件
egrep -v '^#|^$' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
#查看文件属性
ll /etc/neutron/metadata_agent.ini
-rw-r----- 1 root neutron 18 12月 12 17:22 /etc/neutron/metadata_agent.ini
#修改文件内容
vi /etc/neutron/metadata_agent.ini
[DEFAULT]
#元数据代理主机
nova_metadata_host = controller
#元数据代理的共享密钥
metadata_proxy_shared_secret = METADATA_SECRET
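METADATA_SECRET 建议替换成一个随机字符串(注意要与后面 nova 配置中的 metadata_proxy_shared_secret 保持一致),可以用 openssl 生成,例如:

```shell
# 生成一个 20 位十六进制随机串,用作 metadata_proxy_shared_secret
secret=$(openssl rand -hex 10)
echo "$secret"
```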
修改nova配置文件,用于neutron交互,配置文件: /etc/nova/nova.conf
vi /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
#指定neutron用户密码:123
password = 123
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
创建ML2插件文件链接
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
上述配置同步到数据库
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
#会有很多输出信息:
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1280, u"Name 'alembic_version_pkc' ignored for PRIMARY key.")
result = self._query(query)
正在对 neutron 运行 upgrade...
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> kilo
INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225
INFO [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151
INFO [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf
INFO [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee
INFO [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f
INFO [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773
INFO [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592
INFO [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7
INFO [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79
INFO [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051
INFO [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136
INFO [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59
INFO [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d
INFO [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a
INFO [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25
INFO [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee
INFO [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9
INFO [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4
INFO [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664
INFO [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5
INFO [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f
INFO [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821
INFO [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4
INFO [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81
INFO [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6
INFO [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532
INFO [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f
INFO [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a
INFO [alembic.runtime.migration] Running upgrade 0e66c5227a8a -> 45f8dd33480b
INFO [alembic.runtime.migration] Running upgrade 45f8dd33480b -> 5abc0278ca73
INFO [alembic.runtime.migration] Running upgrade 5abc0278ca73 -> d3435b514502
INFO [alembic.runtime.migration] Running upgrade d3435b514502 -> 30107ab6a3ee
INFO [alembic.runtime.migration] Running upgrade 30107ab6a3ee -> c415aab1c048
INFO [alembic.runtime.migration] Running upgrade c415aab1c048 -> a963b38d82f4
INFO [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99
INFO [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada
INFO [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016
INFO [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3
INFO [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d
INFO [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d
INFO [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297
INFO [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c
INFO [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39
INFO [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b
INFO [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050
INFO [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9
INFO [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada
INFO [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc
INFO [alembic.runtime.migration] Running upgrade 4ffceebfcdc -> 7bbb25278f53
INFO [alembic.runtime.migration] Running upgrade 7bbb25278f53 -> 89ab9a816d70
INFO [alembic.runtime.migration] Running upgrade a963b38d82f4 -> 3d0e74aa7d37
INFO [alembic.runtime.migration] Running upgrade 3d0e74aa7d37 -> 030a959ceafa
INFO [alembic.runtime.migration] Running upgrade 030a959ceafa -> a5648cfeeadf
INFO [alembic.runtime.migration] Running upgrade a5648cfeeadf -> 0f5bef0f87d4
INFO [alembic.runtime.migration] Running upgrade 0f5bef0f87d4 -> 67daae611b6e
INFO [alembic.runtime.migration] Running upgrade 89ab9a816d70 -> c879c5e1ee90
INFO [alembic.runtime.migration] Running upgrade c879c5e1ee90 -> 8fd3918ef6f4
INFO [alembic.runtime.migration] Running upgrade 8fd3918ef6f4 -> 4bcd4df1f426
INFO [alembic.runtime.migration] Running upgrade 4bcd4df1f426 -> b67e765a3524
INFO [alembic.runtime.migration] Running upgrade 67daae611b6e -> 6b461a21bcfc
INFO [alembic.runtime.migration] Running upgrade 6b461a21bcfc -> 5cd92597d11d
INFO [alembic.runtime.migration] Running upgrade 5cd92597d11d -> 929c968efe70
INFO [alembic.runtime.migration] Running upgrade 929c968efe70 -> a9c43481023c
INFO [alembic.runtime.migration] Running upgrade a9c43481023c -> 804a3c76314c
INFO [alembic.runtime.migration] Running upgrade 804a3c76314c -> 2b42d90729da
INFO [alembic.runtime.migration] Running upgrade 2b42d90729da -> 62c781cb6192
INFO [alembic.runtime.migration] Running upgrade 62c781cb6192 -> c8c222d42aa9
INFO [alembic.runtime.migration] Running upgrade c8c222d42aa9 -> 349b6fd605a6
INFO [alembic.runtime.migration] Running upgrade 349b6fd605a6 -> 7d32f979895f
INFO [alembic.runtime.migration] Running upgrade 7d32f979895f -> 594422d373ee
INFO [alembic.runtime.migration] Running upgrade 594422d373ee -> 61663558142c
INFO [alembic.runtime.migration] Running upgrade 61663558142c -> 867d39095bf4, port forwarding
INFO [alembic.runtime.migration] Running upgrade 867d39095bf4 -> d72db3e25539, modify uniq port forwarding
INFO [alembic.runtime.migration] Running upgrade d72db3e25539 -> cada2437bf41
INFO [alembic.runtime.migration] Running upgrade cada2437bf41 -> 195176fb410d, router gateway IP QoS
INFO [alembic.runtime.migration] Running upgrade 195176fb410d -> fb0167bd9639
INFO [alembic.runtime.migration] Running upgrade fb0167bd9639 -> 0ff9e3881597
INFO [alembic.runtime.migration] Running upgrade 0ff9e3881597 -> 9bfad3f1e780
INFO [alembic.runtime.migration] Running upgrade 9bfad3f1e780 -> 63fd95af7dcd
INFO [alembic.runtime.migration] Running upgrade 63fd95af7dcd -> c613d0b82681
INFO [alembic.runtime.migration] Running upgrade b67e765a3524 -> a84ccf28f06a
INFO [alembic.runtime.migration] Running upgrade a84ccf28f06a -> 7d9d8eeec6ad
INFO [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab
INFO [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0
INFO [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62
INFO [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353
INFO [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586
INFO [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d
确定
重启nova-api服务
systemctl restart openstack-nova-api.service
开启neutron服务、设置开机自启动
systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
systemctl is-active neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
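前面提示过每次开机后要确认服务状态,可以把批量检查封装成函数(示意,函数名 check_services 为自拟),未处于 active 的服务会被打印出来:

```shell
# 示意:逐个检查服务状态,非 active 的打印告警
check_services() {
  local svc state
  for svc in "$@"; do
    state=$(systemctl is-active "$svc" 2>/dev/null || true)
    [ "$state" = "active" ] || echo "WARN: $svc 状态为 ${state:-unknown}"
  done
}
# 用法:check_services neutron-server neutron-linuxbridge-agent \
#                      neutron-dhcp-agent neutron-metadata-agent
```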
第八章:compute部署neutron服务
配置计算节点的neutron网络服务
neutron服务安装和配置
参考地址:https://docs.openstack.org/neutron/train/install/compute-install-rdo.html
compute节点
安装软件包
yum -y install openstack-neutron-linuxbridge ebtables ipset
修改配置文件:/etc/neutron/neutron.conf
#查看文件属性
ll /etc/neutron/neutron.conf
-rw-r----- 1 root neutron 39708 5月 11 2021 /etc/neutron/neutron.conf
#备份配置文件
cp /etc/neutron/neutron.conf{,.bak}
#重新生成配置文件
egrep -v '^#|^$' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
#查看文件属性
ll /etc/neutron/neutron.conf
-rw-r----- 1 root neutron 216 12月 7 17:25 /etc/neutron/neutron.conf
#修改文件内容
vi /etc/neutron/neutron.conf
#指定连接的rabbitmq的用户密码123,指定使用keystone认证
[DEFAULT]
transport_url = rabbit://openstack:123@controller
auth_strategy = keystone
#配置keystone认证信息,注意将用户neutron密码改为:123
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123
#配置锁路径
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
配置provider提供商网络
根据官方文档,创建实例时可以使用两种网络,一种是provider,一种是self-service;
provider network又称为提供商网络(运营商网络),self-service network又称为租户网络;
参考地址:https://docs.openstack.org/neutron/train/install/compute-install-option1-rdo.html
compute节点
修改linuxbridge(网桥)插件配置文件: /etc/neutron/plugins/ml2/linuxbridge_agent.ini
#查看文件属性
ll /etc/neutron/plugins/ml2/linuxbridge_agent.ini
-rw-r----- 1 root neutron 6524 5月 11 2021 /etc/neutron/plugins/ml2/linuxbridge_agent.ini
#备份配置文件
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
#重新生成配置文件
egrep -v '^#|^$' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
#查看文件属性
ll /etc/neutron/plugins/ml2/linuxbridge_agent.ini
-rw-r----- 1 root neutron 10 12月 12 17:26 /etc/neutron/plugins/ml2/linuxbridge_agent.ini
#修改文件内容
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
#指定桥接网络provider与本机ens34物理网卡做关联,后期给虚拟机分配external(外部)网络地址,然后虚拟机就可以通过ens34上外网;桥接的物理网卡名也可能是bond0、br0等
physical_interface_mappings = provider:ens34
[vxlan]
#不启用vxlan
enable_vxlan = false
[securitygroup]
#启用安全组并配置 Linux 桥接 iptables 防火墙驱动
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
确保系统内核支持网桥过滤器
#加载modprobe br_netfilter网桥过滤器模块
modprobe br_netfilter && lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 151336 1 br_netfilter
#修改内核配置文件/etc/sysctl.conf,开启ipv4与ipv6的网络过滤功能
vim /etc/sysctl.conf
...
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
#重新加载配置文件
sysctl -p
修改nova配置文件,用于neutron交互,配置文件: /etc/nova/nova.conf
参考地址:https://docs.openstack.org/neutron/train/install/compute-install-rdo.html
vim /etc/nova/nova.conf
#指定neutron用户密码:123
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123
重启nova-compute服务
systemctl restart openstack-nova-compute.service
开启neutron服务、设置开机自启动
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
systemctl is-active neutron-linuxbridge-agent.service
服务验证
参考地址:https://docs.openstack.org/neutron/train/install/verify-option1.html
controller节点
切换admin身份查看下网络服务
source admin-openrc
openstack network agent list
总结:到这里为止我们的网络服务neutron就搭建完毕了,现在我们的OpenStack环境就已经达到了启动实例的条件了。
第九章:创建provider实例
创建provider网络
参考地址:https://docs.openstack.org/install-guide/launch-instance-networks-provider.html
controller节点
创建一个provider网络,网络类型为external
对于provider网络来说,实例通过2层(桥接网络)连接到提供商网络。
参数说明:
--share:
允许所有项目都可以使用该网络;
--external:
类型为连通外部的虚拟网络;
--provider-physical-network:
指定网络的提供者为provider,由ml2_conf.ini文件的flat_networks定义;
--provider-network-type flat:
指定网络类型为flat(桥接);provider物理网络与主机网卡ens34的映射由linuxbridge_agent.ini文件的physical_interface_mappings定义;
source admin-openrc
openstack network create --share --external \
--provider-physical-network provider \
--provider-network-type flat provider
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2022-12-08T04:30:50Z |
| description | |
| dns_domain | None |
| id | 75e80634-9703-4297-adc8-36c442511464 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | None |
| mtu | 1500 |
| name | provider |
| port_security_enabled | True |
| project_id | 96b65f7df63b42fb99cffaa2ef5a0d11 |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 1 |
| router:external | External |
| segments | None |
| shared | True |
| status | ACTIVE |
| subnets | |
| updated_at | 2022-12-08T04:30:51Z |
+---------------------------+--------------------------------------+
查看网络
openstack network list
+--------------------------------------+----------+---------+
| ID | Name | Subnets |
+--------------------------------------+----------+---------+
| 75e80634-9703-4297-adc8-36c442511464 | provider | |
+--------------------------------------+----------+---------+
为provider网络指定子网的范围(该provider网络包括一个DHCP服务器为实例提供IP地址)
参数说明:
--network:
指定网络名称;
--allocation-pool:
指定分配的地址池,start设定起始地址,end设置结束地址;
--dns-nameserver:
指定域名服务器,可以用8.8.4.4(google),223.5.5.5(阿里云)等;
--gateway:
指定网关,设定宿主机的网络网关;
--subnet-range:
指定子网范围;
openstack subnet create --network provider \
--allocation-pool start=192.168.0.150,end=192.168.0.160 \
--dns-nameserver 223.5.5.5 --gateway 192.168.0.254 \
--subnet-range 192.168.0.0/24 provider
Created a new subnet:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 192.168.0.150-192.168.0.160 |
| cidr | 192.168.0.0/24 |
| created_at | 2022-12-08T04:40:11Z |
| description | |
| dns_nameservers | 223.5.5.5 |
| enable_dhcp | True |
| gateway_ip | 192.168.0.254 |
| host_routes | |
| id | b687971b-e8bd-4c50-9a64-a9acaf6b0f7d |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | provider |
| network_id | 75e80634-9703-4297-adc8-36c442511464 |
| project_id | 96b65f7df63b42fb99cffaa2ef5a0d11 |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| updated_at | 2022-12-08T04:40:11Z |
+-------------------+--------------------------------------+
查看网络
openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 75e80634-9703-4297-adc8-36c442511464 | provider | b687971b-e8bd-4c50-9a64-a9acaf6b0f7d |
+--------------------------------------+----------+--------------------------------------+
创建VM实例规格flavor
参考地址:https://docs.openstack.org/install-guide/launch-instance.html
创建一个名为m1.nano的flavor
参数说明:
--id:
规格ID;
--vcpus:
cpu数量;
--ram:
内存大小,单位Mb;
--disk:
磁盘空间大小,单位Gb;
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | 0 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| properties | |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------+
配置密钥
openstack支持用户使用公钥认证的方式创建实例,而不是传统的密码认证。 在启动实例之前,必须向计算服务添加公钥。
#切换到普通用户(以普通租户身份创建实例)
source demo-openrc
#生成密钥
ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):[回车]
#创建密钥到openstack中,并指定密钥名称mykey
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | 76:d0:38:ef:5f:68:08:a1:2e:99:2c:ab:79:1f:02:d9 |
| name | mykey |
| user_id | b4806aece59c4b7a9ca92ff092b6a2be |
+-------------+-------------------------------------------------+
#检查密钥
openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | 76:d0:38:ef:5f:68:08:a1:2e:99:2c:ab:79:1f:02:d9 |
+-------+-------------------------------------------------+
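补充一点:给ssh-keygen加上-f指定输出文件后,就不会再出现上面的回车提示,可以完全非交互地生成密钥。以下写入的是演示路径/tmp/demo_key(实际使用应为~/.ssh/id_rsa):

```shell
# 演示:非交互生成密钥对;-q静默,-N ""空口令,-f指定输出文件
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -q -t rsa -N "" -f /tmp/demo_key
ls /tmp/demo_key /tmp/demo_key.pub
```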
添加安全组规则
在安全组规则中默认拒绝远程访问实例,所以需要放行安全组规则,允许ICMP(ping)和SSH访问
查看安全组
source demo-openrc
openstack security group list
+--------------------------------------+---------+-------------+---------------
| ID | Name | Description | Project
+--------------------------------------+---------+-------------+---------------
| f3fda803-842b-49e9-ba8a-a0c760dc860a | default | 缺省安全组 | 9c11574d501d
+--------------------------------------+---------+-------------+---------------
添加icmp规则
openstack security group rule create --proto icmp default
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2022-12-08T05:07:08Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 9b805d5f-9815-4e13-bb45-b851c0d98478 |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | 9c11574d501d4a399760940e2e62f245 |
| protocol | icmp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | f3fda803-842b-49e9-ba8a-a0c760dc860a |
| updated_at | 2022-12-08T05:07:08Z |
+-------------------+--------------------------------------+
添加ssh规则
openstack security group rule create --proto tcp --dst-port 22 default
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2022-12-08T05:20:28Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 42bc2388-ae1a-4208-919b-10cf0f92bc1c |
| name | None |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | 9c11574d501d4a399760940e2e62f245 |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | f3fda803-842b-49e9-ba8a-a0c760dc860a |
| updated_at | 2022-12-08T05:20:28Z |
+-------------------+--------------------------------------+
查看安全组规则
openstack security group rule list
+-------------+-----------+-----------+------------+-----------------------+
| IP Protocol | Ethertype | IP Range  | Port Range | Remote Security Group |
+-------------+-----------+-----------+------------+-----------------------+
| tcp         | IPv4      | 0.0.0.0/0 | 22:22      | None                  |
| icmp        | IPv4      | 0.0.0.0/0 |            | None                  |
+-------------+-----------+-----------+------------+-----------------------+
(输出为节选,省略了ID列)
provider网络启动实例
要启动实例,至少必须指定flavor(实例规格)、镜像名称、网络、安全组、密钥对和实例名称。
参考地址:https://docs.openstack.org/install-guide/launch-instance-provider.html
#切换到普通用户
source demo-openrc
#查看实例可用的规则
openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0 | m1.nano | 64 | 1 | 0 | 1 | True |
+----+---------+-----+------+-----------+-------+-----------+
#查看实例可用的安全组
openstack security group list
+--------------------------------------+---------+-------------+
| ID | Name | Description |
+--------------------------------------+---------+-------------+
| f3fda803-842b-49e9-ba8a-a0c760dc860a | default | 缺省安全组 |
+--------------------------------------+---------+-------------+
#查看实例可用的镜像
openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 240c1c8a-656a-40cd-9023-2bf4712db89c | cirros | active |
+--------------------------------------+--------+--------+
#查看实例可用的网络
openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 75e80634-9703-4297-adc8-36c442511464 | provider | b687971b-e8bd-4c50-9a64-a9acaf6b0f7d |
+--------------------------------------+----------+--------------------------------------+
创建实例
参数说明:
--flavor:
指定实例使用的规则;
--image:
指定虚拟机使用的镜像文件;
--nic:
指定虚拟网卡使用的网络,net-id=网络ID;
--security-group:
指定虚拟机使用的安全组;
--key-name:
指定虚拟机使用的秘钥对名称;
提示:net-id要指定为 openstack network list 查出来的网络ID,不要直接复制文档中的ID!!!
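为避免手工复制网络ID出错,也可以让命令直接取出ID。openstack客户端支持-f value -c ID输出纯文本,实际环境一条命令即可:NET_ID=$(openstack network list --name provider -f value -c ID)。下面用一段示例表格输出演示等价的解析思路(示例数据取自上文):

```shell
# 演示:从network list的表格输出中提取provider网络的ID
cat > /tmp/net_list.txt <<'EOF'
+--------------------------------------+----------+---------+
| ID                                   | Name     | Subnets |
+--------------------------------------+----------+---------+
| 75e80634-9703-4297-adc8-36c442511464 | provider |         |
+--------------------------------------+----------+---------+
EOF
NET_ID=$(awk -F'|' '$3 ~ /provider/ {gsub(/ /,"",$2); print $2}' /tmp/net_list.txt)
echo "$NET_ID"   # 输出:75e80634-9703-4297-adc8-36c442511464
```

之后创建实例时即可写成 --nic net-id=$NET_ID。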
source demo-openrc
openstack server create --flavor m1.nano --image cirros \
--nic net-id=75e80634-9703-4297-adc8-36c442511464 --security-group default \
--key-name mykey provider-instance
+-----------------------------+-----------------------------------------------+
| Field | Value |
+-----------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | z9BAsBcBodaJ |
| config_drive | |
| created | 2022-12-08T06:58:39Z |
| flavor | m1.nano (0) |
| hostId | |
| id | 6f26ab93-2bde-4550-a032-c16db932eaeb |
| image | cirros (240c1c8a-656a-40cd-9023-2bf4712db89c) |
| key_name | mykey |
| name | provider-instance |
| progress | 0 |
| project_id | 9c11574d501d4a399760940e2e62f245 |
| properties | |
| security_groups | name='f3fda803-842b-49e9-ba8a-a0c760dc860a' |
| status | BUILD |
| updated | 2022-12-08T06:58:39Z |
| user_id | b4806aece59c4b7a9ca92ff092b6a2be |
| volumes_attached | |
+-----------------------------+-----------------------------------------------+
查看实例
openstack server list
+-------------------+--------+------------------------+--------+---------+
| Name              | Status | Networks               | Image  | Flavor  |
+-------------------+--------+------------------------+--------+---------+
| provider-instance | ACTIVE | provider=192.168.0.153 | cirros | m1.nano |
+-------------------+--------+------------------------+--------+---------+
提示:如果创建失败,查看相关日志信息:
controller节点nova-api日志:grep -i ERROR /var/log/nova/nova-api.log
compute节点nova-compute日志:grep -i ERROR /var/log/nova/nova-compute.log
在计算节点查看实例
compute节点
[root@compute01 ~]# virsh list
Id 名称 状态
----------------------------------------------------
1 instance-00000004 running
访问实例
此时的实例默认无法直接访问,因为该实例所在网段的网关我们并没有在物理环境中配置过,所以先按照官方提供的方案,获取虚拟机的VNC地址来访问实例
controller节点
查看实例VNC地址
openstack console url show provider-instance
+-------+----------------------------------------------------------------------
| Field | Value
+-------+----------------------------------------------------------------------
| type | novnc
| url | http://controller:6080/vnc_auto.html?path=%3Ftoken%3Dbadaed18-b6cf-45d2-b452-5f884bde1a33
+-------+----------------------------------------------------------------------
通过浏览器访问
提示:如果在windows没有配置controller的域名解析,可以把地址栏中的controller换成IP地址
http://192.168.0.50:6080/vnc_auto.html?path=%3Ftoken%3Dbadaed18-b6cf-45d2-b452-5f884bde1a33
可以看到实例没有正常启动,卡在grub系统引导这里了。这种情况是因为我使用的是VMware虚拟机,虚拟磁盘格式和驱动程序存在兼容问题,导致创建的实例无法正常启动,我们需要做如下操作:
#查看镜像
source admin-openrc
openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 240c1c8a-656a-40cd-9023-2bf4712db89c | cirros | active |
+--------------------------------------+--------+--------+
提示:VMware环境需通过下边命令修改磁盘的类型为IDE(物理机不需要)
openstack image set \
--property hw_disk_bus=ide \
--property hw_vif_model=e1000 \
240c1c8a-656a-40cd-9023-2bf4712db89c #将镜像ID替换为查询出来的ID
删除当前实例
source demo-openrc
openstack server delete provider-instance
重新创建实例
#查看网络ID
openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 75e80634-9703-4297-adc8-36c442511464 | provider | b687971b-e8bd-4c50-9a64-a9acaf6b0f7d |
+--------------------------------------+----------+--------------------------------------+
#创建实例(提示:net-id=网络ID)
source demo-openrc
openstack server create --flavor m1.nano --image cirros \
--nic net-id=75e80634-9703-4297-adc8-36c442511464 --security-group default \
--key-name mykey provider-instance
+-----------------------------+-----------------------------------------------+
| Field | Value |
+-----------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | 67jZR96gsfkg |
| config_drive | |
| created | 2022-12-08T08:31:33Z |
| flavor | m1.nano (0) |
| hostId | |
| id | 28e7eeab-2d96-4b3f-b926-9406103747eb |
| image | cirros (240c1c8a-656a-40cd-9023-2bf4712db89c) |
| key_name | mykey |
| name | provider-instance |
| progress | 0 |
| project_id | 9c11574d501d4a399760940e2e62f245 |
| properties | |
| security_groups | name='35f72fca-3f20-43dc-bb55-376553a3bfac' |
| status | BUILD |
| updated | 2022-12-08T08:31:33Z |
| user_id | b4806aece59c4b7a9ca92ff092b6a2be |
| volumes_attached | |
+-----------------------------+-----------------------------------------------+
查看实例
openstack server list
+-------------------+--------+------------------------+--------+---------+
| Name              | Status | Networks               | Image  | Flavor  |
+-------------------+--------+------------------------+--------+---------+
| provider-instance | ACTIVE | provider=203.0.113.236 | cirros | m1.nano |
+-------------------+--------+------------------------+--------+---------+
查看实例VNC地址
openstack console url show provider-instance
+-------+----------------------------------------------------------------------
| Field | Value
+-------+----------------------------------------------------------------------
| type | novnc
| url | http://controller:6080/vnc_auto.html?path=%3Ftoken%3De3180067-9304-4d51-b544-61fb031ba9af |
+-------+----------------------------------------------------------------------
通过浏览器访问:将地址中controller名称替换为管理节点的IP
http://192.168.0.50:6080/vnc_auto.html?path=%3Ftoken%3De3180067-9304-4d51-b544-61fb031ba9af
可以看到实例正常启动了!!!
根据提示输入用户名和密码就可以进入系统了
- 用户名:cirros
- 密码:gocubsgo
测试是否可以ping通,是否可以通过ssh连接:ssh cirros@实例IP
总结
到现在为止,OpenStack环境搭建完成了,网络可以正常创建,实例也能够正常的创建,这就证明这套私有云平台搭建完成了
第十章:安装dashboard
有了之前部署的keystone、glance、nova、neutron服务之后,我们已经可以启动云主机了。但如果只通过命令行来操作OpenStack会非常不方便:我们搭建云平台的目的,就是把底层所有资源整合在一起,以一种简单方便的方式提供给用户使用;让所有使用云主机的人都去敲命令是不现实的,所以才有了dashboard。
dashboard就是一个web页面,通过web接口可以非常方便的去使用主机,而且也可以通过web页面来根据自己的需求来构建主机。
- 服务名称:Dashboard
- 项目名称: horizon
安装dashboard
参考地址:https://docs.openstack.org/horizon/train/install/install-rdo.html
controller节点
安装软件包
yum -y install openstack-dashboard
配置dashboard
修改配置文件: /etc/openstack-dashboard/local_settings
#查看文件属性
ll /etc/openstack-dashboard/local_settings
-rw-r----- 1 root apache 12972 12月 10 12:53 /etc/openstack-dashboard/local_settings
#备份配置文件(无需重新生成,如果重新生成会丢失很多默认配置)
cp /etc/openstack-dashboard/local_settings{,.bak}
#修改文件内容
vi /etc/openstack-dashboard/local_settings
#在本机上配置仪表盘以使用OpenStack服务
OPENSTACK_HOST = "controller"
# *允许所有主机访问仪表盘
ALLOWED_HOSTS = ['horizon.example.com', '*']
#配置缓存会话存储服务,删除注释即可
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
#指定memcached服务地址及端口,删除注释即可
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
#启用第3版认证API(如果是v3版本则不需要修改)
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
#启用对域的支持(默认不存在,添加到文件最后)
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
#配置API版本(默认不存在,手动添加)
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 3,
}
#通过仪表盘创建用户时的默认域配置为 default(默认不存在,添加到文件最后即可)
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
#通过仪表盘创建的用户默认角色配置为 user(默认不存在,添加到文件最后即可)
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
#如果选择网络选项1(即provider提供商网络),则需要禁用对第3层网络服务的支持
OPENSTACK_NEUTRON_NETWORK = {
'enable_lb': False, #默认不存在,手动添加
'enable_firewall': False, #默认不存在,手动添加
'enable_vpn': False, #默认不存在,手动添加
'enable_auto_allocated_network': False,
'enable_distributed_router': False,
'enable_fip_topology_check': False, #改为False
'enable_ha_router': False,
'enable_ipv6': False, #改为False
'enable_quotas': False, #改为False
'enable_rbac_policy': False, #改为False
'enable_router': False, #改为False
}
#配置时区
TIME_ZONE = "Asia/Shanghai"
#文件最后增加页面的访问路径
WEBROOT="/dashboard"
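local_settings本质上是Django的Python配置文件,改完后可以先做一次语法检查再重启httpd,避免因引号、缩进等笔误导致dashboard报500。下面用一个演示文件说明做法(实际环境把路径换成/etc/openstack-dashboard/local_settings;若系统只有python2则把python3换成python):

```shell
# 演示:对配置文件做Python语法检查(/tmp/...为演示文件)
cat > /tmp/local_settings_demo.py <<'EOF'
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['horizon.example.com', '*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
TIME_ZONE = "Asia/Shanghai"
WEBROOT = "/dashboard"
EOF
python3 -m py_compile /tmp/local_settings_demo.py && echo "syntax OK"
```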
修改配置文件: /etc/httpd/conf.d/openstack-dashboard.conf
vim /etc/httpd/conf.d/openstack-dashboard.conf
#...在前三行下增加,使Apache支持WSGI协议(用来支持Web服务器和Web应用程序交互)
WSGIApplicationGroup %{GLOBAL}
重启httpd、memcached
systemctl restart httpd.service memcached.service
访问dashboard
访问页面登录:http://controller/dashboard
提示:如果通过域名访问,需要在windows配置域名解析
域指定登录的域:default 默认域
用户名可以用admin或者myuser,密码都是之前设置的123
(此处为dashboard登录及登录后页面的截图)
第十一章:创建CentOS虚拟机
创建不同类型的虚拟机实例,需要根据需求准备镜像并上传到glance。注意ISO镜像上传后是没法直接使用的,需要先将ISO镜像制作成qcow2磁盘文件再上传,之后才能用它创建云主机。
官方镜像仓库地址:https://docs.openstack.org/image-guide/
官方CentOS镜像地址:https://docs.openstack.org/image-guide/obtain-images.html#centos
官方CentOS7版本镜像地址:http://cloud.centos.org/centos/7/images/
上传镜像到OpenStack集群
提前把镜像上传到controller节点,然后上传到glance,命令如下:
#移动镜像到/var/lib/glance/images/(镜像所在目录没有特殊要求,只为方便管理)
mv /root/CentOS-7-x86_64-GenericCloud-2211.qcow2 /var/lib/glance/images/
chown -R glance:glance /var/lib/glance/images/
#上传镜像
source admin-openrc
glance image-create --name "centos7.9" \
--file /var/lib/glance/images/CentOS-7-x86_64-GenericCloud-2211.qcow2 \
--disk-format qcow2 --container-format bare \
--property hw_qemu_guest_agent=yes \
--property os_type="linux" \
--visibility public \
--progress
上面glance命令的参数说明(也可以改用openstack image create,参数含义基本相同;注意shell续行符 \ 之后不能再跟注释):
--file:
创建镜像所用的磁盘文件路径;
--disk-format qcow2:
镜像磁盘格式为qcow2;
--container-format bare:
镜像容器格式,可选:ami、ari、aki、bare、ovf;
--property hw_qemu_guest_agent=yes:
启用qemu-guest-agent(运行在虚拟机内部的一个服务,实现宿主机与虚拟机通信);
--property os_type="linux":
指定操作系统类型,可选值:linux 或 windows;
--visibility public:
共享此镜像,所有用户可见(openstack image create中对应参数为--public);
--progress:
上传镜像时显示进度条;
查看镜像
openstack image list
+--------------------------------------+-----------+--------+
| ID | Name | Status |
+--------------------------------------+-----------+--------+
| 33cd072c-e3f6-4a8a-bfdc-c4149a95b5a5 | centos7.9 | active |
| 6b6d9c17-5877-43d1-ab71-fe6c7b46ca3d | cirros | active |
+--------------------------------------+-----------+--------+
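上传大镜像后建议比对一下校验和,确认文件在传输过程中没有损坏。glance会在镜像的checksum字段记录md5值,可与本地文件的md5比对。下面用一个演示文件代替真实qcow2说明思路:

```shell
# 演示:比对本地文件md5与glance记录的checksum(此处用演示文件与假设值)
img=/tmp/demo_image.qcow2
echo "demo image content" > "$img"
local_md5=$(md5sum "$img" | awk '{print $1}')
# 实际环境取值:glance_md5=$(openstack image show centos7.9 -f value -c checksum)
glance_md5="$local_md5"   # 演示中假设两者一致
[ "$local_md5" = "$glance_md5" ] && echo "checksum OK"
```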
提示:VMware环境需通过下边命令修改磁盘类型为IDE(物理机不需要),否则实例会卡在GRUB引导页面
openstack image set \
--property hw_disk_bus=ide \
--property hw_vif_model=e1000 \
33cd072c-e3f6-4a8a-bfdc-c4149a95b5a5 #将镜像ID替换为查询出来的ID
创建VM实例flavor
创建一个名为centos的flavor
参数说明:
--id:
规格ID;
--vcpus:
cpu数量;
--ram:
内存大小,单位Mb;
--disk:
磁盘空间大小,单位Gb;
openstack flavor create --id 1 --vcpus 2 --ram 2048 --disk 40 centos
+----------------------------+--------+
| Field | Value |
+----------------------------+--------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 40 |
| id | 1 |
| name | centos |
| os-flavor-access:is_public | True |
| properties | |
| ram | 2048 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 2 |
+----------------------------+--------+
查看flavor信息
openstack flavor list
+----+---------+------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+------+------+-----------+-------+-----------+
| 0 | m1.nano | 64 | 1 | 0 | 1 | True |
| 1 | centos | 2048 | 40 | 0 | 2 | True |
+----+---------+------+------+-----------+-------+-----------+
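创建实例前可以先粗略对照flavor需求与计算节点的硬件资源(真正的资源过滤由nova-scheduler完成,这里只是一个手工估算的示例,数值取自本文环境:compute01为2C、4G内存、50G硬盘):

```shell
# 演示:粗略校验centos这个flavor是否超出compute01的资源
flavor_ram=2048; flavor_vcpus=2; flavor_disk=40   # centos flavor需求
node_ram=4096;   node_vcpus=2;   node_disk=50     # compute01硬件(本文环境假设值)
ok=yes
[ "$flavor_ram"   -le "$node_ram" ]   || ok=no
[ "$flavor_vcpus" -le "$node_vcpus" ] || ok=no
[ "$flavor_disk"  -le "$node_disk" ]  || ok=no
echo "flavor fits: $ok"   # 输出:flavor fits: yes
```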
注入虚拟机root密码
我们通过镜像创建的虚拟机默认没有root密码,所以需要提前在nova配置文件中启用root密码注入功能
controller节点
#修改nova配置文件
vim /etc/nova/nova.conf
...
#在该模块下增加启用root密码功能
[libvirt]
inject_password=true
#重启nova服务
systemctl restart \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
创建CentOS虚拟机
使用myuser用户在dashboard面板创建实例
点击:项目——》实例——》创建实例,如下图:
(此处为dashboard创建实例各步骤的截图)
提示:从openstack官方下载的云镜像默认禁止root用户登录,所以一般以普通用户登录。比如centos镜像的普通用户为centos,该用户从openstack本机控制台登录时默认没有密码;ubuntu镜像的普通用户为ubuntu。如需通过ssh远程连接,首先需要网络可达,然后以普通用户登录,再切换到root即可。
再修改/etc/ssh/sshd_config,允许root使用密码登录:
vim /etc/ssh/sshd_config
...
PermitRootLogin yes #允许root登录
PasswordAuthentication yes #允许使用密码登录
重启sshd服务即可通过root远程登录
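上面两处修改也可以用sed一步完成。下面对一个演示副本操作(实际环境把文件换成/etc/ssh/sshd_config,改完后同样需要重启sshd):

```shell
# 演示:用sed同时放开root登录与密码认证(对演示文件/tmp/sshd_demo操作)
printf '#PermitRootLogin prohibit-password\n#PasswordAuthentication yes\n' > /tmp/sshd_demo
sed -i -E -e 's/^#?PermitRootLogin.*/PermitRootLogin yes/' \
          -e 's/^#?PasswordAuthentication.*/PasswordAuthentication yes/' /tmp/sshd_demo
cat /tmp/sshd_demo
```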