DevOps学院

China's new-generation online IT education platform.

运维知识体系 (Operations Knowledge System)

A summary of the operations knowledge system, continuously updated; reposting is welcome.

缓存知识体系 (Caching Knowledge System)

The caching part of the operations knowledge system: a layered, multi-level cache architecture.

速云科技

DevOps consulting, corporate training, and implementation solutions.

云熵网络公司 is hiring operations development engineers. If interested, send your résumé to caisen@vip.sina.com. Compensation depends on ability, with no upper limit.

DevOps · digicai asked a question • 1 follower • 0 replies • 33 views • 2018-08-20 10:48 • from related topics

Slow application access inside Docker

Docker · byzz1225 asked a question • 1 follower • 0 replies • 134 views • 2018-07-23 13:04 • from related topics

How to monitor API endpoints

Monitoring · Alvins asked a question • 1 follower • 0 replies • 144 views • 2018-07-20 19:28 • from related topics

A question about monitoring

Monitoring · Alvins asked a question • 1 follower • 0 replies • 112 views • 2018-07-16 21:05 • from related topics

Using Floating IPs on OpenStack for external access in an IP-MAC binding environment

OpenStack · agoodman asked a question • 1 follower • 0 replies • 132 views • 2018-06-29 18:38 • from related topics

GitLab-triggered Jenkins jobs stuck in blocked state

Web Architecture · 走在路上的人 asked a question • 1 follower • 0 replies • 250 views • 2018-06-05 16:12 • from related topics

Building an Enterprise Private Cloud with OpenStack (8): Cinder

OpenStack · 赵班长 posted an article • 0 comments • 929 views • 2018-04-07 18:59 • from related topics

Control Node Deployment

1. Install Cinder
[root@linux-node1 ~]# yum install -y openstack-cinder

2. Database configuration
[root@linux-node1 ~]# vim /etc/cinder/cinder.conf
# In the [database] section, configure database access.
connection=mysql+pymysql://cinder:cinder@192.168.56.11/cinder
Sync the database:
[root@linux-node1 ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
Verify the database state:
[root@linux-node1 ~]# mysql -h 192.168.56.11 -ucinder -pcinder -e "use cinder;show tables;"

3. Keystone configuration
[root@linux-node1 ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
auth_strategy=keystone
[keystone_authtoken]
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder

4. RabbitMQ configuration
[root@linux-node1 ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.56.11

5. Other configuration
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

6. Configure Nova to use Block Storage. Edit /etc/nova/nova.conf and add the following:
[cinder]
os_region_name = RegionOne

7. Restart the nova-api service
[root@linux-node1 ~]# systemctl restart openstack-nova-api.service

8. Start the Cinder services and enable them to start at boot.
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

9. Register the Cinder service and endpoints
# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
# openstack endpoint create --region RegionOne \
volumev2 public http://192.168.56.11:8776/v2/%\(project_id\)s
# openstack endpoint create --region RegionOne \
volumev2 internal http://192.168.56.11:8776/v2/%\(project_id\)s
# openstack endpoint create --region RegionOne \
volumev2 admin http://192.168.56.11:8776/v2/%\(project_id\)s
 
 
# openstack endpoint create --region RegionOne \
volumev3 public http://192.168.56.11:8776/v3/%\(project_id\)s
# openstack endpoint create --region RegionOne \
volumev3 internal http://192.168.56.11:8776/v3/%\(project_id\)s
# openstack endpoint create --region RegionOne \
volumev3 admin http://192.168.56.11:8776/v3/%\(project_id\)s
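
As a quick optional check (not part of the original steps), you can confirm that the service and endpoint registrations took effect:
# openstack catalog list
# openstack endpoint list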

Storage Node Configuration
On CentOS, LVM is installed by default. If it is not, install and start it with the following commands.
    Install the LVM packages:
[root@linux-node1 ~]# yum install -y lvm2 device-mapper-persistent-data

    Start the LVM metadata service and enable it to start at boot:
[root@linux-node1 ~]# systemctl enable lvm2-lvmetad.service
[root@linux-node1 ~]# systemctl start lvm2-lvmetad.service

Create an LVM physical volume on /dev/sdb:
[root@linux-node2 ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created


Create a volume group named cinder-volumes:
[root@linux-node2 ~]# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
[root@linux-node2 ~]# vim /etc/lvm/lvm.conf
    In the ``devices`` section, add a filter that accepts only the ``/dev/sdb`` device and rejects all others:
    devices {
    ...
    filter = [ "a/sdb/", "r/.*/"]
    If the operating system disk of the storage node also uses LVM (typically on /dev/sda), accept that device as well:
    filter = [ "a/sda/", "a/sdb/", "r/.*/"]
    Similarly, if the operating system disk of a compute node uses LVM, accept only that device in its lvm.conf:
    filter = [ "a/sda/", "r/.*/"]


Storage Node Installation

   Installing the storage node is similar to the control node and again takes two steps:
1.    Install the software.
2.    Copy the configuration file from the control node with scp.
Install the iSCSI target and Cinder:
[root@linux-node2 ~]# yum install -y openstack-cinder targetcli python-keystone

Sync the configuration file from the control node
Most of the storage node configuration is identical to the control node, so reuse the cinder.conf already configured there and make only small changes on top of it.
[root@linux-node1 ~]# scp /etc/cinder/cinder.conf 192.168.56.12:/etc/cinder/

Configure the Cinder backend driver
[root@linux-node2 ~]# vim /etc/cinder/cinder.conf
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name=iSCSI-Storage

In the [DEFAULT] section, enable the LVM backend:
[DEFAULT]
...
enabled_backends = lvm


Also in the [DEFAULT] section, point Cinder at the Glance API:
[DEFAULT]
glance_api_servers = http://192.168.56.11:9292

Start the Block Storage volume service and its dependencies, and configure them to start at boot:
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
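
Back on the control node, a minimal verification sketch (the volume name test-volume is only an example) is to check that the volume service registered and that a small test volume can be created:
[root@linux-node1 ~]# source admin-openstack.sh
[root@linux-node1 ~]# openstack volume service list
[root@linux-node1 ~]# openstack volume create --size 1 test-volume
[root@linux-node1 ~]# openstack volume list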

 

Building an Enterprise Private Cloud with OpenStack (7): Horizon

OpenStack · 赵班长 posted an article • 0 comments • 819 views • 2018-04-06 23:01 • from related topics

1. Install Horizon
[root@linux-node2 ~]# yum install -y openstack-dashboard

2. Configure Horizon
[root@linux-node2 ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "192.168.56.11"
# Allow access from all hosts
ALLOWED_HOSTS = ['*', ]
# Set the API versions
OPENSTACK_API_VERSIONS = {
"identity": 3,
"volume": 2,
"compute": 2,
}
# Enable multi-domain support
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# Set the default domain
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
# Set the Keystone address
OPENSTACK_HOST = "192.168.56.11"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
# Default role for users created through the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# Store sessions in Memcached
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '192.168.56.11:11211',
}
}
# Allow password changes through the web UI
OPENSTACK_HYPERVISOR_FEATURES = {
'can_set_mount_point': True,
'can_set_password': True,
'requires_keypair': False,
}
# Set the time zone
TIME_ZONE = "Asia/Shanghai"
# Disable some advanced self-service networking features
OPENSTACK_NEUTRON_NETWORK = {
...
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}

3. Start the service
[root@linux-node2 ~]# systemctl enable httpd.service
[root@linux-node2 ~]# systemctl restart httpd.service

You can now access the dashboard at http://192.168.56.12/dashboard. Both the admin and demo accounts can log in; try each of them to see how they differ.
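
If the page does not load, a quick command-line check (just a sketch; a 200 response or a redirect to the login page both indicate Horizon is being served) is:
[root@linux-node2 ~]# curl -sI http://192.168.56.12/dashboard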

Building an Enterprise Private Cloud with OpenStack (5): Neutron

OpenStack · 赵班长 posted an article • 0 comments • 859 views • 2018-04-06 18:08 • from related topics

1. Install Neutron
[root@linux-node1 ~]# yum install -y openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables

2. Neutron database configuration
[root@linux-node1 ~]# vim /etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:neutron@192.168.56.11:3306/neutron

3. Keystone connection configuration
[DEFAULT]

auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

4. RabbitMQ settings
[root@linux-node1 ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.56.11

5. Basic Neutron network configuration
[DEFAULT]
core_plugin = ml2
service_plugins =

6. Configure Nova notifications for network topology changes
[DEFAULT]
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

[nova]
auth_url = http://192.168.56.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

7. In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

8. Neutron ML2 configuration
[root@linux-node1 ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,gre,vxlan,geneve # multiple values are allowed, so enable all type drivers
tenant_network_types = flat,vlan,gre,vxlan,geneve # multiple values are allowed, so enable all tenant network types
mechanism_drivers = linuxbridge,openvswitch,l2population # mechanism drivers; multiple values are allowed; the open-source options are linuxbridge and openvswitch
# Enable the port security and QoS extension drivers
extension_drivers = port_security,qos

[ml2_type_flat]
# Define the flat provider network
flat_networks = provider

[securitygroup]
# Enable ipset
enable_ipset = True

9. Neutron Linux bridge agent configuration
[root@linux-node1 ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0

[vxlan]
# Disable VXLAN networks
enable_vxlan = False

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = True


10. Neutron DHCP agent configuration
[root@linux-node1 ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True


11. Neutron metadata agent configuration
[root@linux-node1 ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = 192.168.56.11

metadata_proxy_shared_secret = unixhot.com

12. Neutron settings in nova.conf
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[neutron]
url = http://192.168.56.11:9696
auth_url = http://192.168.56.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = unixhot.com

[root@linux-node1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Sync the database:
[root@linux-node1 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
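
Following the same pattern used for the other services, you can optionally confirm that the Neutron tables were created (credentials taken from the connection string above):
[root@linux-node1 ~]# mysql -h 192.168.56.11 -uneutron -pneutron -e "use neutron;show tables;"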

13. Restart the Compute API service
# systemctl restart openstack-nova-api.service

Start the networking services and enable them to start at boot:
# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

14. Register the Neutron service
# openstack service create --name neutron --description "OpenStack Networking" network
Create the endpoints:
# openstack endpoint create --region RegionOne network public http://192.168.56.11:9696
# openstack endpoint create --region RegionOne network internal http://192.168.56.11:9696
# openstack endpoint create --region RegionOne network admin http://192.168.56.11:9696

15. Verify the Neutron installation
[root@linux-node1 ~]# openstack network agent list

Neutron Compute Node Deployment

Install the packages:
[root@linux-node2 ~]# yum install -y openstack-neutron openstack-neutron-linuxbridge ebtables


1. Keystone connection configuration
[root@linux-node2 ~]# vim /etc/neutron/neutron.conf
[DEFAULT]

auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

2. RabbitMQ settings
[root@linux-node2 ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.56.11
# Note: this must go in the [DEFAULT] section, because this file contains more than one transport_url option

3. Lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

4. Linux bridge agent configuration (copy the file from the control node):
[root@linux-node1 ~]# scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.56.12:/etc/neutron/plugins/ml2/

5. Configure nova.conf on the compute node
[root@linux-node2 ~]# vim /etc/nova/nova.conf
[neutron]
url = http://192.168.56.11:9696
auth_url = http://192.168.56.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron


Restart the Compute service:
[root@linux-node2 ~]# systemctl restart openstack-nova-compute.service

Start the linuxbridge-agent on the compute node:
[root@linux-node2 ~]# systemctl enable neutron-linuxbridge-agent.service
[root@linux-node2 ~]# systemctl start neutron-linuxbridge-agent.service

Verify the Neutron installation from the control node:
[root@linux-node1 ~]# source admin-openstack.sh
[root@linux-node1 ~]# openstack network agent list

Check that a Linux bridge agent for linux-node2.example.com appears in the list.
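
With both agents up, an optional end-to-end sketch is to create a flat provider network and subnet (the network names and the 192.168.56.0/24 addresses below are only examples matching this lab environment; adjust the gateway, DNS, and allocation pool to your network):
[root@linux-node1 ~]# openstack network create --share --external \
  --provider-physical-network provider --provider-network-type flat provider-net
[root@linux-node1 ~]# openstack subnet create --network provider-net \
  --allocation-pool start=192.168.56.100,end=192.168.56.200 \
  --gateway 192.168.56.2 --dns-nameserver 192.168.56.2 \
  --subnet-range 192.168.56.0/24 provider-subnet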

 

Building an Enterprise Private Cloud with OpenStack (4): Nova

OpenStack · 赵班长 posted an article • 0 comments • 942 views • 2018-04-06 15:49 • from related topics

1. Control node installation
[root@linux-node1 ~]# yum install -y openstack-nova-api openstack-nova-placement-api \
openstack-nova-conductor openstack-nova-console \
openstack-nova-novncproxy openstack-nova-scheduler

2. Database configuration
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[api_database]
connection= mysql+pymysql://nova:nova@192.168.56.11/nova_api
[database]
connection= mysql+pymysql://nova:nova@192.168.56.11/nova

3. RabbitMQ configuration
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.56.11

4. Keystone configuration
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[api]
auth_strategy=keystone
[keystone_authtoken]
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova

5. Disable Nova's firewall functionality
[DEFAULT]
use_neutron=true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

6. VNC configuration
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[vnc]
enabled=true
server_listen = 0.0.0.0
server_proxyclient_address = 192.168.56.11

7. Configure Glance
[glance]
api_servers = http://192.168.56.11:9292

8. In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path=/var/lib/nova/tmp

9. Set the enabled APIs
[DEFAULT]
enabled_apis=osapi_compute,metadata

10. Configure Placement
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.56.11:35357/v3
username = placement
password = placement

11. Edit 00-nova-placement-api.conf (add the <Directory /usr/bin> block inside the existing <VirtualHost> section):
[root@linux-node1 ~]# vim /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
# systemctl restart httpd
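
An optional check (a sketch) that httpd is now serving the placement API on port 8778; you should get a JSON response back rather than a connection error:
[root@linux-node1 ~]# curl http://192.168.56.11:8778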


12. Sync the database
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database:
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

13. Create the cell1 cell
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

14. Sync the nova database
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage db sync" nova

15. Verify that cell0 and cell1 are registered correctly
[root@linux-node1 ~]# nova-manage cell_v2 list_cells

16. Check the database sync results
[root@linux-node1 ~]#mysql -h 192.168.56.11 -unova -pnova -e " use nova;show tables;"
[root@linux-node1 ~]#mysql -h 192.168.56.11 -unova -pnova -e " use nova_api;show tables;"

17. Start the Nova services
# systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service openstack-nova-conductor.service \
openstack-nova-novncproxy.service

18. Register the Nova and Placement services
# source admin-openstack.sh
# openstack service create --name nova --description "OpenStack Compute" compute
# openstack endpoint create --region RegionOne compute public http://192.168.56.11:8774/v2.1
# openstack endpoint create --region RegionOne compute internal http://192.168.56.11:8774/v2.1
# openstack endpoint create --region RegionOne compute admin http://192.168.56.11:8774/v2.1

# openstack service create --name placement --description "Placement API" placement
# openstack endpoint create --region RegionOne placement public http://192.168.56.11:8778
# openstack endpoint create --region RegionOne placement internal http://192.168.56.11:8778
# openstack endpoint create --region RegionOne placement admin http://192.168.56.11:8778
Verify the control node services:
[root@linux-node1 ~]# openstack host list
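
It can also help to list the individual control-plane services at this point (an optional check; nova-compute will only appear after the compute node is added below):
[root@linux-node1 ~]# openstack compute service list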

Compute node installation
[root@linux-node2 ~]# yum install -y openstack-nova-compute sysfsutils

[root@linux-node1 ~]# scp /etc/nova/nova.conf 192.168.56.12:/etc/nova/nova.conf
[root@linux-node2 ~]# chown root:nova /etc/nova/nova.conf

1. Remove the redundant database configuration.

2. Adjust the VNC configuration
The compute node must listen on all IPs, and novncproxy_base_url must point at the noVNC proxy on the control node:
[vnc]
enabled=true
server_listen = 0.0.0.0
server_proxyclient_address = 192.168.56.12
novncproxy_base_url = http://192.168.56.11:6080/vnc_auto.html
3. Virtualization support
[root@linux-node2 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns 0, the CPU does not support hardware virtualization, so set in nova.conf:
[libvirt]
virt_type=qemu
If it returns a non-zero value, hardware virtualization is supported and you should set:
[libvirt]
virt_type=kvm

Start nova-compute:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service

Verify the compute node:
[root@linux-node1 ~]# openstack host list

Register the compute node with the control node (cell discovery):
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
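
As a final optional check (a sketch; output varies by release), confirm that the compute node now shows up and that the cells and placement wiring pass the built-in status checks:
[root@linux-node1 ~]# openstack compute service list
[root@linux-node1 ~]# nova-status upgrade check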