Building an Enterprise Private Cloud on OpenStack (8): Cinder

By 赵班长, 2018-04-07

Controller node deployment

1. Install Cinder
[root@linux-node1 ~]# yum install -y openstack-cinder

2. Database configuration
[root@linux-node1 ~]# vim /etc/cinder/cinder.conf
# In the [database] section, configure database access.
connection=mysql+pymysql://cinder:cinder@192.168.56.11/cinder
Sync the database:
[root@linux-node1 ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
Verify the database:
[root@linux-node1 ~]# mysql -h 192.168.56.11 -ucinder -pcinder -e "use cinder;show tables;"

3. Keystone configuration
[root@linux-node1 ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
auth_strategy=keystone
[keystone_authtoken]
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder

4. RabbitMQ configuration
[root@linux-node1 ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.56.11

5. Other settings
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

6. Configure Nova to use Block Storage. Note: this is needed on every node running Nova services.
Edit /etc/nova/nova.conf and add the following:
[cinder]
os_region_name = RegionOne

7. Restart the nova-api service
[root@linux-node1 ~]# systemctl restart openstack-nova-api.service

8. Start the Cinder services and enable them at boot.
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

9. Register the Cinder service and endpoints
# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
# openstack endpoint create --region RegionOne \
volumev2 public http://192.168.56.11:8776/v2/%\(project_id\)s
# openstack endpoint create --region RegionOne \
volumev2 internal http://192.168.56.11:8776/v2/%\(project_id\)s
# openstack endpoint create --region RegionOne \
volumev2 admin http://192.168.56.11:8776/v2/%\(project_id\)s
 
 
# openstack endpoint create --region RegionOne \
volumev3 public http://192.168.56.11:8776/v3/%\(project_id\)s
# openstack endpoint create --region RegionOne \
volumev3 internal http://192.168.56.11:8776/v3/%\(project_id\)s
# openstack endpoint create --region RegionOne \
volumev3 admin http://192.168.56.11:8776/v3/%\(project_id\)s
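With the services started and the endpoints registered, the controller should now report the cinder-api and cinder-scheduler services as up. A quick check (a sketch; run after sourcing the admin credentials):
[root@linux-node1 ~]# source admin-openstack.sh
[root@linux-node1 ~]# openstack volume service list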

Storage node configuration
On CentOS, LVM is normally installed by default. If it is missing, install and start it with the commands below.
Install the LVM packages:
[root@linux-node1 ~]# yum install -y lvm2 device-mapper-persistent-data

Start the LVM metadata service and enable it at boot:
[root@linux-node1 ~]# systemctl enable lvm2-lvmetad.service
[root@linux-node1 ~]# systemctl start lvm2-lvmetad.service

Create an LVM physical volume on /dev/sdb:
[root@linux-node2 ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created


Create a volume group named cinder-volumes:
[root@linux-node2 ~]# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
[root@linux-node2 ~]# vim /etc/lvm/lvm.conf
In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices:
devices {
...
filter = [ "a/sdb/", "r/.*/"]
If the operating system disk on the storage node also uses LVM (for example /dev/sda), the filter must accept that device as well:
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
Likewise, on a compute node whose OS disk uses LVM, accept only that disk:
filter = [ "a/sda/", "r/.*/"]


Storage node installation

Installing the storage node is similar to the controller and again takes two steps:
1. Install the software.
2. Copy the configuration file from the controller with scp.
Install the iSCSI target and Cinder:
[root@linux-node2 ~]# yum install -y openstack-cinder targetcli python-keystone

Sync the configuration file from the controller
Since most of the storage node's configuration matches the controller's, reuse the cinder.conf already prepared on the controller and make small changes on top of it.
[root@linux-node1 ~]# scp /etc/cinder/cinder.conf 192.168.56.12:/etc/cinder/

Configure the Cinder backend driver
[root@linux-node2 ~]# vim /etc/cinder/cinder.conf
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name=iSCSI-Storage

In the [DEFAULT] section, enable the LVM backend and point Cinder at the Glance API:
[DEFAULT]
...
enabled_backends = lvm
glance_api_servers = http://192.168.56.11:9292

Start the Block Storage volume service and its dependencies, and enable them at boot:
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
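With the volume service up, an end-to-end test from the controller is to create and list a small volume (a sketch; assumes the demo credentials created in the Keystone article):
[root@linux-node1 ~]# source demo-openstack.sh
[root@linux-node1 ~]# openstack volume create --size 1 test-volume
[root@linux-node1 ~]# openstack volume list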

 

Building an Enterprise Private Cloud on OpenStack (7): Horizon

By 赵班长, 2018-04-06

1. Install Horizon
[root@linux-node2 ~]# yum install -y openstack-dashboard

2. Configure Horizon
[root@linux-node2 ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "192.168.56.11"
# allow access from all hosts
ALLOWED_HOSTS = ['*', ]
# set the API versions
OPENSTACK_API_VERSIONS = {
"identity": 3,
"volume": 2,
"compute": 2,
}
# enable multi-domain support
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# set the default domain
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
# set the Keystone address
OPENSTACK_HOST = "192.168.56.11"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
# default role for users created through the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# store sessions in Memcached
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '192.168.56.11:11211',
}
}
# allow changing passwords from the web UI
OPENSTACK_HYPERVISOR_FEATURES = {
'can_set_mount_point': True,
'can_set_password': True,
'requires_keypair': False,
}
# set the time zone
TIME_ZONE = "Asia/Shanghai"
# disable some advanced self-service networking features
OPENSTACK_NEUTRON_NETWORK = {
...
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}

3. Start the service
[root@linux-node2 ~]# systemctl enable httpd.service
[root@linux-node2 ~]# systemctl restart httpd.service

You can now access the dashboard at http://192.168.56.12/dashboard. Log in as either admin or demo (the passwords match the usernames); try both to see how the two accounts differ.
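If the page does not load, a quick sanity check from the command line is to request it directly (a sketch; expect an HTTP 200 or a redirect to the login page):
[root@linux-node2 ~]# curl -sI http://192.168.56.12/dashboard | head -n 1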

Building an Enterprise Private Cloud on OpenStack (6): Creating the First Instance

By 赵班长, 2018-04-06

1. Create the network
[root@linux-node1 ~]# openstack network create  --share --external \
--provider-physical-network provider \
--provider-network-type flat provider

2. Create the subnet
[root@linux-node1 ~]# openstack subnet create --network provider \
--allocation-pool start=192.168.56.100,end=192.168.56.200 \
--dns-nameserver 223.5.5.5 --gateway 192.168.56.2 \
--subnet-range 192.168.56.0/24 provider-subnet

3. Create a flavor
[root@linux-node1 ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano

4. Create a key pair
[root@linux-node1 ~]# source demo-openstack.sh
[root@linux-node1 ~]# ssh-keygen -q -N ""
[root@linux-node1 ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
[root@linux-node1 ~]# openstack keypair list

5. Add security group rules
[root@linux-node1 ~]# openstack security group rule create --proto icmp default
[root@linux-node1 ~]# openstack security group rule create --proto tcp --dst-port 22 default

Launch an instance
[root@linux-node1 ~]# source demo-openstack.sh
[root@linux-node1 ~]# openstack flavor list

1. List available images
[root@linux-node1 ~]# openstack image list

2. List available networks
[root@linux-node1 ~]# openstack network list

3. List available security groups
[root@linux-node1 ~]# openstack security group list

4. Create the virtual machine
[root@linux-node1 ~]# openstack server create --flavor m1.nano --image cirros \
--nic net-id=5c4d0706-24cd-4d42-ba78-36a05b6c81c8 --security-group default \
--key-name mykey demo-instance
# Note: the network must be specified by its ID, not its name (see the sketch below)
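Rather than pasting the UUID by hand, you can capture the provider network's ID into a shell variable first (a sketch using the CLI's -f value -c id output formatter):
[root@linux-node1 ~]# NET_ID=$(openstack network show provider -f value -c id)
[root@linux-node1 ~]# openstack server create --flavor m1.nano --image cirros \
--nic net-id=$NET_ID --security-group default --key-name mykey demo-instance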

5. View the instance
[root@linux-node1 ~]# openstack server list
[root@linux-node1 ~]# openstack console url show demo-instance


 

Building an Enterprise Private Cloud on OpenStack (5): Neutron

By 赵班长, 2018-04-06

1. Install Neutron
[root@linux-node1 ~]# yum install -y openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables

2. Neutron database configuration
[root@linux-node1 ~]# vim /etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:neutron@192.168.56.11:3306/neutron

3. Keystone connection configuration
[DEFAULT]

auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

4. RabbitMQ settings
[root@linux-node1 ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.56.11

5. Basic Neutron network configuration
[DEFAULT]
core_plugin = ml2
service_plugins =

6. Configure Nova notifications for network topology changes
[DEFAULT]
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

[nova]
auth_url = http://192.168.56.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

7. In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

8. Neutron ML2 configuration
[root@linux-node1 ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,gre,vxlan,geneve # multiple values allowed; enable all type drivers
tenant_network_types = flat,vlan,gre,vxlan,geneve # multiple values allowed; enable all network types
mechanism_drivers = linuxbridge,openvswitch,l2population # mechanism drivers, multiple allowed; the open-source choices are linuxbridge and openvswitch
# enable the port security and QoS extension drivers
extension_drivers = port_security,qos

[ml2_type_flat]
# set the provider network
flat_networks = provider

[securitygroup]
# enable ipset
enable_ipset = True

9. Neutron Linux bridge configuration
[root@linux-node1 ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0

[vxlan]
# disable VXLAN networking
enable_vxlan = False

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = True


10. Neutron DHCP agent configuration
[root@linux-node1 ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True


11. Neutron metadata agent configuration
[root@linux-node1 ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = 192.168.56.11

metadata_proxy_shared_secret = unixhot.com

12. Neutron settings in nova.conf
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[neutron]
url = http://192.168.56.11:9696
auth_url = http://192.168.56.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = unixhot.com

[root@linux-node1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Sync the database
[root@linux-node1 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
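As with the other services, confirm that the schema was created by listing the tables (a quick check following the same pattern used elsewhere in this series):
[root@linux-node1 ~]# mysql -h 192.168.56.11 -uneutron -pneutron -e "use neutron;show tables;"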

13. Restart the Compute API service
# systemctl restart openstack-nova-api.service

Start the networking services and enable them at boot:
# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

14. Register the Neutron service
# openstack service create --name neutron --description "OpenStack Networking" network
Create the endpoints
# openstack endpoint create --region RegionOne network public http://192.168.56.11:9696
# openstack endpoint create --region RegionOne network internal http://192.168.56.11:9696
# openstack endpoint create --region RegionOne network admin http://192.168.56.11:9696

15. Test the Neutron installation
[root@linux-node1 ~]# openstack network agent list

Neutron compute node deployment

Install the packages
[root@linux-node2 ~]# yum install -y openstack-neutron openstack-neutron-linuxbridge ebtables


1. Keystone connection configuration
[root@linux-node2 ~]# vim /etc/neutron/neutron.conf
[DEFAULT]

auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

2. RabbitMQ settings
[root@linux-node2 ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.56.11
# Note: this must go under [DEFAULT]; the file contains more than one transport_url entry

3. Lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

4. Copy the Linux bridge agent configuration from the controller
[root@linux-node1 ~]# scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.56.12:/etc/neutron/plugins/ml2/

5. Configure nova.conf on the compute node
[root@linux-node2 ~]# vim /etc/nova/nova.conf
[neutron]
url = http://192.168.56.11:9696
auth_url = http://192.168.56.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron


Restart the Compute service
[root@linux-node2 ~]# systemctl restart openstack-nova-compute.service

Start the linuxbridge agent on the compute node
[root@linux-node2 ~]# systemctl enable neutron-linuxbridge-agent.service
[root@linux-node2 ~]# systemctl start neutron-linuxbridge-agent.service

Test the Neutron installation from the controller
[root@linux-node1 ~]# source admin-openstack.sh
[root@linux-node1 ~]# openstack network agent list

Check that the list shows a Linux bridge agent for linux-node2.example.com.

 

Building an Enterprise Private Cloud on OpenStack (4): Nova

By 赵班长, 2018-04-06

1. Controller node installation
[root@linux-node1 ~]# yum install -y openstack-nova-api openstack-nova-placement-api \
openstack-nova-conductor openstack-nova-console \
openstack-nova-novncproxy openstack-nova-scheduler

2. Database configuration
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[api_database]
connection= mysql+pymysql://nova:nova@192.168.56.11/nova_api
[database]
connection= mysql+pymysql://nova:nova@192.168.56.11/nova

3. RabbitMQ configuration
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.56.11

4. Keystone configuration
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[api]
auth_strategy=keystone
[keystone_authtoken]
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova

5. Disable Nova's built-in firewall (security groups are handled by Neutron)
[DEFAULT]
use_neutron=true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

6. VNC configuration
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[vnc]
enabled=true
server_listen = 0.0.0.0
server_proxyclient_address = 192.168.56.11

7. Configure Glance
[glance]
api_servers = http://192.168.56.11:9292

8. In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path=/var/lib/nova/tmp

9. Enable the APIs
[DEFAULT]
enabled_apis=osapi_compute,metadata

10. Configure placement
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.56.11:35357/v3
username = placement
password = placement

11. Edit 00-nova-placement-api.conf: add the following <Directory> block inside the existing <VirtualHost> section, just before the closing </VirtualHost> tag, then restart httpd.
[root@linux-node1 ~]# vim /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
# systemctl restart httpd


12. Sync the API database
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage api_db sync" nova

Map the cell0 database
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

13. Create the cell1 cell
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

14. Sync the nova database
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage db sync" nova

15. Verify that cell0 and cell1 are registered correctly
[root@linux-node1 ~]# nova-manage cell_v2 list_cells

16. Check the database sync
[root@linux-node1 ~]#mysql -h 192.168.56.11 -unova -pnova -e " use nova;show tables;"
[root@linux-node1 ~]#mysql -h 192.168.56.11 -unova -pnova -e " use nova_api;show tables;"

17. Start the Nova services
# systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service openstack-nova-conductor.service \
openstack-nova-novncproxy.service

18. Register the Nova and placement services
# source admin-openstack.sh
# openstack service create --name nova --description "OpenStack Compute" compute
# openstack endpoint create --region RegionOne compute public http://192.168.56.11:8774/v2.1
# openstack endpoint create --region RegionOne compute internal http://192.168.56.11:8774/v2.1
# openstack endpoint create --region RegionOne compute admin http://192.168.56.11:8774/v2.1

# openstack service create --name placement --description "Placement API" placement
# openstack endpoint create --region RegionOne placement public http://192.168.56.11:8778
# openstack endpoint create --region RegionOne placement internal http://192.168.56.11:8778
# openstack endpoint create --region RegionOne placement admin http://192.168.56.11:8778
Verify the controller services
[root@linux-node1 ~]# openstack host list

Compute node installation
[root@linux-node2 ~]# yum install -y openstack-nova-compute sysfsutils

[root@linux-node1 ~]# scp /etc/nova/nova.conf 192.168.56.12:/etc/nova/nova.conf
[root@linux-node2 ~]# chown root:nova /etc/nova/nova.conf

1. Remove the leftover database configuration (the [api_database] and [database] connection entries copied from the controller); compute nodes do not access the database directly.

2. Adjust the VNC configuration
The compute node must listen on all IPs and also set the novncproxy access URL:
[vnc]
enabled=true
server_listen = 0.0.0.0
server_proxyclient_address = 192.168.56.12
novncproxy_base_url = http://192.168.56.11:6080/vnc_auto.html
3. Virtualization support
[root@linux-node2 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
If this returns 0, the CPU lacks hardware virtualization support and Nova must fall back to QEMU:
[libvirt]
virt_type=qemu
If it returns a non-zero value, the compute node supports hardware virtualization and nova.conf should use KVM instead:
[libvirt]
virt_type=kvm

Start nova-compute
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service

Verify the compute node
[root@linux-node1 ~]# openstack host list

Register the compute node with the controller (cell discovery)
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
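After discovery, the compute node should appear in the service list, and the cell and placement wiring can be sanity-checked (a sketch; the nova-status tool ships with the Nova packages installed above):
[root@linux-node1 ~]# openstack compute service list
[root@linux-node1 ~]# nova-status upgrade check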





 

Building an Enterprise Private Cloud on OpenStack (3): Glance

By 赵班长, 2018-04-06

1. Install Glance
[root@linux-node1 ~]# yum install -y openstack-glance

2. Glance database configuration

glance-api.conf
[root@linux-node1 ~]# vim /etc/glance/glance-api.conf
[database]
connection= mysql+pymysql://glance:glance@192.168.56.11/glance

glance-registry.conf
[root@linux-node1 ~]# vim /etc/glance/glance-registry.conf
[database]
connection= mysql+pymysql://glance:glance@192.168.56.11/glance


3. Configure Keystone
[root@linux-node1 ~]# vim /etc/glance/glance-api.conf
[keystone_authtoken]
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor=keystone

glance-registry.conf configuration
[root@linux-node1 ~]# vim /etc/glance/glance-registry.conf
[keystone_authtoken]
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor=keystone

4. Configure Glance image storage
[root@linux-node1 ~]# vim /etc/glance/glance-api.conf
[glance_store]
stores = file,http
default_store=file
filesystem_store_datadir=/var/lib/glance/images/

5. Sync the database
[root@linux-node1 ~]# su -s /bin/sh -c "glance-manage db_sync" glance

6. Start the Glance services
# systemctl enable openstack-glance-api.service
# systemctl enable openstack-glance-registry.service
# systemctl start openstack-glance-api.service
# systemctl start openstack-glance-registry.service

7. Register the Glance service
For other services to be able to use Glance, it must be registered in Keystone. Remember to source the admin environment variables first.
[root@linux-node1 ~]# source admin-openstack.sh
# openstack service create --name glance --description "OpenStack Image service" image
# openstack endpoint create --region RegionOne image public http://192.168.56.11:9292
# openstack endpoint create --region RegionOne image internal http://192.168.56.11:9292
# openstack endpoint create --region RegionOne image admin http://192.168.56.11:9292

8. Test Glance
[root@linux-node1 ~]# source admin-openstack.sh
[root@linux-node1 ~]# openstack image list

9. Glance images
At the very start of an OpenStack deployment, before you have built any images of your own, you can test with an experimental image: CirrOS, a tiny Linux system.
[root@linux-node1 ~]# cd /usr/local/src
[root@linux-node1 src]# wget http://download.cirros-cloud.n ... k.img

[root@linux-node1 src]# openstack image create "cirros" --disk-format qcow2 \
--container-format bare --file cirros-0.3.5-x86_64-disk.img --public
[root@linux-node1 src]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| cf154a84-a73a-451b-bcb3-83c98e7c0d3e | cirros | active |
+--------------------------------------+--------+--------+
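To inspect a single image in more detail (size, checksum, visibility), query it by name (a quick check):
[root@linux-node1 src]# openstack image show cirros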



 

Building an Enterprise Private Cloud on OpenStack (2): Keystone

By 赵班长, 2018-04-06

1. Install Keystone
# yum install -y openstack-keystone httpd mod_wsgi memcached python-memcached
2. Enable Memcached at boot and start it
[root@linux-node1 ~]# systemctl enable memcached.service
[root@linux-node1 ~]# vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.56.11,::1"
[root@linux-node1 ~]# systemctl start memcached.service
3. Keystone configuration

1) Configure the Keystone database
[root@linux-node1 ~]# vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:keystone@192.168.56.11/keystone

2) Configure the token provider
[token]
provider = fernet

3) Sync the database:
[root@linux-node1 ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
[root@linux-node1 ~]# mysql -h 192.168.56.11 -ukeystone -pkeystone -e " use keystone;show tables;"

4) Initialize the Fernet keys
[root@linux-node1 ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@linux-node1 ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
5) Bootstrap Keystone
[root@linux-node1 ~]# keystone-manage bootstrap --bootstrap-password admin \
--bootstrap-admin-url http://192.168.56.11:35357/v3/ \
--bootstrap-internal-url http://192.168.56.11:35357/v3/ \
--bootstrap-public-url http://192.168.56.11:5000/v3/ \
--bootstrap-region-id RegionOne
6) Verify the Keystone configuration
[root@linux-node1 ~]# grep "^[a-z]" /etc/keystone/keystone.conf
connection = mysql+pymysql://keystone:keystone@192.168.56.11/keystone
provider = fernet
7) Start Keystone
[root@linux-node1 ~]# vim /etc/httpd/conf/httpd.conf
ServerName 192.168.56.11:80
Create the Apache configuration link:
[root@linux-node1 ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Start Keystone and check the listening ports.
[root@linux-node1 ~]# systemctl enable httpd.service
[root@linux-node1 ~]# systemctl start httpd.service

Set the environment variables
[root@linux-node1 ~]# export OS_USERNAME=admin
[root@linux-node1 ~]# export OS_PASSWORD=admin
[root@linux-node1 ~]# export OS_PROJECT_NAME=admin
[root@linux-node1 ~]# export OS_USER_DOMAIN_NAME=Default
[root@linux-node1 ~]# export OS_PROJECT_DOMAIN_NAME=Default
[root@linux-node1 ~]# export OS_AUTH_URL=http://192.168.56.11:35357/v3
[root@linux-node1 ~]# export OS_IDENTITY_API_VERSION=3

Create the demo project and user
# openstack project create --domain default --description "Demo Project" demo
# openstack user create --domain default --password demo demo
# openstack role create user
# openstack role add --project demo --user demo user

Create the service project
# openstack project create --domain default --description "Service Project" service
Create the glance user
# openstack user create --domain default --password glance glance
# openstack role add --project service --user glance admin
Create the nova user
# openstack user create --domain default --password nova nova
# openstack role add --project service --user nova admin
Create the placement user
# openstack user create --domain default --password placement placement
# openstack role add --project service --user placement admin
Create the neutron user
# openstack user create --domain default --password neutron neutron
# openstack role add --project service --user neutron admin
Create the cinder user
# openstack user create --domain default --password cinder cinder
# openstack role add --project service --user cinder admin
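Each service user above follows the same two-command pattern, so a small shell loop can create them all in one go (a sketch, assuming the password-equals-username convention used throughout this lab):
for svc in glance nova placement neutron cinder; do
  openstack user create --domain default --password "$svc" "$svc"
  openstack role add --project service --user "$svc" admin
done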

Verify Keystone
[root@linux-node1 ~]# unset OS_AUTH_URL OS_PASSWORD
[root@linux-node1 ~]# openstack --os-auth-url http://192.168.56.11:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue
Password:

[root@linux-node1 ~]# openstack --os-auth-url http://192.168.56.11:5000/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name demo --os-username demo token issue
Password:
 
[root@linux-node1 ~]# vim /root/admin-openstack.sh
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.56.11:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
[root@linux-node1 ~]# vim /root/demo-openstack.sh
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.56.11:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
[root@linux-node1 ~]# source admin-openstack.sh
[root@linux-node1 ~]# openstack token issue
[root@linux-node1 ~]# source demo-openstack.sh
[root@linux-node1 ~]# openstack token issue


 

Building an Enterprise Private Cloud on OpenStack (1): Lab Environment Preparation

By 赵班长, 2018-04-04

I. Base package installation

1. Install the EPEL repository
# rpm -ivh http://mirrors.aliyun.com/epel ... h.rpm


2. Install the OpenStack repository
# yum install -y centos-release-openstack-queens

3. Install the OpenStack client
# yum install -y python-openstackclient

4. Install the OpenStack SELinux management package
# yum install -y openstack-selinux


II. MySQL database deployment

1. Install MySQL (MariaDB)
[root@linux-node1 ~]# yum install -y mariadb mariadb-server python2-PyMySQL

2. Edit the MySQL configuration file
[root@linux-node1 ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.56.11 # listening IP address
default-storage-engine = innodb # default storage engine
innodb_file_per_table = on # one tablespace file per table
collation-server = utf8_general_ci # default server collation
character-set-server = utf8 # default server character set
max_connections = 4096 # maximum connections; tune to your production load



3. Start MySQL Server and enable it at boot
[root@linux-node1 ~]# systemctl enable mariadb.service
[root@linux-node1 ~]# systemctl start mariadb.service


4. Secure the database installation
[root@linux-node1 ~]# mysql_secure_installation


5. Create the databases

[root@linux-node1 ~]# mysql -u root -p
Enter password:

MariaDB [(none)]>

Keystone database
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';

Glance database
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';

Nova databases
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
CREATE DATABASE nova_api;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';

Neutron database
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';

Cinder database
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
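With all grants in place, verify that each account can reach its database (a quick check; repeat with the other users as needed):
[root@linux-node1 ~]# mysql -h 192.168.56.11 -ukeystone -pkeystone -e "SHOW DATABASES;"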

III. Message broker: RabbitMQ
1. Install RabbitMQ
[root@linux-node1 ~]# yum install -y rabbitmq-server

2. Enable RabbitMQ at boot and start it
[root@linux-node1 ~]# systemctl enable rabbitmq-server.service
[root@linux-node1 ~]# systemctl start rabbitmq-server.service

3. Add the openstack user.
[root@linux-node1 ~]# rabbitmqctl add_user openstack openstack
Creating user "openstack" ...



4. Grant permissions to the newly created openstack user.
[root@linux-node1 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...

5. Enable the web management plugin
[root@linux-node1 ~]# rabbitmq-plugins list
[root@linux-node1 ~]# rabbitmq-plugins enable rabbitmq_management
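Once the management plugin is enabled, its web UI listens on port 15672 (http://192.168.56.11:15672). Note that a user needs the management tag before it can log in to the UI (a sketch; the built-in guest account only works from localhost):
[root@linux-node1 ~]# rabbitmqctl set_user_tags openstack management
[root@linux-node1 ~]# rabbitmqctl list_users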


 
