source/installguide/locale/zh_CN/LC_MESSAGES/hypervisor/kvm.mo (28 lines of code) (raw):

���d� � 2� � �  $ 1 jO r� �- ��&E9`��k�n@�S�z��(%B^h]��%�����,��#�2Fbv�'���)?�W'���&<FY��Q� B7M*�X�i �s Ye��{�p=��P> � � � �� �h!K?#�$�$2�$��$-�%,�%�% �%|�%x&�&Q�&@�&5'.T'`�'�'�'( 0(:(O(U(~m(3�( )�0)�),�)I*W]*��*mk+S�+�-,��,��-�p.��.`�/�0��06j1k�1| 2W�2��29�3V�3L4�g4~5c�5E�5�*7\�7;#8%_8b�8�8�89)97F98~9�9�9J�9 :4(:0]:3�:U�:�;��;L}< �<O�<$>(>=>X> _>j>{>�>�>�>�>�>��>)�@ �@�@ A A'Ag8A-�Ai�B8DHDXD]D fD#qD�D �D�D�DfE[kE�EQ�El,F�FGG1G(AGSjGO�G�H��H�LI�I&�IJ#,J)PJzJ�J�J�J�J�JK'K<K�PK�K��K�L�L�L��Lb�M5NDHN �N�N��NV�OY�O�@P Q Q"Q9QYNQt�QiR:�R�R�R�Rk�R�^S�T%�U!V%=VvcV!�V$�V!W %W`3W �W�W1�W)�W;X1=XSoX�X�X�XYY'Y-Y[KY'�Y�Y��YrZ!�Z:�ZO�Z�5[e�[A.\�p\�]��]GV^y�^D_�]_��_z`U�`]�`MJa��a!"bnDb0�b�bvdcL�c(dn8e<�e0�e%fm;f�f�f�f(�f0g/5gegjg6og�g,�g(�ghD/h�thH%i>ni �i��i�j�j �j�j �j�jk/k!Nk pk ~k�k****Format for Disks, Templates, and Snapshots******Fiber Channel support****Local storage support****NFS support****SMB/CIFS****Storage over-provisioning****custom:** you can explicitly specify one of the supported named model in /usr/share/libvirt/cpu\_map.xml**host-model:** libvirt will identify the CPU model in /usr/share/libvirt/cpu\_map.xml which most closely matches the host, and then request additional CPU flags to complete the match. This should give close to maximum functionality/performance, which maintaining good reliability/compatibility if the guest is migrated to another host with slightly different host CPUs.**host-passthrough:** libvirt will tell KVM to passthrough the host CPU with no modifications. The difference to host-model, instead of just matching feature flags, every last detail of the host CPU is matched. This gives absolutely best performance, and can be important to some apps which check low level CPU details, but it comes at a cost with respect to migration: the guest can only be migrated to an exactly matching host CPU.**iSCSI support**16509 (libvirt)179822 (SSH)4 GB of memory49152 - 49216 (libvirt live migration)5900 - 6100 (VNC consoles)64-bit x86 CPU (more cores results in better performance)Add "-l" to the following lineAdd the host to CloudStackAll hosts within a cluster must be homogenous. The CPUs must be of the same type, count, and feature flags.All the required packages were installed when you installed libvirt, so we only have to configure the network.At least 1 NICBefore continuing, make sure that you have applied the latest updates to your host.By default UFW is not enabled on Ubuntu. Executing these commands with the firewall disabled does not enable the firewall.By default these bridges are called *cloudbr0* and *cloudbr1*, but you do have to make sure they are available on each hypervisor.CLVMCentOS / RHEL: 6.3Change the following lineCheck for a fully qualified hostname.Check to see whether AppArmor is installed on your machine. If not, you can skip this section.Check to see whether SELinux is installed on your machine. If not, you can skip this section.CloudStack does various things which can be blocked by security mechanisms like AppArmor and SELinux. These have to be disabled to ensure the Agent has all the required permissions.CloudStack supports multiple primary storage pools in a Cluster. For example, you could provision 2 NFS servers in primary storage. Or you could provision 1 iSCSI LUN initially and then add a second iSCSI LUN when the first approaches capacity.CloudStack uses libvirt for managing virtual machines. Therefore it is vital that libvirt is configured correctly. Libvirt is a dependency of cloudstack-agent and should already be installed.Configure Apparmor (Ubuntu)Configure CPU model for KVM guest (Optional)Configure OpenVswitchConfigure SELinux (RHEL and CentOS)Configure Security Policies (AppArmor and SELinux)Configure in RHEL or CentOSConfigure in UbuntuConfigure the Security PoliciesConfigure the network bridgesConfigure the network using OpenVswitchConfiguring the firewallConfiguring the network bridgesDisable the AppArmor profiles for libvirtFirst we configure eth0First we create a main bridge connected to the eth0 interface. Next we create three fake bridges, each connected to a specific vlan tag.First we start by installing the agent:For the most part it will be sufficient for the host administrator to specify the guest CPU config in the per-host configuration file (/etc/cloudstack/agent/agent.properties). This will be achieved by introducing two new configuration parameters:Here are some examples:Hyper-VHypervisor Support for Primary StorageIf you want to use the Linux Kernel Virtual Machine (KVM) hypervisor to run guest virtual machines, install KVM on the host(s) in your cloud. The material in this section doesn't duplicate KVM installation docs. It provides the CloudStack-specific steps that are needed to prepare a KVM host to work with CloudStack.In RHEL or CentOS, SELinux is installed and enabled by default. You can verify this with:In RHEL or CentOS:In Ubuntu AppArmor is installed and enabled by default. You can verify this with:In Ubuntu:In addition, the following hardware requirements apply:In additional,the CloudStack Agent allows host administrator to control the guest CPU model which is exposed to KVM instances. By default, the CPU model of KVM instance is likely QEMU Virtual CPU version x.x.x with least CPU features exposed. There are a couple of reasons to specify the CPU model:In order to do so we have to open the following TCP ports (if you are using a firewall):In order to forward traffic to your instances you will need at least two bridges: *public* and *private*.In order to have live migration working libvirt has to listen for unsecured TCP connections. We also need to turn off libvirts attempt to use Multicast DNS advertising. Both of these settings are in ``/etc/libvirt/libvirtd.conf``Install NTPInstall and Configure libvirtInstall and configure libvirtInstall and configure the AgentIt depends on the distribution you are using how to configure these, below you'll find examples for RHEL/CentOS and Ubuntu.It depends on the distribution you are using how to configure these, below you'll find examples for RHEL/CentOS.It depends on the firewall you are using how to open these ports. Below you'll find examples how to open these ports in RHEL/CentOS and Ubuntu.It is NOT recommended to run services on this host not controlled by CloudStack.KVMKVM Hypervisor Host InstallationKVM Installation OverviewKVM is included with a variety of Linux-based operating systems. Although you are not required to run these distributions, the following are recommended:KVM supports "Shared Mountpoint" storage. A shared mountpoint is a file system path local to each server in a given cluster. The path must be the same across all Hosts in the cluster, for example /mnt/primary1. This shared mountpoint is assumed to be a clustered filesystem such as OCFS2. In this case the CloudStack does not attempt to mount or unmount the storage as is done with NFS. The CloudStack requires that the administrator insure that the storage is availableLocal storage is an option for primary storage for vSphere, XenServer, and KVM. When the local disk option is enabled, a local disk storage pool is automatically created on each host. To use local storage for the System Virtual Machines (such as the Virtual Router), set system.vm.use.local.storage to true in global configuration.Log in to your OS as root.Make sure it looks similar to:Make sure that the machine can reach the Internet.Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a configuration error and the network stops functioning!Modify the interfaces file to look like this:Must support HVM (Intel-VT or AMD-V enabled)NFSNFS and iSCSINTP is required to synchronize the clocks of the servers in your cloud. Unsynchronized clocks can cause unexpected problems.Network exampleNoNow we have the VLAN interfaces configured we can add the bridges on top of them.Now we just configure it is a plain bridge without an IP-AddressOn RHEL or CentOS modify ``/etc/sysconfig/libvirtd``:On Ubuntu: modify ``/etc/default/libvirt-bin``On VLAN 100 we give the Hypervisor the IP-Address 192.168.42.11/24 with the gateway 192.168.42.1Open ports in RHEL/CentOSOpen ports in UbuntuPrepare the Operating SystemPreparingPrimary Storage TypeQCOW2Qemu/KVM: 1.0 or higherRHEL and CentOS use iptables for firewalling the system, you can open extra ports by executing the following iptable commands:Repeat all of these steps on every hypervisor host.Restart libvirtSet the SELINUX variable in ``/etc/selinux/config`` to "permissive". This ensures that the permissive setting will be maintained after a system reboot.Set the following parameters:System Requirements for KVM Hypervisor HostsThe Hypervisor and Management server don't have to be in the same subnet!The OS of the Host must be prepared to host the CloudStack Agent and run KVM instances.The default bridge in CloudStack is the Linux native bridge implementation (bridge module). CloudStack includes an option to work with OpenVswitch, the requirements are listed belowThe default firewall under Ubuntu is UFW (Uncomplicated FireWall), which is a Python wrapper around iptables.The following table shows storage options and parameters for different hypervisors.The goal is to have three bridges called 'mgmt0', 'cloudbr0' and 'cloudbr1' after this section. This should be used as a guideline only. The exact configuration will depend on your network layout.The goal is to have two bridges called 'cloudbr0' and 'cloudbr1' after this section. This should be used as a guideline only. The exact configuration will depend on your network layout.The host is now ready to be added to a cluster. This is covered in a later section, see :ref:`adding-a-host`. It is recommended that you continue to read the documentation before adding the host!The hypervisor needs to be able to communicate with other hypervisors and the management server needs to be able to reach the hypervisor.The main requirement for KVM hypervisors is the libvirt and Qemu version. No matter what Linux distribution you are using, make sure the following requirements are met:The most important factor is that you keep the configuration consistent on all your hypervisors.The network configurations below depend on the ifup-ovs and ifdown-ovs scripts which are part of the openvswitch installation. They should be installed in /etc/sysconfig/network-scripts/The network interfaces using OpenVswitch are created using the ovs-vsctl command. This command will configure the interfaces and persist them to the OpenVswitch database.The procedure for installing a KVM Hypervisor Host is:The required packages were installed when libvirt was installed, we can proceed to configuring the network.The required packages were installed when openvswitch and libvirt were installed, we can proceed to configuring the network.Then set SELinux to permissive starting immediately, without requiring a system reboot.There are many ways to configure your network. In the Basic networking mode you should have two (V)LAN's, one for your private network and one for the public network.There are three choices to fulfill the cpu model changes:These iptable settings are not persistent accross reboots, we have to save them first.This is a very important section, please make sure you read this thoroughly.This section details how to configure bridges using the native implementation in Linux. Please refer to the next section if you intend to use OpenVswitchThis should return a fully qualified hostname such as "kvm1.lab.example.org". If it does not, edit /etc/hosts so that it does.To ensure a consistent default CPU across all machines,removing reliance of variable QEMU defaults;To make sure that the native bridge module will not interfere with openvswitch the bridge module should be added to the blacklist. See the modprobe documentation for your distribution on where to find the blacklist. Make sure the module is not loaded either by rebooting or executing rmmod bridge before executing next steps.To manage KVM instances on the host CloudStack uses a Agent. This Agent communicates with the Management server and controls all the instances on the host.To maximise performance of instances by exposing new host CPU features to the KVM instances;To open the required ports, execute the following commands:Turn on NTP for time synchronization.Turning on "listen\_tcp" in libvirtd.conf is not enough, we have to change the parameters as well:Ubuntu: 12.04(.1)Uncomment the following line:VHDVLAN 100 for management of the hypervisorVLAN 200 for public network of the instances (cloudbr0)VLAN 300 for private network of the instances (cloudbr1)VMDKVMFSWe assume that the hypervisor has one NIC (eth0) with three tagged VLAN's:We do the same for cloudbr1We have to configure the base bridge with the trunk.We now have to configure the three VLAN bridges:We now have to configure the three VLAN interfaces:When you deploy CloudStack, the hypervisor host must not have any VMs already runningWith NFS storage, CloudStack manages the overprovisioning. In this case the global configuration parameter storage.overprovisioning.factor controls the degree of overprovisioning. This is independent of hypervisor type.With this configuration you should be able to restart the network, although a reboot is recommended to see if everything works properly.Within a single cluster, the hosts must be of the same distribution version.XenServerXenServer uses a clustered LVM system to store VM images on iSCSI and Fiber Channel volumes and does not support over-provisioning in the hypervisor. The storage server itself, however, can support thin-provisioning. As a result the CloudStack can still support storage over-provisioning by running on thin-provisioned storage volumes.YesYes, via Existing SRYes, via Shared Mountpointcustomhost-modelhost-passthroughlibvirt: 0.9.11 or higherlibvirt: 0.9.4 or higheropenvswitch: 1.7.1 or higherso it looks like:to thisvSphereProject-Id-Version: Apache CloudStack Installation RTD Report-Msgid-Bugs-To: POT-Creation-Date: 2014-06-30 11:42+0200 PO-Revision-Date: 2014-06-30 10:26+0000 Last-Translator: FULL NAME <EMAIL@ADDRESS> Language-Team: Chinese (China) (http://www.transifex.com/projects/p/apache-cloudstack-installation-rtd/language/zh_CN/) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Language: zh_CN Plural-Forms: nplurals=1; plural=0; ****磁盘、模板和快照的格式******支持FC****支持本地存储****支持NFS****SMB/CIFS****存储超配****custom:** 您可以指定一个在/usr/share/libvirt/cpu\_map.xml文件中所支持的型号名称。**host-model:** libvirt可以识别出在/usr/share/libvirt/cpu\_map.xml中与主机最接近的CPU型号,然后请求其他的CPU flags完成匹配。如果虚拟机迁移到其他CPU稍有不同的主机中,保持好的可靠性/兼容性能提供最多的功能和最大限度提示的性能。**host-passthrough:** libvirt 会告诉KVM没有修改过CPU passthrough的主机。与host-model的差别是不仅匹配flags特性,还要匹配CPU的每一个特性。他能提供最好的性能, 同时对一些检查CPU底层特性的应用程序很重要,但这样会带来一些迁移的代价:虚拟机只会迁移到CPU完全匹配的主机上。**支持iSCSI**16509 (libvirt)179822 (SSH)4GB 内存49152 - 49216 (libvirt在线迁移)5900 - 6100 (VNC 控制台)64位x86 CPU(多核性能更佳)在下列行添加 "-l"添加主机到CloudStack同一群集中的所有节点架构必须一致。CPU的型号、数量和功能参数必须相同。在安装libvirt时所需的其他软件也会被安装,所以只需配置网络即可。至少一块网卡在我们开始之前,请确保所有的主机都安装了最新的更新包。默认情况下,Ubuntu中并未启用UFW。在关闭情况下执行这些命令并不能启用防火墙。 默认情况下,这些桥接被称为*cloudbr0*和*cloudbr1*,但必须确保他们在每个hypervisor上都是可用的。CLVMCentOS / RHEL: 6.3查找如下行检查FQN完全合格/限定主机名。检查你的机器中是否安装了AppArmor。如果没有,请跳过此部分。检查你的机器是否安装了SELinux。如果没有,请跳过此部分。CloudStack的会被例如AppArmor和SELinux的安全机制阻止。必须关闭安全机制并确保 Agent具有所必需的权限。CloudStack支持在一个群集内有多个主存储池。比如,使用2个NFS服务器提供主存储。或原有的1个iSCSI LUN达到一定容量时再添加第二个iSCSI LUN。CloudStack使用libvirt管理虚拟机。因此正确地配置libvirt至关重要。CloudStack-agent依赖于Libvirt,应提前安装完毕。配置AppArmor(Ubuntu)配置KVM虚拟机的CPU型号(可选)配置OpenVswitch配置SELinux(RHEL和CentOS):配置安全策略 (AppArmor 和 SELinux)在RHEL或CentOS中配置:在Ubuntu中配置:配置安全策略配置网络桥接配置使用OpenVswitch网络配置防火墙配置网络桥接在AppArmor配置文件中禁用libvirt首先配置eth0:首先我们创建一个连接至eth0接口的主桥接。然后我们创建三个虚拟桥接,每个桥接都连接指定的VLAN。首先我们安装Agent:在大多数情况下,主机管理员需要每个主机配置文件(/etc/cloudstack/agent/agent.properties)中指定guest虚拟机的CPU配置。这将通过引入两个新的配置参数来实现:这里有一些示例:Hyper-VHypervisor对主存储的支持如果你想使用 KVM hypervisor来运行虚拟机,请在你的云环境中安装KVM主机。本章节不会复述KVM的安装文档。它提供了KVM主机与CloudStack协同工作所要准备特有的步骤。在RHEL或者CentOS中,SELinux是默认安装并启动的。你可以使用如下命令验证:在RHEL/CentOS上:Ubuntu中默认安装并启动AppArmor。使用如下命令验证:在Ubuntu上:此外,硬件要求如下:此外,CloudStack Agent允许主机管理员控制KVM实例中的CPU型号。默认情况下,KVM实例的CPU型号为只有少数CPU特性且版本为xxx的QEMU Virtual CPU。指定CPU型号有几个原因:为了达到这个目的,我们需要开通以下TCP端口(如果使用防火墙):为了转发流量到实例,至少需要两个桥接网络: *public* 和 *private*。为了实现动态迁移libvirt需要监听不可靠的TCP连接。还要关闭libvirts尝试使用组播DNS进行广播。这些都可以在 /etc/libvirt/libvirtd.conf文件中进行配置。安装NTP安装和配置libvirt安装和配置libvirt安装和配置Agent配置方式取决于发行版类型,下面给出RHEL/CentOS和Ubuntu的配置示例。如何配置这些文件取决于你使用的发行版本,在下面的内容中会提供RHEL/CentOS下的示例。如何打开这些端口取决于你使用的发行版本。在RHEL/CentOS 及Ubuntu中的示例如下。不建议在主机中运行与CloudStack无关的服务。KVMKVM Hypervisor主机安装KVM安装概述KVM包含在多种基于Linux的操作系统中。尽管不作要求,但是我们推荐以下发行版:KVM支持 "Shared Mountpoint"存储。Shared Mountpoint是群集中每个服务器本地文件系统中的一个路径。群集所有主机中的该路径必须一致,比如/mnt/primary1。并假设Shared Mountpoint是一个集群文件系统如OCFS2。在这种情况下,CloudStack不会把它当做NFS存储去尝试挂载或卸载。CloudStack需要管理员确保该存储是可用的。在vSphere, XenServer和KVM中,本地存储是一个可选项。当选择了使用本地存储,所有主机会自动创建本地存储池。想要系统虚拟机 (例如虚拟路由器)使用本地存储,请设置全局配置参数system.vm.use.local.storage为true.使用root用户登录操作系统。确保内容如下所示:确保机器可以连接到互联网.在发生配置错误和网络故障的时,请确保可以能通过其他方式例如IPMI或ILO连接到服务器。如下所示修改接口文件:必须支持HVM(Intel-VT或者AMD-V)NFSNFS 和 iSCSINTP服务用来同步云中的服务器时间。时间不同步会带来意想不到的问题。网络示例否配置VLAN接口以便能够附加桥接网络。现在只配置一个没有IP的桥接。在RHEL或者CentOS中修改 ``/etc/sysconfig/libvirtd``:在Ubuntu中:修改 ``/etc/default/libvirt-bin`` 在VLAN 100 中,配置Hypervisor的IP为 192.168.42.11/24,网关为192.168.42.1在RHEL/CentOS中打开端口在Ubuntu中打开端口:准备操作系统准备主存储类型QCOW2Qemu/KVM: 1.0 或更高版本RHEL 及 CentOS使用iptables作为防火墙,执行以下iptables命令来开启端口:在所有主机中重复上述步骤。重启libvirt服务在 ``/etc/selinux/config`` 中设置SELINUX变量值为 "permissive"。这样能确保对SELinux的设置在系统重启之后依然生效。设定下列参数:KVM Hypervisor 主机系统要求Hypervisor与管理服务器不需要在同一个子网!主机的操作系统必须为运行CloudStack Agent和KVM实例做些准备。CloudStack中的默认使用Linux本身的桥接(bridge模块)方式实现。也可选择在CloudStack中使用OpenVswitch,具体要求如下:Ubuntu中的默认防火墙是UFW(Uncomplicated FireWall),使用Python围绕iptables进行包装。下表显示了针对不同Hypervisors的存储选项和参数。本章节的目标是设置三个名为'mgmt0', 'cloudbr0'和'cloudbr1' 桥接网络。这仅仅是指导性的,实际情况还要取决于你的网络状况。本章节的目标是配置两个名为 'cloudbr0'和'cloudbr1'的桥接网络。这仅仅是指导性的,实际情况还要取决于你的网络布局。现在主机已经为加入群集做好准备。后面的章节有介绍,请参阅 :ref:`adding-a-host`。强烈建议在添加主机之前阅读此部分内容。hypervisor之间和hypervisor与管理服务器之间要能够通讯。KVM hypervisors主要要求在于libvirt和Qemu版本。不管您使用何种Linux版本,请确保满足以下要求:最重要的因素是所有hypervisors上的配置要保持一致。以下网络配置依赖ifup-ovs和ifdown-ovs脚本,安装openvswitch后会提供该脚本。安装路径为位/etc/sysconfig/network-scripts/。使用ovs-vsctl命令创建基于OpenVswitch的网络接口。该命令将配置此接口并将信息保存在OpenVswitch数据库中。安装KVM主机的过程:网络桥接所需的软件在安装libvirt时就已被安装,继续配置网络。所需的安装包在安装openvswitch和libvirt的时就已经安装,继续配置网络。然后使SELinux立即运行于permissive模式,无需重新启动系统。配置网络有很多方法。在基本网络模式中你应该拥有2个 (V)LAN,一个用于管理网络,一个用于公共网络。更改CPU型号有三个选择:这些iptables配置并不会持久保存,重启之后将会消失,我们必须手动保存这些配置。本章节非常重要,请务必彻底理解。本章节详细介绍了如何使用Linux自带的软件配置桥接网络。如果要使用OpenVswitch,请看下一章节。该命令会返回完全合格/限定主机名,例如"kvm1.lab.example.org"。如果没有,请编辑 /etc/hosts。确保所有机器的默认CPU保持一致,消除对QEMU变量的依赖。将系统自带的网络桥接模块加入黑名单,确保该模块不会与openvswitch模块冲突。请参阅你所使用发行版的modprobe文档并找到黑名单。确保该模块不会在重启时自动加载或在下一操作步骤之前卸载该桥接模块。CloudStack使用Agent来管理KVM实例。Agent与管理服务器通讯并控制主机上所有的虚拟机。通过主机CPU的特性最大化提升KVM实例的性能;要打开所需端口,请执行以下命令:启用NTP服务以确保时间同步.除了在libvirtd.conf中打开"listen_tcp"以外,我们还必须修改/etc/sysconfig/libvirtd中的参数:Ubuntu: 12.04(.1)取消如下行的注释:VHDVLAN 100 作为hypervisor的管理网络 VLAN 200 作为实例的公共网络 (cloudbr0)VLAN 300 作为实例的专用网络 (cloudbr1)VMDKVMFS假设hypervisor中的网卡(eth0)有3个VLAN标签:同样建立cloudbr1必须将基础桥接配置为trunk模式。现在对三个VLAN桥接进行配置:现在配置3个VLAN接口:在部署CloudStack时,Hypervisor主机不能运行任何虚拟机在NFS存储中,CloudStack管理超配。这种情况下,使用全局配置参数storage.overprovisioning.factor来控制超配的范围。且不受hyperviso类型约束。配置完成之后重启网络,通过重启检查一切是否正常。同一集群中主机必须使用相同版本的Linux系统。XenServerXenServer通过在iSCSI和FC卷上使用集群化的LVM系统来存储VM镜像,并且不支持存储超配。尽管存储本身支持自动精简配置。不过CloudStack仍然支持在有自动精简配置的存储卷上使用存储超配。是支持,通过已有的SR支持,通过Shared Mountpointcustomhost-modelhost-passthroughlibvirt: 0.9.11 或更高版本libvirt: 0.9.4 或更高版本openvswitch: 1.7.1或更高版本如下所示:修改为vSphere