source/installguide/locale/zh_CN/LC_MESSAGES/hypervisor/xenserver.mo

Project-Id-Version: Apache CloudStack Installation RTD
Report-Msgid-Bugs-To:
POT-Creation-Date: 2014-06-30 11:42+0200
PO-Revision-Date: 2014-06-30 10:27+0000
Last-Translator: FULL NAME <EMAIL@ADDRESS>
Language-Team: Chinese (China) (http://www.transifex.com/projects/p/apache-cloudstack-installation-rtd/language/zh_CN/)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Language: zh_CN
Plural-Forms: nplurals=1; plural=0;

(Optional)
(Optional) If you want to enable multipath I/O on a FiberChannel SAN, refer to the documentation provided by the SAN vendor.
**This label is important. CloudStack looks for a network by a name you configure. You must use the same name-label for all hosts in the cloud for the management network.**
**This label is important. CloudStack looks for a network by a name you configure. You must use the same name-label for all hosts in the cloud for the public network.**
/opt/xensource/bin/cloud-clean-vlan.sh
/opt/xensource/bin/make\_migratable.sh
/opt/xensource/bin/setupxenserver.sh
/opt/xensource/sm/NFSSR.py
/usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/cloud-clean-vlan.sh
/usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/make\_migratable.sh
/usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/setupxenserver.sh
/usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/xenserver60/NFSSR.py
1 NIC for private, public, and storage
2 NICs on private, 1 NIC on public, storage uses management network
2 NICs on private, 2 NICs on public, 2 NICs on storage
2 NICs on private, 2 NICs on public, storage uses management network
36 GB of local disk
4 GB of memory
64-bit x86 CPU (more cores results in better performance)
Add one or more server lines in this file with the names of the NTP servers you want to use. For example:
Adding More Hosts to the Cluster
After all hosts are up, run the following on one host in the cluster:
After the upgrade is complete, copy the following files from the management server to this host, in the directory locations shown below:
All NIC bonding is optional.
All XenServers in a cluster must have the same username and password as configured in CloudStack.
All hosts within a cluster must be homogeneous. The CPUs must be of the same type, count, and feature flags.
At least 1 NIC
Back up the database:
Be sure all the hotfixes provided by the hypervisor vendor are applied. Track the release of hypervisor patches through your hypervisor vendor’s support channel, and apply patches as soon as possible after they are released. CloudStack will not track or notify you of required hypervisor patches. It is essential that your hosts are completely up to date with the provided hypervisor patches. The hypervisor vendor is likely to refuse to support any system that is not up to date with patches.
Be sure the hardware is certified compatible with the new version of XenServer.
Bonding can be implemented on a separate, public network. The administrator is responsible for creating a bond for the public network if that network will be bonded and will be separate from the management network.
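One of the entries above ("After the upgrade is complete, copy the following files from the management server to this host...") pairs the Management Server paths with the XenServer destinations also listed above. A minimal sketch of that copy, assuming it is run from the Management Server over scp and that 192.0.2.10 is only a placeholder for the XenServer host address::

    # Copy the post-upgrade scripts from the Management Server to the XenServer host.
    # 192.0.2.10 is a placeholder; substitute your host's address.
    scp /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/setupxenserver.sh    root@192.0.2.10:/opt/xensource/bin/setupxenserver.sh
    scp /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/make_migratable.sh   root@192.0.2.10:/opt/xensource/bin/make_migratable.sh
    scp /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/cloud-clean-vlan.sh  root@192.0.2.10:/opt/xensource/bin/cloud-clean-vlan.sh
    scp /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/xenserver60/NFSSR.py root@192.0.2.10:/opt/xensource/sm/NFSSR.py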
Check to be sure you see the new SCSI disk.
Citrix XenServer Installation for CloudStack
CloudStack configures network traffic of various types to use different NICs or bonds on the XenServer host. You can control this process and provide input to the Management Server through the use of XenServer network name labels. The name labels are placed on physical interfaces or bonds and configured in CloudStack. In some simple cases the name labels are not required.
CloudStack natively supports NFS, iSCSI and local storage. If you are using one of these storage types, there is no need to create the XenServer Storage Repository ("SR").
CloudStack supports the use of a second NIC (or bonded pair of NICs, described in :ref:`nic-bonding-for-xenserver`) for the public network. If bonding is not used, the public network can be on any NIC and can be on different NICs on the hosts in a cluster. For example, the public network can be on eth0 on node A and eth1 on node B. However, the XenServer name-label for the public network must be identical across all hosts. The following examples set the network label to "cloud-public". After the management server is installed and running you must configure it with the name of the chosen network label (e.g. "cloud-public"); this is discussed in `"Management Server Installation" <installation.html#management-server-installation>`_.
CloudStack supports the use of multiple guest networks with the XenServer hypervisor. Each network is assigned a name-label in XenServer. For example, you might have two networks with the labels "cloud-guest" and "cloud-guest2". After the management server is installed and running, you must add the networks and use these labels so that CloudStack is aware of the networks.
Complete the Bonding Setup Across the Cluster
Configure XenServer dom0 Memory
Configure the XenServer dom0 settings to allocate more memory to dom0. This can enable XenServer to handle larger numbers of virtual machines. We recommend 2940 MB of RAM for XenServer dom0. For instructions on how to do this, see `http://support.citrix.com/article/CTX126531 <http://support.citrix.com/article/CTX126531>`_. The article refers to XenServer 5.6, but the same information applies to XenServer 6.0.
Configuring Multiple Guest Networks for XenServer (Optional)
Configuring Public Network with a Dedicated NIC for XenServer (Optional)
Connect FiberChannel cable to all hosts in the cluster and to the FiberChannel storage host.
Copy the script from the Management Server in /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/cloud-setup-bonding.sh to the master host and ensure it is executable.
Copy this Management Server file
Create a new network for the bond. For example, a new network with name "cloud-private".
Create a new network for the bond. For example, a new network with name "cloud-public".
Create the FiberChannel SR. In name-label, use the unique ID you just generated.
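The "Create the FiberChannel SR" step above, together with the unique-ID and name-description entries elsewhere in this catalog, can be sketched as follows. This assumes an SR of type lvmohba, and <scsiID>, <host-uuid> and <sr-uuid> are placeholders for your environment::

    # On the storage server: generate a unique ID to use as the SR name-label.
    uuidgen
    # Example output (yours will differ): e6849e96-86c3-4f2c-8fcc-350cc711be3d

    # On the XenServer master: create the FiberChannel SR, using that ID as the name-label.
    xe sr-create type=lvmohba shared=true content-type=user \
        host-uuid=<host-uuid> \
        device-config:SCSIid=<scsiID> \
        name-label=e6849e96-86c3-4f2c-8fcc-350cc711be3d

    # Optionally give the SR a human-readable description.
    xe sr-param-set uuid=<sr-uuid> name-description="Fiber Channel storage repository"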
Creating a Private Bond on the First Host in the Cluster
Creating a Public Bond on the First Host in the Cluster
Disconnect the XenServer cluster from CloudStack.
Download the CSP software onto the XenServer host from one of the following links:
Edit the NTP configuration file to point to your NTP server.
Extract the file:
Find the physical NICs that you want to bond together.
Follow this procedure on each new host before adding the host to CloudStack:
For XenServer 5.6 SP2:
For XenServer 6.0.2:
For XenServer 6.0:
For the separate storage network to work correctly, it must be the only interface that can ping the primary storage device's IP address. For example, if eth0 is the management network NIC, ping -I eth0 <primary storage device IP> must fail. In all deployments, secondary storage devices must be pingable from the management network NIC or bond. If a secondary storage device has been placed on the storage network, it must also be pingable via the storage network NIC or bond on the hosts as well.
From `https://www.citrix.com/English/ss/downloads/ <https://www.citrix.com/English/ss/downloads/>`_, download the appropriate version of XenServer for your CloudStack version (see `"System Requirements for XenServer Hosts" <#system-requirements-for-xenserver-hosts>`_). Install it using the Citrix XenServer Installation Guide.
Give the storage network a different name-label than what will be given for other networks.
Hardware virtualization support required
Here is an example to set up eth5 to access a storage network on 172.16.0.0/24.
If no bonding is done, the administrator must set up and name-label the separate storage network on all hosts (masters and slaves).
If the XenServer host is part of a zone that uses basic networking, disable Open vSwitch (OVS):
If you add a host to this XenServer pool, you need to migrate all VMs on this host to other hosts, and eject this host from the XenServer pool.
If you are using a single dedicated NIC to provide public network access, follow this procedure on each new host that is added to CloudStack before adding the host.
If you are using two NICs bonded together to create a public network, see :ref:`nic-bonding-for-xenserver`.
If you encounter difficulty, contact the support team for the SAN provided by your vendor. If they are not able to solve your issue, see Contacting Support.
If you plan on using NIC bonding, the NICs on all hosts in the cluster must be cabled exactly the same. For example, if eth0 is in the private bond on one host in a cluster, then eth0 must be in the private bond on all hosts in the cluster.
If you upgraded from XenServer 5.6 GA to XenServer 5.6 SP2, change any VMs that have the OS type CentOS 5.5 (32-bit), Oracle Enterprise Linux 5.5 (32-bit), or Red Hat Enterprise Linux 5.5 (32-bit) to Other Linux (32-bit). Change any VMs that have the 64-bit versions of these same OS types to Other Linux (64-bit).
If you upgraded from XenServer 5.6 SP2 to XenServer 6.0.2, change any VMs that have the OS type CentOS 5.6 (32-bit), CentOS 5.7 (32-bit), Oracle Enterprise Linux 5.6 (32-bit), Oracle Enterprise Linux 5.7 (32-bit), Red Hat Enterprise Linux 5.6 (32-bit), or Red Hat Enterprise Linux 5.7 (32-bit) to Other Linux (32-bit). Change any VMs that have the 64-bit versions of these same OS types to Other Linux (64-bit).
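The "Here is an example to set up eth5 to access a storage network on 172.16.0.0/24" entry above has no commands attached in this catalog. A sketch of one way to do it with the xe PIF commands, where <hostname>, <eth5-pif-uuid>, <storage-network-uuid>, the address 172.16.0.55 and the label cloud-storage are all placeholders::

    # Find the PIF UUID of eth5 on this host.
    xe pif-list host-name-label=<hostname> device=eth5 params=uuid

    # Give that PIF a static address on the storage subnet.
    xe pif-reconfigure-ip uuid=<eth5-pif-uuid> mode=static \
        IP=172.16.0.55 netmask=255.255.255.0

    # Give the storage network its own name-label, distinct from the other networks.
    xe network-param-set uuid=<storage-network-uuid> name-label=cloud-storage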
If you upgraded from XenServer 5.6 to XenServer 6.0.2, do all of the above.
If you want to use the Citrix XenServer hypervisor to run guest virtual machines, install XenServer 6.0 or XenServer 6.0.2 on the host(s) in your cloud. For an initial installation, follow the steps below. If you have previously installed XenServer and want to upgrade to another version, see :ref:`upgrading-xenserver-version`.
If, however, you would like to use storage connected via some other technology, such as FiberChannel, you must set up the SR yourself. To do so, perform the following steps. If you have your hosts in a XenServer pool, perform the steps on the master node. If you are working with a single XenServer which is not part of a cluster, perform the steps on that XenServer.
Install CloudStack XenServer Support Package (CSP)
Install NTP.
Live migrate all VMs on this host to other hosts. See the instructions for live migration in the Administrator's Guide.
Log in to one of the hosts in the cluster, and run this command to clean up the VLAN:
Log in to the CloudStack UI as root.
Make note of the values you will need when you add this storage to CloudStack later (see `"Add Primary Storage" <configuration.html#add-primary-storage>`_). In the Add Primary Storage dialog, in Protocol, you will choose PreSetup. In SR Name-Label, you will enter the name-label you set earlier (in this example, e6849e96-86c3-4f2c-8fcc-350cc711be3d).
Make note of the values you will need when you add this storage to CloudStack later (see `"Add Primary Storage" <configuration.html#add-primary-storage>`_). In the Add Primary Storage dialog, in Protocol, you will choose PreSetup. In SR Name-Label, you will enter the same name used to create the SR.
Make sure NTP will start again upon reboot.
Management Network Bonding
Must support HVM (Intel-VT or AMD-V enabled in BIOS)
Must support HVM (Intel-VT or AMD-V enabled)
NIC Bonding for XenServer (Optional)
Navigate to the XenServer cluster, and click Actions – Manage.
Navigate to the XenServer cluster, and click Actions – Unmanage.
Note that you can download the most recent release of XenServer without having a Citrix account. If you wish to download older versions, you will need to create an account and look through the download archives.
Now the bonds are set up and configured properly across the cluster.
Now you have a bonded pair that can be recognized by CloudStack as the management network.
Now you have a bonded pair that can be recognized by CloudStack as the public network.
Older Versions of XenServer:
On the storage server, run this command to get a unique ID for the new SR.
Once XenServer has been installed, you may need to do some additional network configuration. At this point in the installation, you should have a plan for what NICs the host will have and what traffic each NIC will carry. The NICs should be cabled as necessary to implement your plan.
Physical Networking Setup for XenServer
Plug in the storage repositories (physical block devices) to the XenServer host:
Primary Storage Setup for XenServer
Public Network Bonding
Reboot the host.
Reconnect the XenServer cluster to CloudStack.
Repeat step 2 on every host.
Repeat step 4 on every host.
Repeat these steps for each additional guest network, using a different name-label and uuid each time.
Repeat these steps to upgrade every host in the cluster to the same version of XenServer.
Rescan the SCSI bus. Either use the following command or use XenCenter to perform an HBA rescan.
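A minimal sketch of the NTP entries in this catalog ("Install NTP.", "Edit the NTP configuration file to point to your NTP server.", "Make sure NTP will start again upon reboot."), assuming a CentOS-based dom0 with the ntp package available and using pool.ntp.org servers purely as placeholders::

    # Install the NTP client in dom0.
    yum install ntp

    # Add one or more server lines to the NTP configuration file.
    echo "server 0.pool.ntp.org" >> /etc/ntp.conf
    echo "server 1.pool.ntp.org" >> /etc/ntp.conf

    # Restart the client now and make sure it starts again after a reboot.
    service ntpd restart
    chkconfig ntpd on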
Restart the Management Server and Usage Server. You only need to do this once for all clusters.
Restart the NTP client.
Restart the host machine when prompted.
Run the following command on one host in the XenServer cluster to clean up the host tags:
Run the following command, substituting your own name-label and uuid values.
Run the following command.
Run the following script:
Run the script:
Run xe network-list and find one of the guest networks. Once you find the network make note of its UUID. Call this <UUID-Guest>.
Run xe network-list and find the public network. This is usually attached to the NIC that is public. Once you find the network make note of its UUID. Call this <UUID-Public>.
Separate Storage Network for XenServer (Optional)
Slave hosts in a cluster must be cabled exactly the same as the master. For example, if eth0 is in the private bond on the master, it must be in the management network for added slave hosts.
Statically allocated IP Address
Still logged in to the host, run the upgrade preparation script:
System Requirements for XenServer Hosts
The IP address assigned for the management network interface must be static. It can be set on the host itself or obtained via static DHCP.
The XenServer host is now ready to be added to CloudStack.
The administrator must bond the management network NICs prior to adding the host to CloudStack.
The host must be certified as compatible with one of the following. See the Citrix Hardware Compatibility Guide: `http://hcl.xensource.com <http://hcl.xensource.com>`_
The host must be set to use NTP. All hosts in a pod must have the same time.
The lack of up-to-date hotfixes can lead to data corruption and lost VMs.
The output should look like this, although the specific ID will be different:
The output should look like this, although the specific file name will be different (scsi-<scsiID>):
These commands show the eth0 and eth1 NICs and their UUIDs. Substitute the ethX devices of your choice. Call the UUIDs returned by the above command slave1-UUID and slave2-UUID.
These commands show the eth2 and eth3 NICs and their UUIDs. Substitute the ethX devices of your choice. Call the UUIDs returned by the above command slave1-UUID and slave2-UUID.
These steps should be run on only the first host in a cluster. This example creates the cloud-public network with two physical NICs (eth2 and eth3) bonded into it.
This command returns a unique ID for the SR, like the following example (your ID will be different):
This section tells how to upgrade XenServer software on CloudStack hosts. The actual upgrade is described in XenServer documentation, but there are some additional steps you must perform before and after the upgrade.
Time Synchronization
To create a human-readable description for the SR, use the following command. In uuid, use the SR ID returned by the previous command. In name-description, set whatever friendly text you prefer.
To enable security groups, elastic load balancing, and elastic IP on XenServer, download and install the CloudStack XenServer Support Package (CSP). After installing XenServer, perform the following additional steps on each XenServer host.
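The public-bond entries above ("These commands show the eth2 and eth3 NICs and their UUIDs...", "Create a new network for the bond. For example, a new network with name "cloud-public".") do not include their commands in this catalog. A sketch of one way to do it on the first host in the cluster, with <hostname>, <cloud-public-network-uuid>, <slave1-UUID> and <slave2-UUID> as placeholders::

    # Find the PIF UUIDs of the two NICs to bond (eth2 and eth3 in this example).
    xe pif-list host-name-label=<hostname> device=eth2 params=uuid
    xe pif-list host-name-label=<hostname> device=eth3 params=uuid

    # Create the network that will carry the bonded public traffic.
    xe network-create name-label=cloud-public

    # Bond the two PIFs into that network.
    xe bond-create network-uuid=<cloud-public-network-uuid> \
        pif-uuids=<slave1-UUID>,<slave2-UUID>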
To solve this issue, run the following:
To this location on the XenServer host
To upgrade XenServer:
Troubleshooting: If you see the error "can't eject CD," log in to the VM and umount the CD, then run the script again.
Troubleshooting: If you see the following error message, you can safely ignore it.
Troubleshooting: You might see the following error when you migrate a VM:
Upgrade the XenServer software on all hosts in the cluster. Upgrade the master first.
Upgrade the database. On the Management Server node:
Upgrade to the newer version of XenServer. Use the steps in XenServer documentation.
Upgrading XenServer Versions
Use the following steps to create a bond in XenServer. These steps should be run on only the first host in a cluster. This example creates the cloud-private network with two physical NICs (eth0 and eth1) bonded into it.
Username and Password
Watch the cluster status until it shows Unmanaged.
Watch the status to see that all the hosts come up.
When configuring networks in a XenServer environment, network traffic labels must be properly configured to ensure that the virtual interfaces created by CloudStack are bound to the correct physical device. The name-label of the XenServer network must match the XenServer traffic label specified while creating the CloudStack network. This is set by running the following command:
When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.
When setting up the storage repository on a Citrix XenServer, you can enable multipath I/O, which uses redundant physical components to provide greater reliability in the connection between the server and the SAN. To enable multipathing, use a SAN solution that is supported for Citrix servers and follow the procedures in Citrix documentation. The following links provide a starting point:
When you deploy CloudStack, the hypervisor host must not have any VMs already running.
With all hosts added to the pool, run the cloud-setup-bond script. This script will complete the configuration and setup of the bonds across all hosts in the cluster.
With the bonds (if any) established on the master, you should add additional, slave hosts. Run the following command for all additional hosts to be added to the cluster. This will cause the host to join the master in a single XenServer pool.
XenServer 5.6 SP2
XenServer 6.0
XenServer 6.0.2
XenServer 6.1.0
XenServer 6.2.0
XenServer Installation Steps
XenServer expects that all nodes in a cluster will have the same network cabling and the same bonds implemented. In an installation the master will be the first host that was added to the cluster, and the slave hosts will be all subsequent hosts added to the cluster. The bonds present on the master set the expectation for hosts added to the cluster later. The procedures to set up bonds on the master and slaves are different, and are described below. There are several important implications of this:
XenServer supports Source Level Balancing (SLB) NIC bonding. Two NICs can be bonded together to carry public, private, and guest traffic, or some combination of these. Separate storage networks are also possible. Here are some example supported configurations:
You can also ask your SAN vendor for advice about setting up your Citrix repository for multipathing.
You can optionally set up a separate storage network. This should be done first on the host, before implementing the bonding steps below. This can be done using one or two available NICs. With two NICs, bonding may be done as above. It is the administrator's responsibility to set up a separate storage network.
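The "This is set by running the following command:" entry above refers to setting the XenServer network name-label so it matches the CloudStack traffic label. A sketch of the usual xe form, where <network-uuid> is a placeholder and cloud-public stands in for your own label::

    # List the networks and note the UUID of the one carrying this traffic type.
    xe network-list

    # Set its name-label to the traffic label configured in CloudStack.
    xe network-param-set uuid=<network-uuid> name-label=cloud-public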
You can set up two separate storage networks as well. For example, if you intend to implement iSCSI multipath, dedicate two non-bonded NICs to multipath. Each of the two networks needs a unique name-label.
You might need to change the OS type settings for VMs running on the upgraded hosts.
You must re-install Citrix XenServer if you are going to re-use a host from a previous install.
You must set bonds on the first host added to a cluster. Then you must use xe commands as below to establish the same bonds in the second and subsequent hosts added to a cluster.
`http://download.cloud.com/releases/2.2.0/xenserver-cloud-supp.tgz <http://download.cloud.com/releases/2.2.0/xenserver-cloud-supp.tgz>`_
`http://download.cloud.com/releases/3.0.1/XS-6.0.2/xenserver-cloud-supp.tgz <http://download.cloud.com/releases/3.0.1/XS-6.0.2/xenserver-cloud-supp.tgz>`_
`http://download.cloud.com/releases/3.0/xenserver-cloud-supp.tgz <http://download.cloud.com/releases/3.0/xenserver-cloud-supp.tgz>`_
`http://support.citrix.com/article/CTX118791 <http://support.citrix.com/article/CTX118791>`_
`http://support.citrix.com/article/CTX125403 <http://support.citrix.com/article/CTX125403>`_
iSCSI Multipath Setup for XenServer (Optional)
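For the CSP entries in this catalog ("Download the CSP software onto the XenServer host from one of the following links:", "Extract the file:", "Run the following script:", "If the XenServer host is part of a zone that uses basic networking, disable Open vSwitch (OVS):"), a sketch for a XenServer 6.0 host. It assumes the 3.0 package from the links above and that the archive contains xenserver-cloud-supp.iso, so verify the extracted filename before installing::

    # Download and extract the CSP package (pick the link above that matches your version).
    wget http://download.cloud.com/releases/3.0/xenserver-cloud-supp.tgz
    tar xf xenserver-cloud-supp.tgz

    # Install the supplemental pack.
    xe-install-supplemental-pack xenserver-cloud-supp.iso

    # Only for zones that use basic networking: switch to the bridge network backend,
    # then reboot the host if prompted.
    xe-switch-network-backend bridge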