source/installguide/locale/zh_CN/LC_MESSAGES/hypervisor/vsphere.mo (31 lines of code) (raw):

(Applies only to VMware vSphere version 4.x)
**dvSwitch0**: If the type of virtual switch is VMware vNetwork Distributed virtual switch
**epp0**: If the type of virtual switch is Cisco Nexus 1000v Distributed virtual switch
**nexusdvs**: Represents Cisco Nexus 1000v distributed virtual switch.
**vSwitch0**: If the type of virtual switch is VMware vNetwork Standard virtual switch
**vmware.use.dvswitch**: Set to true to enable any kind (VMware DVS and Cisco Nexus 1000v) of distributed virtual switch in a CloudStack deployment. If set to false, the only virtual switch that can be used in that CloudStack deployment is the Standard virtual switch.
**vmware.use.nexus.vswitch**: This parameter is ignored if vmware.use.dvswitch is set to false. Set to true to enable Cisco Nexus 1000v distributed virtual switch in a CloudStack deployment.
**vmwaredvs**: Represents VMware vNetwork distributed virtual switch
**vmwaresvs**: Represents VMware vNetwork Standard virtual switch
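As a hedged illustration, assuming a CloudMonkey session against the Management Server, the two global parameters above could also be set through the updateConfiguration API instead of the Global Settings page (a Management Server restart is still required afterwards, as noted later in this section)::

    update configuration name=vmware.use.dvswitch value=true
    update configuration name=vmware.use.nexus.vswitch value=true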
36 GB of local disk
4 GB of memory
443
64-bit x86 CPU (more cores result in better performance)
A Cisco Nexus 1000v virtual switch is installed to serve the datacenter that contains the vCenter cluster. This ensures that CloudStack doesn't have to deal with dynamic migration of virtual adapters or networks across other existing virtual switches. See `Cisco Nexus 1000V Installation and Upgrade Guide <http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_5_1/install_upgrade/vsm_vem/guide/n1000v_installupgrade.html>`_ for guidelines on how to install the Nexus 1000v VSM and VEM modules.
A cluster of servers (ESXi 4.1 or later) is configured in the vCenter.
A contiguous range of non-routable VLANs. One VLAN will be assigned for each customer.
A default virtual switch vSwitch0 is created. CloudStack requires all ESXi hosts in the cloud to use the same set of virtual switch names. If you change the default virtual switch name, you will need to configure one or more CloudStack configuration variables as well.
About Cisco Nexus 1000v Distributed Virtual Switch
About VMware Distributed Virtual Switch
Add Hosts or Configure Clusters (vSphere)
Add iSCSI target
Adding VLAN Ranges
Additional switches of any type can be added for each cluster in the same zone. When adding clusters with a different switch type, traffic labels are overridden at the cluster level.
After the zone is created, if you want to create an additional cluster along with Nexus 1000v virtual switch in the existing zone, use the Add Cluster option. For information on creating a cluster, see `"Add Cluster: vSphere" <configuration.html#add-cluster-vsphere>`_.
All ESXi hosts should enable CPU hardware virtualization support in BIOS. Please note hardware virtualization support is not enabled by default on most servers.
All hosts must be 64-bit and must support HVM (Intel-VT or AMD-V enabled). All hosts within a cluster must be homogeneous. That means the CPUs must be of the same type, count, and feature flags.
All information given in :ref:`nexus-vswift-preconf`
All resources used for CloudStack must be used for CloudStack only. CloudStack cannot share an instance of ESXi or storage with other management consoles. Do not share the same storage volumes that will be used by CloudStack with a different set of ESXi servers that are not managed by CloudStack.
All the required VLANs must be trunked into all network switches that are connected to the ESXi hypervisor hosts. These would include the VLANs for Management, Storage, vMotion, and guest VLANs. The guest VLAN (used in Advanced Networking; see Network Setup) is a contiguous range of VLANs that will be managed by CloudStack.
Alternatively, at the cluster level, you can create an additional cluster with VDS enabled in the existing zone. Use the Add Cluster option. For more information, see `"Add Cluster: vSphere" <configuration.html#add-cluster-vsphere>`_.
Alternatively, verify the host state is properly synchronized and updated in the CloudStack database.
An Ethernet port profile configured on the Nexus 1000v virtual switch should not include, in its set of system VLANs, any of the VLANs configured or intended to be configured for use by VMs or VM resources in the CloudStack environment.
Apply All Necessary Hotfixes. The lack of up-to-date hotfixes can lead to data corruption and lost VMs.
Apply the patch on the ESXi host.
Applying Hotfixes to a VMware vSphere Host
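A minimal sketch of the maintenance-mode steps in that hotfix procedure, assuming direct access to the ESXi shell (vim-cmd is standard on ESXi, but verify the exact invocation on your build)::

    # Move the host into maintenance mode before applying the hotfix
    vim-cmd hostsvc/maintenance_mode_enter
    # ... apply the patch, restart the host if prompted ...
    # Cancel maintenance mode once the host is back up
    vim-cmd hostsvc/maintenance_mode_exit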
Assign ESXi host's physical NIC adapters, which correspond to each physical network, to the port profiles. In each ESXi host that is part of the vCenter cluster, observe the physical networks assigned to each port profile and note down the names of the port profile for future use. This mapping information helps you when configuring physical networks during the zone configuration on CloudStack. These Ethernet port profile names are later specified as VMware Traffic Labels for different traffic types when configuring physical networks during the zone configuration. For more information on configuring physical networks, see `"Configuring a vSphere Cluster with Nexus 1000v Virtual Switch" <#configuring-a-vsphere-cluster-with-nexus-1000v-virtual-switch>`_.
Assigning Physical NIC Adapters
At least 1 NIC
Be sure all the hotfixes provided by the hypervisor vendor are applied. Track the release of hypervisor patches through your hypervisor vendor's support channel, and apply patches as soon as possible after they are released. CloudStack will not track or notify you of required hypervisor patches. It is essential that your hosts are completely up to date with the provided hypervisor patches. The hypervisor vendor is likely to refuse to support any system that is not up to date with patches.
Before you run the vlan command, ensure that the configuration mode is enabled in Nexus 1000v virtual switch.
By default a virtual switch on ESXi hosts is created with 56 ports. We recommend setting it to 4088, the maximum number of ports allowed. To do that, click the "Properties..." link for the virtual switch (note this is not the Properties link for Networking).
Cancel the maintenance mode on the host.
Check Enabled to enable the initiator.
Choose the Add Datastore... command.
Click OK to save.
Click Yes in the confirmation dialog box.
Click the Configure... button.
CloudStack allows you to use vCenter to configure three separate networks per ESXi host. These networks are identified by the name of the vSwitch they are connected to. The allowed networks for configuration are public (for traffic to/from the public internet), guest (for guest-guest traffic), and private (for management and usually storage traffic). You can use the default virtual switch for all three, or create one or two other vSwitches for those traffic types.
CloudStack expects that the Management Network of the ESXi host is configured on the standard vSwitch and searches for it in the standard vSwitch. Therefore, ensure that you do not migrate the management network to the Nexus 1000v virtual switch during configuration.
CloudStack requires ESXi. ESX is not supported.
CloudStack requires VMware vSphere 4.1 or 5.0. VMware vSphere 4.0 is not supported.
CloudStack supports Cisco Nexus 1000v dvSwitch (Distributed Virtual Switch) for virtual network configuration in a VMware vSphere environment. This section helps you configure a vSphere cluster with Nexus 1000v virtual switch in a VMware vCenter environment. For information on creating a vSphere cluster, see `"VMware vSphere Installation and Configuration" <#vmware-vsphere-installation-and-configuration>`_
CloudStack supports VMware vNetwork Distributed Switch (VDS) for virtual network configuration in a VMware vSphere environment. This section helps you configure VMware VDS in a CloudStack deployment. Each vCenter server instance can support up to 128 VDS instances and each VDS instance can manage up to 500 VMware hosts.
CloudStack supports orchestration of virtual networks in a deployment with a mix of Virtual Distributed Switch, Standard Virtual Switch and Nexus 1000v Virtual Switch.
Cluster Name
Configure NIC Bonding for vSphere
Configure Virtual Switch
Configure clusters in vCenter and add hosts to them, or add hosts without clusters to vCenter
Configure host physical networking, virtual switch, vCenter Management Network, and extended port range
Configure vCenter Management Network
Configuring Distributed Virtual Switch in CloudStack
Configuring Nexus 1000v Virtual Switch in CloudStack
Configuring a VMware Datacenter with VMware Distributed Virtual Switch
Configuring a vSphere Cluster with Nexus 1000v Virtual Switch
Create an iSCSI datastore
Creating a Port Profile
Description
Determine the public VLAN, System VLAN, and Guest VLANs to be used by CloudStack. Ensure that you add them to the port profile database. Corresponding to each physical network, add the VLAN range to port profiles. In the VSM command prompt, run the switchport trunk allowed vlan <range> command to add the VLAN ranges to the port profile.
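A hedged sketch of that VSM step, assuming an existing Ethernet port profile named epp0 (the profile name and VLAN range are illustrative; adapt them to the ranges you determined above)::

    n1000v# configure terminal
    n1000v(config)# port-profile type ethernet epp0
    n1000v(config-port-prof)# switchport trunk allowed vlan 1350-1750

On NX-OS, the add keyword (switchport trunk allowed vlan add <range>) appends to an existing allowed list rather than replacing it.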
For example, "clouddcVM".Ethernet port profile namesExtend Port Range for CloudStack Console ProxyFollow the wizard to create a iSCSI datastore.Following are the global configuration parameters:Following installation, perform the following configuration, which are described in the next few sections:For a smoother configuration of Nexus 1000v switch, gather the following information before you start:For a smoother configuration of VMware VDS, note down the VDS name you have added in the datacenter before you start:For a smoother installation, gather the following information before you start:For example:For information on creating a port profile, see `Cisco Nexus 1000V Port Profile Configuration Guide <http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_a/port_profile/configuration/guide/n1000v_port_profile.html>`_.For more information, see `"vCenter Server and the vSphere Client Hardware Requirements" <http://pubs.vmware.com/vsp40/wwhelp/wwhimpl/js/html/wwhelp.htm#href=install/c_vc_hw.html>`_.For more information, see `Cisco Nexus 1000V Getting Started Guide <http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_b/getting_started/configuration/guide/n1000v_gsg.pdf>`_.Guest Traffic vSwitch NameGuest Traffic vSwitch TypeHardware requirements:Hardware virtualization support requiredIP Address Range in the ESXi VLAN. One address per Virtual Router is used from this range.If nothing specified (left empty), zone-level default virtual switchwould be defaulted, based on the value of global parameter you specify.If the ESXi hosts have multiple VMKernel ports, and ESXi is not using the default value "Management Network" as the management network name, you must follow these guidelines to configure the management network port group so that CloudStack can find it:If there is only one host in that cluster, shutdown all the VMs and move the host into maintenance mode.If you are using NFS, skip this section.If you haven't already, you'll need to download and purchase vSphere from the VMware Website (`https://www.vmware.com/tryvmware/index.php?p=vmware-vsphere&lp=1 <https://www.vmware.com/tryvmware/index.php?p=vmware-vsphere&lp=1>`_) and install it by following the VMware vSphere Installation Guide.If you want the VLAN 200 to be used on the switch, run the following command:If you want the VLAN range 1350-1750 to be used on the switch, run the following command:If you want to separate traffic in this way you should first create and configure vSwitches in vCenter according to the vCenter instructions. Take note of the vSwitch names you have used for each traffic type. You will configure CloudStack to use these vSwitches.If you want to use the VMware vSphere hypervisor to run guest virtual machines, install vSphere on the host(s) in your cloud.In both these cases, you must specify the following parameters to configure Nexus virtual switch:In both these cases, you must specify the following parameters to configure VDS:In the CloudStack UI, go to Configuration - Global Settings and set vmware.management.portgroup to the management network label from the ESXi hosts.In the Details page, click Delete Nexus dvSwitch icon. 
If you want to separate traffic in this way you should first create and configure vSwitches in vCenter according to the vCenter instructions. Take note of the vSwitch names you have used for each traffic type. You will configure CloudStack to use these vSwitches.
If you want to use the VMware vSphere hypervisor to run guest virtual machines, install vSphere on the host(s) in your cloud.
In both these cases, you must specify the following parameters to configure the Nexus virtual switch:
In both these cases, you must specify the following parameters to configure VDS:
In the CloudStack UI, go to Configuration - Global Settings and set vmware.management.portgroup to the management network label from the ESXi hosts.
In the Details page, click the Delete Nexus dvSwitch icon. |DeleteButton.png: button to delete dvSwitch|
In the Infrastructure page, click View all under Clusters.
In the dvSwitch tab, click the name of the virtual switch.
In the host configuration tab, click the "Hardware/Networking" link to bring up the networking configuration page as above.
In the left navigation bar, select Infrastructure.
In the vCenter datacenter that is served by the Nexus virtual switch, ensure that you delete all the hosts in the corresponding cluster.
In the vSwitch properties dialog box, you may see a vCenter management network. This same network will also be used as the CloudStack management network. CloudStack requires the vCenter management network to be configured properly. Select the management network item in the dialog, then click Edit.
In this dialog, you can change the number of switch ports. After you've done that, ESXi hosts are required to reboot in order for the setting to take effect.
In this example, the allowed VLANs added are 1, 140-147, and 196-203
In vCenter, go to Hosts and Clusters/Configuration, and click the Storage Adapters link. You will see:
In the vSwitch properties dialog, select the vSwitch and click Edit. You should see the following dialog:
Increasing Ports
Information listed in :ref:`networking-checklist-for-vmware`
Information listed in :ref:`vcenter-checklist`
Log in to the CloudStack UI as root.
Log in with Admin permissions to the CloudStack administrator UI.
Make sure the following values are set:
Management Server VLAN
The Management and Storage networks do not support VDS. Therefore, use a Standard Switch for these networks.
Management traffic enabled.
Memory - 3GB RAM. RAM requirements may be higher if your database runs on the same machine.
Microsoft SQL Server 2005 Express disk requirements. The bundled database requires up to 2GB free disk space to decompress the installation archive.
Move each of the ESXi hosts in the cluster to maintenance mode.
Multipath storage
Multipathing for vSphere (Optional)
NIC bonding
NIC bonding on vSphere hosts may be done according to the vSphere installation guide.
Name of the cluster.
Name of the datacenter.
Name of the virtual / distributed virtual switch at vCenter.
Name of the virtual switch to be used for guest traffic.
Name of the virtual switch to be used for public traffic.
Navigate to the VMware cluster, click Actions, and select Manage.
Navigate to the VMware cluster, click Actions, and select Unmanage.
Network Configuration Checklist
Networking - 1Gbit or 10Gbit.
Networking Checklist for VMware
Nexus 1000v VSM Credentials
Nexus 1000v VSM IP address
Nexus 1000v Virtual Switch Preconfiguration
The Nexus 1000v switch uses vEthernet port profiles to simplify network provisioning for virtual machines. There are two types of port profiles: Ethernet port profile and vEthernet port profile. The Ethernet port profile is applied to the physical uplink ports - the NIC ports of the physical NIC adapter on an ESXi server. The vEthernet port profile is associated with the virtual NIC (vNIC) that is plumbed on a guest VM on the ESXi server. The port profiles help the network administrators define network policies which can be reused for new virtual machines. The Ethernet port profiles are created on the VSM and are represented as port groups on the vCenter server.
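A minimal VSM sketch of creating such an Ethernet port profile, assuming the illustrative name epp0 and reusing the example allowed VLANs 1, 140-147, and 196-203 quoted above; see the Cisco port profile guide referenced in this section for the authoritative procedure::

    n1000v(config)# port-profile type ethernet epp0
    n1000v(config-port-prof)# vmware port-group
    n1000v(config-port-prof)# switchport mode trunk
    n1000v(config-port-prof)# switchport trunk allowed vlan 1,140-147,196-203
    n1000v(config-port-prof)# no shutdown
    n1000v(config-port-prof)# state enabled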
Nexus dvSwitch IP Address
Nexus dvSwitch Password
Nexus dvSwitch Username
Nexus vSwitch Requirements
Notes
Optional
Other requirements:
Override Guest Traffic
Override Public Traffic
Parameters
Parameters Description
Password for the above user.
Perform the following on each of the ESXi hosts in the cluster:
Physical Host Networking
Port 443 is configured by default; however, you can change the port if needed.
Possible valid values are vmwaredvs, vmwaresvs, nexusdvs.
Preparation Checklist
Preparation Checklist for VMware
Prepare storage for iSCSI
Prerequisites and Guidelines
Processor - 2 CPUs 2.0GHz or higher Intel or AMD x86 processors. Processor requirements may be higher if the database runs on the same machine.
Public Traffic vSwitch Name
Public Traffic vSwitch Type
Public VLAN
Public VLAN Gateway
Public VLAN IP Address Range
Public VLAN Netmask
Put all target ESXi hypervisors in a cluster in a separate Datacenter in vCenter.
Range of Public IP Addresses available for CloudStack use. These addresses will be used for the virtual router on CloudStack to route private traffic to external networks.
Reconnect the cluster to CloudStack:
Refer to the Cisco Nexus 1000V Command Reference of the specific product version.
Removing Nexus Virtual Switch
Repeat these steps for all ESXi hosts in the cluster.
Required
Restart the host if prompted.
Right-click on the datacenter node.
Secure HTTP Port Number
See `"Log In to the UI" <http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/ui.html#log-in-to-the-ui>`_.
Select Home/Inventory/Datastores.
Select the iSCSI software adapter and click Properties.
Select the cluster where you want to remove the virtual switch.
Separating Traffic
Software requirements:
Statically allocated IP Address
Storage Preparation for vSphere (iSCSI only)
Storage multipathing on vSphere nodes may be done according to the vSphere installation guide.
System Requirements for vSphere Hosts
The Cisco Nexus 1000V virtual switch is a software-based virtual machine access switch for VMware vSphere environments. It can span multiple hosts running VMware ESXi 4.0 and later. A Nexus virtual switch consists of two components: the Virtual Supervisor Module (VSM) and the Virtual Ethernet Module (VEM). The VSM is a virtual appliance that acts as the switch's supervisor. It controls multiple VEMs as a single network device. The VSM is installed independently of the VEM and is deployed in redundancy mode as pairs or as a standalone appliance. The VEM is installed on each VMware ESXi server to provide packet-forwarding capability. It provides each virtual machine with dedicated switch ports. This VSM-VEM architecture is analogous to a physical Cisco switch's supervisor (standalone or configured in high-availability mode) and multiple linecards architecture.
The CloudStack management network must not be configured as a separate virtual network. The CloudStack management network is the same as the vCenter management network, and will inherit its configuration. See :ref:`configure-vcenter-management-network`.
The Ethernet port profile created for a Basic zone configuration does not trunk the guest VLANs because the guest VMs do not get their own VLANs provisioned on their network interfaces in a Basic zone.
The Ethernet port profile created to represent the physical network or networks used by an Advanced zone configuration trunks all the VLANs, including guest VLANs, the VLANs that serve the native VLAN, and the packet/control/data/management VLANs of the VSM.
The IP address of the VSM component of the Nexus 1000v virtual switch.
The IP address of the vCenter.
The Nexus 1000v VSM is not deployed on a vSphere host that is managed by CloudStack.
The Public Traffic vSwitch Type field when you add a VMware VDS-enabled cluster.
The VLANs used for the control, packet, and management port groups can be the same.
The admin name to connect to the VSM appliance.
The cluster that will be managed by CloudStack should not contain any VMs. Do not run the management server, vCenter or any other VMs on the cluster that is designated for CloudStack use. Create a separate cluster for use of CloudStack and make sure that there are no VMs in this cluster.
The corresponding password for the admin user specified above.
The default value depends on the type of virtual switch:
The following information specified in the Nexus Configure Networking screen is displayed in the Details tab of the Nexus dvSwitch in the CloudStack UI:
The host must be certified as compatible with vSphere. See the VMware Hardware Compatibility Guide at `http://www.vmware.com/resources/compatibility/search.php <http://www.vmware.com/resources/compatibility/search.php>`_.
The password for the vCenter user specified above. The password for this vCenter user is required when you configure the switch in CloudStack.
The possible values for traffic labels are:
The switch name in the Edit traffic label dialog while configuring public and guest traffic during zone creation.
The switch name in the traffic label while updating the switch type in a zone.
The three fields to fill in are:
The vCenter user with administrator-level privileges. The vCenter User ID is required when you configure the virtual switch in CloudStack.
This field is used only for public traffic as of now. In the case of guest traffic this field is ignored and can be left empty. By default an empty string is assumed, which translates to an untagged VLAN for that specific traffic type.
This option is displayed only if you enable the Override Guest Traffic option. Select VMware vNetwork Distributed Virtual Switch. If the vmware.use.dvswitch global parameter is true, the default option will be VMware vNetwork Distributed Virtual Switch.
This option is displayed only if you enable the Override Public Traffic option. Select VMware vNetwork Distributed Virtual Switch. If the vmware.use.dvswitch global parameter is true, the default option will be VMware vNetwork Distributed Virtual Switch.
This procedure should be done on one host in the cluster. It is not necessary to do this on all hosts.
This section discusses prerequisites and guidelines for using Nexus virtual switch in CloudStack. Before configuring Nexus virtual switch, ensure that your system meets the following requirements:
This user must have admin privileges.
To make a CloudStack deployment Nexus enabled, you must set the vmware.use.nexus.vswitch parameter to true by using the Global Settings page in the CloudStack UI. Unless this parameter is set to "true" and the Management Server is restarted, you cannot see any UI options specific to Nexus virtual switch, and CloudStack ignores the Nexus virtual switch specific parameters specified in the AddTrafficTypeCmd, UpdateTrafficTypeCmd, and AddClusterCmd API calls.
To make a CloudStack deployment VDS enabled, set the vmware.use.dvswitch parameter to true by using the Global Settings page in the CloudStack UI and restart the Management Server. Unless you enable the vmware.use.dvswitch parameter, you cannot see any UI options specific to VDS, and CloudStack ignores the VDS-specific parameters that you specify. Additionally, CloudStack uses VDS for virtual network infrastructure if the value of the vmware.use.dvswitch parameter is true and the value of the vmware.use.nexus.vswitch parameter is false. Another global parameter that defines VDS configuration is vmware.ports.per.dvportgroup. This is the default number of ports per VMware dvPortGroup in a VMware environment. The default value is 256. This number is directly associated with the number of guest networks you can create.
The traffic label format in the last case is [["Name of vSwitch/dvSwitch/EthernetPortProfile"][,"VLAN ID"[,"vSwitch Type"]]]
Type of virtual switch. Specified as a string.
Under the properties dialog, add the iSCSI target info:
Unless the CloudStack global parameter "vmware.use.nexus.vswitch" is set to "true", CloudStack by default uses VMware standard vSwitch for virtual network infrastructure. In this release, CloudStack doesn't support configuring virtual networks in a deployment with a mix of standard vSwitch and Nexus 1000v virtual switch. The deployment can have either standard vSwitch or Nexus 1000v virtual switch.
Use of iSCSI requires preparatory work in vCenter. You must add an iSCSI target and create an iSCSI datastore.
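The flow described here uses the vSphere Client GUI. As a non-authoritative alternative on ESXi 5.x hosts, the same preparation can be sketched with esxcli; the adapter name vmhba33 and the target address below are purely illustrative::

    # Enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true
    # Add a Send Targets discovery address (adapter and address are examples)
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.10.11:3260
    # Rescan so the target's LUNs appear before creating the VMFS datastore
    esxcli storage core adapter rescan --adapter=vmhba33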
Use one label for the management network port across all ESXi hosts.
Use this VDS name in the following:
Use vCenter to create a vCenter cluster and add your desired hosts to the cluster. You will later add the entire cluster to CloudStack. (see `"Add Cluster: vSphere" <configuration.html#add-cluster-vsphere>`_).
VLAN ID set to the desired ID
VLAN ID to be used for this traffic wherever applicable.
VLAN Information
VLAN Range for Customer use
VLAN for the Public Network.
VLAN on which all your ESXi hypervisors reside.
VLAN on which the CloudStack Management server is installed.
VMware VDS does not support multiple VDS per traffic type. If a user has many VDS switches, only one can be used for Guest traffic and another one for Public traffic.
VMware VDS is an aggregation of host-level virtual switches on a VMware vCenter server. VDS abstracts the configuration of individual virtual switches that span across a large number of hosts, and enables centralized provisioning, administration, and monitoring for your entire datacenter from a centralized interface. In effect, a VDS acts as a single virtual switch at the datacenter level and manages networking for a number of hosts in a datacenter from a centralized VMware vCenter server. Each VDS maintains network runtime state for VMs as they move across multiple hosts, enabling inline monitoring and centralized firewall services. A VDS can be deployed with or without Virtual Standard Switch and a Nexus 1000V virtual switch.
VMware VDS is supported only on Public and Guest traffic in CloudStack.
VMware vCenter Standard Edition 4.1 or 5.0 must be installed and available to manage the vSphere hosts.
VMware vSphere Installation and Configuration
VSM Configuration Checklist
Value
Watch the cluster status until it shows Unmanaged.
Watch the status to see that all the hosts come up. It might take several minutes for the hosts to come up.
When the maximum number of VEM modules per VSM instance is reached, an additional VSM instance is created before introducing any more ESXi hosts. The limit is 64 VEM modules for each VSM instance.
When you remove a guest network, the corresponding dvportgroup will not be removed on the vCenter. You must manually delete it on the vCenter.
Whether you create a Basic or Advanced zone configuration, ensure that you always create an Ethernet port profile on the VSM after you install it and before you create the zone.
You can configure the Nexus dvSwitch by adding the necessary resources while the zone is being created.
You can configure VDS by adding the necessary resources while a zone is created.
You do not have to create any vEthernet port profiles – CloudStack does that during VM deployment.
You must also add all the public and private VLANs or VLAN ranges to the switch. This range is the VLAN range you specify in your zone.
You must re-install VMware ESXi if you are going to re-use a host from a previous install.
You need to extend the range of firewall ports that the console proxy works with on the hosts. This is to enable the console proxy to work with VMware-based VMs. The default additional port range is 59000-60000. To extend the port range, log in to the VMware ESX service console on each host and run the following commands:
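A hedged reconstruction of those commands for the ESX service console, assuming the esxcfg-firewall tool and its vncextras service name (the port range follows the 59000-60000 default stated above)::

    esxcfg-firewall -o 59000-60000,tcp,in,vncextras
    esxcfg-firewall -o 59000-60000,tcp,out,vncextras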
You should have a plan for cabling the vSphere hosts. Proper network configuration is required before adding a vSphere host to CloudStack. To configure an ESXi host, you can use vClient to add it as a standalone host to vCenter first. Once you see the host appearing in the vCenter inventory tree, click the host node in the inventory tree, and navigate to the Configuration tab.
You should now create a VMFS datastore. Follow these steps to do so:
You will need the following VSM configuration parameters:
You will need the following information about VLANs.
You will need the following information about vCenter.
You will need the following information about vCenter:
dvSwitch0
dvSwitch0,,vmwaredvs
dvSwitch0,200
dvSwitch1,300,vmwaredvs
empty string
myEthernetPortProfile,,nexusdvs
vCenter Checklist
vCenter Cluster Name
vCenter Credentials Checklist
vCenter Datacenter
vCenter Datacenter Name
vCenter Host
vCenter IP
vCenter Password
vCenter Requirement
vCenter Server Standard is recommended.
vCenter Server requirements:
vCenter User
vCenter User ID
vCenter User Password
vCenter User name
vCenter credentials
vCenter must be configured to use the standard port 443 so that it can communicate with the CloudStack Management Server.
vMotion enabled.
vSphere Installation Steps
vSphere Standard is recommended. Note however that customers need to consider the CPU constraints in place with vSphere licensing. See `http://www.vmware.com/files/pdf/vsphere\_pricing.pdf <http://www.vmware.com/files/pdf/vsphere_pricing.pdf>`_ and discuss with your VMware sales representative.
vSphere and vCenter, both version 4.1 or 5.0.
|dvSwitchConfig.png: Configuring dvSwitch|
|traffic-type.png|
|vds-name.png: Name of the dvSwitch as specified in the vCenter.|
|vmwareiscsidatastore.png: iscsi datastore|
|vmwareiscsigeneral.png: iscsi general|
|vmwareiscsiinitiator.png: iscsi initiator|
|vmwareiscsiinitiatorproperties.png: iscsi initiator properties|
|vmwareiscsitargetadd.png: iscsi target add|
|vmwarenexusaddcluster.png: vmware nexus add cluster|
|vmwarenexusportprofile.png: vSphere client|
|vsphereincreaseports.png: vSphere client|
|vspheremgtnetwork.png: vSphere client|
|vspherephysicalnetwork.png: vSphere client|
|vspherevswitchproperties.png: vSphere client|
Project-Id-Version: Apache CloudStack Installation RTD
Report-Msgid-Bugs-To:
POT-Creation-Date: 2014-06-30 11:42+0200
PO-Revision-Date: 2014-06-30 10:27+0000
Last-Translator: FULL NAME <EMAIL@ADDRESS>
Language-Team: Chinese (China) (http://www.transifex.com/projects/p/apache-cloudstack-installation-rtd/language/zh_CN/)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Language: zh_CN
Plural-Forms: nplurals=1; plural=0;