Important Notice on Product Safety This product may present safety risks due to laser, electricity, heat, and other sources of danger. Only trained and qualified personnel may install, operate, maintain or otherwise handle this product and only after having carefully read the safety information applicable to this product. The safety information is provided in the Safety Information section in the “Legal, Safety and Environmental Information” part of this document or documentation set.
Nokia is continually striving to reduce the adverse environmental effects of its products and services. We would like to encourage you, as our customers and users, to join us in working towards a cleaner, safer environment. Please recycle product packaging and follow the recommendations for power use and proper disposal of our products and their components. If you have any questions regarding our Environmental Policy or any of the environmental services we offer, please contact Nokia for additional information.
Summary of changes ........ 8
1 Overview of the commissioning of the Open MSS Cloud on VMware ........ 9
1.1 User roles and requirements ........ 9
2 Hardware requirements of the reference configuration ........ 11
2.1 Computing capacity of the reference hardware configuration ........ 11
2.1.1 High Availability ........ 11
2.1.2 Dynamic Resource Scheduler ........ 12
2.2 Networking architecture of the reference hardware configuration ........ 12
2.2.1 iSCSI and FCoE data access ........ 12
2.2.2 Enclosures and ToR switches ........ 13
2.3 Storage recommendations for the reference hardware configuration ........ 13
2.3.1 FC-, FCoE-, or iSCSI-based block storage connection ........ 13
3 Software requirements of the reference configuration ........ 15
4 Deployment requirements for the VMware Virtual Infrastructure ........ 16
4.1 High availability settings ........ 16
4.2 Open MSS capacity requirements ........ 18
4.3 vSphere networking setup ........ 19
4.4 vSphere disk setup ........ 21
4.4.1 Open MSS storage requirements ........ 22
5 Deployment requirements toward vCloud ........ 24
5.1 Increasing the element count in the OVF file ........ 24
5.2 Open MSS Organization vDC allocations ........ 25
5.3 Open MSS networks ........ 26
6 Application content provisioning ........ 28
6.1 Updating the Open MSS disk image content ........ 28
6.2 Uploading the Open MSS VSA template to the vCloud Organization vDC Catalog ........ 32
6.3 Uploading the Open MSS template to the vCloud Organization vDC Catalog ........ 36
6.4 Open MSS VSA base configuration for CAM ........ 40
6.4.1 Naming the deployed vApp ........ 40
6.4.2 Setting the VSA time zone ........ 40
6.4.3 Generating SSH key pairs ........ 41
6.4.4 Configuring the OaM IP addresses ........ 44
6.4.5 Naming the VSA VMs ........ 45
6.5 Open MSS base configuration for CAM ........ 45
6.5.1 Naming the deployed Open MSS vApp ........ 45
6.5.2 Setting the C number of the Open MSS instance ........ 46
7 Application deployment with Cloud Application Manager ........ 47
7.1 Deploying the Open MSS VSA from template ........ 47
7.1.1 Creating the Open MSS VSA application ........ 47
7.1.2 Deploying the Open MSS VSA application ........ 53
7.1.3 Starting the Open MSS VSA application ........ 54
7.2 Deploying the Open MSS from template ........ 55
7.2.1 Creating the Open MSS application ........ 55
7.2.2 Deploying the Open MSS application ........ 63
7.2.3 Starting the Open MSS application ........ 64
8 Health check of generic operations ........ 66
8.1 Console access ........ 66
8.1.1 Accessing the OMU console ........ 66
8.1.2 Accessing the VMware console ........ 68
8.1.3 Configuring the VSA ........ 71
8.2 Checking the disk connections ........ 72
8.3 Checking the operational status of the Open MSS Cloud ........ 80
9 Checklist for Open MSS Cloud deployment ........ 86
Figure 40 Ongoing Open MSS application creation ........ 62
Figure 41 Finishing the Open MSS application creation ........ 63
Figure 42 Starting the Open MSS deployment ........ 64
Figure 43 Start the Open MSS with the start operation ........ 65
Figure 44 Accessing Open MSS MML interface via Putty ........ 67
Figure 45 Accessing Open MSS MML interface via HIT ........ 67
Figure 46 Open MSS vApp OMU-0 virtual machine console in vCloud Web interface ........ 69
Figure 47 Open console for OMU-0 in vCloud Web interface ........ 70
Figure 48 Switch from Service Terminal to MML in VM vCloud console ........ 71
Figure 49 Power off VM ........ 84
Figure 50 Confirm power off ........ 84
Figure 51 Power on VM ........ 85
Figure 52 Confirm power on ........ 85
Table 1 User roles and requirements ........ 9
Table 2 Size of deployment artifacts ........ 14
Table 3 Minimum number of ESXi blades for different increments ........ 19
Table 4 NSX Edge Service Gateways (ESG) requirements for a single Open MSS Cloud ........ 19
Table 5 External networks necessary for the Open MSS Cloud ........ 20
Table 6 Connections used in Open MSS VMs ........ 21
Table 7 Open MSS storage allocations ........ 23
Table 8 Reserved capacity allocations for vCPU ........ 25
Table 9 Reserved allocations for Memory ........ 26
Summary of changes

Changes between document issues are cumulative. Therefore, the latest document issue contains all changes made to previous issues.

Changes between issues 2-0-1 and 2-0-0
• Information on changing the EMB supervision timer has been added to section Checking the operational status of the Open MSS Cloud.
• The list of files distributed through NOLS was updated in section Application content provisioning.
• The CR3MSSTP.OVF and cr3ipxtx.img files were removed from section Updating the Open MSS disk image content, and the IP injector script command was updated.
• The VSA template name was corrected in section Uploading the Open MSS VSA template to the vCloud Organization vDC Catalog.
• The section Naming the VSA VMs was added.

Changes between issues 2-0-0 and 1-1-0
• A new interface has been introduced.

Changes between issues 1-1-0 and 1-0-0
• The external networking solution has been modified.
1 Overview of the commissioning of the Open MSS Cloud on VMware

The commissioning procedure for the Open MSS Cloud consists of the application content provisioning and the application deployment with CAM, based on the assumption that the basic vSphere infrastructure and at least one vCloud Director cell are already installed and ready to use. The Open MSS Cloud application is a Virtual Network Function (VNF), which can be commissioned on top of HP BladeSystem hardware with VMware as the software platform. Before commissioning the VNF, the network has to be planned and the hardware and software platforms configured to accommodate the Open MSS Cloud. There are basic hardware and software requirements and recommendations for the virtualized Open MSS deployment and operation. The hardware requirements originate from the Nokia Cloud Infrastructure (NCI) reference architectures based on HP BladeSystem. A Virtual Infrastructure (VI) also has to be set up for the correct deployment of the VNF. The deployment of the Open MSS Cloud has to be performed through the Cloud Application Manager (CAM) interface. When the Open MSS Cloud commissioning is successfully completed, it is integrated with NetAct and ready for network integration. After deployment, it is possible to check the basic operation of the VNF by following the health check procedure for generic operations. Considerations for Open MSS Cloud deployment can be found in section Checklist for Open MSS Cloud deployment.
Other resources Web links VMware documentation
1.1 User roles and requirements

Commissioning the Open MSS Cloud requires the definition of user roles with different knowledge areas.

Table 1 User roles and requirements

Role: vSphere/vCloud Administrator
Description: A user who has administrative privileges on all the VMware SW components that build up the cloud infrastructure.
Expected knowledge: vSphere 6.x installation, configuration, and administration; vCloud 6.x installation, configuration, and administration; VCP certified.

Role: Infrastructure Network Administrator
Description: A user who has administrative privileges on all the physical and virtual networking of the cloud infrastructure. If the vSphere/vCloud administrator is also the virtual networking administrator, he should be considered the networking administrator as well.
Expected knowledge: vSphere 6.x configuration and administration on network level; vCloud 6.x configuration and administration on network level; HP Virtual Connect configuration and administration; HP c7000 configuration and administration; HP Datacenter switching configuration.

Role: Infrastructure Storage Administrator
Description: A user who has administrative privileges on all the physical and virtual storage of the cloud infrastructure. If the vSphere/vCloud administrator is also the virtual storage administrator, he should be considered the storage administrator as well.
Expected knowledge: vSphere 6.x configuration and administration on storage level; EMC VNX installation, configuration, and administration.

Role: Open MSS Commissioner
Description: A user from Nokia's technical support team who commissions the Open MSS on top of vCloud.
Expected knowledge: vCloud 6.x configuration and administration on organization level; Nokia CAM; basic Linux; Open MSS installation and configuration.
2 Hardware requirements of the reference configuration

All the hardware requirements originate from the Nokia reference configuration, which is based on HP BladeSystems, HP Flex Networking, and EMC Unified Storage solutions. The Open MSS Cloud has minimum HW requirements and recommendations in the following areas:
• Computing capacity
• Networking architecture
• Storage recommendations
2.1 Computing capacity of the reference hardware configuration
The configuration includes HP Half Height Blades in one or more HP c7000 Enclosures. The number of blades and the number of enclosures depend on the planned size of the IaaS Cloud. The required blade capacity is calculated based on the Hyper-threaded (logical) CPU core capacity and the required memory capacity. The ESXi Hypervisors require two logical cores and a minimum of 2 GB of memory to operate, but that can also depend on the load of the emulated virtual I/O devices, for example, the Network Interface Cards (vNICs). ESXi hosts (blades) are organized into clusters to provide the High Availability (HA) and Dynamic Resource Scheduler (DRS) features.
Other resources Web links Nokia Cloud Infrastructure on VMware
2.1.1 High Availability
The High Availability (HA) features require a minimum of three blades. For proper allocation, always calculate with a maximum of 70-80% utilization for each blade, and provide enough capacity to avoid overbooking. If at least three hosts are used in an ESXi Host Cluster with an HA configuration tolerating one blade failure, the reservation already reaches 30% per host. When calculating the number of blades, the number usually has to be rounded up. Although this may result in overdimensioning, it also provides some security in the dimensioning of the infrastructure.
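As a worked illustration of this rounding rule, the sketch below estimates the host count for the basic increment. The 104 vCPU figure and the reserved host threads come from Table 3 and section Open MSS capacity requirements; the 75% utilization target is an assumption taken from the 70-80% guidance above.

  # Hedged sketch, not an official sizing tool: estimate the ESXi blade
  # count for the basic increment (104 vCPUs, Table 3) on dual 12-core
  # blades (48 logical threads), with 8 threads reserved for the host OS
  # and a 75% per-blade utilization ceiling.
  required_vcpus=104
  threads_per_blade=48
  host_reserved=8
  usable=$(( (threads_per_blade - host_reserved) * 75 / 100 ))   # 30 threads
  blades=$(( (required_vcpus + usable - 1) / usable ))           # ceiling division
  echo "Blades for capacity: ${blades}, plus 1 for HA: $(( blades + 1 ))"
  # Prints 4 capacity blades + 1 HA host = 5, matching the 5-host minimum
  # cited for the basic increment in section Open MSS capacity requirements.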
2.1.2 Dynamic Resource Scheduler The VMware ESXi Hypervisor DRS optimizes the execution of the VMs in the most effective way, so some minimal overbooking of resources is possible. In order to avoid any performance degradation, always calculate with the maximum utilization of logical cores when estimating the application needs. Always provide enough memory for the Telco applications, because relying on hypervisor memory swapping might result in a decrease of the performance.
Other resources Web links Nokia Cloud Infrastructure on VMware
2.2 Networking architecture of the reference hardware configuration
The hardware reference configuration relies on HP Flex Networking Architecture, utilizing Data Center Bridging devices and FlexFabric converged networking. Nokia's reference configuration has minimum requirements and recommendations in the following areas:
• data access
• enclosures
• ToR switches
2.2.1 iSCSI and FCoE data access
The reference architecture is based on iSCSI data access; however, the latest hardware and firmware elements are able to provide FCoE data access as well. Already deployed Fiber Channel (FC) switching can also be used, although in this case the FlexFabric Interconnect modules have to be utilized in the first two bays, instead of the Flex10/10D or an external FC to Fiber Channel over Ethernet (FCoE) converged switch.
Other resources Web links Nokia Cloud Infrastructure on VMware
2.2.2 Enclosures and ToR switches This guide is based on a solution with a single rack configuration, including a maximum of two enclosures. The capacity can be extended with similar setups. In a smaller setup, where the number of the enclosures does not exceed 6-8, 10/40Gbit ToR switches can be used for aggregating the enclosure network connections, which is the case in the recommended reference setup. In larger designs, the collapsed core or End of Row (EoR) high capacity switches should be used for HP c7000 Blade Systems.
Other resources Web links Nokia Cloud Infrastructure on VMware
2.3 Storage recommendations for the reference hardware configuration
The hardware reference configuration recommends EMC VNX 5400 Unified Storage equipment. Since it is recommended to have file-based services besides the block-based ones, only unified-capable storage should be utilized. However, in very small configurations, where there are only a few enclosures with a single vCenter and vCloud, the Network File System (NFS) usage can be replaced with Internet Small Computer System Interface (iSCSI) or Fiber Channel over Ethernet (FCoE) only. To achieve a more robust High Availability (HA) of the clusters, the NFS can also be used as a secondary datastore that is used for host system logging. The Unified solution has block-based and file-based services.
Other resources Web links Nokia Cloud Infrastructure on VMware
2.3.1 FC-, FCoE-, or iSCSI-based block storage connection The Open MSS Cloud uses a storage architecture based on VSAs. Virtual Storage Appliances (VSAs) are RedHat Enterprise Linux 7 based Virtual Machines (VMs) that are not part of the Virtual Server Platform (VSP) cluster. They are not managed by the recovery system of the VSP, they cannot write VSP logs and alarms, and they boot independently from the Operation and Maintenance Unit (OMU).
There are two VSAs in one network element: VSA-1 serving WDU-0 disks and VSA-2 serving WDU-1 disks. Each VSA has a root disk, which is used to boot the VSA. It contains all software artifacts needed for the VSA and stores all logs written by the VSA. With the VSAs, the Virtual Network Function (VNF) storage architecture supports disk sharing for fast VM failover in a way that no direct iSCSI connection is needed. The storage requirements can be served from Fiber Channel (FC), Fiber Channel over Ethernet (FCoE), or Internet Small Computer System Interface (iSCSI) backed storage. The OMU VMs and the Charging Unit (CHU) VMs access the shared Virtual Machine Disks (VMDK) via the additional VSA VMs.

The purpose of the different types of disks:
• 16 GB root disk: The system disk of the VSA with the pre-installed RHEL 7 operating system and iSCSI targets. The root disks are not exposed on the iSCSI interface.
• 32 GB main disk: The main disks are exposed on both iSCSI interfaces. These disks serve the OMUs.
• 180 GB supplementary disk: The supplementary disks are exposed on both iSCSI interfaces. These disks serve the CHUs.
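On a running VSA, this disk layout can be sanity-checked from the RHEL 7 guest; the sketch below is a hedged example, and the device names depend on how the VMDKs are attached.

  # Hedged check: the VSA guest should show one 16 GB root disk, one
  # 32 GB main disk, and one 180 GB supplementary disk. Device names
  # (sda/sdb/sdc) are an assumption and may differ per deployment.
  lsblk -o NAME,SIZE,TYPE,MOUNTPOINT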
The Open MSS VMs are connected to the VSA VMs with iSCSI connectivity: the OMU and CHU VMs act as the iSCSI initiators, and the VSA VMs act as the iSCSI targets. The VNF VMs not only access a disk in a shared manner, but high availability is also granted by the VSP. Each VM uses two disks in a redundant manner. Each VSA has its own Operation and Maintenance (OaM) connection toward the external OaM network. These connections are used for troubleshooting purposes. The size of the different deployment artifacts is described in Table 2: Size of deployment artifacts.

Table 2 Size of deployment artifacts

Open MSS vApp OVF file including VSAs: ~1 MB
CAM XML configuration file: ~10 kB
File Allocation Table (FAT) disk image with iPXE for OMU boot (cr3ipxtx.img)
3 Software requirements of the reference configuration

The Open MSS Cloud has minimum software requirements and recommendations regarding the VMware and Nokia components. To set up a vCloud IaaS service appropriate for Telco applications like the Open MSS Cloud or another Virtual Network Function (VNF), the appropriate vCloud version should be used and licensed based on the number of CPU sockets the infrastructure has in total. During the commissioning phase, the Cloud Application Manager (CAM) and NetAct can be used for application management, while for application deployment, use an auxiliary Linux Virtual Machine (VM) and the OMU IP injector script.

Cloud Application Manager
The CAM is a Virtual Network Function Manager (VNFM) under a Management Cluster or Resource Pool. It is used to deploy the Open MSS on top of vCloud, and provides application VM management at scale-in and scale-out operation phases. It requires access towards the vCloud Application Programming Interface (API), the vSphere API, and the Open MSS OaM interface.

NetAct
NetAct is a set of VNF element management applications for managing and operating network elements efficiently and remotely. It provides standard Fault Management (FM), Performance Management (PM), and Configuration Management (CM) functions for the VNF. NetAct requires access towards the CAM and the OaM interface.

Auxiliary Linux Virtual Machine
Updating the OMU-0 physical IP address of the VNF into its disk images requires an auxiliary machine. During the commissioning, a Linux environment must be available with a minimum disk size of 40 GB, where the OMU-0 physical IP must be updated with the omu_ip_injector.pl script. The recommended Linux VM is CentOS-based; however, the only real requirement is that it must be able to run Perl programs. This machine is temporarily located in the host cluster where the Open MSS is deployed, or in a supporting cluster.
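Because the only hard requirement on the auxiliary VM is the ability to run Perl, a quick hedged pre-check such as the following can prevent a failed injector run; the module check matches the Time-HiRes prerequisite installed in section Updating the Open MSS disk image content.

  # Hedged pre-checks on the auxiliary Linux VM before running the injector.
  perl -v                      # is a Perl interpreter available?
  perl -MTime::HiRes -e 1      # exit code 0 means Time::HiRes is loadable
  df -h .                      # around 40 GB of free disk space is required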
Other resources Web links Cloud Application Manager documentation NetAct documentation
4 Deployment requirements for the VMware Virtual Infrastructure The successful deployment of the VMware Virtual Infrastructure requires the configuration of the high availability, capacity, and vSphere networking and disk setup settings.
4.1 High availability settings
For a secure High Availability operation, it is strongly recommended to have two different types of datastores for HA heartbeating. For High Availability (HA), a block- and NFS-based storage can be used concurrently, where the NFS (Network File System) datastore operates as a secondary heartbeat datastore and can contain the Log Directories of the hosts as well. If this is not an option, a minimum of two datastores can be configured from the same block-based Virtual Machine File System (VMFS) type. In this case, the overall capacity requirement can be distributed across multiple datastores. Those datastores can then be organized into datastore clusters.

Under high load, the vCenter, with fully automated Dynamic Resource Scheduler (DRS) settings, can induce vMotion of VMs in order to balance the cluster load. However, this can cause problems in the operation of the signaling units and may cause the units to restart. With the right vMotion network bandwidth, the possibility of a unit failure can be effectively reduced. As this behavior is not acceptable, the partially automated DRS operation mode has to be selected. Under high load, this will raise alarms in the vCenter and the DRS will request a manual approval for the vMotion.

For telco applications that use hardware monitoring for operating system heartbeating, like the Open MSS, a different approach is required to achieve the same health check and recovery. In VMware, the application HA is the function that monitors the operating system readiness and detects the general SW failures. It has two levels of fault detection. The first one is the operation of the OS in general on the guest. The second is the operation of a particular SW application on top of the VM. From the guest (VM OS) side, the VMware Tools or its corresponding part should be installed or implemented in order to provide basic heartbeat signals for monitoring the operating system's general operation. For a particular SW application heartbeating, a specific Application Programming Interface (API) has to be implemented in the application and the complete VMware Tools has to be installed. The application HA function has to be enabled on a cluster level, though it can be fine-tuned on the vCenter level only. That means that every Virtual Machine deployed through vCloud on top of the cluster inherits the default cluster application HA settings.
Note: All the VMs must have the VMware Tools or Host-Guest heartbeating function implemented; otherwise they will be rebooted or set to sleep mode continuously as a result of the activation of the application HA recovery event. There is no need to customize the default application HA configuration.
Figure 3 Datastore Heartbeating
4.2 Open MSS capacity requirements
In order to deploy a full capacity Open MSS on a cluster that has dual 12-core CPU sockets, the cluster requires 7 hosts, including the one host needed for high availability. It is possible to start the Open MSS with the basic increment, which requires at least 5 hosts in the cluster. The 5 hosts are enough to execute a subset of the Open MSS Virtual Machines (VMs) in case the application provides only 500k of the total 3M Busy Hour Call Attempts (BHCA) capacity. The 5-host clusters can provide adequate computing capacity for the basic increment, even if the VMs are running with maximum CPU utilization. The cluster can tolerate 1 host failure without performance degradation. While the minimum number of hosts required for the Open MSS is 5, it can be manually extended up to the 3M user or BHCA capacity in increments. When calculating the number of blades, it is essential to know how many physical and logical cores there are in the blades in the specific cluster. The basic increment includes all the VMs of the full capacity Open MSS, except for the signaling units (GISUs) and the load balancers. The increments differ from each other in the amount of additional GISUs and the necessary load balancers (IP Director Units).
Table Minimum number of ESXi blades for different increments shows the planning and calculation of the required blades in a cluster executing the Open MSS for the Nokia traffic profile. This calculation does not include an additional blade of ESXi Host Cluster HA for a 1-host-failure tolerant setup. The host operating system requires 6 or 8 threads (vCPUs) from the total number of threads for each ESXi host, depending on the number of physical cores. These are always reserved before calculating the total number of blades.

Table 3 Minimum number of ESXi blades for different increments

Users/BHCA:                        500k        1M         1.5M       2M         2.5M       3M
Total VMs:                         38          44         50         56         62         70
Increments:                        Basic       +6 VM      +6 VM      +6 VM      +6 VM      +8 VM
Total vCPU:                        104         122        140        158        176        200
Total Memory (GB):                 92          110        128        147        165        190
Total Dual 10 Core (40) Blades:    3.20        3.65       4.25       4.70       5.30       5.90
Total Dual 12 Core (48) Blades:    2.67 (4)*   3.21       3.58       3.96       4.50       5.00

* This setup requires 4 physical blades due to anti-affinity rules.

Table 4 NSX Edge Service Gateways (ESG) requirements for a single Open MSS Cloud

Total storage needed for NSX ESGs of a single MSS (GB): 24 for every increment (500k to 3M).
4.3 vSphere networking setup The recommended physical network includes two dual port 10Gbit FlexFabric network cards in each blade and two pairs of Virtual Connect in the c7000 enclosure. For the proper bandwidth allocation, network partitioning is required on both network interface cards.
The network configuration relies on the mandatory standard (vSS) and distributed switches (vDS). The use of the Network File System (NFS) datastores and NFS shares is optional in small setups, and the NFS port group can also be omitted from the Storage Access vDS if NFS is not used at all. If the Storage Access vDS is not used at all, the Flex Network Interface Card (NIC) partitions can be hidden, or the vNICs can be used for other purposes, for example, in fault tolerant networks. Two interconnect pairs, that is, 4 VC modules, are required for the separation of the infrastructure operation networks from the cloud application networks. Simple switches are adequate for VMware management and vMotion. The vMotion, host management, and datastore access VMware kernel interfaces (VMK), as well as the storage HBAs, have to be separated from each other via dedicated uplinks. For the Open MSS application networking, three distributed switches are required.
The port groups that provide access to certain external networks necessary for the Open MSS have to be created by the vCenter administrator or by the network administrator beforehand. These port groups are the basis of all the external networks in vCloud.

Table 5 External networks necessary for the Open MSS Cloud (Cont.)

vCloud External Network: Open MSS[x] Single Homed
vDS: VDS-NCIV1CLOUDINTERNAL
vCenter Port Group: pgMSS[x]-SCTP-Sh-TUDP
Required: Mandatory
Security Zone: Core-Control

Table 6 Connections used in Open MSS VMs

Interface (physical order):   EL0 (1)    EL1 (2)    EMB0 (3)   EMB1 (4)   EL4 (5)          EL5 (6)          EL6 (7)
OMU:                          Internal   Internal   Internal   Internal   OAM              iSCSI-Path A/B   LI
CHU:                          Internal   Internal   Internal   Internal   Billing          iSCSI-Path A/B   Not Used
STU:                          Internal   Internal   Internal   Internal   LI Data          Fraud reports    Not Used
O&M IPDU:                     Internal   Internal   Internal   Internal   Core Signaling   Core Signaling   Core Signaling
IPDU:                         Internal   Internal   Internal   Internal   Core Signaling   Core Signaling   Core Signaling
The gateway in the Open MSS core signaling networks is the ESG switch.
4.4 vSphere disk setup
To deploy core applications like the Open MSS, an Internet Small Computer System Interface (iSCSI) or Fiber Channel over Ethernet (FCoE) datastore has to be provisioned to the vCloud. To avoid mixing it up with the Network File System (NFS) datastore, perform storage profiling on the block-based datastore. Allocate only those as available policies for vCloud while creating Organization vDCs and selecting block-based, RAID10-backed datastores.
Figure 4 Block-based datastore in vSphere for vCloud
For better performance, always use RAID10 pools for the volumes created for VMware datastores in a telco-ready vSphere/vCloud setup.

Figure 5 Storage policy in vSphere for vCloud
To achieve exceptional performance, it is strongly recommended to use multi-tier array pools with the combination of Serial-Attached SCSI (SAS) drives and Solid-State Drives (SSDs). In order to avoid any performance degradation, use block-based data protocols. Using RAID5 disk arrays (pools) or the NFS data access protocol may result in slowness and decreased performance for telco applications.
4.4.1 Open MSS storage requirements
The VMware datastore is necessary for storing the Open MSS Virtual Machines' different VMware configuration files and the VM host memory swap file. The arrays that host VMware datastores can also host the volumes that are used by the Open MSS. If this is not allowed for security reasons, a separate RAID10 Storage Pool array has to be created for the Open MSS disk volumes. With the iSCSI-based Block Storage Connection via the Virtual Storage Appliance (VSA), there is no need for a direct iSCSI connection between the disk units and the storage. The storage solution is integrated within the Open MSS vApp. The Operation and Maintenance Unit (OMU) and the Charging Unit (CHU) VMs access the shared virtual VMDK-based disks via the additional VSA VMs. The respective Open MSS VMs connect to the VSA VMs with iSCSI connectivity, with the OMU and CHU VMs acting as the iSCSI initiators and the VSA VMs acting as the iSCSI targets.
5 Deployment requirements toward vCloud

Deploying the application in the VMware Cloud requires that certain conditions are fulfilled. The requirements include the number of XML elements in the OVF file, the Organization vDC allocations, and the MSS network connections. A Provider vDC is necessary to utilize resources from a vSphere cluster in vCloud. Before deploying the Virtual Network Function (VNF), the element count has to be increased in the OVF file, the Organization vDC allocations have to be set, and the external networks have to be configured.
Other resources Web links Nokia Cloud Infrastructure on VMware
5.1 Increasing the element count in the OVF file
The number of XML elements in the OVF file must be increased for the deployment of the Virtual Network Function.
Purpose
The default setting of vCloud regarding the maximum number of XML elements in the OVF file prevents the deployment of the Open MSS. The Open MSS can be deployed only to a vCloud cell where the value of the maximum element count is increased to 15000.
Procedure
1. Initiate a Secure Shell (SSH) connection into the vCloud Director cell using root access.
2. Open the global.properties file through the /opt/vmware/vcloud-director/etc path in the vCloud Director by using the vi text editor.
3. Create a backup of the global.properties file before making the changes.
4. Set the maximum element count value to 15000.
5. Save the file.
6. Repeat the process for each of the remaining vCloud Director cells.
7. Restart the VMware vCD with the vmware-vcd restart command through the /etc/init.d/ path to apply the changes.
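For reference, the step sequence on one cell condenses to a shell session like the sketch below. The property key inside global.properties is not named in this document, so it is deliberately left as a manual edit; take the real key from the release documentation for your vCloud version.

  # Hedged sketch of the per-cell procedure (steps 1-7 above).
  ssh root@<vCloud-Director-cell>                     # step 1
  cd /opt/vmware/vcloud-director/etc                  # step 2
  cp global.properties global.properties.bak          # step 3: backup first
  vi global.properties                                # steps 4-5: set the maximum
                                                      # element count to 15000, save
  /etc/init.d/vmware-vcd restart                      # step 7: apply the changes
  # Step 6: repeat the same edit on each remaining vCloud Director cell.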
5.2 Open MSS Organization vDC allocations
Use the reference tables of the capacity figures to create Organization vDCs when commissioning Nokia Cloud Infrastructure on VMware.

vCPU
Table Reserved capacity allocations for vCPU shows the required reserved allocations for the vCPU of the Open MSS Organization vDC, based on the increment in use and the available physical CPU core speed.

Table 8 Reserved capacity allocations for vCPU
Tables Reserved allocations for Memory and Storage allocations show the required reserved allocations for the memory and storage of the Open MSS Organization vDC, based on the increment in use:

Table 9 Reserved allocations for Memory

Capacity (MBHCA)    Memory (GB)
0.5                 91
1                   109
1.5                 127
2                   145
2.5                 163
3                   187

Table 10 Storage allocations

Allocations (GB)                                           Organization vDC Quota
347 (SWAP file allocations for template)                   Included
347 (SWAP file allocations for full Open MSS increment)    Included
60 (2 * 30 GB OMU disks)                                   Included
1080 (6 * 180 GB CHU disks)                                Included
32 (2 * 16 GB VSA root disks)                              Included
Total: 1546
Note: Storage allocation is needed for the vApp template as well as the deployed applications.

The required minimum number of pool-based isolated networks (VXLAN networks) for the Open MSS Cloud is 12.
5.3 Open MSS networks Set up the IP connectivity according to the required network names, types, and connections.
6 Application content provisioning

Before the applications can be deployed in the Cloud Application Manager, the Open MSS VSA and Open MSS application templates have to be pre-provisioned into vCloud Director. In order to deploy the Open MSS via CAM, a configuration or parameter file has to be created beforehand. The Open MSS files are distributed via NOLS for VMware deployment:
• OVF Template: CR3MSSTP.OVF and CR3VSAMS.OVF
• Configuration XML: CR3CAMTP.XML
• Zipped Raw Disk Image: MSS17OMU-W0.img.bz2
• cr3ipxtx.vmdk
• VSA root disks (vsa-root.vmdk)
• VSA disk (vsa-supp-disk.vmdk)
• VSA configuration XML (CR3VSACA.XML)
• omu_ip_injector.pl
Other resources Web links Nokia Networks Online Services
6.1 Updating the Open MSS disk image content
Updating the core disk image content in the Open MSS requires the configuration of the physical OMU IP address and the default gateway.
Purpose
Before the Open MSS vApp template is uploaded to the vCloud, the OMU-0 physical IP address and the default route must be set from the local application Operation and Maintenance (OaM) subnet.
Procedure
3. Update the operating system of the Auxiliary Linux VM with the yum update command.
4. Install the bzip2 package with the yum install bzip2 command.
5. Install the Time-HiRes module with the yum install perl-Time-HiRes command.
6. Add execution rights to the omu_ip_injector.pl script with the chmod +x omu_ip_injector.pl command.
7. Execute the IP injector script with the ./omu_ip_injector.pl IPAddress DefaultGW SubnetMask VSAOVFFile OMUDiskImage command. Running this script can take approximately one hour.
   Step example:
   ./omu_ip_injector.pl 10.88.49.4 10.88.49.1 24 try/CR3VSAMS.OVF try/omu.img.bz2
8. Copy the CR3VSAMS.OVF and CR3MSSTP.OVF files, the generated vsa-main-disk.vmdk, vsa-root.vmdk, cr3ipxtx.vmdk, and vsa-supp-disk.vmdk to the same folder.
6.2 Uploading the Open MSS VSA template to the vCloud Organization vDC Catalog
The Open MSS VSA template must be uploaded before the application deployment with the CAM.
Purpose
The Open MSS and the Open MSS Virtual Storage Appliance (VSA) are two separate vApps deployed in the same Organization vDC. Together they provide the functionality of the Open MSS. The Open MSS VSA vApp has to be deployed first, since it provides the storage services for the MSS. The Open MSS VSA template upload includes the upload of the Open MSS VSA template OVF package with the Virtual Machine (VM) specification of the VSA.
Procedure
1. Set the default location in the Catalog.
   The Catalog default Organization vDC destination has to be set via the Catalog Properties if there are multiple Organization vDCs in the same Organization. While the template deployment should work even if the template VMs and the new vApp instance VMs are not in the same Organization vDC, it is recommended to put them into the same location.
   Step example:
   Figure 13 Setting the default location in the Catalog
2. Locate the Open MSS VSA template (CR3VSAMS.OVF).
   The template is provided via NOLS. The configuration used during the Cloud Application Manager (CAM) deployment and the image content define the VSA of the Open MSS.
3. Name the template.
   The name of the template in the vCloud is the operator's choice, although it is recommended to include the template name and some information about its version. Versioning information can be found in the second line of the template.
   Step example:
   In this example, the name of the Open MSS VSA template in vCloud should be 'CR3VSATP-3.7-0'.
4. Give a proper description to the template.
   To make identification easier later, give a short description to the template.
   Step example:
   Nokia Open MSS Cloud 17 [Operator] VSA template
5. Upload the template.
   The Organization administrator or the Catalog Author has to initiate the upload. The upload procedure requires Java, Adobe Flash, and Internet Explorer on a machine or laptop that has access to the vCloud Director Web Portal. The portal can be accessed through the vCD web interface. Use the following path: Catalogs/My Organization's Catalog/vApp Templates

Step result
The upload first transfers the OVF file into a transfer directory of the vCloud Director Cell, which can be followed in the progress window. When the transfer is finished, the vCloud Director starts to import the individual VMs of the OVF package into the vCenter. Every VM in the template becomes a VM in the vCenter, imported into one of the hosts in the cluster. This process might take some time, depending on the size of the template and, in case the VMs have disks, their sizes. During importing, the template is in 'Importing' status and the progress bar indicates how the process is going. When the whole template is successfully imported, the template is ready.

Note: Only completely imported templates that are in 'Ready' state can be deployed.
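Where the Java/Flash-based portal upload is impractical, VMware's OVF Tool has historically supported vCloud Director targets; the sketch below is an assumption-based alternative to the documented portal flow, and the host, organization, catalog, and credentials are placeholders.

  # Hedged alternative upload, assuming an ovftool release with vCloud
  # Director support. Host, org, catalog, and credentials are placeholders.
  ovftool CR3VSAMS.OVF \
    "vcloud://admin@vcd.example.com:443?org=MyOrg&catalog=MyCatalog&vappTemplate=CR3VSATP-3.7-0"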
Other resources Web links vCloud Director
6.3 Uploading the Open MSS template to the vCloud Organization vDC Catalog
The Open MSS application template must be uploaded before the application deployment with the CAM.
Before you start
The Open MSS VSA template has to be successfully uploaded before the upload of the Open MSS template can be started, since the former provides the storage services for the Virtual Network Function (VNF).
Procedure
1. Set the default location in the Catalog.
   The Catalog default Organization vDC destination has to be set via the Catalog Properties if there are multiple Organization vDCs in the same Organization. While the template deployment should work even if the template VMs and the new vApp instance VMs are not in the same Organization vDC, it is recommended to set the same location.
2. Browse the Open MSS template (CR3MSSTP.OVF).
   The template is provided via NOLS. The configuration used during the Cloud Application Manager (CAM) deployment and the image content define the Open MSS.
3. Name the template.
   The name of the template in the vCloud is the operator's choice, although it is recommended to include the template name and some information about its version. Versioning information can be found in the second line of the template.
   Step example:
   In this case, the name of the Open MSS template in vCloud should be 'CR3MSSTP-1.3-0'.
4. Give a proper description to the template.
   To make identification easier later, give a short description to the template.
   Step example:
   Nokia Open MSS Cloud 17 [Operator] template
5. Upload the template.
   The Organization administrator or the Catalog Author has to initiate the upload. The upload procedure requires Java, Adobe Flash, and Internet Explorer on a machine or laptop that has access to the vCloud Director Web Portal. The portal can be accessed via the vCloud Director web interface. Use the following path: Catalogs/My Organization's Catalog/vApp Templates
6. Launch the upload window by clicking the upload button.
   Step example:
   Figure 17
7. Locate the template and start the upload.
   Step example:
   Figure 18 Locating the template and starting the upload

Step result
The upload first transfers the OVF file into a transfer directory of the vCloud Director Cell. This can be followed in the progress window. When the transfer is finished, the vCloud Director starts to import the individual VMs of the OVF package into the vCenter. Every VM in the template becomes a VM in the vCenter, imported into one of the hosts in the cluster. This process might take some time, depending on the size of the template and, in case the VMs have disks, their sizes. During importing, the template is in 'Importing' status and the progress bar indicates how the process is going. The Open MSS VMs normally do not have virtual disks, therefore the import is fast. When the whole template is successfully imported, the template is ready.

Note: Only completely imported templates that are in 'Ready' state can be deployed.
6.4 Open MSS VSA base configuration for CAM The parameters have to be customized for a specific Open MSS instance and location before the application deployment with the Cloud Application Manager. The CR3VSACA.XML includes the basic configuration data that is necessary for the Open MSS to be able to work after the first deployment in vCloud. In the configuration XML, each parameter is described with a key, a label, and a value.
6.4.1 Naming the deployed vApp
The ViAppConfiguration section must be customized for the vApp to be visible in NetAct.
• Set the id value in the ViAppConfiguration section under the ID tag to the name of the newly created vApp. The name will be visible in NetAct in the NE tree display name.
  Step example:
  Key: id
  Value: MSS23 VSA
6.4.2 Setting the VSA time zone
To ensure compatibility, configure the time zone for the VSAs.
Purpose
The same time zone has to be set in the two Virtual Storage Appliances (VSAs) and it should be the same as the one set for the Operation and Maintenance Unit (OMU).
• Set the time zone for the VSA in the Virtual Machine (VM) specific sections of the configuration file (Configuration/VMs/VM/). The values given in the Value field have to be compatible with the timedatectl Linux command. The possible values can be listed with the timedatectl list-timezones command in Linux.
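For example, a candidate value can be listed and verified on any systemd-based Linux host before it is written into the XML; the Europe/Helsinki value is only an illustration.

  # List the valid time zone values and verify one candidate (example value).
  timedatectl list-timezones | grep Europe/
  timedatectl list-timezones | grep -x "Europe/Helsinki" && echo "valid value"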
6.4.3 Generating SSH key pairs
Follow the steps to generate the key pairs used to log in to the VSA.
Purpose
The SSH key pairs are used to log in to the VSAs. The OAM interface of the VSAs uses non-interactive SSH. A key pair is generated for each VSA and the public keys are added to the Attribute within each VSA's VM section, where the Key is vsa1-ssh-public-key and vsa2-ssh-public-key. This procedure describes how to generate a key pair using the PuttyGen (http://the.earth.li/~sgtatham/putty/latest/x86/puttygen.exe) Windows version.
Procedure
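If no Windows host is available, an equivalent key pair can be generated with OpenSSH; this hedged sketch uses arbitrary file names and is an alternative to, not part of, the documented PuttyGen flow.

  # Hedged alternative to PuttyGen: one RSA key pair per VSA. File names
  # are arbitrary examples.
  ssh-keygen -t rsa -b 2048 -f vsa1_key -N ""   # key pair for VSA-1
  ssh-keygen -t rsa -b 2048 -f vsa2_key -N ""   # key pair for VSA-2
  cat vsa1_key.pub   # paste into the vsa1-ssh-public-key Attribute value
  cat vsa2_key.pub   # paste into the vsa2-ssh-public-key Attribute value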
6.4.4 Configuring the OaM IP addresses
Configure the VSA's OaM IP addresses, netmask, and default gateway.
Purpose
The IP addresses have to be externally routable.
Procedure
1. Align the IP addresses with the IP plan and add them to the Attribute within each VSA's VM section.
   Step example:
   Key: vsa1-om-ip
   Value: 10.101.145.3
   Label: VSA1_params
2. Align the netmask of the OaM IP addresses with the IP plan and add them to the Attribute within each VSA's VM section.
   Step example:
   Key: vsa1-om-netmask
   Value: 255.255.255.0
   Label: VSA1_params
3. Align the default gateway of the OaM IP addresses with the IP plan and add them to the Attribute within each VSA's VM section.
   Step example:
   Key: vsa1-om-gateway
6.4.5 Naming the VSA VMs
You can customize the VSA VM names.
• Give each VSA VM a unique name.
  Step example:
  Key: vsa1-hostname
  Value: VSA-1
  Label: VSA1_params
6.5 Open MSS base configuration for CAM The parameters have to be customized for a specific Open MSS instance and location. The CR3CAMTP.XML includes the basic configuration data that is necessary for the Open MSS to be able to work after the first deployment in vCloud. In the configuration XML, each parameter is described with a key, a label, and a value.
6.5.1 Naming the deployed Open MSS vApp
The ViAppConfiguration section must be customized for the vApp to be visible in NetAct.
• Set the id value in the ViAppConfiguration section under the ID tag to the name of the newly created vApp. The name will be visible in NetAct in the NE tree display name.
6.5.2 Setting the C number of the Open MSS instance
The Open MSS instance can be identified by its C number.
Purpose
The C number has to be set in two different locations.
Procedure
1. Set the C number in the ViAppConfiguration section, under the C4CAttributes tag in Attribute, with the RootDN key. The value should include the real C number in the string MSS-999999 by replacing the default numbers.
2. Set the C number in the ViAppConfiguration section, under the C4CAttributes tag in Attribute, with the CNumber key. The default value has to be replaced with the real C number itself.

Example
Key: RootDN
Value: MSS-999999
Label: cfw4coreaif
…
Key: CNumber
Value: 999999
Label: cfw4corepostdeployment
7 Application deployment with Cloud Application Manager The CAM introduces a new Open MSS instance in NetAct via the Automatic Integration Framework (AIF) configurator after the deployment has been started. The deployment process includes the creation, deployment, and starting procedures for both the Open MSS VSA and the Open MSS from a template.
7.1 Deploying the Open MSS VSA from template The deployment via the Cloud Application Manager GUI can be completed by creating, deploying, and starting the Open MSS VSA application.
7.1.1 Creating the Open MSS VSA application The Open MSS Virtual Storage Appliance application must be created in CAM before deployment.
3. Select Templates.
   Figure 24 Open Templates under the right Organization
4. Start creating the application by pressing the create icon next to the CR3VSATP-x.y-z template.
5. Browse the prepared configuration XML (CR3VSACA.XML) and upload it.
   Define the destination Organization vDC of the new vApp in the pop-up window. By default, it is the same Organization vDC where the template is uploaded. Use primarily that selection.
6. Map the external networks to the proper vApp network and click Next.
   Figure 26 Open MSS VSA to Organization vDC network mapping
   Set the mapping between the application networks and the available Organization vDC external networks, and select Internal for the vApp isolated networks.

Table 12 Open MSS VSA network mapping at the creation phase in CAM

Application Network
Finishing the Open MSS VSA application creation
Step result After a few seconds the new application becomes visible under the Applications tab. Until the process of creating the application is finished, no action is required.
Related procedures
Deploying the Open MSS VSA application on page 53

7.1.2 Deploying the Open MSS VSA application
The Open MSS VSA application must be deployed in the CAM before it can be started.
Before you start
The process of creating the application must be successfully finished.
Recommended preconditions
Creating the Open MSS VSA application on page 47
• Navigate to the Applications tab and start the deployment by pressing the icon and then clicking Deploy. The application deployment usually takes 10 to 30 minutes.
Figure 31 Starting the Open MSS VSA deployment
Figure 32 Finished Open MSS VSA deployment
Related procedures
Creating the Open MSS VSA application on page 47
7.1.3 Starting the Open MSS VSA application
The Open MSS VSA application must be started before deploying the Open MSS application from the template.
Before you start
The application deployment must be successfully finished.
Navigate to Applications and start the application by pressing the icon.
Application deployment with Cloud Application Manager
This results in starting the vApp and its Virtual Machines (VMs), which takes approximately 10 to 30 minutes. The application VMs start in the order defined by the template.

Figure 33 Start the Open MSS VSA with the start operation
7.2 Deploying the Open MSS from template The deployment via the Cloud Application Manager GUI can be completed by creating, deploying, and starting the Open MSS application.
7.2.1 Creating the Open MSS application The Open MSS application must be created in CAM before deployment.
3. Select Templates.
   Figure 34 Open Templates under the right Organization
4. Start creating the application by pressing the create icon next to the CR3MSSTP-x.y-z template.
5. Browse the prepared configuration XML and upload it.
   Define the destination Organization vDC of the new vApp in the pop-up window. By default, it is the same Organization vDC where the template is uploaded. Use primarily that selection.
Open MSS vApp to Organization vDC network mapping

Set the mapping between the application networks and the available Organization vDC external networks. Select Create Internal Network for the vApp isolated networks.

Table 13 Open MSS network mapping at the creation phase in CAM

Application Network             Organization vDC Network
Billing                         Open MSS[x] Charging
LI                              Open MSS[x] LI
Internal-iSCSI-A-Network        Add one of the networks created in section
Internal-iSCSI-B-Network        Add one of the networks created in section
OAM-EXT0                        Open MSS[x] OAM Multi-homed Primary
OAM-EXT1                        Open MSS[x] OAM Multi-homed Secondary
OAM-EXT2                        Open MSS[x] OAM Single Homed
-EXT0                           Open MSS[x] Multi-homed Primary
-EXT1                           Open MSS[x] Multi-homed Secondary
-EXT2                           Open MSS[x] Single Homed

Note: It is important to select Create Internal Network for the EMB and FI networks, and to follow the mapping guidance strictly while setting the mapping on the CAM GUI.
Provide the AIF reference ID.
Figure 38 Parameter setting

Note: The ID in this figure is not a valid ID and is only for demonstration purposes. In order to execute post processing on the application and to be able to log in to the NetAct AIF service, these parameters have to be set properly. The OaM (MML user) name and password must be 5 characters long with capital letters only. The MR parameter for cloud-based core applications has to be arranged with the NetAct administration.
Figure 41 Finishing the Open MSS application creation
Step result After a few seconds the new application becomes visible under the Applications tab. Until the process of creating the application is finished, no action is required.
7.2.2 Deploying the Open MSS application The Open MSS application must be deployed in the CAM before it can be started. Before you start The process of creating the application must be successfully finished. Navigate to the Applications tab and start the deployment by pressing the icon. The application deployment usually takes 10 to 30 minutes.
Figure 42 Starting the Open MSS deployment
7.2.3 Starting the Open MSS application Starting the Open MSS application is the final step of the commissioning process. Before you start The application deployment must be successfully finished. Navigate to Applications and start the application by pressing the icon. This will result in starting the vApp and its VMs, which takes 10 to 30 minutes. The application VMs start in the order defined by the template. After all units are started, the Open MSS can reach the basic operation state in 5-10 minutes and all the pair unit warm-ups can be finished within 10-15 minutes. From that point, as soon as the ‘CAM Start’ task is finished, the Open MSS can reach operational state within 10-15 minutes. The Open MSS first deployment starts only with the basic increment. In the starting phase, after the unit start is accomplished, the Cloud Application Manager (CAM) executes a post configuration script that will do the basic configuration of the Operation and Maintenance Unit (OMU) and the disked units. The script only covers basic OaM and iSCSI-related configuration, therefore any further application configuration and integration should be done manually or via NetAct.
8 Health check of generic operations

Perform the health check to ensure that the VNF is ready to use. The health check should be performed in the following areas:
• console access
• disk connections
• operational status
8.1 Console access
Perform the console access health check procedure to ensure that the Open MSS is ready to use. The health check should be performed in the following areas:
• Telnet or SSH console
• VMware console
• VSA configuration
8.1.1 Accessing the OMU console
In order to access the Operation and Maintenance Unit console, the logical IP address has to be configured.
Purpose
The OMU console (MML interface) can be accessed via the logical IP address assigned to the EL4 interface of OMU-0. When the deployment is done and the Cloud Application Manager (CAM) is finished with the post processing, the new Open MSS instance MML interface can be accessed via Telnet or Secure Shell (SSH). Telnet or SSH can be opened from any machine in any location where the machine has access to the Application OaM subnet and the Open MSS's logical IP address assigned to the EL4 interface of OMU-0. Any console access tool, for example, Putty or HIT, can be used to access the MML interface in order to do the basic system check. It depends on the service engineer's choice.
Procedure
1. Access the MML interface. Use the same username and password that were given on the CAM GUI in the Open MSS application creation wizard.
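With a command-line SSH client, the session looks roughly like the sketch below; the address is a placeholder for the OMU-0 EL4 logical IP from your IP plan.

  # Hedged example: open the MML interface over SSH. The address is a
  # placeholder for the logical IP assigned to the EL4 interface of OMU-0.
  ssh USERNAME@10.88.49.10
  # Log in with the MML username and password given in the CAM creation wizard.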
8.1.2 Accessing the VMware console If the Open MSS Man-Machine Interface is not accessible via Telnet or SSH toward the logical IP address assigned to the EL4 interface of OMU-0 because of network access or configuration issues, the only way to check the Open MSS status is via the service terminal of the OMU-0. Purpose You can access the service terminal only via the VMware vCloud web interface. That is a regular Virtual Machine KVM (Keyboard-Video-Mouse) console window, which is limited to manual access only, and neither Putty nor HIT can be used to access it directly. It is recommended to use the console window to check the connectivity of the Open MSS OMU EL4 interface.
Note: The console window has an irregular keyboard layout.

Procedure
1. Log in to the vCloud Web interface as Organization administrator or as a vApp user.
2. Open the Open MSS vApp instance to check the Virtual Machines (VMs) belonging to that Open MSS instance.
3. Navigate to MyCloud administration and select the right Organization vDC that contains the Open MSS vApp.
4. Click on the vApp to open the detailed view. All the VMs belonging to the vApp are listed under Virtual Machines.
5. Click on the console window to open the console or select the Popout Console from the right-click menu of the VM.
   Figure 46 Open MSS vApp OMU-0 virtual machine console in vCloud Web interface
When opening the console for any of the VMs, the regular service terminal becomes visible. To reach the MMI from the Service Terminal, use the following commands: ZLE:1,VIMMLAGX Z1C:
The same username and password can be used to log in to the Service Terminal and the MMI. The service terminals of the other units can be accessed via a ZDDS context change, through the OMU Telnet interface.
Warning: Changing any setting of the Open MSS vApp in vCloud is forbidden and may cause irreversible damage to the context synchronization between the CAM and the vCloud. In the worst case, the redeployment of the whole application might be required.
Figure 47 Switch from Service Terminal to MML in the VM vCloud console
8.1.3 Configuring the VSA
After the automatic post-configuration process, the VSA must be configured manually.
Purpose
Once the VSA is switched on for the first time, post-configuration scripts are started for the following tasks:
• expanding the compact root disk size to 16 GB
• configuring the network interfaces used for iSCSI communication
• starting the generic SCSI target subsystem for Linux (SCST)
The post-configuration phase takes about five minutes. Verify this by checking the network interface status with the following command:
ip addr show
When the interfaces for NIC 1 (ens161) and NIC 2 (ens192) are configured, the post-configuration phase is finished.
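As a minimal sketch of this check (assuming only the interface names ens161 and ens192 given above), the output can be filtered so that an inet line on each interface confirms that the post-configuration phase is finished:

# An "inet" line on each iSCSI interface means its address is configured
ip addr show ens161 | grep "inet "
ip addr show ens192 | grep "inet "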
Note: If you are accessing the VSA through the Console, log in as root with a password automatically generated by the VMware infrastructure. If you are accessing it through SSH as a VSA user, use the generated keys.
Procedure
1
Create the ifcfg-[interface name] file in /etc/sysconfig/network-scripts/ on the VSA. The following attributes should be included in the interface file:
DEVICE="[interface name]"
BOOTPROTO=static
ONBOOT=yes
IPADDR=[VSA IP on the OAM network]
NETMASK=[netmask of the OAM subnet]
GATEWAY=[the gateway towards the OAM network]
2
Make sure that the new configuration is in use.
ifup [interface name]
Any SSH client can be used to verify whether the OaM interface of the VSA functions.
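For illustration only, a filled-in interface file and its activation could look as follows; the interface name ens224 and all addresses are hypothetical placeholders, not values from this document:

# /etc/sysconfig/network-scripts/ifcfg-ens224 (hypothetical example values)
DEVICE="ens224"
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.0.2.10
NETMASK=255.255.255.0
GATEWAY=192.0.2.1

# Bring the interface up with the new configuration
ifup ens224

After this, an SSH connection to 192.0.2.10 from a machine on the OaM subnet confirms that the interface functions.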
8.2 Checking the disk connections Check the disk connections of the instances to ensure the commissioning procedure was successful.
Procedure 1
List the Winchester Drive Unit (WDU) states, and execute a basic device listing in the Service Terminals.
ZISI::WDU,:;
2
Check the availability of the WDU device for the Operation and Maintenance Unit (OMU) and the Charging Unit (CHU). All devices must be in WO-BU state.
Step example

MAIN LEVEL COMMAND <___>
< ZISI:,OMU:WDU;

I/O DEVICE WORKING STATE AND SPARE DEVICE
SYSTEM = FTvMSS03   UNIT = OMU

DEVICE   STATE   SPARE DEVICE   STATE   INFO   TAPE TYPE   TAPE STATE
WDU-00   WO-BU   -              -       -      -           -
WDU-01   WO-BU   -              -       -      -           -

COMMAND EXECUTED

MAIN LEVEL COMMAND <___>
< ZISI:,CHU:WDU;
LOADING PROGRAM VERSION 12.7-0

I/O DEVICE WORKING STATE AND SPARE DEVICE
SYSTEM = FTvMSS03   UNIT = CHU

DEVICE   STATE   SPARE DEVICE   STATE   INFO   TAPE TYPE   TAPE STATE
WDU-00   WO-BU   -              -       -      -           -
WDU-01   WO-BU   -              -       -      -           -

COMMAND EXECUTED

I/O DEVICE WORKING STATE COMMAND
<
If some of the disks are not in the WO-BU state for the OMU, the iSCSI Initiator is not connected to the storage: the OMU can see only one disk of its disk pair, because only the directly attached Raw Device Mapping Serial Attached SCSI (RDM SAS) emulated disk is available. If there is no RDM SAS emulated disk attached to the CHU and the state of the WDUs shows an error, the iSCSI Initiator is not connected to the storage at all. If the iSCSI Initiators are supposed to be connected to the VSA, but the disks cannot be changed to the WO-BU state or they immediately fall back to the BL-SY or TE-ID state, the Open MSS unit will have SW issues and will require further analysis by support.
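If the initiator connectivity toward the VSA is in doubt, the established iSCSI sessions can also be checked on the VSA side. This is a minimal sketch assuming the standard SCST sysfs layout under /sys/kernel/scst_tgt, which is not confirmed by this document:

# On the VSA: list the iSCSI sessions currently established against the SCST target;
# an empty listing means that no initiator is connected
ls /sys/kernel/scst_tgt/targets/iscsi/*/sessions/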
3
Switch to each disked unit service terminal.
ZDDS:[unit type],[unit index];
4
Use the MXP command to access the WDU device listing.
ZMXP:W0-/
ZMXP:W1-/
5
Check the WDU device listing for the CHU.
CHU disks are initiated and formatted by the CAM after the Open MSS is started and the post-configuration process has been executed. They have to be formatted before they can be listed. If the initiation of the CHU disks does not take place, they can be initiated manually later with MML commands.
Note: Disk formatting is not possible while the disk is in WO-BU state. You must first put the disk in WO-ID state.
6
Format and initiate the CHU disks.
ZMID:W0-CHU,FFF,2,XX;
7
Use the MXP command to access the WDU device listing. The MXP command lists the files and directories in the root of the device.
Step example
ZMXP:W0-/
ZMXP:W1-/
8
If the iSCSI Initiators are not connected, check the ICIC configuration to list the iSCSI initiator configuration of the Open MSS disked units in the service terminal.
Note: The ICIC file is created from the iPXE boot image (CR3IPX) automatically. Do not modify it manually.
9
Use the IWT command to see the iSCSI configuration of the Open MSS instance.
Step example
EXECUTION STARTED
8.3 Checking the operational status of the Open MSS Cloud For the correct operation of the VNF on VMware, all basic units must be in working executing (WO-EX) or spare executing (SP-EX) state.
Before you start
The Open MSS deployment must be finished.
Procedure
1
Log in to the Man-Machine Interface (MMI) and check the EMB supervision timer with the WOI command.
Step example

MAIN LEVEL COMMAND <___>
< ZWOI:9,193;
LOADING PROGRAM VERSION 9.5-0
EXECUTION STARTED

READING DATA FROM DATABASE, PLEASE WAIT ...

PARAMETER CLASS: 9   SYSTEM_SUPERVISION

IDENTIFIER   NAME OF PARAMETER       VALUE   CHANGE POSSIBILITY
00193        EMB_SUPERV_MULTIPLIER   0008    YES

COMMAND EXECUTED

PARAMETER HANDLING COMMAND <WO_>
<
2
If the value of the EMB_SUPERV_MULTIPLIER parameter is lower than 8, increase it with the WOC command.
Step example

< ZWOC:9,193,8;
LOADING PROGRAM VERSION 9.5-0
EXECUTION STARTED
CHANGE PARAMETER VALUE:
PARAMETER NAME   EMB_SUPERV_MULTIPLIER
3
After you have changed the parameter value, restart the system with the USS command.
Step example

< ZUSS:SYM:;
EXECUTION STARTED
SYSTEM RESTART WITH SYM UNIT REQUESTED
CONFIRM COMMAND EXECUTION: Y/N ? Y
SYSTEM RESTART ORDERED
END OF DIALOGUE SESSION
Connection closed
4
Log in to the MMI and check the unit states with the USI command.
The booting process takes approximately 5-10 minutes after the whole application is started. After the process is finished, all units must be in WO-EX or SP-EX state. In the basic increment, three Generic IP Signaling Units (GISUs) have to be in WO-EX state and one in SP-EX IDLE state. The rest of the GISUs have to be in separated - no hardware (SE-NH) Graceful Shutdown (GRSD) state.
Step example

MAIN LEVEL COMMAND <___>
< ZUSI;
EXECUTION STARTED

MSS   MSS14   2015-09-21

WORKING STATE OF UNITS

UNIT       PHYS   STATE
OMU-0      0000   WO-EX
OMU-1      0001   SP-EX
STU-0      0002   WO-EX
STU-1      0003   SP-EX
CHU-0-0    0010   WO-EX
CHU-0-1    0011   SP-EX
CMU-0      000A   WO-EX
CMU-1      000B   SP-EX
VLRU-0-0   001A   WO-EX
VLRU-0-1   001B   SP-EX
If a unit is stuck in the RP1 booting phase, initiate a Virtual Machine (VM) reset in the Cloud Application Manager (CAM) by clicking the Operations square icon for that VM.
Note: Do not power off or switch on the Open MSS vApp VMs from vCloud or vCenter under any circumstances. Always use CAM or NetAct to initiate a unit VM power cycle. If CAM is not able to do the power reset of the VMs, it can be done via vCloud or vCenter; always use reset, and not power on or power off, to do the power cycling.
Step example
Figure 49 Power off VM
8
Confirm the power off procedure by clicking the Power Off button.
Step example
Figure 50
5. Create port groups on the VDS-NCIV1CLOUDINTERNAL switch for Control Plane networks.
The name of the port groups used:
• Open MSS___ OAM Multi-homed Primary: pgMSS___OAM-SCTPMh-Pri
• Open MSS___ OAM Multi-homed Secondary: pgMSS___OAM-SCTPMh-Sec
• Open MSS___ OAM Single Homed: pgMSS___OAM-SCTPSh-T-UDP
• Open MSS___ Multi-homed Primary: pgMSS___-SCTPMh-Pri
• Open MSS___ Multi-homed Secondary: pgMSS___-SCTPMh-Sec
• Open MSS___ Single Homed: pgMSS___-SCTPSh-T-UDP
References: See section vSphere networking setup. Checked: [ ]

6. Add the port groups created for Open MSS on the 3 vDSs to the vCloud.

7. Add the external networks created for Open MSS to the Organization vDC networks (name in the template and the vCloud external network names):
• Open MSS___ Charging
• Open MSS___ LI
• Open MSS___ OAM Single Homed
• Open MSS___ OAM Multi-homed Primary
• Open MSS___ OAM Multi-homed Secondary
• Open MSS___ Single Homed
• Open MSS___ Multi-homed Primary
• Open MSS___ Multi-homed Secondary