Dell Networking S5000: Deployment of a Converged Infrastructure with FCoE
Deployment/Configuration Guide
Humair Ahmed, Dell Technical Marketing – Data Center Networking
August 2013...
Compellent storage array, and Dell S5000 as NPIV Proxy Gateway. We will first demonstrate a non-converged setup and then add the Dell S5000 to the picture. This will allow us to see how the connections and configuration change from a traditional non-converged environment to a converged environment with the introduction of the Dell S5000 switch.
The Dell Compellent Storage Center controllers support various I/O adapters, including FC, iSCSI, FCoE, and SAS. A Dell Compellent Storage Center consists of one or two controllers, FC switches, and one or more enclosures. In the above example, two Compellent SC8000 controllers, one Compellent SC220 enclosure, two FC switches, and one 4-port FC HBA card on each Compellent controller are used for the SAN network.
32 independent paths from the connected storage devices. The MPIO framework uses Device Specific Modules (DSMs) to allow path configuration. For Windows Server 2008 and above, Microsoft provides a built-in generic Microsoft DSM (MSDSM), which should be used. For Windows Server 2003 only, Dell Compellent provides a DSM.
While this is a highly robust failover solution, it requires a large number of ports. Dell Compellent introduced virtual ports in Storage Center 5.0. Virtual ports allow all front-end I/O ports to be virtualized. All FC ports can be used at the same time for load balancing as well as for failover to another port.
During initial configuration of the Compellent Storage Center, we created a disk pool labeled “Pool_1” consisting of seven 300 GB drives. The total disk space is 1.64 TB; this can be seen in the screenshot of the Storage Center System Manager GUI shown below in Figure 5.
FC ports on the FC HBA card.

Figure 8: Added Dell PowerEdge Server HBAs to ‘Server Object’ on Dell Compellent Storage Array

The next step is to enable multipathing on Windows Server 2008 R2 Enterprise. Navigate to ‘Start->Administrative Tools->Server Manager->Features->Add Features’...
Figure 9: Installing Windows Server 2008 R2 Enterprise Multipath I/O feature

Now navigate to ‘Start->Control Panel->MPIO’ and click the ‘Add’ button. When prompted for a ‘Device Hardware ID’, input “COMPELNTCompellent Vol” and click the ‘OK’ button. The system will need to be restarted for the changes to take effect.
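The same device claim can also be performed from an elevated command prompt with Windows’ built-in mpclaim utility; a minimal sketch, using the Compellent device hardware ID from the step above:

/* Claim devices matching the hardware ID for MPIO and reboot immediately (-r) */
> mpclaim -r -i -d "COMPELNTCompellent Vol"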
Figure 10: Installing Windows Server 2008 R2 Enterprise Multipath I/O for Compellent array

Next, create a volume and map it to a server object so the respective server can write to the FC storage array.
Figure 11: Created 20 GB “Finance_Data_Compellent” volume on Compellent array
Figure 12: Confirming to keep the default value for ‘Replay Profiles’

The last step in configuring the Dell Compellent Storage Center array is mapping the newly created volume to the server. Once you create the volume, you will be asked if you want to map it to a server object.
Figure 13: Initialized and formatted virtual disk within Windows Server 2008 R2 Enterprise

Now the volume on the Compellent storage array displays in Windows just like a typical hard drive.
The reason we see eight storage ports instead of four is that we are using virtual port mode on the Dell Compellent array, so we see both the physical WWPNs and the virtual WWPNs. We would see similar output (with different WWPNs) on the fabric B FC switch.
Figure 15: Node logins on the fabric A FC switch

You can also see the node WWPN by looking at what is logged in on the physical port, as shown in Figure 16 below.
Figure 16: Check WWPNs logged in on port 2 of fabric A FC switch

We can use the respective port WWPNs to create a specific zoning configuration such as that displayed below in Figure 17.
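Figure 17 itself is an image; as a hedged sketch only (the WWPN placeholders below stand in for the actual values shown in Figure 16), such a zoning configuration on a Brocade FC switch follows the same pattern used elsewhere in this guide:

/* Placeholder WWPNs for illustration; substitute the server HBA and Compellent port WWPNs */
> zonecreate financeServer1_p1,"<server_HBA_WWPN>;<compellent_port_WWPNs>"
> cfgcreate zoneCfg,"financeServer1_p1"
> cfgenable zoneCfg
> cfgsave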
Figure 18: ‘switchshow’ command on fabric A FC switch displaying connections on FC ports

As you can see in Figure 18 above, since we are using virtual port mode on the Dell Compellent storage array, we see “1 N Port + 1 NPIV public” instead of the normal F-Port text shown on port 2, which is connected to the FC HBA on the server.
In Figure 19, you can see how the traditional non-converged topology has changed with the introduction of the Dell S5000 switch in a possible use case. Note how the Dell S4810 Ethernet switches have been replaced by Dell S5000 converged switches. Also, note how the separate Ethernet NIC and FC adapters on the server have been replaced by one converged network adapter (CNA).
Figure 21: Logical view of how the operating system sees the CNA with NPAR and FCoE enabled

Since we are using a Dell QLogic QLE8262 CNA, the first thing we need to do is configure it for FCoE. Note that since we create a NIC team using ‘Failsafe Team’, no configuration is required on the S5000 switches, and the switches are not aware of the NIC team.
Apply the DCB map to the downstream interface going to the server. The same procedure is repeated for the S5000 connecting to fabric B. Note that we used a different ‘fc-map’ and FCoE VLAN. Since fabric A and fabric B are isolated from each other, this was not necessary; however, it may make the setup easier to troubleshoot and understand if some distinction is made between the two fabrics.
Figure 22: Dell S5000 (fabric A) configuration

/* Enable RSTP (Enabled due to VLT config on upstream Z9000s) */
> enable
> config terminal
> protocol spanning-tree rstp
> no disable
>...
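The figure above is truncated. As a hedged sketch only (the VLAN ID, fc-map value, and port numbers below are illustrative assumptions, not the exact values from this guide, and exact command syntax can vary by Dell Networking OS release), the remainder of an S5000 NPIV proxy gateway configuration typically includes DCB, an fcoe-map, and the FC and converged Ethernet interfaces:

/* Assumed values for illustration: FCoE VLAN 1002, fc-map 0efc02, example ports */
> dcb enable
> feature fc npiv-proxy-gateway
/* DCB map: lossless priority group for FCoE traffic (priority 3), lossy group for LAN */
> dcb-map SAN_DCB_MAP_A
> priority-group 0 bandwidth 60 pfc off
> priority-group 1 bandwidth 40 pfc on
> priority-pgid 0 0 0 1 0 0 0 0
> exit
/* FCoE map ties the FCoE VLAN to the FC fabric */
> fcoe-map SAN_FABRIC_A
> fabric-id 1002 vlan 1002
> fc-map 0efc02
> exit
/* FC interface toward the fabric A FC switch */
> interface fibrechannel 0/0
> fabric SAN_FABRIC_A
> no shutdown
> exit
/* Converged Ethernet interface toward the server CNA */
> interface tengigabitethernet 0/4
> portmode hybrid
> switchport
> fcoe-map SAN_FABRIC_A
> dcb-map SAN_DCB_MAP_A
> no shutdown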
In Figure 24 below, you can see the output of the ‘switchshow’ command on the fabric A FC switch. Notice that the port connected to the Dell S5000 switch (port 4) now states “F-Port 1 N Port + 1 NPIV public”...
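If the port does not come up as an NPIV-capable F-Port, NPIV may be disabled on that Brocade port. As a sketch (port 4 matches the figure; Brocade FOS defaults normally have NPIV enabled), it can be checked and enabled per port:

/* Display the per-port configuration, including NPIV capability, then enable NPIV on port 4 */
> portcfgshow 4
> portcfgnpivport 4 1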
Figure 25: Output of the ‘nsshow’ command on the fabric A FC switch

Since we swapped the FC HBA card for a Dell QLogic CNA card, we do have to update the HBA ‘server object’ mapping on the Compellent storage array. To accomplish this, we simply use the Storage...
Additionally, we need to update the FC zoning configuration on each FC switch by removing the FC HBA WWPN and adding the Dell QLogic CNA WWPN. Notice that we do not need to add the Dell S5000 WWPN to the zoning configuration.
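Rather than recreating the zones from scratch as shown in the figures below, an existing zone can also be edited in place on a Brocade FC switch; a minimal sketch, using the zone name from this guide and placeholder WWPNs:

/* Swap the old FC HBA WWPN for the new CNA WWPN, then re-enable and save the config */
> zoneremove financeServer1_p1_test,"<old_HBA_WWPN>"
> zoneadd financeServer1_p1_test,"<new_CNA_WWPN>"
> cfgenable zoneCfg_test
> cfgsave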
Figure 28: Zoning for fabric B FC switch

> zonecreate financeServer1_p2_test,"50:00:d3:10:00:ed:b2:3c;50:00:d3:10:00:ed:b2:42;50:00:d3:10:00:ed:b2:3a;50:00:d3:10:00:ed:b2:40;20:01:00:0e:1e:0f:2d:8f"
> cfgcreate zoneCfg_test,"financeServer1_p2_test"
> cfgenable zoneCfg_test
> cfgsave

Figure 29: Output of the ‘zoneshow’ command on the fabric A FC switch

You can see that our zoning configuration matches what is displayed in Figure 27.
Figure 30: Output of the ‘portshow 4’ command on the fabric A FC switch

To see information on NPIV devices logged into the fabric, use the ‘show npiv devices’ command as shown below.
Figure 33: See FIP-snooping enode information on S5000 fabric A switch

To see a list of configured fcoe-maps, use the ‘show fcoe-map brief’ command.

Figure 34: See list of configured fcoe-maps on S5000 fabric A switch

To see more detailed information on a given fcoe-map, use the ‘show fcoe-map <FCoE_MAP_NAME>’...
Figure 35: See more detailed information on fcoe-map ‘SAN_FABRIC_A’
The traditional LAN/SAN non-converged setup example is shown below in Figure 36. As you can see, a Dell PowerEdge R720 server with a two-port FC HBA is used to connect to two FC switches, which are in turn connected to a Dell PowerVault MD3660f storage array. Each FC port on the server HBA connects to a different fabric.
(MPIO). For Windows Server 2008 R2 Enterprise, three load balancing policy options are available. A load balancing policy determines which path is used to process I/O. PowerVault Load Balancing Policy Options: Round-robin with subset —...
Figure 37: Windows load balancing policy set by default to “Least Queue Depth”

The two FC switches we are using are Brocade 6505s, and the zoning configuration is shown below. The WWPNs starting with ‘10’...
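The load balancing policy can also be set from the command line with mpclaim, provided the disk is claimed by the Microsoft DSM; a sketch, assuming the PowerVault virtual disk is MPIO disk 1 (in mpclaim’s numbering, policy 4 is Least Queue Depth):

/* Set the MPIO load-balance policy for MPIO disk 1 to Least Queue Depth (4) */
> mpclaim -l -d 1 4
/* Verify the per-disk policy */
> mpclaim -s -d 1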
Figure 39: Zoning for fabric B FC switch

> zonecreate financeServer1_p2_test,"10:00:8c:7c:ff:30:7d:29;20:24:90:b1:1c:04:a4:84;20:25:90:b1:1c:04:a4:84;20:44:90:b1:1c:04:a4:84;20:45:90:b1:1c:04:a4:84"
> cfgcreate zoneCfg_test,"financeServer1_p2_test"
> cfgenable zoneCfg_test
> cfgsave

On the fabric A FC switch you can see the WWPN of the server HBA port is ‘10:00:8c:7c:ff:30:7d:28’...
Figure 40: Virtual disk (Finance) created on PowerVault MD3660f storage array

You can see in Figure 41 below that the virtual disk ‘Finance’ was created on the PowerVault storage array and mapped to be accessible by the server ‘D2WK1TW1’. When you are creating the virtual disk, you will be asked if you would like to map the disk to a detected host.
Figure 41: Host Mapping on PowerVault MD3660f Storage Array

Once storage is made available to the HBA on the Windows server, it will appear in the Windows Disk Management administration tool after a disk scan is performed. To perform a disk scan, right click ‘Disk Management’...
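The rescan can also be triggered from an elevated command prompt using Windows’ built-in diskpart utility:

/* Rescan the I/O buses for new disks */
> diskpart
DISKPART> rescan
DISKPART> exit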
Now the virtual disk on the PowerVault storage array displays in Windows just like a typical hard drive. Note that no special configuration was needed on the HBA.

Figure 43: Remote storage on PowerVault as seen in Windows as drive ‘F:’...
Figure 44: Node logins on the fabric A FC switch
Figure 45: Zoning configuration on the fabric A FC switch

You can see that our zoning configuration matches what is displayed in Figure 38. Another useful FC switch command for checking which WWPNs are connected to which ports is ‘switchshow’.
Figure 46: ‘switchshow’ output displays the WWPNs connected to the respective FC ports

Note that both controllers on the PowerVault are active, and each FC switch has two paths to controller 1 and two paths to controller 2.
In Figure 48, you can see how the traditional non-converged topology has changed with the introduction of the Dell S5000 switch in a possible use case. Note how the Dell S4810 Ethernet switches have been replaced by Dell S5000 converged switches. Also, note how the separate Ethernet NIC and FC adapters on the server have been replaced by one converged network adapter (CNA).
Figure 48: Dell S5000 acting as an NPIV Gateway and allowing for a converged infrastructure

As you can see, a Dell PowerEdge R720 server with a two-port CNA is used to connect to two Dell S5000 switches, which are then each connected to an FC switch. The FC switches are connected to the Dell PowerVault MD3660f storage array.
Figure 50: Logical view of how the operating system sees the CNA with NPAR and FCoE enabled

Since we are using a Dell QLogic QLE8262 CNA, the first thing we need to do is configure it for FCoE. Note that since we create a NIC team using ‘Failsafe Team’, no configuration is required on the S5000 switches. See section D: “FCoE CNA adapter configuration specifics”...
Apply the DCB map to the downstream interface going to the server. The same procedure is repeated for the S5000 connecting to fabric B. Note that we used a different ‘fc-map’ and FCoE VLAN. Since fabric A and fabric B are isolated from each other, this was not necessary; however, it may make the setup easier to troubleshoot and understand if some distinction is made between the two fabrics.
Figure 51: Dell S5000 (fabric A) configuration

/* Enable RSTP (Enabled due to VLT config on upstream Z9000s) */
> enable
> config terminal
> protocol spanning-tree rstp
> no disable
>...
Figure 52: Dell S5000 (fabric B) configuration

/* Enable RSTP (Enabled due to VLT config on upstream Z9000s) */
> enable
> config terminal
> protocol spanning-tree rstp
> no disable
>...
In Figure 53 below, you can see the output of the ‘switchshow’ command on the fabric A FC switch. Notice that the port connected to the Dell S5000 switch (port 4) now states “F-Port 1 N Port + 1 NPIV public.”...
Figure 53: Output of the ‘switchshow’ command on the fabric A FC switch

The ‘nsshow’ command output below shows that both the Dell QLogic CNA port and the Dell S5000 switch are logged into fabric A. Note here that the QLogic adapter WWPN is ’20:01:00:0e:1e:0f:2d:8e’...
Figure 54: Output of the ‘nsshow’ command on the fabric A FC switch

Since we swapped the FC HBA card for a Dell QLogic CNA card, we need to update the zoning configuration on each switch, removing the FC HBA WWPN and adding the Dell QLogic CNA WWPN.
Notice that we do not need to add the Dell S5000 WWPN to the zoning configuration.

Figure 55: Zoning for fabric A FC switch

> zonecreate financeServer1_p1_test,"20:14:90:b1:1c:04:a4:84;20:15:90:b1:1c:04:a4:84;20:34:90:b1:1c:04:a4:84;20:35:90:b1:1c:04:a4:84;20:01:00:0e:1e:0f:2d:8e"
> cfgcreate zoneCfg_test,"financeServer1_p1_test"
> cfgenable zoneCfg_test
> cfgsave

Figure 56: Zoning for fabric B FC switch

>...
If we look at the details of what’s connected to port 4 of the fabric A Fibre Channel switch, we see the WWPNs of both the Dell S5000 switch and the Dell QLogic CNA.

Figure 58: Output of the ‘portshow 4’ command on the fabric A FC switch

To see information on NPIV devices logged into the fabric, use the ‘show npiv devices’...
Figure 61: See FIP-snooping enode information on S5000 fabric A switch

To see a list of configured fcoe-maps, use the ‘show fcoe-map brief’ command.

Figure 62: See list of configured fcoe-maps on S5000 fabric A switch

To see more detailed information on a given fcoe-map, use the ‘show fcoe-map <FCoE_MAP_NAME>’...
Figure 63: See more detailed information on fcoe-map ‘SAN_FABRIC_A’
EoR. Note that in the case shown in Figure 64, there is no need to have the LAN traffic traverse all the way to the S5000; we can simply split the LAN and SAN traffic at the S4810 via VLANs and have the S5000 decapsulate the FC packets.
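As a hedged sketch of that split (the VLAN IDs and port numbers below are illustrative assumptions), the S4810 would act as a FIP snooping bridge: the server-facing port carries both the LAN and FCoE VLANs, only the FCoE VLAN runs up to the S5000, and FIP snooping is enabled on the FCoE VLAN alone:

/* Assumed values: LAN VLAN 5, FCoE VLAN 1002, te 0/1 to server, te 0/50 to S5000 */
> enable
> config terminal
> dcb enable
> feature fip-snooping
/* Server-facing converged port */
> interface tengigabitethernet 0/1
> portmode hybrid
> switchport
> no shutdown
> exit
/* Uplink toward the S5000, which is the FCF side */
> interface tengigabitethernet 0/50
> portmode hybrid
> switchport
> fip-snooping port-mode fcf
> no shutdown
> exit
/* FIP snooping only on the FCoE VLAN */
> interface vlan 1002
> tagged tengigabitethernet 0/1
> tagged tengigabitethernet 0/50
> fip-snooping enable
> exit
/* LAN VLAN is tagged on the server port and carried on the LAN uplinks (not shown) */
> interface vlan 5
> tagged tengigabitethernet 0/1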
As mentioned previously, with the Dell PowerEdge M1000e chassis it’s more likely the S5000 switch will be at ToR going to all the storage at EoR. In this case, as shown in Figure 67, we have VLT on the Dell S5000 switch running down to the MXL switches.
Dell IOA because all uplink ports on the IOA are automatically part of one LAG, so there would be no option to use VLT on the S5000 down to the IOA for LAN and have a separate uplink from the IOA up to the S5000 for FCoE.
40 GbE QSFP+ to 4 x 10 GbE breakout cable from the MXL/IOA to the S5000. This would provide for better scalability without running out of 40 GbE ports and leaves the 40 GbE ports on the S5000 available for future expansion or additional upstream bandwidth.
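As a hedged sketch (the stack-unit and port numbers are assumptions), splitting a 40 GbE port into 4 x 10 GbE interfaces on the MXL/IOA side uses the ‘portmode quad’ stack-unit setting, which takes effect only after the configuration is saved and the switch reloaded:

/* Split 40 GbE port 33 on stack-unit 0 into four 10 GbE interfaces */
> enable
> config terminal
> stack-unit 0 port 33 portmode quad
> end
> write memory
> reload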
Figure 69: Fabric A Dell MXL (FSB) configuration

> enable
> config terminal
> dcb enable
> feature fip-snooping
> service-class dynamic dot1p
> interface range fortyGigE 0/33 - 37
>...
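The figure is truncated. As a hedged sketch (the VLAN ID and uplink port are assumptions), the remainder of an MXL FIP snooping bridge configuration typically marks the S5000-facing uplink as the FCF port and enables FIP snooping on the FCoE VLAN:

/* Assumed: FCoE VLAN 1002, fortyGigE 0/33 is the uplink toward the S5000 (FCF) */
> interface fortyGigE 0/33
> fip-snooping port-mode fcf
> exit
> interface vlan 1002
> fip-snooping enable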
VLAN. For this reason, since all DCB and FCoE configuration is also applied by default, no configuration is needed on the Dell IOA. If it is desired to change the VLAN settings, the Chassis Management Controller (CMC) GUI can be used.
Dell S5000 and server is configured properly, the IOA will automatically function as an FCoE transit switch. For the Dell MXL, in Figure 68, we manually applied much of the same configuration, such as uplink failure detection.
HBA adapter in Windows as soon as the drivers are installed. Other adapters, like the Broadcom BCM57810S and Dell QLogic QLE8262, require FCoE to be turned on, which can be done from the vendor-specific CNA management software. More detailed configuration steps for the Broadcom BCM57810S and Dell QLogic QLE8262 CNA adapters are provided below.
Figure 76: View of Broadcom BCM57810S in Broadcom Advanced Control Suite 4

In ‘Control Panel->Network and Internet->Network Connections’, we see eight virtual ports, as shown in Figure 77.

Figure 77: Virtual adapter network connections as seen in Windows

By default, each function is configured only as a NIC.
To keep things simple and based on our requirements, we use one virtual port on each physical port and disable the rest. This can be done easily through Broadcom Advanced Control Suite 4 by selecting the virtual port in the left pane, expanding the ‘Resource Reservations’...
In Figure 81 above, you can see we create a NIC team using ‘Smart Load Balancing™ and Failover (SLB)’. We configure active-standby links for the LAN up to the S5000 switches. Note that the switches will not be aware of the NIC team, and no LAG configuration will be required on the upstream switches.
On the next dialog, we select the respective adapters to NIC team.

Figure 82: Selecting virtual NIC ports on Broadcom BCM57810S to NIC team

Next, we select the port to use for standby.
Figure 84: We leave the ‘LiveLink’ feature on Broadcom BCM57810S at the default setting

Next, we enter the VLAN information. We have set up LAN traffic on VLAN 5 in our topology.

Figure 85: VLAN configuration on Broadcom BCM57810S
Figure 86: Select ‘Tagged’ for the VLAN configuration on Broadcom BCM57810S

Figure 87: We use VLAN 5 for our LAN traffic
Figure 88: We are not configuring additional VLANs

The final step is to confirm the changes.

Figure 89: Commit changes to create NIC team on Broadcom BCM57810S

Once the configuration is complete, we see the NIC team below with both virtual ports as members.
Figure 90: NIC team view in Broadcom Advanced Control Suite 4 of Broadcom BCM57810S

Now Windows Server 2008 R2 Enterprise sees a virtual adapter, as shown in Figure 91 and Figure 92.
A partition can be thought of as a virtual port. This example uses a Dell PowerEdge R720 server with a Dell QLogic QLE8262 CNA and Microsoft Windows Server 2008 R2 Enterprise installed. By default, only the NIC functionality is enabled. FCoE must be manually enabled on the CNA for the virtual HBA ports to be identified in Windows.
Figure 94: Dell QLogic QLE8262 CNA FCoE/NPAR Configuration
Creating a NIC Team

Since the NICs and HBAs are seen as virtual ports, we can treat them as separate entities and create a NIC team with the virtual CNA NIC ports. In Figure 95 and Figure 96, you can see we create a NIC team from the two virtual NIC ports using ‘Failsafe Team’.
As for the LAN network configuration, since ‘Failsafe Team’ is being used, no special configuration needs to be done on the S5000 switches. We can simply have one link going to each S5000 switch, with one port in ‘active’ mode and the other in ‘standby’ mode.
Figure 99: Tagging the NIC team with VLAN 5