Step-by-Step NPIV Configuration
For maximum path redundancy, create the client on dual VIOS. We will consider a scenario with a Power6/7 server, two PCI dual/single-port 8 Gb Fibre Channel cards, VIOS level 2.2 FP24 installed, and the VIOS in a shutdown state.
First we need to create a virtual Fibre Channel adapter on each VIOS, which we will later map to a physical Fibre Channel adapter after logging into the VIOS, much as we do for virtual Ethernet.
Please note: create all the LPAR clients as per requirements first, and then configure the virtual Fibre Channel adapters on the VIOS. Since we are mapping one physical Fibre Channel adapter to several hosts, we need to create a corresponding number of virtual Fibre Channel adapters. Virtual Fibre Channel adapters can be created dynamically, but don't forget to add them to the partition profile as well, or the configuration is lost on power-off.
1. Create Virtual Fibre Channel adapters on both VIOS servers.
HMC --> Managed System --> Manage Profile --> Virtual Adapter
Let's say I have defined the virtual Fibre Channel adapter for the AIX client Netwqa with server adapter ID 33 and client adapter ID 33.
Similarly, create one on VIOS2 for multipath redundancy:
If you have more LPARs to configure for NPIV, repeat the above steps with those LPARs' details.
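If you prefer the HMC command line, the same server adapter can be DLPAR-added with `chhwres`. This is a sketch: the managed-system name `P750-SN1234` and the LPAR names `vios1`/`Netwqa` are placeholders for your environment.

```shell
# DLPAR-add a virtual FC server adapter in slot 33 of vios1,
# paired with client slot 33 of the Netwqa LPAR.
chhwres -r virtualio -m P750-SN1234 -o a -p vios1 --rsubtype fc \
  -s 33 -a "adapter_type=server,remote_lpar_name=Netwqa,remote_slot_num=33"
# As noted above, a dynamic change is lost on power-off unless you
# also mirror it into the partition profile.
```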
2. Mapping the defined virtual Fibre Channel adapters to physical HBA ports
Now activate the VIOS LPAR, log on to the VIOS, and check the status of the physical Fibre Channel ports with the `lsnports` command. If the VIOS are already running, first run cfgdev (or cfgmgr from the oem_setup_env root shell) so the newly defined virtual FC adapters are discovered on the VIOS servers.
name    physloc                      fabric tports aports swwpns awwpns
fcs0    U5802.001.008A824-P1-C9-T1   0      64     64     2048   2048
fcs1    U5802.001.008A824-P1-C9-T2   0      64     64     2048   2048
fcs2    U5877.001.0083832-P1-C9-T1   0      64     64     2048   2048
fcs3    U5877.001.0083832-P1-C9-T2   0      64     64     2048   2048
If the value of the 'fabric' parameter is 0, that HBA port is not connected to a SAN switch that supports NPIV; connect a fibre cable between the physical Fibre Channel adapter port and an NPIV-capable SAN switch. If the 'fabric' value is 1, the HBA port is connected to a SAN switch that supports NPIV.
The command displays:
name:    device name of the physical port
physloc: physical location code of the adapter port
fabric:  whether the port is attached to an NPIV-capable fabric (1) or not (0)
tports / aports:  total and available number of NPIV ports
swwpns / awwpns:  total supported and currently available number of WWPNs on the port
After connecting the Fibre Channel cable, execute `lsnports` again; you should now get fabric=1:
name    physloc                      fabric tports aports swwpns awwpns
fcs0    U5802.001.008A824-P1-C9-T1   1      64     64     2048   2048
fcs1    U5802.001.008A824-P1-C9-T2   1      64     64     2048   2048
fcs2    U5877.001.0083832-P1-C9-T1   1      64     64     2048   2048
fcs3    U5877.001.0083832-P1-C9-T2   1      64     64     2048   2048
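A quick way to verify every port at once is to parse the `lsnports` output, assuming the column layout shown above. This is a sketch fed with sample data; on a real VIOS you would pipe `lsnports` into the function instead.

```shell
# Flag any physical FC port whose fabric column is not 1
# (i.e. not logged in to an NPIV-capable switch).
check_fabric() {
  awk 'NR>1 && $3 != 1 { printf "%s: fabric=0, not on an NPIV-capable switch\n", $1; bad=1 }
       END { if (!bad) print "all ports NPIV-ready" }'
}

# Sample lsnports output; replace with:  lsnports | check_fabric
printf '%s\n' \
  'name  physloc                     fabric tports aports swwpns awwpns' \
  'fcs0  U5802.001.008A824-P1-C9-T1  1      64     64     2048   2048' \
  'fcs1  U5802.001.008A824-P1-C9-T2  1      64     64     2048   2048' | check_fabric
# -> all ports NPIV-ready
```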
Run `lsdev -vpd | grep vfchost` to find out which vfchost device represents the virtual FC adapter in a specific slot, or run `lsmap -npiv -all` to list all virtual FC adapters and their mappings to physical adapters.
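When there are many vfchost devices, you can pick out the one behind a given virtual slot by matching the `-C<slot>` suffix of the location code. A sketch using hypothetical sample output; on the VIOS you would pipe `lsdev -vpd | grep vfchost` instead.

```shell
# Find the vfchost device in virtual slot 33 by matching the -C33 location suffix.
sample='vfchost0  U8204.E8A.10FE411-V1-C31  Virtual FC Server Adapter
vfchost1  U8204.E8A.10FE411-V1-C32  Virtual FC Server Adapter
vfchost2  U8204.E8A.10FE411-V1-C33  Virtual FC Server Adapter'

printf '%s\n' "$sample" | awk -v slot=33 '$2 ~ ("-C" slot "$") {print $1}'
# -> vfchost2
```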
Here we are interested in vfchost2 as I am showing the example for connecting vfchost2.
Check the Status and Flags fields:
Status:LOGGED_IN, Flags: a<LOGGED_IN,STRIP_MERGE>
-> The vfchost adapter is mapped to a physical adapter, and the associated client is up and running.
Status: NOT_LOGGED_IN, Flags:1<NOT_MAPPED,NOT_CONNECTED>
-> The vfchost adapter is not mapped to a physical adapter
Status: NOT_LOGGED_IN, Flags:4<NOT_LOGGED>
-> The vfchost adapter is mapped to a physical adapter, but the associated client is not running. If you suspect a problem, check for VFC_HOST errors.
ClntName: only displayed when the mapped VIO client is booted and in a running state.
ClntOS: only displayed when the mapped VIO client is booted and in a running state.
Now we map the device vfchost2 to the physical HBA port fcs1 using `vfcmap -vadapter vfchost2 -fcp fcs1`. Once it is mapped, check the status of the mapping with `lsmap -vadapter vfchost2 -npiv`. Note that the status of the port shows NOT_LOGGED_IN; this is because the client configuration is not yet complete, so it cannot log in to the fabric.
$ vfcmap -vadapter vfchost2 -fcp fcs1
List the adapter using `lsmap -vadapter vfchost2 -npiv`.
Since the AIX client is not yet configured and mapped, the status is NOT_LOGGED_IN, and ClntName, ClntOS, the VFC client name and the DRC are not displayed.
Repeat the above steps on the second VIOS LPAR. If you have more client LPARs, repeat the steps for all of them on both VIOS LPARs.
3. AIX Client Configuration
Create the virtual FC client adapter on the AIX LPAR by navigating the HMC as follows:
HMC --> VIO Client (NETWQA) --> Manage Profile --> Virtual Adapter --> Action --> Create
Create the second virtual FC client adapter with the slot number details shown in the figure below. Make sure the slot numbers match the ones we entered in the second VIOS LPAR while creating the virtual FC server adapter.
Now activate the AIX LPAR and install AIX in it; note that the minimum level required to support NPIV is AIX 5.3 TL9 or AIX 6.1 TL2. Once the AIX installation is complete, install and configure the subsystem driver appropriate to your SAN storage box. If AIX is already running, issue the `cfgmgr` command.
Install the SDDPCM driver (or whichever multipathing driver your storage requires).
You can now check the status of the virtual FC server adapter ports in both VIOS to verify that the ports have successfully logged in to the SAN fabric.
4. Allocating SAN Storage
You can now assign storage to the AIX LPAR. Do proper zoning between the SAN storage and the WWPNs of the AIX client's virtual FC adapters. Use the command below to check the WWPN of a virtual Fibre Channel adapter on the AIX client:
#lscfg -vpl fcs*
You can also get the WWPN from the AIX client's partition profile through the HMC.
NOTE: When viewing the properties of the virtual FC client adapter from the HMC, two WWPNs are shown for each adapter. The second WWPN is not used until a live migration is performed on this LPAR through Live Partition Mobility: the migrated partition then accesses the SAN storage using the second WWPN. So make sure the second WWPN is also configured in zoning and access control.
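The WWPN appears in the `Network Address` field of the `lscfg -vpl fcsX` VPD output. A sketch that extracts it; the sample output fragment below is hypothetical, and on the client you would pipe `lscfg -vpl fcs0` in directly.

```shell
# Sample lscfg -vpl style VPD output (hypothetical values).
lscfg_sample='  fcs0   U9117.MMA.100F6A0-V4-C33-T1  Virtual Fibre Channel Client Adapter

        Network Address.............C05076012345678A
        ROS Level and ID............
        Device Specific.(Z0)........'

# Split on the run of dots and take the value after "Network Address".
wwpn=$(printf '%s\n' "$lscfg_sample" | awk -F'\\.+' '/Network Address/ {print $2}')
echo "$wwpn"
# -> C05076012345678A
```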
Use lspath, 'pcmpath query adapter', 'datapath query adapter', 'datapath query device', 'lsvpcfg', 'pcmpath query essmap', etc., to check that multipathing and the hdisks are configured properly.
It will show output as below. You can see that there are 4 separate paths to hdisk2 through two separate virtual FC adapters, as I have connected my DS storage to the fibre switch with 4 cables per fibre card.
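With both virtual FC adapters zoned, `lspath` should show multiple Enabled paths per disk. A sketch that counts them from sample `lspath`-style output; the disk and adapter names here are placeholders.

```shell
# Count Enabled paths per hdisk from lspath-style output.
sample='Enabled hdisk2 fscsi0
Enabled hdisk2 fscsi0
Enabled hdisk2 fscsi1
Enabled hdisk2 fscsi1'

# On a live system:  lspath | awk '...'
printf '%s\n' "$sample" | awk '$1=="Enabled" {n[$2]++} END {for (d in n) print d, n[d]}'
# -> hdisk2 4
```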
**Zoning on the SAN switch is out of scope for this document; if you want to know how to do zoning, you can drop a comment or mail me.
§ NPIV is only supported on 8Gb FC adapters on p6 hosts. The FC switch needs to support NPIV, but does not need to be 8 Gb (the 8 Gb adapter can negotiate down to 2 and 4 Gb).
§ Maximum number of 64 NPIV adapters per physical adapter (see lsnports)
§ 16 virtual fibre channel adapters per client
§ No support for IP over FC (FCNET)
§ Optical devices attached via virtual fibre channel are not supported at this time
§ Diagnostics are not supported for virtual fibre channel adapters
Important NPIV Commands
Display information about physical NPIV-capable ports:
$ lsnports
Display the virtual Fibre Channel adapters created on the VIO server and their status:
$ lsmap -npiv -all
Display the attributes and mapping of a specific virtual Fibre Channel adapter:
$ lsmap -npiv -vadapter vfchost0
Map a virtual Fibre Channel adapter to a physical Fibre Channel adapter:
$ vfcmap -vadapter vfchost0 -fcp fcs0
Unmap a virtual Fibre Channel adapter:
$ vfcmap -vadapter vfchost0 -fcp
$ portcfgnpivport ------> on an IBM Brocade SAN switch
0 - Disable the NPIV capability on the port
1 - Enable the NPIV capability on the port
Usage: $ portcfgnpivport 10 1
Enables NPIV functionality on port 10 of the SAN switch
Also configure the Fibre Channel devices on the AIX LPAR with dyntrk=yes and fc_err_recov=fast_fail.
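These attributes live on the fscsiX protocol devices (children of the fcsX adapters). A sketch, assuming fscsi0 is the device on your client; a deferred (-P) change takes effect after reboot.

```shell
# Enable dynamic tracking and fast I/O failure on the virtual FC protocol device.
# fscsi0 is a placeholder; list yours with:  lsdev -C | grep fscsi
chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail -P
# -P writes the change to the ODM only; it becomes active after a reboot
# (or after the device is reconfigured while not in use).
```

Verify afterwards with `lsattr -El fscsi0 -a dyntrk -a fc_err_recov`.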