Since HMC firmware version 7.7.7, IBM has offered a very useful feature that allows installation of the Virtual I/O Server (VIOS) directly from the HMC. This is very convenient when a new POWER machine arrives at a location and you want to install an operating system, for instance IBM i or AIX, but prefer to do it remotely.

The HMC has provided the installios command for quite some time, but it wasn't very user friendly; to many IBM i administrators the command looks like the occult. With the new 'Virtual I/O Server Image Repository' feature it is very easy to get the VIOS installed, even for someone who has never done it before.

If a brand new machine arrives at a new location and you don't have a NIM server (or don't know what that is), it is enough to get the HMC operational and accessible from a remote location (you can ask a business partner to configure it). The HMC hosts the VIOS installation images and performs almost the entire VIOS installation process automatically.

1. Download the VIOS installation .iso images from the IBM website.

2. Copy the installation images to the HMC. Go to HMC Management – Manage Virtual I/O Server Image Repository.

HMC-ManageVIOS Image repository


 

3. Fill in the information about where the images can be copied from.

HMC-ManageVIOS Image repository_details

HMC-ManageVIOS Image repository_result

4. Select your new VIOS partition profile, click Operations – Activate – Profile, and select Yes for 'Install Virtual I/O Server as part of activation process?'

HMC-ManageVIOS Image repository_install

5. Enter the IP address, subnet mask, and default gateway for the VIOS.

HMC-ManageVIOS Image repository_IP

 

HMC-ManageVIOS Image repository_installation_process

6. The installation process starts and does everything necessary to install the VIOS.

Remember that your HMC acts as a NIM server during this process. Therefore, if the console is behind a firewall, make sure that all ports required by NIM are open.
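As a rough guide, these are the services a NIM-style network install typically relies on; the port numbers below are the standard defaults and are listed here as an assumption, so verify them against your HMC and NIM documentation before opening firewall rules:

```shell
# Ports typically needed between the HMC (acting as install server) and the VIOS partition:
#   bootp/DHCP   67/udp, 68/udp     - network boot of the client
#   tftp         69/udp             - transfer of the boot image
#   portmap/RPC  111/tcp, 111/udp   - RPC port mapper used by NFS
#   NFS          2049/tcp, 2049/udp - mounting the installation resources
#   nimsh        3901/tcp, 3902/tcp - NIM service handler (when used)
```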

7. When the installation has finished, open a virtual terminal on the HMC (vtmenu) and log in to the VIOS for the first time as the padmin user with the default password padmin.

Once the VIOS is installed, you can use it as a virtual media repository. This allows you to create a virtual optical drive into which you can load OS installation images (IBM i, AIX, or Linux) and proceed with the OS installation completely remotely. I described that process some time ago here.
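As a sketch, the VIOS-side commands for setting up such a virtual media repository look roughly like this; the repository size, image name, and vhost adapter below are examples, and the commands are run from the padmin restricted shell:

```shell
# Create the media repository in the rootvg storage pool (size is an example)
mkrep -sp rootvg -size 20G

# Copy an installation image into the repository
mkvopt -name aix_install.iso -file /home/padmin/aix_install.iso

# Create a virtual optical device on an existing vhost adapter
mkvdev -fbo -vadapter vhost0

# Load the image into the virtual drive (vtopt0 comes from the mkvdev output)
loadopt -disk aix_install.iso -vtd vtopt0
```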

I think this is a great HMC feature, and IBM i people should use it. It saves time, it's very convenient, and you don't have to travel to the data room just to insert the SLIC DVD.

**************************************************************************
A flash reporting a possible issue that could occur if a drive fails during drive firmware update can be found here.

 

Until the flash is updated showing how to avoid this issue, only update drive firmware when installing a new machine or if all hosts are offline.

 

**************************************************************************

 

IBM recently released new drive firmware for the Storwize V7000, so I thought I would share the process I use to update it. You can download it from here; the details for the new package can be found here. I recommend performing the drive update before your next Storwize V7000 microcode update.

 

I want to be clear that one of the central goals of the Storwize V7000 is that drive firmware updates can be done online without host disruption. This is possible because each drive can be updated in less than about 4 seconds. The scripts I share below leave a 10-second delay between drives just to be safe. I would still prefer that you do the update during a quiet period.

 

We need to perform this procedure from the command line, as there is (yet) no way to do it from the GUI.

 

There are four steps:

  1. Upload the Software Upgrade Test Utility to determine which drives need updating.

  2. Upload the drive microcode package.

  3. Apply the drive software.

  4. Confirm all drives are updated.
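The steps above can be sketched from a management host as follows. The hostname, file names, and drive IDs are placeholders; the Storwize CLI commands (applydrivesoftware, lsdrive) are run over SSH as the superuser, and the loop keeps the 10-second safety delay mentioned earlier:

```shell
# 1+2. Upload the test utility and the drive firmware package to the cluster
scp IBM2076_INSTALL_upgradetest superuser@v7000:/home/admin/upgrade/
scp IBM2076_DRIVE_firmware      superuser@v7000:/home/admin/upgrade/

# 3. Apply the drive firmware to each drive that needs it,
#    pausing 10 seconds between drives to be safe
for drive_id in 0 1 2 3; do
  ssh superuser@v7000 "applydrivesoftware -file IBM2076_DRIVE_firmware -type firmware -drive $drive_id"
  sleep 10
done

# 4. Confirm each drive now reports the new firmware level
ssh superuser@v7000 "lsdrive 0"   # repeat for every drive ID, check firmware_level
```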

General procedure for the initial setup of a Brocade SAN switch

I. Initial setup

1. Connect to the COM port (9600-8-1, no flow control)

2. Enter the login/password (admin/password)

3. Change the switch name

# switchname

4. Set the IP address

# ipaddrset

5. Change the Domain ID (values from the range 99..127 are recommended)

# switchdisable

# configure

— answer "yes" to the question about changing the fabric parameters

Domain ID: 101 (on the second switch: 102)

— leave the remaining parameters as they are (press "Enter")

# switchenable

# reboot
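After the reboot, the new settings can be verified with standard Fabric OS commands, for example:

```shell
# Check the switch name, domain ID, and port states
switchshow

# Check the IP configuration set with ipaddrset
ipaddrshow

# Check that both switches joined the fabric with distinct domain IDs
fabricshow
```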

In this post we will see how to upgrade Prime Infrastructure using its command-line interface (CLI). You can use the GUI as well, but it is much quicker to upload the patch/upgrade files to a local FTP server and then apply them to Prime. You can use the "show version" CLI command to verify the current Prime version.

primedev/admin# show version 
Cisco Application Deployment Engine OS Release: 2.0
ADE-OS Build Version: 2.0.1.038
ADE-OS System Architecture: x86_64
Copyright (c) 2005-2010 by Cisco Systems, Inc.
All rights reserved.
Hostname: primedev
Version information of installed applications
---------------------------------------------
Cisco Prime Network Control System
------------------------------------------
Version : 1.4.0.45
Patch: Cisco Prime Network Control System Version: Update-1_16_for_version_1_4_0_45

We will use this to upgrade to PI 1.4.1 (patch PI_1.4_0_45_Update_1-39.tar.gz) to support the new 3700 series APs. You can download these patches from the software section of PI on the CCO page, as shown below. You also need to check the release notes (here are the 1.4.1 release notes) to make sure your upgrade path is correct and that the release is compatible with your other products.

PI-Patch-01
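From the ADE-OS CLI, the patch install itself looks roughly like the session below. The repository name, FTP server address, and credentials are placeholders for this sketch; check the exact repository syntax against your ADE-OS version:

```shell
! Define an FTP repository pointing at the server that holds the patch
configure terminal
 repository myftp
  url ftp://192.0.2.10/pub
  user ftpuser password plain ftppass
 exit
exit

! Install the patch from that repository, then verify the new version
patch install PI_1.4_0_45_Update_1-39.tar.gz myftp
show version
```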

Step-by-Step NPIV Configuration

For maximum path redundancy, create the instance on dual VIOS. We will consider a scenario with a POWER6/7 server, two PCI dual/single-port 8 Gb Fibre Channel cards, VIOS level 2.2 FP24 installed, and the VIOS in a shutdown state.
First we need to create a virtual Fibre Channel adapter on each VIOS, which we will later map to a physical Fibre Channel adapter after logging into the VIOS, similar to what we do for Ethernet.
Please note: create all the LPAR clients as required first, and then configure the virtual Fibre Channel adapters on the VIOS. Since we are mapping one physical Fibre Channel adapter to several hosts, we need to create a corresponding number of virtual Fibre Channel adapters. A virtual Fibre Channel adapter can be created dynamically, but don't forget to add it to the profile, or you will lose the configuration on power-off.
1. Create a virtual Fibre Channel adapter on both VIOS servers.
HMC--> Managed System-->Manage Profile-->Virtual Adapter
Let's say I have defined the virtual Fibre Channel adapter for the AIX client Netwqa with server adapter ID 33 and client adapter ID 33.
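Once the virtual FC server adapter exists in the profile, the VIOS-side mapping is done with standard padmin commands; the adapter names below (vfchost0, fcs0) are examples and will differ on your system:

```shell
# After the VIOS boots, rescan devices and list NPIV-capable physical ports
cfgdev
lsnports

# Map the virtual FC server adapter (slot 33 in this example) to a physical 8Gb port
vfcmap -vadapter vfchost0 -fcp fcs0

# Verify the mapping and the client's WWPN login status
lsmap -all -npiv
```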

Make sure you are monitoring file system space on your VIOS.

Why?

If you run out of space in the root file system, odd things can happen when you try to map virtual devices to virtual adapters with mkvdev.

For example, a colleague of mine was attempting to map a new hdisk to a vhost adapter on a pair of VIOS running a recent level of code. He received the following error message, which wasn't very helpful. At first I thought it was because he had not set the reserve_policy attribute for the new disk to no_reserve on both VIOS, but changing that attribute did not help.

$ ioslevel

2.2.1.3

$ mkvdev -vdev hdisk1 -vadapter vhost0 -dev vhdisk1

*******************************************************************************

The command's response was not recognized. This may or may not indicate a problem.

*******************************************************************************
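A quick way to keep an eye on free space is shown below; escaping to the root shell with oem_setup_env is one assumed approach, and the file systems and size threshold are examples:

```shell
# From the restricted padmin shell, drop to the root shell...
oem_setup_env

# ...and check free space in the rootvg file systems (AIX df, sizes in GB)
df -g / /tmp /usr /var

# If / is filling up, look for large files left behind, e.g. old ISO images
# (+100000 means files larger than roughly 50 MB, in 512-byte blocks)
find /home/padmin -xdev -size +100000 -ls
```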

NIM: Getting Started


Let's start with a definition: NIM in AIX is a software product that manages network installation of the Base Operating System (BOS, i.e., simply AIX itself) and of additional software on one or more clients. NIM ships with AIX; nothing extra needs to be purchased or licensed.

No more definitions for now; they will appear as we build out the NIM environment.

Creating the environment:

A NIM environment includes servers and client machines. A server provides resources — the programs and files for installation. A client consumes those resources.

A server (more precisely, a "resource server") can be any machine in the environment.

This means any client can simultaneously be a server for other clients.

One machine in the environment is special: it is called the master. This is our NIM. It merely orchestrates the process of installing certain resources from some server onto some clients. In other words, you can store the AIX distribution files on one machine, install AIX on another, and control the process from a third (that third machine is the NIM master).

Usually a NIM environment is quite simple: the master itself serves all resources, and all the other machines are just clients. The picture below shows why it is sometimes advantageous to keep installation resources on client machines, i.e., to make them resource servers.
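Machines are registered in the environment on the master with the nim command. Here is a sketch for defining a standalone client; the hostname aixclient1 and the attribute values are placeholders for illustration:

```shell
# On the NIM master: define a standalone client named aixclient1
nim -o define -t standalone \
    -a platform=chrp \
    -a netboot_kernel=64 \
    -a if1="find_net aixclient1 0" \
    aixclient1

# Verify the definition
lsnim -l aixclient1
```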

 

When installed on a Windows XP or Windows Server 2003 host machine, the vSphere Client and vSphere PowerCLI may fail to connect to vCenter Server 5.5 because of a handshake failure. vSphere 5.5 uses the OpenSSL library, which, for security, is configured by default to accept only connections that use strong cipher suites. On Windows XP or Windows Server 2003, the vSphere Client and vSphere PowerCLI do not use strong cipher suites to connect to vCenter Server. This results in the error "No matching cipher suite" on the server side and a handshake failure on the vSphere Client or vSphere PowerCLI side.

To work around this issue, perform one of these options:

© 2018 systemadmins.ru All Rights Reserved