disk_size.sh – Quick look at disk details

I’ve been doing many AIX server migrations lately. Some involve taking a mksysb and restoring it; others involve presenting LUNs from an SVC and then doing a migratepv. The latter can result in a large number of disks presented on the host, so I wrote a quick, basic script which gives me the details I need: hdisk name, size, volume group, PVID and serial number.
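
The idea boils down to something like this (a minimal sketch rather than the script itself, assuming the standard lspv, getconf and lscfg commands; the serial number field in lscfg output varies by storage type):

    #!/bin/ksh
    # Minimal sketch: list hdisk name, size, volume group, PVID and serial number.
    lspv | while read DISK PVID VG STATE; do
        SIZE=$(getconf DISK_SIZE /dev/$DISK 2>/dev/null)   # size in MB
        SERIAL=$(lscfg -vl $DISK 2>/dev/null | grep "Serial Number" | awk -F'.' '{print $NF}')
        printf "%-10s %8s MB  %-12s %-18s %s\n" "$DISK" "$SIZE" "$VG" "$PVID" "$SERIAL"
    done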

Maybe someone out there will find it useful.
Continue reading

Check those attributes!

This is a script I wrote purely out of the frustration of verifying every adapter/device attribute that one sets after an AIX installation. The script loops through a list of user-defined attributes and verifies every adapter/device against them. The beauty of the script is that it’s very easy to add, modify or remove checks.

Anyone with some experience in scripting or coding will understand how arrays work. The script allows you to define your own array elements to add, modify or remove checks against certain adapters/devices, as shown in the sketch after the list below.

Currently, the script checks for the following:

    Virtual SCSI Adapters (vscsiX)
    Virtual FC Adapters (fscsiX)
    Hdisk Devices (hdiskX)
    VMO Values
    Number of paths to a disk (>= 2)
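
To illustrate the array-driven approach, here is a minimal sketch (not the actual script); the attribute names and expected values below are placeholders only:

    #!/bin/ksh
    # Each array element is "device-prefix:attribute:expected-value".
    set -A CHECKS \
        "fscsi:fc_err_recov:fast_fail" \
        "fscsi:dyntrk:yes" \
        "hdisk:queue_depth:20" \
        "hdisk:reserve_policy:no_reserve"

    for CHECK in "${CHECKS[@]}"; do
        PREFIX=$(echo "$CHECK" | cut -d: -f1)
        ATTR=$(echo "$CHECK" | cut -d: -f2)
        WANT=$(echo "$CHECK" | cut -d: -f3)
        # Check every device whose name starts with the prefix.
        for DEV in $(lsdev -C -F name | grep "^${PREFIX}"); do
            HAVE=$(lsattr -El "$DEV" -a "$ATTR" -F value 2>/dev/null)
            if [ "$HAVE" = "$WANT" ]; then
                RESULT="OK"
            else
                RESULT="MISMATCH (found: $HAVE)"
            fi
            printf "%-10s %-16s expected %-12s %s\n" "$DEV" "$ATTR" "$WANT" "$RESULT"
        done
    done

Adding another check is then just a matter of appending another "prefix:attribute:value" element to the array.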

Continue reading

Automatically reduce image.data to a single PV

I’ve been working with a client who is going through the process of migrating from physical Power 5 servers to a virtualized Power 7 environment with PowerVM. These are often referred to as P2V migrations around the IBM office 😛 Due to the I/O limitations of the P5 servers, our only method of migration was to take mksysb/savevg backups of the current servers, create NIM resources out of them, and then restore onto the P7 LPARs.

The Power 5 rootvg consisted of two internal disks in an LVM mirror, with the other volume groups backed by either internal disk or locally attached storage. The Power 7 we were migrating to had its storage provided by a shiny DS8800. Given the boot-from-SAN solution we had, we no longer required two disks to form the rootvg, as all the mirroring and redundancy was being handled by the SVCs.
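
As a rough illustration of the kind of image.data edit involved (a minimal awk sketch, assuming a typical layout with VG_SOURCE_DISK_LIST, LV_SOURCE_DISK_LIST, COPIES, LPs and PP fields; the actual script may differ):

    # Rewrite image.data so the restore targets a single, unmirrored PV.
    awk '
        /^[ 	]*LPs=/        { lps = $2 }             # remember logical partitions
        /^[ 	]*COPIES=/     { $0 = "	COPIES= 1" }   # drop the LVM mirror
        /^[ 	]*PP=/         { $0 = "	PP= " lps }    # unmirrored: PPs equal LPs
        /SOURCE_DISK_LIST=/    { $0 = "	" $1 " " $2 }  # keep only the first hdisk
        { print }
    ' /image.data > /tmp/image.data.single

The rewritten file can then be fed to the restore, for example as a NIM image_data resource.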
Continue reading

NFS cross-mounts in PowerHA/HACMP

Combining NFS with PowerHA we can achieve HANFS (a Highly Available Network File System). The basic concept behind this solution is that one node in the cluster mounts the resource locally and exports it via a service IP address. Another node in the cluster is then configured to take over the resource in the event of a failure.
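
As a rough picture of the end state (the volume group, paths and service IP label below are examples only):

    # Conceptual sketch of an HANFS cross-mount:
    #   shared VG:    havg         - varied on by whichever node owns the resource group
    #   local FS:     /ha/export   - filesystem in havg, mounted on the owning node
    #   NFS export:   /ha/export   - exported by the cluster over the service IP
    #   cross-mount:  /ha/data     - NFS-mounted on both nodes via the service IP
    #
    # On each node the cross-mount ends up looking roughly like:
    mount -o vers=3,hard,intr ha_service_ip:/ha/export /ha/data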

If you’re following along, I’m assuming that your cluster is already configured, you have a working IP network, and you have set up a shared volume group between the cluster nodes that will handle the HANFS failover. Before we get started, though, there are a few things which need to be installed/verified.
Continue reading