nim_move_up Command
Purpose
Facilitates the enablement of new hardware in AIX® environments.
Syntax
nim_move_up {[ -S ] | [ -K [ -h control_host ] ] | [ -r [ -R ] [ -u ] ]} | { [ -c NIM_client ] [ -i target_ip [ -ending_ip ] ] [ -s subnet_mask ] [ -g gateway ] [ -h control_host ] [ -m managed_sys ] [ -V vio_server [ -e ] [ -D ] ] [ -I img_src ] [ -l resource_dir ] [ -t seconds ] [ -p loops ] [ -j nimadm_vg ] [ -L lpp_source ] [ -U spot ] [ -B bosinst_data ] [ -E exclude_files ] [ -C script_resource ] [ -b installp_bundle ] [ -f fix_bundle ] {{[ -n ] [ -d ]} | -O} [ -q ] [ -T ] [ -M manual_configuration_filenames ]}
Description
The nim_move_up command enables users of existing AIX environments to take advantage of the capabilities available on new POWER® Systems hardware. The command provides an interface that migrates an existing AIX system onto an LPAR residing on a POWER server. The level of AIX on the original machine is raised to a level that supports operation on newer hardware. The original system's hardware resources are closely replicated on the newer hardware. By the end of the migration, the same system is fully running on the new LPAR.
In addition, nim_move_up can use the Virtual I/O capabilities of POWER servers by optionally migrating a client onto virtualized hardware, such as virtual disks and virtual Ethernet.
The nim_move_up command relies on the functionality of NIM and the NIM master's capability of remotely managing and installing NIM clients on the network. The nim_move_up command attempts to use the NIM master and the nimadm command to complete the following actions on an existing NIM client:
- Create a system backup of the client
- Migrate the backup's level of AIX
- Install the backup onto an LPAR that resides on the new POWER server, which is represented in the NIM environment as a new standalone client.
Before the new hardware is installed, the NIM master (on which the nim_move_up command is run) and the NIM clients on the existing hardware must be configured. The clients are the starting point of the migration and eventually turn into the new LPARs.
- The NIM master remains the same.
- The LPARs on the new POWER server correspond to the original NIM clients and are controlled by the NIM master.
- An HMC controls the LPARs on the new POWER servers by communicating with the NIM master through SSH.
- The original NIM clients remain unaffected and still under the control of the NIM master.
The entire migration takes place without any downtime required on the part of the original client. The process can be completed in phases executed sequentially, which allows more control over the process, or can be executed all at once, so that no user interaction is required. The command is delivered as part of the bos.sysmgt.nim.master fileset and requires a functional NIM environment in order to run.
Required Flags
Item | Description |
---|---|
-c NIM_client | Specifies either a NIM standalone client (standalone object type) or a NIM machine group (mac_group object type). The specified client must be reachable over the network from the NIM master and must allow the NIM master to run commands on it. If a NIM machine group is specified in this argument, its members must all reside in the same NIM network. The client is the target machine that will be migrated onto the new LPAR on a POWER server. |
-g gateway | Specifies the IP address of the default gateway that the clients will be configured with after the migration to the POWER server. |
-h control_host | Specifies the host name or IP address of the HMC that is used for hardware control of the POWER server. |
-i target_ip[-ending_ip] | Specifies the IP address that the new migrated client will be configured with after it is installed on the POWER server. If a NIM machine group is supplied to the -c option, a range of IP addresses must be supplied here, and there must be enough addresses in the range to enumerate the number of clients that are to be migrated. |
-I img_src | Specifies the path to the source of the installation images used to create the NIM resources required for migration and installation. This path can be a device (such as /dev/cd0 if using AIX product media) or a path to a location on the file system containing the installation images. |
-l resource_dir | Specifies the path to a location on the file system that will contain any new NIM resources created through the nim_move_up command. The location must have enough space to accommodate an LPP_Source and a spot unless existing resources were provided through the -L and -U options. |
-m managed_sys | Specifies the name of the managed system corresponding to the POWER server as tracked by the HMC. |
-s subnet_mask | Specifies the subnet mask that the clients will be configured with after the migration to the POWER server. |
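For example, all of the required flags might be supplied in a single invocation together with the -O flag (described under Execution and Control Flags) so that the values are saved without running any phases; the client name, addresses, and paths below are the same hypothetical values used in the Examples section:
nim_move_up -c client1 -i 192.168.1.100 -s 255.255.255.0 -g 192.168.1.1 -h hmc1.mydomain.com -m my-p5 -l /big/dir -I /dev/cd0 -O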
Execution and Control Flags
Item | Description |
---|---|
-d | Executes nim_move_up in the background and returns control of the terminal to the caller. The progress of nim_move_up can be tracked through the -S flag. |
-K | Configures SSH keys on the specified HMC. This allows the unattended remote execution of commands from the NIM master without password prompts. This flag cannot be used with any other options except the -h option. |
-n | Runs only the next phase of the nim_move_up migration process. The nim_move_up command exits when the phase completes or fails. If this flag is not provided, all the subsequent phases are run and nim_move_up exits when they have all run or one of them has failed. |
-O | Saves only the supplied values. Values provided through other options are saved, and nim_move_up then exits without executing any phases. This flag cannot be used with any of the other execution and control flags. |
-q | Specifies quiet mode. No output is displayed to the terminal (but is instead kept in the logs). This flag has no effect if nim_move_up runs with the -d flag. |
-r | Unconfigures nim_move_up. This resets all saved data, including saved options, phase-specific data, and current phase information. This operation must be run if the migration process is to be started over for the migration of a new client or set of clients. |
-R | Removes all NIM resources created by nim_move_up in addition to unconfiguring the environment. This flag can only be used with the -r option. |
-S | Displays the status of the current phase or the next phase to be run. All saved values are displayed as well. The nim_move_up command exits immediately after displaying the information. This flag cannot be used with any other options. |
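For example, the SSH keys needed for unattended HMC access could be configured before any phases are run; hmc1.mydomain.com is the same hypothetical HMC host name used in the Examples section:
nim_move_up -K -h hmc1.mydomain.com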
Optional Flags
Item | Description |
---|---|
-b installp_bundle | Specifies an existing installp_bundle NIM resource whose software is installed on each of the newly migrated LPARs in phase 10 (post-installation customization) if this option is provided. |
-B bosinst_data | Specifies an existing bosinst_data NIM resource used by nim_move_up to install the new clients onto the new LPAR. If this option is not provided, nim_move_up generates a bosinst_data resource with default unattended installation values. |
-C script_resource | Specifies an existing script NIM resource that, if provided, nim_move_up runs in phase 10 (post-installation customization) on all of the newly migrated LPARs. |
-D | Forces the use of physical storage controllers instead of virtual SCSI adapters in creating the new LPAR on the POWER server when a Virtual I/O server LPAR is specified. This flag is only valid when used with the -V option. |
-e | Forces the use of physical network adapters instead of shared Ethernet adapters in creating the new LPAR on the POWER server when a Virtual I/O server LPAR is specified. This flag is only valid when used with the -V option. |
-E exclude_files | Specifies an existing exclude_files NIM resource that nim_move_up uses to create a mksysb of the original clients. If this option is not provided, nim_move_up generates an exclude_files resource that excludes the contents of /tmp from the backup. |
-f fix_bundle | Specifies an existing fix_bundle NIM resource whose APARs are installed on each of the newly migrated LPARs in phase 10 (post-installation customization) if this option is provided. |
-j nimadm_vg | Specifies the volume group to be used by the underlying nimadm call for data caching. If this option is not provided, the default value is rootvg. |
-L lpp_source | Specifies an existing LPP_Source NIM resource whose AIX level the target clients will be migrated to. If this option is not provided, nim_move_up attempts to create a new LPP_Source from the installation image source provided through the -I option. |
-M manual_configuration_filenames | Specifies the manual configuration files that phase 4 applies to the associated back-level AIX machines. This flag takes effect only in phase 4 of the nim_move_up command. For more information about this flag, see the Advanced usage section. |
-p loops | Specifies the number of times to execute system analysis tools on the target NIM clients in analyzing resource utilization. The final resource usage data will be the average of the values obtained from each loop. This data will be taken into account when determining the equivalent POWER server resources from which the migrated LPAR will be derived. If this option is not provided, the default is 1 loop. |
-t seconds | Specifies the number of seconds each loop runs for. If this option is not provided, the default is 10 seconds. |
-T | Transports user-defined volume groups from the original clients to the new migrated LPAR. |
-u | Enables nim_move_up to completely roll back the entire nim_move_up migration. Must be used with the -r flag. |
-U spot | Specifies an existing spot NIM resource that will be used in the migration and installation of the clients. If this option is not provided, a new spot is created from the lpp_source NIM resource provided by the -L and -I options. |
-V vio_server | Specifies the LPAR name of a Virtual I/O server that resides on the POWER server denoted by the -m flag. |
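As an illustration, existing migration resources could be saved for later phases with the -O flag; lpp_source_53, spot_53, and migvg below are hypothetical names for an existing LPP_Source resource, an existing spot resource, and a caching volume group:
nim_move_up -L lpp_source_53 -U spot_53 -j migvg -O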
Exit Status
Item | Description |
---|---|
0 | Successful completion. |
nonzero | An error occurred. |
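Because errors are reported through a nonzero exit status, a wrapper script can test the result of a phase; the following is only a sketch, and the log location is the /var/mig2p5 directory described in the Advanced Usage section:
nim_move_up -n -q || echo "Phase failed; review the logs under /var/mig2p5." >&2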
Security
Only the root user can run this command.
Examples
- To run the first phase and configure all the required options (nim_move_up must not already be configured and running), type:
nim_move_up -c client1 -i 192.168.1.100 -s 255.255.255.0 -g 192.168.1.1 -h hmc1.mydomain.com -m my-p5 -l /big/dir -I /dev/cd0 -n
- To display the status of the nim_move_up command's environment, including all saved configuration input and which phase is to be executed next, type:
nim_move_up -S
- To change the saved host name to a new name and run the next phase while suppressing output, type:
nim_move_up -h hmc2.mydomain.com -n -q
- To run all remaining phases in the background, save your agreement to accept all licenses, and have the prompt returned after the phases begin running, type:
nim_move_up -Y -d
- To unconfigure nim_move_up, discard all saved input, and reset the command to run phase 1, type:
nim_move_up -r
All NIM resources previously created by nim_move_up remain unaffected in the NIM environment and will be used by nim_move_up as necessary to migrate another client.
Restrictions
The use of the nim_move_up command requires the following:
- A NIM master running AIX 5L Version 5.3 with the 5300-03 Recommended Maintenance package, or later.
- Perl 5.6 or later.
- OpenSSH (from the Linux® Toolbox CD).
- At least one standalone NIM client running AIX 4.3.3 update or later in the environment.
- Product media version AIX 5L Version 5.2 with the 5200-04 Recommended Maintenance package or later, or product media version AIX 5.3 or later (the equivalent LPP_Source and spot NIM resources can also be used).
- A POWER server with sufficient hardware resources to support the target clients' equivalent POWER server configuration.
- An installed and configured Virtual I/O server, if virtual resources will be used to migrate the clients.
- An HMC controlling the POWER server, along with sufficient privileges to power on, power off, and create LPARs.
The nim_move_up command fails to execute properly if any of the preceding requirements are not met or if the command is executed by a non-root user.
Implementation Specifics
The nim_move_up command takes a phased approach to migrating an existing client onto a new LPAR. The following phases make up the process:
1. Create NIM resources. The NIM resources required to perform the migration steps are created if they do not already exist.
2. Assess premigration software. An assessment of which software is installed and which software cannot be migrated is performed on each target client. Any software missing from the LPP_Source is added from the source of the installation images (such as product media) that is provided to nim_move_up.
3. Collect client hardware and usage data. Data about each target client's hardware resources are gathered. Also, an attempt to assess the average use of those resources over a given amount of time is made.
4. Collect POWER server resource availability data and translate client resource data. The managed system that is provided is searched for available hardware resources. The data gathered in the previous phase is used to derive an equivalent LPAR configuration that uses the managed system's available resources. If a Virtual I/O server LPAR was provided to work with, the derived client LPAR is created with virtual I/O resources instead of physical I/O resources. The appropriate adapters and configuration are created on the Virtual I/O server as needed.
5. Create system backups of target clients. After NIM performs a mksysb of each target client, the corresponding mksysb NIM resources are created.
6. Migrate each system backup. Using the NIM resources designated by nim_move_up, each mksysb resource is migrated to the new level of AIX by the nimadm command. The original mksysb NIM resources are preserved and new mksysb NIM resources are created for the new migrated mksysb resources.
7. Allocate NIM resources to new LPARs. NIM standalone client objects are created for each new derived LPAR created in phase 4 using the network information provided to nim_move_up. Appropriate NIM resources are allocated and a bos_inst pull operation is run on each NIM client (NIM does not attempt to boot the client).
8. Initiate installation on LPARs. Each LPAR is rebooted using the control host (HMC) and the installation is initiated. The phase's execution stops after the installation has begun (that is, the progress of the installation is not monitored).
9. Assess post-migration software. After each installation has completed, the overall success of the migration is assessed, and a report of software problems encountered during migration is generated. If any filesets failed to migrate, the errors reported for that fileset must be corrected manually.
10. Customize post-installation. If an alternate LPP_Source, fileset list, or customization script was provided, a customized NIM operation is performed on each client with the values provided. This allows for the optional installation of additional software applications or for any additional customization.
In order to successfully migrate a NIM client onto a new LPAR, each of these phases (with the exception of phase 10, which is optional) must complete successfully. If all phases complete successfully, a new NIM client object is present in the NIM environment that represents the migrated LPAR, which runs the level of AIX supplied through the nim_move_up source of installation resources.
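For instance, after configuration is complete, the remaining phases could be run unattended in the background and their progress checked afterward with the status flag:
nim_move_up -d
nim_move_up -S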
After all prerequisites needed to run nim_move_up have been satisfied, the nim_move_up command runs in two stages: configuration and phase execution.
Configuration
Before the nim_move_up command can begin its phases, input must be provided to the application. The required input includes a list of the NIM clients to be migrated, TCP/IP configuration information for the new migrated LPARs, and the POWER server name. For a complete list of required nim_move_up configuration options, refer to the Required Flags (they are also denoted by an * (asterisk) in the nim_move_up_config SMIT menu). Optional input, such as whether a Virtual I/O server is specified, also affects the behavior of nim_move_up and the end result of the migration process (if a Virtual I/O server is specified, virtual I/O resources are used to create the migrated LPAR).
To configure nim_move_up through SMIT, type:
smitty nim_move_up_config
or
smitty nim_move_up
and select the Configure nim_move_up Input Values option. At the menu, fill in the options with values that reflect the requirements of your environment. For further information about the nim_move_up command's SMIT interface, see the SMIT usage section below.
Phase Execution
After all input is supplied, phase execution begins at phase 1 and continues sequentially. If a phase encounters an error, nim_move_up attempts to execute the failed phase the next time it runs. Optionally, you can specify that nim_move_up start only the next phase or attempt all remaining phases.
To begin phase execution through SMIT, type:
smitty nim_move_up_exec
or
smitty nim_move_up
and select the Execute nim_move_up Phases option. Answer the Execute All Remaining Phases? option and press Enter. The phases begin executing.
To execute only the next phase from the command line, type the following command:
nim_move_up -n
To specify that nim_move_up execute all remaining phases, type the following command:
nim_move_up
In addition to executing phases, this command can also modify saved configuration options if the appropriate flags are supplied.
SMIT Usage
The nim_move_up SMIT screens can be accessed through the following fast path:
smitty nim_move_up
The following menu options apply to the nim_move_up command:
- Display the Current Status of nim_move_up
- Equivalent to running nim_move_up with the -S flag. The next phase to be executed and a listing of all the saved options are displayed.
- Configure nim_move_up Input Values
- Through this screen, all required and optional input to nim_move_up can be configured. All values entered into the fields are saved and are remembered through subsequent runs of nim_move_up and through subsequent uses of this SMIT screen. This screen can be used at any time to modify saved values after phases have been run.
- Execute nim_move_up Phases
- Provides a simple interface to execute nim_move_up phases. The phases can be executed one at a time or all at once, depending on how the questions on this screen are answered.
- Configure SSH Keys on Target HMC
- Provides a simple interface for setting up SSH keys on the remote control host (HMC). This does the equivalent work of passing the -K flag on the command line. Configuring SSH keys on the remote control host enables the unattended remote execution of commands from the NIM master, which is necessary for completing all the phases (some of which remotely execute commands on this system).
- Unconfigure nim_move_up
- Provides an interface to unconfigure the nim_move_up command's environment. This removes all state information, including which phase to execute next, saved data files generated as a result of the execution of some phases, and all saved input values. Optionally, all NIM resources created through nim_move_up can be removed as well. This screen does the equivalent work of the -r command line option.
Advanced Usage: Understanding the mig2p5 Framework
The mig2p5 framework consists of the /var/mig2p5 directory and serves as a means for nim_move_up to remember its state between subsequent invocations. Its existence and its use by nim_move_up are completely transparent to the user: the directory is created by nim_move_up and its values are initialized if it does not exist, and it is removed when nim_move_up is unconfigured. The contents of this directory are easily readable and can be very helpful in troubleshooting problems with nim_move_up; the directory contains all of the logs generated in the phases and contains editable files that affect the behavior of nim_move_up in ways that are not allowed by the command line (such as forcing nim_move_up to run a certain phase out of order).
- config_db
- Contains all of the saved configuration options passed to nim_move_up through the command-line arguments or the nim_move_up_config SMIT menu. Each line in the file takes the following form:
option_name:value
- current_phase
- Contains the number of the phase that will be executed at the next invocation of nim_move_up. Before running this phase, nim_move_up ensures that all previous phases have run successfully. This information is also maintained elsewhere within the mig2p5 framework.
- global_log
- Contains the output of all phases that have been run since the last time the mig2p5 framework was initialized.
- client_data/
- Contains files that are generated by nim_move_up during phases 3 and 4, in which each of the original clients' system resources and utilization are monitored and quantified into configuration files. The available resources in the POWER server are also quantified into corresponding text files. All the data in these files will be taken into account when determining the hardware profile of the newly derived LPAR on the POWER server. These files are intended to be machine-readable data files for the nim_move_up command's internal use. Do not manually modify or create them.
- phase#/
- Contains data specific to the corresponding phase denoted by the number in its name (#). Every phase has a directory (for example, phase1/, phase2/, and so on).
- phase#/log
- Contains all output displayed during a phase's run. If a phase runs multiple times (such as after an error has been corrected), all new output is appended to any text already existing in the file. This log is helpful in investigating failures related to this phase after they have occurred. The global_log file is composed of all the phases' log files, and all output in that file is arranged in the order that it was originally displayed.
- phase#/status
- Indicates whether this phase succeeded or failed when it was last run. This file is used by nim_move_up to determine whether a subsequent phase can be run. A phase can run only if all of the previous phases' status files contain the string success. The status file contains the string failure if the phase encountered an error that caused it to fail the last time it was run.
- pid
- Contains the nim_move_up process ID number when nim_move_up is running in the background. This file is cleaned up when the process finishes. As long as this file exists and contains a process ID, nim_move_up cannot run phases because concurrent runs of nim_move_up are not supported.
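For example, the following commands inspect these files when troubleshooting; phase5 stands in for whichever phase is under investigation:
cat /var/mig2p5/current_phase
grep . /var/mig2p5/phase*/status
tail /var/mig2p5/phase5/log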
What is the manual configuration file and why is it needed?
A manual configuration file is needed in the following situations:
- There is a need for more memory than that determined by the nim_move_up command.
- There is a virtual SCSI adapter (vhost#) created on a Virtual I/O server that you want to use for a Volume Group.
- You want to use a different Virtual Local Area Network (VLAN) ID than the one generated by the nim_move_up tool.
How do I write a manual configuration file?
For each client that is migrated to a POWER Systems environment, the nim_move_up command does all of the hardware configuration-related calculations by default. This file enables you to alter or tune the configuration of the target machine as you choose.
You can change the amount of memory, the size of the volume groups and the Virtual I/O server resources to be used. For example, you can change the VSCSI server adapter to be used for the volume groups created for the target LPAR. You can also change the VLAN IDs to be used for the Ethernet adapters created for the target LPAR.
# manual_cfgfile_dennis file
# MEMORY = min_MB desired_MB max_MB
MEMORY = 256 512 1024
# VIO_VG_INFO = vgname_src, size_in_MB, vhost_to_use
# Where vgname_src is the VG name in source machine, and
# vhost_to_use is the virtual adapter to be used for
# the VG specified in the VIO Server.
VIO_VG_INFO = rootvg,15344,vhost4
# VIO_VLAN_INFO = vlan_id, lpar_name, slot_number
VIO_VLAN_INFO = 1,VIO-server,2
The file can contain blank lines. You can add comments to the file by placing a # at the beginning of a line.
- min_MB
- The minimum memory required for AIX to run.
- desired_MB
- The amount of memory that you want the logical partition to have when activated.
- max_MB
- The maximum memory when dynamic logical-partitioning operations are performed on the partition.
The values of the VIO_VG_INFO field must be comma separated. The vgname_src value is the Volume Group in the source machine for which the manual data must be given. The size_in_MB value is the size of the Volume Group on the target machine and the vhost_to_use value is the vhost* (virtual SCSI server adapter) to be used for this Volume Group on the target POWER server.
Similarly, the values of the VIO_VLAN_INFO field must be comma separated. The vlan_id value is used instead of the one used by the nim_move_up command for the target LPAR’s Ethernet adapter. The lpar_name value is the LPAR name of the Virtual I/O server having the shared Ethernet adapter (SEA), and the slot_number value is the slot number of this SEA on the Virtual I/O server.
It is not necessary to provide all of these values. The nim_move_up command receives the specified values from the manual file and generates the rest based on the client configuration.
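For example, assuming the file shown above were saved to the hypothetical path /tmp/manual_cfgfile_dennis, it could be supplied to phase 4 and the next phase run with:
nim_move_up -M /tmp/manual_cfgfile_dennis -n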
Files
Item | Description |
---|---|
/usr/sbin/nim_move_up | Contains the nim_move_up command. |