Tools for verifying that a cluster is properly installed and configured
Verifies that PowerHA SystemMirror has been tested with your hardware
Verifies that your software environment is compatible with PowerHA SystemMirror
Verifies that your cluster is configured properly
Verifies that PowerHA SystemMirror has been tested on your CPU type
Verifies that all devices installed on your system have been tested with PowerHA SystemMirror
Verifies that all fixes required by PowerHA SystemMirror have been installed
Verifies that PowerHA SystemMirror is properly installed
Verifies that the OS level is correct for PowerHA SystemMirror
Verifies that no known PTFs that break PowerHA SystemMirror are installed
Verifies that all cluster nodes agree on cluster topology
Forces all cluster nodes to agree on cluster topology
Verifies that cluster resources are properly installed
Runs both the networks and resources programs
Checks for proper configuration of network adapters and TTY lines
This check verifies that the resource ownership and takeover distribution policies are correct and consistent.
Allows for selected viewing of PowerHA SystemMirror log files, enables debugging of the Cluster Manager, or enables dumping of all Lock Manager resources.
Allows for selected viewing of script output or syslog output.

scripts [-h host] [-s] [-f] [-d days] [-R file] [event ...]
where:
  -h host  is the name of a remote host from which to gather log data
  -s       filters Start/Complete events
  -f       filters failure events
  -d days  defines the number of previous days from which to retrieve log data
  -R file  is the file to which output is saved
  event    is a list of cluster events
Allows for parsing the hacmp.out file. [default location: /tmp/hacmp.out]

Allows for selected viewing of script output or syslog output.

syslog [-h host] [-e] [-w] [-d days] [-R file] [process ...]
where:
  -h host  is the name of a remote host from which to gather log data
  -e       filters error events
  -w       filters warning events
  -d days  defines the number of previous days from which to retrieve log data
  -R file  is the file to which output is saved
  process  is a list of cluster daemon processes
Allows for parsing the cluster.log file. [default location: /usr/adm/cluster.log]

Allows for selected viewing of PowerHA SystemMirror log files, enables debugging of the Cluster Manager, or enables dumping of all Lock Manager resources.
Enables debugging of the Cluster Manager or the dumping of the lock resource table.

clstrmgr [-l level] [-R file]
where:
  -l level  is the level of debugging performed (0 - 9, where 0 turns debugging off)
  -R file   is the file to which output is saved
Allows for real-time clstrmgr debugging.

Enables debugging of the Cluster Manager or the dumping of the lock resource table.

cllockd [-R file]
where:
  -R file  is the file to which output is saved
Allows dumping of the Lock Resource Table.

Finds volume group inconsistencies among hosts and the disks.

vgs -h hostnames [-v volume_groups]
where:
  -h hostnames      is a list of 2 to 4 hostnames separated by commas
  -v volume_groups  is a list of volume group names separated by commas
Note: Spaces are not allowed between hostname entries or volume group entries.
Checks for consistencies of volume groups among hosts, ODMs, and disks.

Obtains a sequential flow of time-stamped system events.

trace [-t time] [-R file] [-l] daemon ...
where:
  -t time  is the number of seconds to perform the trace
  -R file  is the file to which output is saved
  -l       chooses a more detailed trace option
  daemon   is a list of cluster daemons to trace
Allows for tracing PowerHA SystemMirror daemons (clstrmgr, cllockd, clsmuxpd, clinfo).

Displays errors from the error log (hardware, software, system) that occur in the cluster.

error type [-h host] [-R file]
where:
  type     is one of:
             short   - short error report
             long    - long error report
             cluster - PowerHA SystemMirror specific short error report
  -h host  is the name of a remote host from which to gather log data
  -R file  is the file to which output is saved
Allows for parsing the system error log.

Available actions for clmgr
Available classes for clmgr action "%1$s"
Valid values: %1$s
# Available options for "clmgr %1$s %2$s":

clmgr {[-c|-d <delimiter>][-S] | [-x]} [-v] [-f] [-D] \
      [-T <#####>] [-l {error|standard|low|med|high|max}] \
      [-a {<ATTR#1>,<ATTR#2>,...}] <ACTION> <CLASS> [<NAME>] \
      [-h | <ATTR#1>=<VALUE#1> <ATTR#2>=<VALUE#2> ...]

clmgr {[-c|-d <delimiter>][-S] | [-x]} [-v] [-f] [-D] \
      [-T <#####>] [-l {error|standard|low|med|high|max}] \
      [-a {<ATTR#1>,<ATTR#2>,...}] \
      -M "<ACTION> <CLASS> [<NAME>] [<ATTR#1>=<VALUE#1> <ATTR#2>=<VALUE#2> ...]
          . . ."
ACTION={add|modify|delete|query|online|offline|...}
CLASS={cluster|site|node|network|resource_group|...}

clmgr {-h|-?} [-v]

clmgr [-v] help

The clmgr command provides a consistent, reliable interface for performing PowerHA SystemMirror cluster operations via a terminal or script. All clmgr operations are logged in the "clutils.log" file, including the command that was executed, its start/stop time, and what user initiated the command.

The basic format for using clmgr is consistently as follows:

    clmgr <ACTION> <CLASS> [<NAME>] [<ATTRIBUTES...>]

This consistency helps make clmgr easier to learn and use. Further help is also available at each part of clmgr's command line. For example, just executing "clmgr" by itself will result in a list of the available ACTIONs supported by clmgr. Executing "clmgr ACTION" with no CLASS provided will result in a list of all the available CLASSes for the specified ACTION. Executing "clmgr ACTION CLASS" with no NAME or ATTRIBUTES provided is slightly different, though, since for some ACTION+CLASS combinations that may be a valid command format. So to get help in this scenario, it is necessary to explicitly request it by appending the "-h" flag. Executing "clmgr ACTION CLASS -h" will result in a listing of all known attributes for that ACTION+CLASS combination. That is where clmgr's ability to help ends, however; it cannot help with each individual attribute. If there is a question about what a particular attribute is for, or when to use it, the product documentation will need to be consulted.

ACTION      a verb describing the operation to be performed.
            NOTE: ACTION is *not* case-sensitive.
CLASS       the type of object upon which the ACTION will be performed.
            NOTE: CLASS is *not* case-sensitive.
NAME        the specific object, of type "CLASS", upon which the ACTION is to be performed.
ATTR=VALUE  optional attribute/value pairs that are specific to the ACTION+CLASS combination. These may be used to specify configuration settings, or to adjust particular operations. When used with the "query" action, ATTR=VALUE specifications may be used to perform attribute-based searching/filtering. When used for this purpose, simple wildcards may be used. For example, "*" matches zero or more of any character, and "?" matches zero or one of any character.

-a  valid only with the "query", "add", and "modify" ACTIONs; requests that only the specified attribute(s) be displayed.
    NOTE: the specified order of these attributes is *not* guaranteed to be preserved in the resulting output.
-c  valid only with the "query", "add", and "modify" ACTIONs; requests all data to be displayed in colon-delimited format.
-D  disables the dependency mechanism in clmgr that will attempt to create any requisite resources if they are not already defined within the cluster.
-f  requests an override of any interactive prompts, forcing the current operation to be attempted (if forcing the operation is a possibility).
-h  requests a help message to be displayed.
-l  activates trace logging for serviceability:
      low:  logs function entry/exit
      med:  adds function entry parameters, as well as function return values
      high: adds tracing of every line of execution, only omitting routine/"utility" functions
      max:  adds routine/utility functions, and also adds a time/date stamp to the function entry/exit messages
    All trace data is written into the "clutils.log" file. This option is typically only of interest when troubleshooting.
-M  allows multiple operations to be specified and run via one invocation of clmgr, with one operation being specified per line.
    All the operations will share a common transaction ID.
-S  valid only with the "query" ACTION and "-c" option; requests that all column headers be suppressed.
-T  a transaction ID to be applied to all logged output, to help group one or more activities into a single body of output that can be extracted from the log for analysis. This option is typically only of interest when troubleshooting.
-v  requests maximum verbosity in the output.
    NOTE: when used with the "query" action and no specific object name, queries all instances of the specified class. For example, "clmgr -v query node" will query and display *all* nodes and their attributes. When used with the "add" or "modify" operations, the final, resulting attributes after the operation is complete will be displayed (only if the operation was successful).
-x  valid only with the "query", "add", and "modify" ACTIONs; requests all data to be displayed in simple XML format.

Sample invocations illustrating this format appear after the messages below.

One or more deletion operations have been detected. Proceed with the deletion(s)? (y|n)
Operation aborted.
Properties:
# Available actions for the "%1$s" class:
ERROR: missing the class specification for action "%1$s".
ERROR: invalid class specification for action "%1$s": %2$s
# Available classes for clmgr action "%1$s":
ERROR: an ambiguous class was specified for action "%1$s": "%2$s"
# Available classes for clmgr action "%1$s" that start with "%2$s":
ERROR: unrecognized clmgr action specified: %1$s
ERROR: unrecognized clmgr class specified: %1$s
ERROR: unrecognized %1$s management action: %2$s
ERROR: unrecognized or ambiguous %1$s action: %2$s
The current operation requires a PowerHA SystemMirror cluster, but a cluster has not yet been created. Therefore, a cluster named "%1$s" will now be created automatically.
The current operation requires a PowerHA SystemMirror node named "%1$s", but that node does not appear to exist. An attempt will be made to create that node now...
The current operation requires a PowerHA SystemMirror node, but no node was specified. Defaulting to the local node, "%1$s"...
The current operation requires a PowerHA SystemMirror node, but no nodes are currently defined. An attempt will be made to create a node on this system now, named "%1$s"...
The current operation requires a PowerHA SystemMirror network, but a network has not yet been created. A network named "%1$s" will be automatically created.
The current operation requires a PowerHA SystemMirror network, but no network was specified. Defaulting to network "%1$s"...
The current operation requires a PowerHA SystemMirror network, but no networks are currently defined. An attempt will now be made to create an Ethernet network in the cluster named "%1$s"...
ERROR: an ambiguous "%1$s" option was detected: "%2$s" Matching options: %3$s
ERROR: an unrecognized "%1$s" option was detected: %2$s
ERROR: invalid dependency attributes were specified.
ERROR: "%1$s" is ambiguous, and could match any of the following: %2$s
ERROR: no cluster is defined.
ERROR: conflicting options were provided, "%1$s" versus "%2$s".
ERROR: option "%1$s" is required when any of the following option(s) are used: %2$s
ERROR: invalid value for option "%1$s" when any of the following option(s) are used: %2$s
ERROR: could not create a temporary storage location in "%1$s".
ERROR: failed to enable %1$s.
ERROR: failed to disable %1$s.
ERROR: %1$s is not currently enabled.
ERROR: "%1$s" has exceeded the maximum allowed length of %2$s.
Never
ERROR: "%1$s" may only be used with "%2$s".
ERROR: missing the action for "%1$s manage %2$s ...".
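To illustrate the command format described in the help text above, here are a few sample invocations. These are illustrative sketches only: the object names ("nodeA", "nodeB") are hypothetical, and the available attributes depend on the ACTION+CLASS combination.

    clmgr query cluster                # display the cluster definition
    clmgr -v query node                # display all nodes and all of their attributes
    clmgr -a STATE query node nodeA    # display only the STATE attribute of node "nodeA"
    clmgr add node nodeB               # ACTION=add, CLASS=node, NAME=nodeB
    clmgr -M "query cluster
              query node nodeA"        # two operations sharing one transaction ID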
ERROR: more than one output formatting flag was specified. The "-c", "-d", and "-x" flags may not be used together.
ERROR: "%1$s" is not a numeric value.
ERROR: "%1$s" is not a negative number.
ERROR: "%1$s" is neither a negative number nor zero.
ERROR: "%1$s" is neither a negative whole number nor zero.
ERROR: "%1$s" is not zero (zero was expected).
ERROR: "%1$s" is neither a positive whole number nor zero.
ERROR: "%1$s" is neither a positive number nor zero.
ERROR: "%1$s" is not a positive number.
ERROR: internal error in verify_is_numeric(): invalid "range" value: "%1$s"
ERROR: "%1$s" is not in the range of %2$s .. %3$s.
ERROR: invalid IPv4 address: "%1$s". A valid IPv4 address must have exactly four octets, "#.#.#.#", each in the range of 0 to 255.
ERROR: missing requisite fileset: "%1$s"
ERROR: "%1$s" is an ambiguous value for "%2$s", and could match any of these possible values:
ERROR: an error occurred while parsing the inputs for the operation.
ERROR: unrecognized or ambiguous %1$s class: %2$s
Warning: no cleanup method was specified for application monitor "%1$s". The stop script "%2$s" from application server "%3$s" will be used as the cleanup method.
ERROR: a name/label must be provided.
ERROR: this operation requires the "%1$s" attribute.
ERROR: "%1$s" does not appear to exist!
ERROR: one or more invalid characters were detected in "%1$s": "%2$s"
For more information about available options and syntax, try "/usr/es/sbin/cluster/utilities/clmgr %1$s". As an alternative, if the PowerHA SystemMirror man pages have been installed, invoke "/usr/es/sbin/cluster/utilities/clmgr -hv" (or "/usr/bin/man clmgr"), searching for "%2$s" in the displayed text.
ERROR: one or more invalid characters were detected in "%1$s" ("%2$s"). Valid characters include letters, numbers, and underscores only.
ERROR: the specified path/file does not appear to be in absolute format: %1$s
ERROR: the specified path/file does not appear to exist on "%2$s": %1$s
ERROR: the "%1$s" attribute's value contains whitespace, which is not allowed: "%2$s"
ERROR: the operation appears to have failed.
ERROR: invalid value specified for "%1$s": "%2$s"
ERROR: "%1$s" requires a positive, integer value.
ERROR: either NETMASK (IPv4) or PREFIX (IPv6) may be specified, but not both.
ERROR: an invalid IPv6 prefix length was specified: %1$s
Warning: "%1$s" must be specified. Since it was not, a default of "%2$s" will be used.
ERROR: an invalid IPv4 netmask was specified: %1$s
ERROR: no nodes are defined on this host, and none were specified via the "NODES" attribute.
ERROR: no logs are currently defined.
ERROR: the specified method, "%1$s", could not be queried.
ERROR: unable to retrieve the cluster manager status for node "%1$s".
ERROR: the cluster manager subsystem appears to be down on node "%1$s".
ERROR: missing required argument: %1$s
ERROR: invalid action "%1$s" for the event summary report. Try using "%2$s" instead of "%1$s".
Warning: could not obtain the clstrmgr.debug debug level.
ERROR: no physical volumes were specified.
ERROR: no resource group label was specified.
ERROR: no node label was specified.
ERROR: the specified snapshot, "%1$s", could not be found.
ERROR: failed to move "%1$s" to "%2$s".
ERROR: failed to move the secondary instance of resource group "%1$s" to "%2$s".
ERROR: no nodes could be found within this cluster.
ERROR: the "SYNC" and "VERIFY" options are both set to "no", so there is nothing to do!
Warning: process "%1$s" has exceeded the maximum allowed run time of %2$s seconds. Aborting.
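The "failed to move" messages above relate to resource group movement. A hedged sketch of such a move, following the standard ACTION CLASS format; the resource group name "rg1", the node name "nodeB", and the exact attribute spelling are assumptions, not confirmed by this catalog:

    clmgr move resource_group rg1 NODE=nodeB    # request that rg1 be moved to nodeB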
ERROR: the "%1$s" option contains a shell execution character: %2$s Warning: no options were provided for log "%1$s". Defaulting to the last %2$s lines. ERROR: the specified report, "%1$s", is not a known IBM PowerHA SystemMirror report. Warning: the directory in "%1$s" does not yet exist. An attempt will be made to create that directory. ERROR: the specified snapshot's information file could not be found. ERROR: unable to list shared volume groups. Verifying with event "%1$s"...Available Application Controllers: Available Application Monitors: Available Dependencies: Available Fallback Timers: Available File Collections: Available File Systems: Available Interfaces: Available Logs: Available Logical Volumes: Available Methods: Available Networks: Available Nodes: Available Persistent Node IPs: Available Physical Volumes: Available Resource Groups: Available Reports: Available Service IPs: Available Sites: Available Snapshots: Available Tapes: Available Volume Groups: Available Groups: Available Users: Available Storage Agents: Available Storage Systems: Available Mirror Disks: Available Mirror Pairs: Available Mirror Groups: Available Repositories: Available Application Controllers Configured for Capacity on Demand: Available Events: Available %1$s Events: Available Event Types: Available Mirror Pools: Available HMCs: ERROR: "%1$s" maximum value is "%2$d". ERROR: "%1$s" requires a positive, decimal numeric value. ERROR: "%1$s" requires the following option(s): %2$s No application data could be found for this report. This is often caused by a lack of, or problem with, application monitoring. Application monitors produce the historical data needed for this report. ERROR: unable to communicate with "%1$s" (%2$s). ERROR: unrecognized %1$s operation attempted: %2$s ERROR: this operation requires IBM PowerHA SystemMirror for AIX Enterprise Edition. Valid values must be in the range %1$s .. %2$s. ERROR: "%1$s" contains whitespace, which is not allowed. Warning: the current operation has exceeded its allotted execution time (%1$s seconds). ERROR: one or more invalid characters were detected in "%1$s" ("%2$s"). Valid characters include letters, numbers, underscores, and dashes only. This change will go into effect on all nodes after the cluster is synchronized. ERROR: the specified disk, "%1$s" ("%2$s"), is not eligible for use. ERROR: unable to determine the name of the cluster. Knowing the name of the cluster is needed in order to stop or start Cluster Aware AIX cluster services. Verify that this cluster is fully configured and has been successfully synchronized before attempting this operation again. BACKUP ERROR: "%1$s" does not appear to exist within site "%2$s", or is not a repository. ERROR: unable to communicate with "%1$s" (%2$s). Verify that the node is powered up and active, and that clcomd is properly configured and running on it. If the problem persists, also check the local clcomd, and verify that the network is functioning normally. Warning: "%1$s" does not appear to be available on "%2$s". WARNING: The COMMUNICATION_PATH of node("%1$s") has changed to "%2$s". The new communication path does not match with hostname. The communication path of PowerHA cluster node must match the host name of that node. Otherwise, The cluster verification may fail. You can run the"hostname" command to display the hostname of that PowerHA node. You have specified multiple dependent groups "%1$s" to be moved. 
No checking of the current state or location will be done, and the move request will be handled directly by the cluster manager. Any errors during processing of the move, or any conflicts in the specified move request, may result in failure of the move request.
ERROR: the specified disk, "%1$s" ("%2$s"), is %3$d MB in size, but must be between %4$d MB and %5$d MB in size.
ERROR: missing required argument: %1$s
ERROR: failed to create "%1$s".
ERROR: this operation requires an IP address or resolvable name.
ERROR: only one resource group may be specified as the parent in a "parent/child" dependency: PARENT="%1$s"
ERROR: a dependency of type "%1$s" has already been specified. It will not be possible to also define a "same node/site" dependency in the same operation.
ERROR: to establish a "same site/node" dependency, the type of the dependency must be specified via the SAME attribute, and two or more resource groups must be specified via the GROUPS attribute.
ERROR: a dependency of type "%1$s" has already been specified. It will not be possible to also define a "different nodes" dependency in the same operation.
ERROR: an error occurred while attempting to add "%1$s" to file collection "%2$s". An attempt will be made to remove the file collection, since it was only partially created.
ERROR: event "%1$s" is not currently supported for notification methods.
ERROR: failed to set the resource distribution preference for network "%1$s" to "%2$s".
ERROR: two sites are already configured within this cluster, "%1$s" and "%2$s".
Attempting to discover available resources on "%1$s"...
ERROR: site-specific attributes were specified, but sites are not defined in this cluster.
ERROR: failed to update the PowerHA SystemMirror configuration.
ERROR: failed to update the RSCT configuration.
*** Warning: since no label was provided, a default label will be provided automatically: "%1$s"
Creating a fallback timer that executes only once...
Creating a fallback timer that executes yearly...
Creating a fallback timer that executes monthly...
Creating a fallback timer that executes weekly...
Creating a fallback timer that executes daily...
ERROR: the specified object already exists: "%1$s"
ERROR: a node priority policy may only be used with a fallover policy of "FUDNP" ("Fallover Using Dynamic Node Priority").
ERROR: a node priority policy of "least" or "most" requires an associated, valid script to be specified using the "NODE_PRIORITY_POLICY_SCRIPT" option.
*** The initial configuration information has been saved. You can now define repository disks and other configuration information.
ERROR: no more than %1$s repository disks may be defined within a single site.
ERROR: repository disk "%1$s" is not available on all the site nodes.
Warning: a valid entry is missing from the %1$s file. A boot IP address or fully qualified host name for each node must be entered in that file on all nodes in the cluster. Please consider adding either "%2$s" or "%3$s" to all the %1$s files in your cluster, then restart clcomd on each node. For example:
    echo "%3$s" >>%1$s
    stopsrc -s clcomd; sleep 2; startsrc -s clcomd
Warning: a valid entry is missing from the %1$s file. An entry is required in %1$s for each node in the cluster, and must exist in that file on every node. The line must be of the form "<IP address> <fully qualified host name> <short host name>".
For example:
    10.4.122.215 yourhost.customer.domain.com yourhost
Please consider adding a line similar to the following to the %1$s file on all your nodes, then restart clcomd:
    echo "%2$s" >>%1$s
    stopsrc -s clcomd; sleep 2; startsrc -s clcomd
Warning: cannot communicate with node "%1$s". This indicates a problem with the clcomd subsystem. Make sure it is running on "%1$s". Check the system configuration files that affect it, /etc/cluster/rhosts and /etc/hosts, for complete entries for all nodes in the cluster. Also consider restarting the service using:
    stopsrc -s clcomd; sleep 2; startsrc -s clcomd
No problems were detected on this node. Of course, that does not mean that no problems actually exist, only that none were found for you automatically. Here are a few tips that might help you troubleshoot this problem.
* Compare the local /etc/hosts and /etc/cluster/rhosts files with their counterparts on the other nodes in this prospective cluster. The rhosts file requires an IP address or fully-qualified host name for each node in the cluster. The hosts file requires the IP address, fully-qualified host name, and short host name for each node in the cluster.
* Make sure only one clcomd process is running, and that it is /usr/sbin/clcomd:
    ps -ef | grep clcomd | grep -v grep
* Perform a full restart (not a refresh) of clcomd using a command like:
    stopsrc -s clcomd; sleep 2; startsrc -s clcomd
*** Warning: since no nodes were specified for this cluster, a one-node cluster will be created with this system: "%1$s"
ERROR: no more than %1$s repository disks may be defined.
ERROR: repository disk "%1$s" is not available on all the cluster nodes.
ERROR: one or more LDAP servers must be provided.
Warning: unable to verify that "%1$s" is free/available, and shared between the specified nodes, "%2$s".
ERROR: since the synchronization failed, the requested node startup option(s) cannot be established. Removing the incomplete node.
ERROR: failed to retrieve the policies for resource group "%1$s".
ERROR: failed to retrieve the node list for resource group "%1$s".
ERROR: failed to add node(s) "%1$s" to resource group "%2$s".
ERROR: one or more disks must be specified.
ERROR: a linked cluster requires one repository disk per site. Please use the SITE option to indicate within which site you intended to add the repository.
ERROR: a linked cluster requires one repository disk per site. Please use the SITE option to indicate within which site you intended to replace the repository.
ERROR: a linked cluster requires one repository per site. Since this cluster was not defined as a linked cluster (it has a cluster type of "%1$s"), it requires just one repository disk for the entire cluster. The SITE option is not appropriate for this environment.
ERROR: "%1$s" does not appear to be shared across all the nodes in the cluster.
ERROR: "%1$s" does not appear to be shared across all the nodes in site "%2$s".
ERROR: a stretched cluster only requires one repository disk for the entire cluster, not one per site. The SITE option is not appropriate for this environment.
ERROR: a cluster without sites only requires one repository disk for the entire cluster. The SITE option is not appropriate for this environment.
ERROR: disk "%1$s" is already in use within volume group "%2$s".
ERROR: "%1$s" is only valid for a linked cluster.
ERROR: a cluster-wide multicast IP address is only valid for non-linked clusters. This cluster was defined as a "%1$s" cluster, which only utilizes a separate multicast IP address for each site.
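The repository messages above distinguish linked clusters (one repository disk per site, selected with the SITE option) from stretched and standard clusters (one repository for the whole cluster). A hedged sketch of adding a repository to one site of a linked cluster; the disk name "hdisk5" and site name "siteA" are hypothetical:

    clmgr add repository hdisk5 SITE=siteA    # linked cluster: the repository is per-site
    clmgr query repository                    # confirm the repository configuration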
ERROR: failed to create the CAA cluster.
ERROR: a site-specific multicast IP address is only valid for linked clusters. This cluster was defined as a "%1$s" cluster, which only utilizes a single multicast IP address defined at the cluster level.
*** Warning: to complete this configuration, sites must be defined.
*** Warning: to complete this configuration, a repository disk must be defined for each site.
ERROR: a linked cluster requires one repository disk per site. After defining the cluster, define sites, adding a repository to each one.
ERROR: a linked cluster requires one multicast heartbeat address per site. After defining the cluster, define sites, including a valid multicast address for each one.
ERROR: failed to add repository: %1$s
ERROR: repository disk "%1$s" is not available on all the nodes in site "%2$s" (%3$s).
ERROR: the "%1$s" option is not appropriate for a cluster type of "%2$s".
ERROR: "%1$s" cannot be found on "%2$s".
ERROR: "%1$s" maps to different disks across %2$s. Either select a different disk, or try specifying the disk that you want by its UUID or PVID.
The specified address, "%1$s", is of the same type (%2$s) as the specified network, "%3$s". So the netmask/prefix information specified for "%1$s" is not needed, and will instead be taken from "%3$s".
ERROR: the specified address, "%1$s", is of a different type (%2$s) than the specified network, "%3$s" (%4$s). A prefix length must be provided.
ERROR: the specified address, "%1$s", is of a different type (%2$s) than the specified network, "%3$s" (%4$s). A netmask must be provided.
ERROR: "%1$s" is already configured for capacity on demand.
ERROR: no configuration specifications were provided.
ERROR: the attempt to define capacity on demand for "%1$s" has failed.
ERROR: more than one object matches the provided information:
*** Warning: since no physical volumes were specified, all the physical volumes in volume group "%1$s" will be used for this mirror pool:
ERROR: the specified reference node, "%1$s", does not appear to host the given volume group, "%2$s".
ERROR: "%1$s" resides within a different volume group, "%2$s".
ERROR: disk "%1$s" is already in use within volume group "%2$s" on node "%3$s".
ERROR: invalid value specified for "%1$s" ("%2$s") with "%3$s".
ERROR: "%1$s" does not apply to "%2$s".
ERROR: "%1$s" requires one of the following options:
ERROR: "%1$s" either does not exist, or is not a raw disk.
Warning: since no heartbeating type was specified, and unicast is not available on all nodes, a default heartbeating style of multicast will be used.
Warning: since no heartbeating type was specified, a default heartbeating style of unicast will be used.
ERROR: one or more nodes in the cluster do not currently have unicast capability. Please perform the necessary updates on all nodes to allow a unicast-based cluster.
*** The initial cluster configuration information has been saved. You can now define repository disks, along with other configuration information. When the cluster configuration is fully defined, verify and synchronize the cluster to deploy the configuration to all defined nodes.
ERROR: no sites will be supported when the cluster policy is set to single node swap mode.
*** Warning: no label was provided for this node. An attempt will be made to use the provided communication path, "%1$s", to determine an appropriate node label automatically.
Using a label of "%1$s" for this node (communication path "%2$s").
ERROR: "%1$s" could not be resolved via the /usr/bin/host command.
Please make sure that "%1$s" is correctly defined in the /etc/hosts file.
Attempting to enable communication with node "%1$s"...
Communication with node "%1$s" is now enabled.
ERROR: failed to enable communication with node "%1$s".
ERROR: failed to add node "%1$s" to the cluster.
ERROR: a site must be specified.
ERROR: failed to create the cluster according to the provided specifications. Running the delete code to attempt to ensure that no partial instance of the cluster remains...
ERROR: failed to delete "%1$s".
The cluster appears to have already been deleted on this node.
Warning: unable to determine the name of the local node.
ERROR: the specified node "%1$s" could not be found in the cluster.
Attempting to bring node "%1$s" offline (current state is "%2$s")...
ERROR: cluster services on node "%1$s" are in a state that prevents the cluster configuration from being removed. A cluster cannot be deleted from a node while cluster services remain active. Please consider running "%2$s offline node %1$s".
ERROR: an error state was reported from node "%1$s". Unable to delete the cluster on that node.
Attempting to delete node "%1$s" from the cluster...
ERROR: the cluster removal appears to have failed on node "%1$s":
From "%1$s": %2$s
Attempting to delete the local node from the cluster...
ERROR: the cluster removal appears to have failed on the local node:
From the local node: %1$s
ERROR: no "%1$s" dependencies involving "%2$s" could be found.
ERROR: no dependencies involving "%1$s" could be found.
ERROR: the specified implicit dependency type, "%1$s", does not match the specified explicit dependency type, "%2$s".
ERROR: the specified dependency, "%1$s", is ambiguous and could be referring to any of the following dependency types: %2$s Please specify which dependency type you intended to delete by using the "TYPE" attribute.
ERROR: an active repository can only be replaced with another disk, not deleted.
ERROR: "%1$s" is not configured for capacity on demand services.
ERROR: failed to remove the capacity on demand settings from "%1$s".
Warning: there are no application controllers to delete.
Warning: there are no application monitors to delete.
Warning: there are no resource group dependencies to delete.
Warning: there are no fallback timers to delete.
Warning: there are no file collections to delete.
Warning: there are no interfaces to delete.
Warning: there are no methods to delete.
Warning: there are no networks to delete.
Warning: there are no nodes to delete.
Warning: there are no persistent IPs to delete.
Warning: there are no resource groups to delete.
Warning: there are no service IPs to delete.
Warning: there are no sites to delete.
Warning: there are no snapshots to delete.
Warning: there are no tapes to delete.
Warning: there are no mirror disks to delete.
Warning: there are no mirror groups to delete.
Warning: there are no mirror pairs to delete.
Warning: there are no storage agents to delete.
Warning: there are no storage systems to delete.
Warning: there are no volume groups to delete.
Warning: there are no logical volumes to delete.
Warning: there are no file systems to delete.
Ensuring that the following nodes are offline: %1$s
ERROR: cluster services on one or more of the specified nodes, "%1$s", are in a state that prevents the cluster configuration from being removed. A cluster cannot be deleted from a node while cluster services remain active. Please consider running "%2$s offline node <NODE>" for each active node.
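As the messages above indicate, cluster services must be stopped on every node before the cluster definition can be removed. A hedged sketch of that sequence, with a hypothetical node name:

    clmgr offline node nodeA    # repeat for each node where services are active
    clmgr delete cluster        # then remove the cluster definition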
ERROR: the specified dependency cannot be deleted: %1$s It is a resource group processing order dependency (its type is "%2$s"), so it can only be modified, not removed. If your intention was to remove a different dependency, of a different type, then you will need to specify the type using the "TYPE" option. This can happen when a dependency uses the same nodes as one of the resource group processing orders.
Deleting the cluster definition from "%1$s"...
Deleting the following nodes from the cluster definition on "%1$s":
Attempting to remove the CAA cluster from "%1$s"...
Attempting to remove the CAA site from "%1$s"...
ERROR: "%1$s" does not appear to exist in volume group "%2$s"!
ERROR: failed to delete "%1$s" from "%2$s".
ERROR: the given disk, "%1$s", is a remote disk, so this operation cannot be performed.
ERROR: unable to open disk "%1$s". Please check whether the disk is already open in some other process.
ERROR: unable to read the reservation state of disk "%1$s".
ERROR: disk "%1$s" is not reserved. This operation only needs to be run to clear the reservation of a disk if it is reserved.
ERROR: volume group "%1$s" is not reserved. This operation only needs to be run to clear the reservation of a volume group if it is reserved.
ERROR: failed to modify "%1$s".
ERROR: no valid modifications were specified for "%1$s".
ERROR: the "%1$s" and "%2$s" options must be used together.
ERROR: the specified implicit dependency type, "%1$s", does not match the specified explicit dependency type, "%2$s".
ERROR: an error occurred while attempting to remove file(s) "%1$s" from file collection "%2$s".
ERROR: an error occurred while attempting to add file(s) "%1$s" to file collection "%2$s".
ERROR: could not determine the node name for interface "%1$s".
ERROR: the "FORMATTING" and "TRACE_LEVEL" attributes are only valid for the "hacmp.out" log.
ERROR: either "%1$s" or "%2$s" must be specified.
ERROR: the resource distribution preference requires at least one service IP to be defined in the cluster, on the specified network ("%1$s").
ERROR: invalid multicast IP address: "%1$s". Multicast addresses must be in the range 224.0.0.0 - 239.255.255.255.
ERROR: bad multicast IP address; it must have exactly four octets: %1$s
ERROR: multicast IP address "%1$s" is reserved: %2$s.
ERROR: disk "%1$s" was targeted for removal, but does not appear to be a valid repository disk.
ERROR: deleting all repositories from a cluster is not valid. Instead, the entire cluster must be removed.
ERROR: deleting all repositories from a site is not valid. Instead, the entire site must be removed.
ERROR: an attempt was made to add "%1$s" as a repository, but it is already being used as a repository disk.
Warning: the specified disk, "%1$s", is already in volume group "%2$s".
ERROR: to modify this timer to make it run only one time, the "%1$s" attribute must be specified.
ERROR: to modify this timer to make it repeat yearly, the "%1$s" attribute must be specified.
ERROR: to modify this timer to make it repeat monthly, the "%1$s" attribute must be specified.
ERROR: to modify this timer to make it repeat weekly, the "%1$s" attribute must be specified.
ERROR: "%1$s" either does not exist, or is not currently available.
ERROR: a replacement disk was not specified, nor are any backup repository disks defined.
ERROR: could not determine the name of the current repository disk. Unable to proceed.
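Several of the messages above concern replacing an active repository disk (which can be replaced, but not simply deleted). A hedged sketch of such a replacement, following the standard ACTION CLASS format; the "replace" spelling of the action and the disk name "hdisk6" are assumptions, not confirmed by this catalog:

    clmgr replace repository hdisk6    # swap the active repository for hdisk6
    clmgr query repository             # verify that the new repository is active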
ERROR: cluster security is currently disabled, rendering the specified configuration options invalid: %1$s
ERROR: cluster security is being disabled, rendering the specified configuration options invalid: %1$s
ERROR: the attempt to modify the capacity on demand settings for "%1$s" has failed.
ERROR: multicast IP address "%1$s" is already in use.
ERROR: the cluster has already been fully formed, so its type cannot be changed.
ERROR: the current cluster configuration contains sites, so its type cannot be set to "NSC".
ERROR: a tie breaker is already defined: %1$s
ERROR: "%1$s" appears to be unavailable.
ERROR: unable to update the non-persistent hostname change support on node "%1$s".
ERROR: repositories must be specified per site in a linked cluster.
ERROR: repositories may only be specified per site in a linked cluster.
LOW,MEDIUM,HIGH,DISABLED
ERROR: the CAA cluster already exists, so the cluster name (%1$s) cannot be changed.
*** Warning: this operation will destroy any information currently stored on "%1$s". Are you sure you want to proceed? (y/n)
ERROR: could not determine the name and PVID of the replacement repository disk. Unable to proceed.
New repository "%1$s" (%2$s) is now active.
ERROR: the CAA cluster already exists, so the cluster type (%1$s) cannot be changed.
ERROR: the specified node list (%1$s) does not contain the local node, "%2$s". It is not possible to remove "%2$s" from the cluster while actively running a cluster configuration operation on it. Either add "%2$s" to the provided node list, or perform this operation on a different cluster node (%3$s).
ERROR: the specified node list (%1$s) does not contain any of the current cluster nodes (%3$s). This operation is not appropriate for deleting the entire cluster. Either add one or more of the current cluster nodes to the provided node list, or perform a standard cluster deletion on "%2$s", followed by a standard cluster creation on one of the new nodes.
ERROR: the specified node list (%1$s) does not contain any changes from the current cluster nodes (%2$s).
ERROR: a fully formed cluster is required before the MONITOR_INTERFACES setting can be adjusted.
ERROR: to confirm that the MONITOR_INTERFACES setting is to be modified, please use the force flag, "-f".
ERROR: no disks were specified for "%1$s"
PARENT_CHILD Dependencies, by Parent
PARENT_CHILD Dependencies, by Child
STOP_AFTER Dependencies, by Source
STOP_AFTER Dependencies, by Target
START_AFTER Dependencies, by Source
START_AFTER Dependencies, by Target
NODECOLLOCATION Dependencies
SITECOLLOCATION Dependencies
ANTICOLLOCATION Dependency
No logical volumes were found in the cluster matching "%1$s".
No logical volumes were found in the cluster.
ERROR: unable to list shared volume groups.
Warning: could not collect all IBM PowerHA SystemMirror data in the allotted time (%1$s seconds).
Warning: could not collect status data for "%1$s" in the allotted time (%2$s seconds).
ERROR: "%1$s" does not appear to exist, or is not a repository.
ERROR: "%1$s", of type "%2$s", does not appear to exist!
ERROR: the specified object, "%1$s", could not be found.
ERROR: could not collect "%1$s" information.
ERROR: "%1$s" was not found in the data that you requested.
ERROR: unable to collect the needed LVM information.
Attempting to terminate any remaining, active processes...
ERROR: the specified group, "%1$s", could not be found.
ERROR: the specified storage technology, "%1$s", does not support vendor IDs in storage systems.
ERROR: the specified storage technology, "%1$s", does not support storage agents.
ERROR: the specified user, "%1$s", could not be found.
SystemMirror Information:
Version:
Build Level:
Cluster Type:
CAA Information:
Cluster Configured:
RSCT Information:
Host Information:
New repository "%1$s" (%2$s) is now active and the configuration has been updated.
Warning: cluster services are already offline on node "%1$s" (state is "%2$s"). Removing that node from the shutdown list.
Warning: cluster services are already online on node "%1$s" (state is "%2$s"). Removing that node from the startup list.
The cluster is now offline.
ERROR: cluster services have experienced a problem which has left the cluster in an error state. Manual intervention will be required to recover from this problem.
Warning: unable to determine if the cluster has gone fully offline (the process is still not complete). Wait a few minutes, then manually check the cluster's state using "clmgr query cluster".
ERROR: at least one node must be specified.
"%1$s" is now offline.
ERROR: cluster services have experienced a problem on node "%1$s" which has left the node in an error state. Manual intervention will be required to recover from this problem.
Warning: unable to determine if node "%1$s" has gone offline (the process is still not complete). Wait a few minutes, then manually check the node's state using "clmgr query node %1$s".
ERROR: at least one resource group must be specified.
Warning: unable to identify a node where resource group "%1$s" is currently online.
ERROR: at least one site must be specified.
ERROR: "%1$s" contains no nodes.
Node "%1$s" in site "%2$s" is now offline.
ERROR: cluster services have experienced a problem on node "%1$s" in site "%2$s" which has left the site in an error state. Manual intervention will be required to recover from this problem.
Warning: unable to determine if node "%1$s" in site "%2$s" has gone offline (the process is still not complete). Wait a few minutes, then manually check the site's state using "clmgr query site %1$s".
The cluster is now online.
Warning: unable to determine if the cluster has come fully online (the process is still not complete). Wait a few minutes, then manually check the cluster's state using "clmgr query cluster".
"%1$s" is now online.
Warning: unable to determine if node "%1$s" has come online (the process is still not complete). Wait a few minutes, then manually check the node's state using "clmgr query node %1$s".
Warning: unable to identify an eligible node where resource group "%1$s" can be brought online.
Node "%1$s" in site "%2$s" is now online.
Warning: unable to determine if node "%1$s" in site "%2$s" has come online (the process is still not complete). Wait a few minutes, then manually check the site's state using "clmgr query site %1$s".
Node "%1$s" in site "%2$s" is now unmanaged.
The cluster is now unmanaged.
"%1$s" is now unmanaged.
Warning: unable to determine if node "%1$s" has come online (the startup process has not completed after %2$d seconds). Wait a few minutes, refresh the cluster's data, then check the node's status to see if it completed launching cluster services.
Warning: unable to determine if node "%1$s" has come online (the startup process has not completed after %3$d seconds). Wait a few minutes, then manually check the node's state using "%2$s -a *START query node %1$s".
ERROR: the Cluster Aware AIX cluster services appear to be offline on "%1$s".
Cluster "%1$s" is already offline.
Cluster "%1$s" is already online.
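The online/offline messages above repeatedly point to "clmgr query cluster" and "clmgr query node" for confirming state. A hedged sketch of a start-and-verify sequence, using a hypothetical node name:

    clmgr online node nodeA    # start cluster services on nodeA
    clmgr query node nodeA     # then confirm the node's state in the output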
The nodes for site "%1$s" are already offline.
The nodes for site "%1$s" are already online.
You have specified dependent group(s) "%1$s" to be brought ONLINE. No checking of the current state or location will be done.
The combination of manage=unmanage and STOP_CAA=yes is not allowed, since you must stop SystemMirror cluster services before stopping CAA.
ERROR: the Cluster Aware AIX cluster services appear to be offline on "%1$s". It is possible that CAA was manually stopped using the STOP_CAA option. In that case, you can use any one of the commands below to start CAA and PowerHA services: "clmgr online cluster START_CAA=yes" OR "clmgr online node START_CAA=yes" OR "clmgr online site START_CAA=yes"
ERROR: node information should not be passed to bring up both PRIMARY and SECONDARY.
ERROR: node information should not be passed to bring down both PRIMARY and SECONDARY.
ERROR: no nodes are defined within this cluster!
ERROR: the recovery effort has failed.
ERROR: no node could be identified on "%1$s".
ERROR: "%1$s" is not a known node.
Attempting to recover from event failures on "%1$s"...
ERROR: failed to recover from script failure(s) on node "%1$s".
Node "%1$s" is not in a failed state at the moment; no recovery needed.
Any current event script failures have been recovered from. Warning: more may occur if the underlying problem has not been corrected.
At the moment, there are no known event script failures on the following node(s): %1$s
Attempting to delete the cluster from AIX ...
ERROR: an error occurred while attempting to remove the cluster from AIX. See "%1$s" for details.
ERROR: no usable IP addresses were found on interface "%1$s".
ERROR: could not detect a required %1$s encryption library. Please install at least one of the following filesets: %2$s
ERROR: the periodic refresh rate must be at least %1$s seconds.
ERROR: the periodic refresh rate cannot be greater than %1$s seconds.
ERROR: the grace period cannot be greater than %1$s seconds.
ERROR: the grace period cannot be greater than the periodic refresh rate.
ERROR: the private key and certificate files must match. Either use both default files, or neither.
ERROR: a matching private key and certificate combination must be provided for security mechanism "%1$s".
Warning: disabling security for all cluster communication.
Warning: could not restart the remote clcomd in the allotted time (%1$s seconds).
Testing cluster communication using the new security configuration...
Warning: could not verify cluster communication in the allotted time (%1$s seconds).
ERROR: communication within the cluster appears to be compromised. A manual restart of clcomd on each affected node may be needed.
Cluster communication using the new security configuration appears to be functioning properly.
ERROR: could not find disk "%1$s" in volume group "%2$s".
ERROR: the specified mirror pool, "%1$s", does not exist within volume group "%2$s".
ERROR: could not identify the member nodes for the specified volume group, "%1$s".
ERROR: the specified reference node, "%1$s", is not valid for "%2$s".
ERROR: could not change the repository disk to "%1$s". Please try another disk.
ERROR: HMC "%1$s" is already defined for node "%2$s".
Warning: could not communicate with HMC "%1$s" via SSH from node "%2$s". Check that "%1$s" is valid. Check that the public key for "%2$s" is correctly installed on "%1$s". Check for any network issues, name resolution problems, or firewall problems.
ERROR: could not communicate with HMC "%1$s" via SSH from node "%2$s". Check that "%1$s" is valid.
Check that the public key for "%2$s" is correctly installed on "%1$s". Check for any network issues, name resolution problems, or firewall problems.
ERROR: no valid HMCs were specified.
ERROR: too many HMCs were specified. At least one, but no more than two, are required.
ERROR: node "%1$s" already has the maximum allowed number of HMCs. Cannot add "%2$s" without first removing one of the existing HMCs from "%1$s".
Warning: forcing the removal of managed system "%1$s" from node "%2$s", so that HMC "%3$s" can be added.
ERROR: managed system "%1$s" is defined for node "%2$s". Adding HMC "%3$s" to "%2$s" will require either removing "%1$s" first, or forcing the "add" operation, which will then remove "%1$s" automatically.
ERROR: HMC "%1$s" is not defined for node "%2$s".
ERROR: a maximum of two HMCs may be specified for any given node.
Warning: specifying a managed system for nodes "%1$s" on HMC "%2$s" will result in a change of node assignments for that HMC.
Current nodes:
Specified nodes:
ERROR: specifying managed system "%1$s" for nodes "%2$s" on HMC "%3$s" will result in the removal of one or more nodes already defined for that HMC.
Warning: modifying HMC "%1$s" to support "%2$s" will remove HMC "%3$s" from managing "%1$s".
ERROR: modifying HMC "%1$s" to support "%2$s" will remove HMC "%3$s" from managing "%1$s". If that is what you intended to do, retry this operation with the force flag. If instead you are trying to add "%1$s" to "%2$s", try using the "add" action instead of "modify".
ERROR: the address or name "%1$s" cannot be resolved. Check the input and name resolution, then try again.
ERROR: no sites will be supported in single node swap mode.
A system mirror group with name "%1$s" exists for node "%2$s". Please remove it first.
A repository mirror group with name "%1$s" exists for site "%2$s". Please remove it first.
A storage system with name "%1$s" exists for site "%2$s". Please remove it first.
Modifying node with name "%1$s" in system mirror group completed with RC = "%2$s".
Modifying site with name "%1$s" in storage system completed with RC = "%2$s".
Modifying site with name "%1$s" in repository mirror group completed with RC = "%2$s".
ERROR: unable to determine the current state of all the nodes in the cluster. This makes it unsafe to modify the configuration%1$s.
ERROR: one or more nodes in the cluster are active. This makes it unsafe to modify the configuration%1$s.
ERROR: cannot delete node "%1$s" from the cluster. Remove node "%1$s" from all resource groups before attempting to delete the node from the cluster.
ERROR: cannot delete node "%1$s" from the cluster. Remove node "%1$s" from site definitions before attempting to delete the node from the cluster.
The specified address, "%1$s", is of the same family (%2$s) as network "%3$s". The specified netmask/prefix will not be used, and will instead be taken from "%3$s".
ERROR: the specified address, "%1$s", is of a different type (%2$s) than the specified network, "%3$s". Either a netmask (IPv4) or prefix length (IPv6) must be provided.
Running the delete code to attempt to ensure that no partial instance of "%1$s" remains ...
Successfully added a primary repository disk.
Successfully added one or more backup repository disks.
To view the complete configuration of repository disks use: "clmgr query repository"
ERROR: to set an optimal value, you cannot use the desired level from the LPAR profile (USE_DESIRED must be set to "No").
ERROR: either use the desired level from the LPAR profile (USE_DESIRED set to "Yes"), or at least one optimal setting must be set.
ERROR: the optimal amount of memory (OPTIMAL_MEM) must be a multiple of 0.25.
ERROR: the optimal number of processing units (OPTIMAL_PU) must be a multiple of 0.01.
ERROR: the optimal number of virtual processors (OPTIMAL_VP) must be at least equal to the whole number of the optimal number of processing units (OPTIMAL_PU), rounded up for any fraction.
ERROR: the optimal number of processing units (OPTIMAL_PU) must be at least 0.1 (a tenth of a processor, for reasonableness) if a value other than zero is specified.
ERROR: the optimal number of virtual processors (OPTIMAL_VP) must be no more than ten times the optimal number of processing units (OPTIMAL_PU).
ERROR: "%1$s" is not yet configured for Resource Optimized High Availability.
ERROR: node "%1$s" does not exist.
ERROR: one or more invalid characters were detected in "%1$s" ("%2$s"). Valid input is either the value "-1" or a positive integer.
Available HMCs:
ERROR: at least one of timeout (TIMEOUT), retry count (RETRY_COUNT), or retry delay (RETRY_DELAY) must be set.
ERROR: failed to remove the "%1$s" HMC.
ERROR: incompatible options "%1$s" and "%2$s".
Cannot verify connectivity between node "%1$s" and HMC "%2$s", because either /etc/cluster/rhosts on node "%3$s" does not contain authorization to execute commands from "%4$s", or the clcomd service is not started on the nodes of the cluster.
Checking HMC connectivity between node "%1$s" and HMC "%2$s": failed! Proceeding anyway, because the operation is being forced.
Checking HMC connectivity between node "%1$s" and HMC "%2$s": failed
Checking HMC connectivity between node "%1$s" and HMC "%2$s": success!
Checking HMC connectivity failed: it is possible to skip the connectivity check (CHECK_HMC=no) to force the HMC creation or modification.
ERROR: failed to add the "%1$s" HMC to HACMPhmcparam.
ERROR: failed to remove the "%1$s" HMC from HACMPhmcparam.
ERROR: the SITE option is not appropriate for a no-site cluster.
Available Application Controllers Configured for Resource Optimized High Availability:
ERROR: "%1$s" is already configured for Resource Optimized High Availability services.
ERROR: the attempt to define Resource Optimized High Availability for "%1$s" has failed.
ERROR: "%1$s" is not configured for Resource Optimized High Availability services.
ERROR: failed to remove the Resource Optimized High Availability settings from "%1$s".
ERROR: the attempt to modify the Resource Optimized High Availability settings for "%1$s" has failed.
ERROR: failed to associate "%1$s" NODES and "%2$s" HMC.
ERROR: failed to associate "%1$s" SITE and "%2$s" HMC.
ERROR: only the "text" value is valid for the "type" parameter with the "roha" report.
ERROR: a linked cluster cannot have more than "%1$s" backup repositories per site (there were "%2$s" backup repositories, and you want to add "%3$s" backup repositories).
ERROR: a no-site cluster or a stretched cluster cannot have more than "%1$s" backup repositories (there were "%2$s" backup repositories, and you want to add "%3$s" backup repositories).
To view the complete configuration of repository disks use: "clmgr query repository" or "clmgr view report repository"
ERROR: repositories may only be specified if nodes are specified.
ERROR: you have indicated that you want to create a cluster that uses unicast communications, but you have also provided a multicast IP address (e.g. "%1$s"). These settings are not compatible with each other.
A multicast address is only valid when multicast communications are in use.
ERROR: you have indicated that you want to create a linked cluster (e.g. TYPE="%1$s"), but you have also specified a cluster-wide repository (e.g. REPOSITORIES="%2$s"). These settings are not compatible with each other. A cluster-wide repository is only valid in a stretched or standard (no sites) cluster.
ERROR: you have indicated that you want to create a linked cluster (e.g. TYPE="%1$s"), but you have also specified a cluster-wide multicast address (e.g. CLUSTER_IP="%2$s"). These settings are not compatible with each other. A cluster-wide multicast address is only valid in a stretched or standard (no sites) cluster.
Note: now that a site has been added to the cluster, the cluster type has been automatically changed from standard to stretched.
Warning: this cluster is currently defined as a standard, no-sites cluster. Since a site ("%1$s") is being added to it, it will be automatically converted to a stretched cluster.
Note: now that the last site has been removed from the cluster, the cluster type has been automatically changed from stretched to standard.
Warning: you are removing the last site ("%1$s") from a stretched cluster. This will result in the cluster automatically being converted to a standard cluster.
%1$s %2$s(%3$s) active
No active repository
%1$s %2$s(%3$s) backup
No backup repository
A stretched/standard cluster type must have a split policy of either None or TieBreaker.
A stretched/standard cluster type must have a merge policy of either Majority or TieBreaker.
ERROR: "%1$s" may only be set for a stretched cluster or linked cluster.
ERROR: the primary operation (%1$s) failed, so an attempt will be made to remove any dependencies that were automatically created in support of the operation.
*** Warning: "%1$s" is not defined as a node within this cluster. However, "%1$s" uses the same communication path as node "%2$s" (%3$s). So "%2$s" will be used instead of "%1$s".
The current operation requires a PowerHA SystemMirror node, but no node was specified. Defaulting to "%1$s"...
The current operation requires a PowerHA SystemMirror resource group named "%1$s", but that resource group does not appear to exist. An attempt will be made to create that resource group now...
The current operation requires one or more PowerHA SystemMirror resource groups, but no resource groups are currently defined.
ERROR: a resource group movement has been requested that affects resource groups involved in more than one dependency. Only one dependency at a time should be involved in any given resource group movement.
There is only one dependency involved in this resource group movement, as required:
ERROR: the specified IP label, "%1$s", could not be resolved by the "host" command. Make sure that "%1$s" is valid, and that it is properly defined in /etc/hosts. If so, confirm that local name resolution is in effect by checking that "local" is the first "hosts" option in /etc/netsvc.conf.
Highest version of all cluster filesets:
Effective runtime version:
ERROR: EFS is not enabled on this system, so no attributes related to EFS may be used: %1$s
WARNING: a resource group movement has been requested that affects resource groups involved in more than one dependency. It is not recommended to move resource groups with multiple dependencies in a single operation. The sequence of resource group movement should be carefully considered. Proceeding with resource group movement.
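As a worked illustration of the Resource Optimized High Availability optimal-value rules listed earlier (the specific numbers here are hypothetical): with OPTIMAL_PU=1.5, OPTIMAL_VP must be at least 2 (1.5 rounded up to a whole number) and no more than 15 (ten times 1.5); OPTIMAL_MEM=2.25 is valid because it is a multiple of 0.25, whereas 2.3 is not.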
ERROR: the specified service label, "%1$s", is already in use by resource group "%2$s". ERROR: the specified WPAR_NAME, "%1$s", does not match the resource group name, "%2$s". ERROR: the specified WPAR_NAME, "%1$s", does not match the new name of the resource group, "%2$s". Warning: since resource group "%1$s" is being renamed to "%2$s" and has a WPAR defined, that WPAR will also be renamed to "%2$s". This is necessary because the WPAR attribute of a resource group, if used, is required to match the name of the resource group. ERROR: raw disks may be specified as PVIDs or UUIDs (or device names, which get converted to UUIDs), but not both. *** Warning: automatically converting "%1$s" to its UUID, "%2$s". ERROR: raw disks may be specified as PVIDs or UUIDs, but not both. ERROR: only one replicated resource is allowed via the MIRROR_GROUP attribute. ERROR: USER_NAME must be provided for HMC connection type "REST API". ERROR: failed to set USER_NAME for "%1$s" HMC. ERROR: failed to set PASSWORD for "%1$s" HMC. ERROR: failed to remove "%1$s" HMC USER_NAME. ERROR: failed to remove "%1$s" HMC PASSWORD. WARNING: the specified USER_NAME or PASSWORD will not be used for HMC connection type "ssh". Warning: could not communicate with HMC "%1$s" with the given credentials from node "%2$s". Check that "%1$s" is valid. Check for any network issues, name resolution problems, or firewall problems. ERROR: could not communicate with HMC "%1$s" with the given credentials from node "%2$s". Check that "%1$s" is valid. Check for any network issues, name resolution problems, or firewall problems. ERROR: the given Network Failure Detection Timeout is not at least 10 seconds less than the Node Failure Detection Time (heartbeat frequency) of %1$s seconds. The Network Failure Detection Timeout value can be set within the range of 5 to %2$s seconds based on the current Node Failure Detection Time value. ERROR: the Network Failure Detection value should be at least 5 seconds, or 0 (unset). WARNING: the HMC connection type has been updated to REST API; make sure USER_NAME and PASSWORD are set for all HMCs. ERROR: USER_NAME must be provided for NovaLink creation. ERROR: failed to associate "%1$s" NODES and "%2$s" NovaLink. Checking NovaLink connectivity failed: the connectivity check can be skipped (CHECK_NOVA=no) to force the NovaLink creation or modification. Available NovaLinks: ERROR: failed to remove the "%1$s" NovaLink. WARNING: the specified PASSWORD will not be used for NovaLink connection type "ssh". ERROR: failed to link "%1$s" NODES and "%2$s" NovaLink. Cannot verify connectivity between node "%1$s" and NovaLink "%2$s": either /etc/cluster/rhosts on node "%3$s" does not authorize command execution from "%4$s", or the clcomd service is not started on the cluster nodes. WARNING: could not communicate with NovaLink "%1$s" via SSH from node "%2$s". Check that "%1$s" is valid, that the public key for "%2$s" is installed correctly on "%1$s", and that there are no network issues, name resolution problems, or firewall problems. Checking NovaLink connectivity between node "%1$s" and NovaLink "%2$s": failed, but continuing because the operation is forced. ERROR: could not communicate with NovaLink "%1$s" via SSH from node "%2$s". Check that "%1$s" is valid, that the public key for "%2$s" is installed correctly on "%1$s", and that there are no network issues, name resolution problems, or firewall problems. Connectivity between node "%1$s" and NovaLink "%2$s": failed. Connectivity between node "%1$s" and NovaLink "%2$s": success. ERROR: the "FORMATTING" attribute is only valid for the "hacmp.out" log.
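The two timeout rules above define a window. For example, if the Node Failure Detection Time derived from the heartbeat frequency is 30 seconds, the Network Failure Detection Timeout must fall between 5 and 20 seconds: at least 5 by the second rule, and at most 30 - 10 = 20 so that it stays at least 10 seconds below the node-level timeout. A value of 0 leaves the timeout unset.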
ERROR: CAA DR capability is not currently available on all nodes across the cluster. ERROR: the CAA tunable dr_enabled cannot be set in a linked cluster. ERROR: There are already nodes configured in PowerHA that have not been added to CAA. You cannot add node "%1$s" until you verify and synchronize the cluster. ERROR: Failover rehearsals are supported in linked clusters only. ERROR: failed to get the managed system for NovaLink "%1$s". ERROR: could not communicate with NovaLink "%1$s" via SSH from the local node. Check that "%1$s" is valid, that the public key for the local node is installed correctly on "%1$s", and that there are no network issues, name resolution problems, or firewall problems. ERROR: Cannot update %1$s unless the current volume group type is "%2$s". Volume group type changes cannot be combined with any other operation. WARNING: Monitor Retry Count cannot be tuned when Monitor Mode is StartUp; setting it to the default value 0. ERROR: Failed to allocate a PVID to "%1$s". Manual intervention is needed to allocate the PVID. ERROR: Fallover using Dynamic Node Priority is not supported when sites are configured. Method "%1$s" is not a "notify" method. Warning: could not communicate with HMC "%1$s" with the given credentials from node "%2$s". Check that "%1$s" is valid. Check for any network issues, name resolution problems, firewall problems, or the HMC version. ERROR: could not communicate with HMC "%1$s" with the given credentials from node "%2$s". Check that "%1$s" is valid. Check for any network issues, name resolution problems, firewall problems, or the HMC version. ERROR: a node priority policy script is only used with a fallover policy of "FUDNP" ("Fallover Using Dynamic Node Priority"). ERROR: a node priority policy timeout is only used with a fallover policy of "FUDNP" ("Fallover Using Dynamic Node Priority"). ERROR: "%1$s" is configured as a tie-breaker disk and cannot be chosen as a repository disk. WARNING: CAA DR capability is not currently available on all nodes across the cluster. WARNING: The value passed for Node Failure Detection Timeout during LPM is 0; updating it to the default value 600. ERROR: Multiple resource groups are not allowed. ERROR: Invalid volume group provided for RG=%1$s. ERROR: A target location must be provided for the cloud backup method. Backup scheduled on %1$s at %2$s. ERROR: Invalid %1$s value; the %2$s value should be less than the %3$s value. Notify method "%1$s" is used to get the backup status. ERROR: Failed to save the backup configuration. ERROR: Invalid configuration. Make sure sufficient storage is available in target location "%1$s" to store the backup file. ERROR: Resource group "%1$s" is not configured for backup. Configured resource groups for backup: ERROR: The "%1$s" option is not supported with the remote_storage backup method. ERROR: Python must be installed to use the backup feature. Incremental backup runs every %1$d hours. ERROR: No resource group is configured for backup. ERROR: failed to remove the backup configuration for resource group "%1$s". All resource groups are selected, so all volume groups from all the resource groups will be configured for backup. ERROR: rootvg backup is only supported with backup profile "%1$s". ERROR: Replicated resources validation failed. ERROR: Provide a resource group (RG_NAME), a bucket name (BUCKET_NAME), or both. ERROR: Provide a start time together with an end time. ERROR: The provided resource group, %1$s, is not configured for cloud backup. Provide the time in yyyy-mm-ddThh format (for example, 2024-07-01T09). ERROR: The provided start time format is invalid. ERROR: The provided end time format is invalid.
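Several messages above concern HMC definitions, REST API credentials, and connectivity checks. A minimal sketch, assuming an HMC is defined with "clmgr add hmc" and using the USER_NAME, PASSWORD, and CHECK_HMC attribute names that appear in these messages (the HMC name and credential values are illustrative):

    # REST API connections require credentials; "ssh" connections ignore them.
    clmgr add hmc hmc01 USER_NAME=hscroot PASSWORD=secret
    # If the connectivity check fails for a definition known to be good,
    # it can be skipped to force the creation, as the messages above note:
    clmgr add hmc hmc01 USER_NAME=hscroot PASSWORD=secret CHECK_HMC=no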
Available resource groups which are configured for cloud backup are: ERROR: The provided bucket, %1$s, is invalid because the given resource group, %2$s, is configured with a different bucket name, %3$s. Backup profile %1$s was added successfully. Backup profile %1$s was updated successfully. ERROR: the specified path %1$s does not appear to exist on "%2$s": ERROR: Target location "%1$s" is not valid. Please provide a valid target location for the cloud backup method. ERROR: For backup profile modification, multiple resource groups are not allowed. ERROR: a storage system with the specified new name already exists: "%1$s". Duplicate storage name entries are not allowed. ERROR: one or more invalid characters were detected in "%1$s". Valid characters include letters, numbers, and underscores only. A name cannot begin with a number. ERROR: The block size of disk %1$s is %2$d. CAA only supports disks with a block size of 512 bytes. Please specify a different disk with a block size of 512 bytes. ERROR: The block size of disk %1$s is NULL. This is due to a failure in cl_querypv. Please check the disk name or specify a different disk with a block size of 512 bytes. Error: Unable to read disk data for %1$s due to a failure in getDiskData. ERROR: A node priority policy should only be specified when a fallover policy of "FUDNP" ("Fallover Using Dynamic Node Priority") is used. The backup process has been triggered in the background with PID %1$s for backup profile %2$s. ERROR: The block size of disk %1$s is %2$d, but CAA does not support 4K disks on this AIX level. Please redefine the disk using a supported block size. ERROR: The block size of disk %1$s is %2$d, but CAA only supports disks with a block size of 4K or 512 bytes. Please redefine the disk using a supported block size. ERROR: When the node priority policy is specified as default, a node priority policy script should not be provided. ERROR: The cluster is not online, so this operation cannot be performed. ERROR: Resource group "%1$s" is not online, so this operation cannot be performed. ERROR: Application monitoring has already been suspended for application "%1$s" in resource group "%2$s". Monitoring for application "%1$s", running in resource group "%2$s", has been successfully suspended. Warning: the state change has not completed in the allotted time (%1$d seconds). ERROR: Application monitoring is already active for application "%1$s" in resource group "%2$s". Monitoring for application "%1$s", running in resource group "%2$s", has resumed successfully. Beginning the process of resuming monitoring. Waiting up to %1$d seconds for the state change to take effect... Beginning the process of suspending monitoring. Waiting up to %1$d seconds for the state change to take effect... ERROR: One or more cluster nodes (%1$s) are currently unmanaged. Making cluster changes is not allowed at this time. ERROR: One or more cluster nodes (%1$s) currently have unsynchronized configuration changes. Unmanaging a cluster node is not allowed at this time. CPU usage monitor interval "%1$s" is not an integer. Provide an integer value in the range of 1 to 120 minutes. The CPU usage monitor interval is empty. Provide an integer value in the range of 1 to 120 minutes. ERROR: The specified bucket %1$s is not reachable. Please check firewall settings and network connectivity. ERROR: Python must be installed to use the backup feature. ERROR: The Python boto3 module must be installed to use cloud backup. ERROR: "%1$s" is a concurrent resource group. Please provide a non-concurrent resource group.
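The RG_NAME/BUCKET_NAME filters and the yyyy-mm-ddThh time window above suggest a query over recorded cloud backups. A hypothetical sketch (the "cloud_backup" class name and all argument values are assumptions; only the attribute names and the time format come from the messages above):

    # List backups of resource group "db_rg" taken during a one-day window.
    clmgr query cloud_backup RG_NAME=db_rg \
        START_TIME=2024-07-01T00 END_TIME=2024-07-02T00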
WARNING: Only non-concurrent resource groups will be considered for backup. The following concurrent resource groups are not considered for backup: %1$s ERROR: The provided bucket %1$s does not exist in any of the cloud-configured backup profiles. ERROR: Node "%1$s" is not part of site "%2$s". ERROR: A problem occurred trying to collect software information from one or more nodes. This report may not be complete. ERROR: Unable to create temporary directory "%1$s" for this operation. ERROR: Unable to copy the "%1$s" file to "%2$s". ERROR: Unable to restore the "%1$s" snapshot to "%2$s". ERROR: Invalid snapshot file (%1$s); invalid format. Verify that the file was created by the cluster snapshot utility. ERROR: Unknown or unsupported IBM PowerHA SystemMirror version number detected: "%1$s"

clmgr {[-c|-d <DELIMITER>][-S] | [-j] | [-x]} [-v] [-f] [-D] \
      [-T <#####>] [-l {error|standard|low|med|high|max}] \
      [-a {<ATTR#1>,<ATTR#2>,...}] <ACTION> <CLASS> [<NAME>] \
      [-h | <ATTR#1>=<VALUE#1> <ATTR#2>=<VALUE#2> ...]

clmgr {[-c|-d <DELIMITER>][-S] | [-j] | [-x]} [-v] [-f] [-D] \
      [-T <#####>] [-l {error|standard|low|med|high|max}] \
      [-a {<ATTR#1>,<ATTR#2>,...}] -M "
          <ACTION> <CLASS> [<NAME>] [<ATTR#1>=<VALUE#1> <ATTR#2>=<VALUE#2> ...]
          . . ."

ACTION={add|modify|delete|query|online|offline|...} CLASS={cluster|site|node|network|resource_group|...} clmgr {-h|-?} [-v] clmgr [-v] help The clmgr command provides a consistent, reliable interface for performing PowerHA SystemMirror cluster operations using a terminal or script. All clmgr operations are logged in the "clutils.log" file, including the command that was executed, the time the command was started and completed, and the user who initiated the command. The basic format of clmgr is consistently as follows:

    clmgr <ACTION> <CLASS> [<NAME>] [<ATTR#1>=<VALUE#1> <ATTR#2>=<VALUE#2> ...]

This consistency helps make clmgr easier to learn and use. Further help is also available at each part of clmgr's command line. For example, executing "clmgr" by itself will display a list of the available ACTIONs supported by clmgr. Executing "clmgr ACTION" with no CLASS provided will result in a list of all the available CLASSes for the specified ACTION. Executing "clmgr ACTION CLASS" with no NAME or ATTRIBUTES provided is slightly different, though, since for some ACTION and CLASS combinations that may be a valid command format. To get help in this scenario, it is necessary to explicitly request it by appending the "-h" flag: executing "clmgr ACTION CLASS -h" will display a listing of all known attributes for that ACTION and CLASS combination. That is where clmgr's built-in help ends, however; it cannot help with each individual attribute. If there is a question about what a particular attribute is for, or when to use it, the product documentation in the IBM Knowledge Center will need to be consulted.
ACTION: a verb describing the operation to be performed. NOTE: ACTION is not case-sensitive.
CLASS: the type of object upon which the ACTION will be performed. NOTE: CLASS is not case-sensitive.
NAME: the specific object, of type "CLASS", upon which the ACTION is to be performed.
ATTR=VALUE: optional attribute/value pairs that are specific to the ACTION and CLASS combination. These may be used to specify configuration settings or to adjust particular operations. When used with the "query" action, ATTR=VALUE specifications may be used to perform attribute-based searching/filtering. When used for this purpose, simple wildcards may be used: "*" matches zero or more of any character, and "?" matches zero or one of any character.
-a: valid only with the "query", "add", and "modify" ACTIONs; requests that only the specified attribute(s) be displayed.
NOTE: the specified order of these attributes is not guaranteed to be preserved in the resulting output.
-c: valid only with the "query", "add", and "modify" ACTIONs; requests all data to be displayed in colon-delimited format.
-D: disables the dependency mechanism in clmgr that attempts to create any requisite resources if they are not already defined within the cluster.
-f: requests an override of any interactive prompts, forcing the current operation to be attempted (if forcing the operation is a possibility).
-h: requests a help message to be displayed.
-j: valid only with the "query", "add", and "modify" ACTIONs; requests all data to be displayed in simple JSON format.
-l: activates trace logging for serviceability:
    low: logs function entry/exit
    med: adds function entry parameters, as well as function return values
    high: adds tracing of every line of execution, only omitting routine/"utility" functions
    max: adds routine/utility functions, and adds a time/date stamp to the function entry/exit messages
    All trace data is written into the "clutils.log" file. This option is typically only of interest when troubleshooting, so use it only at the direction of IBM Support.
-M: allows multiple operations to be specified and run using one invocation of clmgr, with one operation specified per line. All the operations share a common transaction ID.
-S: valid only with the "query" ACTION and the "-c" option; requests that all column headers be suppressed.
-T: a transaction ID to be applied to all logged output, to help group one or more activities into a single body of output that can be extracted from the log for analysis. This option is typically only of interest when troubleshooting.
-v: requests maximum verbosity in the output. NOTE: when used with the "query" action and no specific object name, queries all instances of the specified class. For example, "clmgr -v query node" will query and display *all* nodes and their attributes. When used with the "add" or "modify" operations, the final, resulting attributes will be displayed after the operation completes (only if the operation was successful).
-x: valid only with the "query", "add", and "modify" ACTIONs; requests all data to be displayed in simple XML format.
Some usage examples follow this flag list. ERROR: more than one output formatting flag was specified. The "-c", "-d", "-j", and "-x" flags may not be used together. ERROR: The node specified in "%1$s" is either invalid or not a member of volume group "%2$s". Warning: this removal operation, when completed, will leave no disks in mirror pool "%1$s", resulting in its automatic removal. ERROR: missing the object name for class "%1$s". ERROR: volume group "%1$s" is configured for super-strict mirror pools, which limits it to a maximum of %3$d mirror pools. Since "%1$s" already has %3$d mirror pools, "%2$s" cannot be added to it. Warning: the specified physical volume, "%1$s", does not appear to be managed by the indicated volume group, "%2$s". An attempt will be made to add it... ERROR: could not add physical volume "%1$s" to volume group "%2$s". ERROR: no valid physical volumes appear to be available in volume group "%1$s". ERROR: Encryption algorithm "%1$s" is not valid for the "%2$s" cloud service. Valid values are "%3$s". ERROR: This action cannot be performed by a user with the role %1$s. The following actions are allowed for the role: Usage: %1$s -o odm_class_name [-q criteria] -f input_file_name. The utility reads from standard input if input_file_name is not specified.
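To make the discovery flow and the query flags concrete, here are a few illustrative invocations built only from behaviors described above (object names such as "node1" and "web*" are placeholders):

    clmgr                                 # list the available ACTIONs
    clmgr query                           # list the CLASSes valid for "query"
    clmgr query cluster -h                # list attributes for that combination
    clmgr -v query node                   # display all nodes and their attributes
    clmgr -a STATE query node node1       # display only the STATE attribute
    clmgr query resource_group NAME=web*  # wildcard attribute filtering
    # Run two operations under one shared transaction ID:
    clmgr -M "
    query cluster
    query node
    "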
Usage: %1$s -o odm_class_name [-q criteria] Usage: %1$s input_file_name ERROR: The specified object name, "%1$s", is a PowerHA SystemMirror reserved word. Please choose a different name. ERROR: The action "%1$s" for class "%2$s" is not allowed for a user with the "%3$s" role. ERROR: Cannot set the name of a snapshot to an empty string. Please provide valid input. Warning: Ignoring "%1$s". Either it is a dedicated disk that cannot be used to extend the shared volume group "%2$s", or the disk is already part of some other volume group. Warning: Cluster security is already disabled. WARNING: Failed to enable "%1$s" for RBAC because the fileset is not installed. Please run the operation again after installing the missing fileset. WARNING: Failed to enable "%1$s" for RBAC because "%2$s" is not properly installed. Please run the operation again after installing the fileset properly. Successfully added a primary repository disk for site "%1$s". Successfully added one or more backup repository disks for site "%1$s". INFO: Found %1$d duplicate entries in the %2$s list. Excluding them. ERROR: Multiple storage locations are not allowed. ERROR: The number of configured %1$s resources (%2$d) has exceeded its maximum limit (%3$d) per resource group. Warning: a linked cluster cannot have more than "%1$d" backup repositories per site (there were "%2$d" backup repositories). Hence, the primary repository disk will be replaced with the new repository disk, but the disk will not be added to the backup repository list. Warning: a no-site cluster or a stretched cluster cannot have more than "%1$d" backup repositories (there were "%2$d" backup repositories). Hence, the primary repository disk will be replaced with the new repository disk, but the disk will not be added to the backup repository list. ERROR: The TieBreaker disk option is supported only for the split policy 'TieBreaker' and the merge policy 'TieBreaker'. ERROR: The number of configured PowerHA SystemMirror networks in the cluster has already reached its maximum limit (%1$d). ERROR: Cannot add more than %1$d application monitors in a resource group. Application controller %2$s is part of resource group %3$s, and resource group %3$s already contains %1$d application monitors. ERROR: You have asked to add disk "%1$s" as a repository disk, but it appears to have leftover VGDA information on it. Select a different disk, or clear the VGDA information on the disk using "chpv -C %1$s" and try adding it again. ERROR: one or more invalid characters were detected in "%1$s". Valid characters include letters, numbers, hyphens, and underscores only. A name cannot begin with a number or a hyphen. ERROR: An error occurred while configuring the critical daemon restart grace period for node "%1$s". Successfully configured the node-level RSCT critical daemon restart grace period for node %1$s. Provide the "%1$s" value in HH:MM format. ERROR: a file system cannot be created in volume group "%1$s" because "%2$s" is an OAAN (online on all available nodes) resource group. ERROR: cannot add volume group "%1$s" to OAAN (online on all available nodes) resource group "%2$s" because "%1$s" contains a file system. INFO: Because %1$s was not specified, the clmgr cognitive mechanism has predicted %1$s=%2$d. ERROR: Application monitoring is already active. The current state is "%1$s" for application "%2$s" in resource group "%3$s". ERROR: Modifying or removing the last network interface '%1$s' is not allowed because the network '%2$s' holds a service IP. You can remove the service IP from the network, then try the operation again. The current AIX level '%1$s' does not support the Live Kernel Update operation.
The Live Kernel Update feature is available starting with AIX 7.2. ERROR: The Application Controller name %1$s should not be more than %2$d characters. ERROR: The Resource Group name %1$s should not be more than %2$d characters. ERROR: The Fallback Timer name %1$s should not be more than %2$d characters. ERROR: The Node name %1$s should not be more than %2$ld characters. Warning: unable to determine if node "%1$s" has come online (the startup process has not completed after %3$d seconds). Wait a few minutes, then manually check the node's state using "%2$s -a STATE query node %1$s". WARNING: Resource group %1$s is already online on node %2$s, so no resource group movement is performed. ERROR: Failed to modify logical volume "%1$s". Cluster services are still active on at least one node in the cluster: %2$s. ERROR: Filesystem "%1$s" is either not configured or not associated with any volume group. ERROR: Logical volume "%1$s" is either not configured or not associated with any volume group. Logical volume "%1$s" modified successfully. Filesystem "%1$s" modified successfully. ERROR: Enabling logical volume encryption using the %1$s authentication method requires the %2$s attribute. ERROR: Enabling logical volume encryption using the %1$s authentication method requires the %2$s and %3$s attributes. WARNING: Failed to add authentication for "%1$s". You can run "%2$s %1$s [..]" or use the "Change a Logical Volume" menu in %3$s to provide the authentication. If the problem persists, please contact IBM Support. ERROR: Enabling logical volume encryption requires the %1$s attribute. ERROR: To use the authentication attributes, %1$s must be set to yes. ERROR: The installed AIX level %1$s does not support LVM encryption; the minimum level required to use AIX LVM encryption is AIX %2$s. ERROR: %1$s authentication is not supported on node %2$s; enable %1$s authentication and rerun the command. Check the log file %3$s for more details. Cluster TYPE changed to %1$s. Proceed with the next steps to complete the cluster conversion. WARNING: Failed to disable LVM encryption for %1$s. ERROR: The site grace period can only be changed in a linked cluster. ERROR: Deletion of application monitor "%1$s" failed. Please delete the application monitor using the clmgr command or the SMIT menu. ERROR: There is a conflict between adding and removing LVM authentication; provide only one of these attributes. WARNING: Failed to remove the authentication method name(s). ERROR: Properties of GLVM volume group "%1$s" other than logical volume size cannot be changed while cluster services are active. An absolute path was not given for the logical volume label, so the standard path %1$s will be used. INFO: Multiple repositories were provided. Since the cluster type is in the process of being converted, only one repository, %1$s, will be used to create the remote site; the others will be ignored. You can add them as backup repositories once the cluster conversion process is completed. Warning: when the "%1$s" input is used with a value of "%2$s", all other option(s) are invalid and will be disregarded. INFO: Because the enable encryption option was provided, the legacy volume group will be converted to an original volume group. INFO: Encryption is disabled for the legacy volume group. INFO: Logical volume level encryption will be disabled.
"%1$s" Configuration ERROR: the specified path/file does not appear to be in a graphical format: %1$s on %2$s LinkedStretchedStandardSitePrimaryHost Name:Communication Path:Resource Groups:InterfacesPersistent IP Addresses:AIX Level:Site Name:ProcessesOwnerInstancesMonitor MethodMonitoring IntervalHung Process SignalApplications MonitoredModeFailure ActionNotification MethodStabilization IntervalRestart MethodRestart CountRestart IntervalCleanup MethodProcess MonitorCustom MonitorExecutes before "%1$s"Executes after "%1$s"General InformationCluster Name:Type:Edition:Version:Heartbeat Type:Cluster IP Address:Cluster ID:Status:Unsynchronized Changes?Maximum Event Processing Time:Maximum Resource Group Processing Time:CONFIG_TOO_LONG Time Limit:SitesSite IP AddressNodes at This SiteRecovery PriorityNameNodes:Resource GroupsSecondary Nodes:Secondary Status:Delayed Fallback Timer:ManagesService IP Labels/Addresses:Applications:File Systems:Mirror Groups:Exported File Systems (NFS v2/3):Exported File Systems (NFS v4):Stable Storage Path:File Systems to NFS Mount:Network for NFS Mount:Startup:Fallover:Fallback:Node Priority:Site Relationship:Volume Groups:Raw Disks:Tapes:PoliciesResource Group DependenciesTemporal DependenciesLocation DependenciesParent Resource GroupChild Resource GroupsLocation PolicyResource Group ProcessingResource Groups Acquired in Parallel:Resource Groups Released in Parallel:Resource Groups Acquired Serially:Resource Groups Released Serially:Global Resource Group SettingsSettling TimeDistribution PolicyNetworksNetmaskResource Distribution PreferenceVisibilityIP LabelsIP LabelNetwork NameNetwork TypeNodeInterfaceAddressInterfacesVolume GroupPhysical VolumesLogical VolumesResource GroupMirror PoolsMirror Pool
StrictnessMajor NumberMount PointPhysical PartitionsPhysical Partition
SizeLogical PartitionsLogical Partition
CopiesSuper Strict?ModeAsync Cache
Logical VolumeAsync Cache
High Water MarkExported DirectorySizeQuota Management?EFS Enabled?Mountguard?Available SpaceDeviceStart ScriptStart Synchronously?Stop ScriptStop Synchronously?Application ControllersMonitorsApplication MonitorsCustom Cluster EventsFileCore Event AssociationsDescriptionFile CollectionsPropagate Files
During Sync?Propagate Files
When Changed?Files/DirectoriesCluster SecurityLevel:Algorithm:Refresh:Mechanism:Certificate:Private Key:Cluster TuningHeartbeat Frequency:Grace Period:Site Heartbeat Cycle:Site Grace Period:Cluster Split / Merge PoliciesSplit Policy:Merge Policy:Action Plan:Tie Breaker:Notify Method:Notify Interval:Maximum Notifications:Default Surviving Site:Apply to PPRC Takover:Cluster VerificationAutomatic Daily Cluster Verification:Verification NodeVerification HourVerification DebuggingTape DevicesMirror Group TypeRecoveryStorage SystemsConsistentVendor IdentifierHyperSwap StatusHyperSwap PriorityHyperSwap Unplanned TimeoutHyperSwap DiskNon-HyperSwap DiskMirror PairsHORCM InstanceHORCM TimeoutPair Event TimeoutStorage AgentsAddressesUser LoginUser PasswordstoredWorld Wide Node NameFirst DiskSecond DiskLDAP ClientsLDAP ServerNFS Quorum Server:Local Quorum Directory:Remote Quorum Directory:PrintDownloadCluster SnapshotPrintPlatformSnapshot captured onService IPsNo (active)YesRepositoriesConfigurationReport created onPoliciesBackupSnapshot MethodsVerification MethodsNotification MethodsSourceContactEventsRetryTimeoutAutomatic File Update (minutes):Cluster: '%1$s' of %2$s type No cluster defined. Cluster tunables Dynamic LPAR Start Resource Groups even if resources are insufficient: '%1$s' Adjust Shared Processor Pool size if required: '%1$s' Force synchronous release of DLPAR resources: '%1$s' On/Off CoD I agree to use On/Off CoD and be billed for extra costs: '%1$s' Number of activating days for On/Off CoD requests: '%1$s' Hardware Management Console '%1$s' Hardware Management Console No Hardware Management Console defined. Version: '%1$s' Always Start Resource Groups: '%1$s' Resource Allocation order: '%1$s' Enterprise Pool On/Off CoD On/Off CoD memory On/Off CoD processor Enterprise pool '%1$s' Enterprise pool No enterprise pool defined. Enterprise pool memory Enterprise pool processor Managed System '%1$s' Hardware resources of managed system Shared processor pool '%1$s' Logical partition '%1$s' Node: %1$s Site: %2$s HMC(s): %3$s Managed system: %4$s LPAR: %5$s Node: %1$s HMC(s): %2$s Managed system: %3$s LPAR: %4$s State: '%1$s' Master HMC: '%1$s' Backup HMC: '%1$s' Activated memory: '%1$d' GB Available memory: '%1$d' GB Unreturned memory: '%1$d' GB Activated CPU(s): '%1$d' Available CPU(s): '%1$d' Unreturned CPU(s): '%1$d' Used by: '%1$s' Activated memory: '%1$d' GB Activated CPU(s): '%1$d' CPU(s) Unreturned memory: '%1$d' GB Unreturned CPU(s): '%1$d' CPU(s) Yes: '%1$s' No Installed: memory '%1$d' GB processing units '%2$d' Configurable: memory '%1$d' GB processing units '%2$d' Inactive: memory '%1$d' GB processing units '%2$d' Available: memory '%1$d' GB processing units '%2$d' Free: memory '%1$d' GB processing units '%2$d' State: '%1$s' Available: '%1$d' GB.days Activated: '%1$d' GB Left: '%1$d' GB.days Unreturned: '%1$d' GB Available: '%1$d' CPU.days Activated: '%1$d' CPU(s) Left: '%1$d' CPU.days Unreturned: '%1$d' CPU(s) Available: '%1$s' Reserved: '%1$s' Maximum: '%1$s' Current profile: '%1$s' Processing mode: Memory (GB): minimum '%1$d' desired '%2$d' current '%3$d' maximum '%4$d' Shared processor pool: '%1$s' Processing units: minimum '%1$d' desired '%2$d' current '%3$d' maximum '%4$d' Virtual processors: minimum '%1$d' desired '%2$d' current '%3$d' maximum '%4$d' Processors: minimum '%1$d' desired '%2$d' current '%3$d' maximum '%4$d' Dedicated Shared This '%1$s' partition hosts '%2$s' node of the %3$s '%4$s' No resource group(s) are ONLINE on this node. 
The following resource groups are ONLINE on this node: %1$s They may use optimal resources: %1$d GB of memory, %2$d PU(s), and %3$d VP(s) They may use optimal resources: %1$d GB of memory and %2$d CPU(s) ROHA provisioning for '%1$s' resource groups ROHA provisioning for resource groups No '%1$s' resource group. No resource group. Resource group: '%1$s' Application controller: '%2$s' Use desired='%1$s' Memory='%1$s' Processors='%2$s' Processing units='%3$s' Virtual Processors='%4$s' Total: Use desired number='%1$s' Memory='%2$s' Processors='%3$s' Processing units='%4$s' Virtual Processors='%5$s' No ROHA provisioning. Warning: cannot collect the version of Hardware Management Console '%1$s'. Warning: cannot ping Hardware Management Console '%1$s'. Warning: cannot ssh to Hardware Management Console '%1$s'. Warning: cannot collect data for enterprise pool '%1$s'. Warning: cannot collect data for managed system '%1$s'. Warning: cannot collect data for LPAR '%1$s'. Warning: cannot get the LPAR name of node '%1$s'. Warning: cannot collect data for node '%1$s'. Warning: generating a partial report only. Warning: cannot collect cluster data. Warning: cannot collect cluster tunables. Warning: cannot collect the application controllers of resource group '%1$s'. Warning: REST API communication to Hardware Management Console '%1$s' failed. NovaLink '%1$s' Warning: cannot ping NovaLink '%1$s'. Warning: cannot ssh to NovaLink '%1$s'. NovaLink Node: %1$s Site: %2$s HMC(s): %3$s Managed system: %4$s LPAR: %5$s NovaLink(s): %6$s Node: %1$s HMC(s): %2$s Managed system: %3$s LPAR: %4$s NovaLink(s): %5$s Resource Group "%1$s" is already offline on node "%2$s". Deconfigured: memory '%1$d' GB processing units '%2$d' Resource Group "%1$s" is not ONLINE on node "%2$s". Resource Group "%1$s" is not ONLINE SECONDARY on node "%2$s".