Usage: %1$s [-s] [-h] [-n nodename] [-R output_file] [classname classname ..] Contacting node %1$s ... Failed to distribute %1$s ODMs to node %2$s. Failed in writing nodename field of HACMPcluster ODM to the local node. The PowerHA SystemMirror ODMs have been successfully distributed to all available cluster nodes. Succeeded in synchronizing %1$s ODM to the remote node %2$s. Error: %1$s ODM on the local node is different from that on the remote node %2$s. %1$s ODM on node %2$s verified. Warning: When Synchronize is selected, Corrective Actions cannot be performed. ERROR: Local Cluster ID(%ld) different from Remote Cluster ID(%ld). ERROR: Local Cluster name(%1$s) different from Remote Cluster name(%2$s). ERROR: Nodes have different numbers of networks. FATAL ERROR: Ran out of memory! ERROR: Network %1$s is called %2$s on remote node. ERROR: Remote node has node %1$s listed as NOT VALID. ERROR: Remote node has node %1$s listed as VALID. ERROR: Node %1$s's service interfaces do not match. ERROR: Node %1$s's standby interfaces do not match. ERROR: Node %1$s's boot interfaces do not match. Warning: Interfaces on network %1$s have different names. ERROR: Interfaces on network %1$s have different names. ERROR: Interfaces on network %1$s have different hardware addresses. ERROR: Interfaces on network %1$s have different IP addresses. ERROR: Interfaces on network %1$s have different attributes. Local Node: Remote Node: ERROR: Interface on network %1$s is not configured on local node. ERROR: Interface on network %1$s is not configured on remote node. ERROR: Could not read local configuration. Fatal Error: Cannot read HACMPnode ODM. Fatal Error: Cannot assign Node Ids. Fatal Error: Cannot read HACMPnetwork ODM. Fatal Error: Cannot assign Network Ids. Fatal Error: Cannot read HACMPadapter ODM. Fatal Error: Empty fields in HACMPadapter ODM class. Fatal Error: No address %s found on node %s. Fatal Error: Unable to update HACMPadapter Class. Fatal Error: Cannot read HACMPtopsvcs ODM. ERROR: Could not find local node in configuration. 
WARNING: Cannot find connection to node %1$s. ERROR: Node %1$s has lost contact. ERROR: Configchk on node %1$s is not valid. Checking node %1$s. ERROR: SYNC failed. ERROR: Node %1$s does not have a configuration. ERROR: Cannot open a line to node %1$s. could not find valid IP-label for node %1$s. ERROR: could not find an unused address for heartbeat via alias on network %1$s. Either reduce the number of adapters on this network or expand the netmask so more addresses can be allocated per subnet. ERROR: One or more addresses for Heartbeating over IP Aliases conflicts with the base addresses for the same interfaces. ERROR: Using %1$s as a base address for Heartbeating over IP Aliases would result in the address %2$s being used for heartbeat. This address or subnet conflicts with the base address or subnet of address %3$s. ERROR: Using %1$s as a base address for Heartbeating over IP Aliases along with a netmask of %2$s would result in the address %3$s being used for heartbeat. This address or subnet conflicts with the base address or subnet of address %4$s. Specify a different base address for Heartbeating over IP Aliases. Specify a different base address or netmask for Heartbeating over IP Aliases. %1$s: %2$s argument must be followed by a debug level (0 - 9) %1$s: unrecognized argument. SRC request failed: error = %ld ERROR: No shared vgs exist. None of the volume groups entered are shared. ERROR: godm initialize failed for host %1$s. ERROR: Volume group %1$s does not exist No physical volumes No logical volumes %1$s: %2$s godm_get_list: %1$s Host %1$s has no vgs. ERROR: %1$s is not a shared volume group. Volume Group IDs are different: Vg_id in %1$s: %2$s Vg_id in %1$s: %2$s The number of physical volumes is different. Number of physical volumes in %1$s: %2$ld Number of physical volumes in %1$s: %2$ld The number of logical volumes is different. Number of logical volumes in %1$s: %2$ld Number of logical volumes in %1$s: %2$ld ERROR %1$ld: the physical volume %2$s cannot be opened. ERROR %1$ld: lvm_queryvg failed for physical volume %2$s. Volume group id doesn't match Vg_id in %1$s's ODM: %2$s Vg_id in the VGDA: %1$s Number of physical volumes doesn't match Number of Physical Volumes in %1$s's ODM: %2$ld Number of Physical Volumes in the VGDA: %ld Number of logical volumes doesn't match Number of Logical Volumes in %1$s's ODM: %2$ld Number of Logical Volumes in the VGDA: %ld Pv '%1$s' with ID %2$s is not specified in the VGDA. Pv with ID %1$s in the VGDA is not specified in host %1$s's ODM Logical volume '%1$s' has different IDs. Lv id in %1$s's ODM: %2$s Lv id in the VGDA: %1$s Lv '%1$s' with the ID %2$s is not specified in the VGDA. Logical volume '%1$s' has different IDs. Lv id in the VGDA: %1$s Lv id in %1$s's ODM: %2$s Lv '%1$s' with the ID %2$s is not specified in host %1$s's ODM. Logical volume '%1$s' has different IDs. Lv_id in %1$s: %2$s Lv_id in %1$s: %2$s Lv '%1$s' with the ID %2$s doesn't exist in host %1$s. 
Pv '%1$s' with the ID %2$s does not exist in host %1$s ERROR Usage: vgs -h hostnames [-v vgnames] Where: hostnames is a list of 2 to 4 hostnames separated by commas vgnames is a list of volume group names separated by commas Note: Spaces are not allowed between hostname entries or volume group entries >>> VOLUME GROUPS TO BE INSPECTED: - %1$s - %1$s ************************************************** >>> READING ODM IN HOST: %1$s COMPARING THE HOSTS' ODMs -> COMPARING VG '%1$s' IN HOSTS '%2$s' AND '%3$s': COMPARING ODM WITH VGDA >>> COMPARING ODM IN HOST '%1$s' WITH VGDA: -> CHECKING VOLUME GROUP: '%1$s' ERROR: physical volume not found for this volume group. >> THERE ARE NO INCONSISTENCIES BETWEEN THE HOSTS' ODMs AND THE VGDA. >> THERE ARE NO INCONSISTENCIES AMONG THE HOSTS' ODMs. %1$s Usage: %1$s [-d days] Logical Volumes, Volume Groups, Log Logical Volumes and/or PVIDs associated with filesystem %1$s are not equivalent on all nodes. Filesystem: %1$s not configured on Node: %2$s Log Logical Volumes and/or PVIDs associated with volume group %1$s are not equivalent on all nodes. Volume Group: %1$s not configured on Node: %2$s DISK with PVID: %1$s not configured on Node: %2$s Discovering IP Network Connectivity Discovered [%d] interfaces Cannot open storage file %s IP Network Discovery completed normally IP Network Discovery completed with %d errors. AliasesWARNING: ERROR: Unable to load HA ODMs into memory Internal Error: Unable to generate xml request. Unable to send security/local nodename to node %s on socket %d while sending the security / node information FATAL ERROR: Ran out of memory! Archive file compatibility not done. Can't godmget %s, error = %d Can't odmget %s, error = %d %1$s: internal error Bad data format from node: %2$s %1$s: internal error Bad message format from external verification script %2$s Internal Error No data collected from node: %s Comm error found on node: %s. Error processing Cluster Manager request Cannot send the packet to the node: %s Cannot receive the packet from the node: %s No more alive nodes left Unable to communicate with the remote node: %s. Please check that node: %s has the /usr/es/sbin/cluster/etc/rhosts file configured and the clcomdES subsystem running. Remote command failed, node-%s, command-%s Cannot create new thread PowerHA SystemMirror does not know how to contact node: %s. An initial IP interface must be defined for node: %s before PowerHA SystemMirror can contact the node. The discovery results will be incomplete. Entry '%s %s' is not present in %s Entry '%s %s' is present in %s but IP address is not correct %s: Fatal Error: could not open output file %s. %1$s: Error count must be 0 (for all) or greater (for a specific number). Method name %1$s can not be longer than %2$d bytes. %1$s [-e error_count] [-R output_file] [-o odmdir] [-x AIX odmdir] [-r] [-b] [-c] [-s] [-m] [-V normal | modified ] [-v] [-C yes | interactive] [-A] [-w auto | manual] Verification to be performed on the following: Cluster Topology Cluster Resources Reading Cluster Topology Configuration... %1$s: Fatal Error: Unable to reset the HA ODM directory. %1$s: Fatal Error, could not read cluster configuration from the ODM. Service label: %s configured on network: %s uses IP address takeover, this label is incorrectly using a different subnet as interface '%s'. Please place the service label '%s' on the same subnet as the configured interface: %s on network: %s. 
Verification of custom method %s failed Verification of custom method %s completed normally Custom verification method %s does not exist Retrieving data from available cluster nodes. This could take a few minutes. Verification has completed normally. Verification exiting with error count: %d Filesystem %s is out of disk space, verification will stop. The files created during verification will be removed. Please free additional space before running verification again. ERROR: Cluster verification is already running on the following nodes: %s Please wait for verification to complete before starting another run. There is not enough free space in %1$s filesystem on node: %2$s. 4 MB of free space is required in /tmp and /var to run verification. Please free at least 4 MB of space in the filesystems mentioned above before running verification. Invalid PowerHA SystemMirror site configuration detected. The site configuration must have two sites defined. Invalid PowerHA SystemMirror site configuration detected. When PowerHA SystemMirror sites are configured, all PowerHA SystemMirror nodes must participate in a site configuration. Run PowerHA SystemMirror Sites verification checks. Resource group: %s has both a dynamic node priority policy defined as well as a site policy defined. Both policies cannot be defined for the same resource group. The NFS mount / Filesystem specified for resource group %1$s is using incorrect syntax for specifying an NFS cross mount: %2$s. Please use the following syntax for specifying an NFS cross mount, specify NFS_Mount_Point ; Local_Filesystem. Example: NFS Mount Point: /nfsmount/share1 (location where NFS is to be mounted) Filesystem: /mnt/fs1 (filesystem to export) Specify: /nfsmount/share1;/mnt/fs1 Verifying concurrent attribute of volume group %s Volume group %s participating in resource group %s is set as %s on node %s and %s on node %s. This flag should be consistent across all the nodes from this resource group. Performing volume group consistency check Verifying Resource Group: %s Verifying Resources on each Node... Verifying Filesystem: %s Verifying Export Filesystem: %s Verifying NFS Mount Filesystem entry: %s Verifying NFS Mount Point: %s Verifying JFS entry: %s Verifying Network For NFS Mount: %s Verifying Volume Group: %s Verifying Concurrent Volume Group: %s Verifying PVID: %s Verifying NODE_PRIORITY_POLICY: %s Verifying Resources across Resource Groups Verifying Events on Node: %s Verifying export files for resource group: %s Verifying NFS-Mounted Filesystem: %1$s. Verifying Server: %1$s on Node %2$s Verifying Highly Available Communication Link: %1$s on Node %2$s. Verifying Highly Available Shared Tape ... Verifying Highly Available Shared Tape '%s' on Node '%s' ... Verifying custom events Verifying custom events on node: %1$s. Verifying custom verification methods Verifying custom verification methods on node: %1$s Verifying Custom Snapshot Methods Verifying custom snapshot methods on node: %1$s Verifying Cluster Log Directories Verifying cluster log directories on node: %1$s Verifying Custom Disk Methods Verifying Custom Disk Methods on node: %1$s Verifying site configuration with resource groups processed in parallel Verifying Subnet configuration... Verifying ATM Configuration... Verifying Cluster Topology... Verifying Cluster Resources... Verifying XD Solutions... Verifying XD Solution %s... Verifying Cluster Security Verifying WLM settings for resource group: %s Verifying Highly Available Communication Adapters... 
Verifying that no resource group has both aliased and replacement labels. Verifying the compatibility between network and adapter type as well as between adapter type and associated hardware. '%s %s %s' is not present in the configuration file %s on node: %s Unable to create a file name Kerberos not properly configured on node: %s. ksrvutil not found. Could not open file %s Nodes not in same realm %s is in realm %s and %s is in realm %s Cannot find %s.%s@%s in %s on node: %s Invalid Cluster Name: %1$s. Cluster names cannot be blank, the first character must be a letter and the remaining characters must belong to the set alpha numeric or '-' and '_'. Invalid node name: %1$s. Node names cannot be blank, the first character must be a letter and the remaining characters must belong to the set alpha numeric or '-' and '_'. Invalid resource group name: %1$s. Resource group names cannot be blank, the first character must be a letter and the remaining characters must belong to the set alpha numeric or '-' and '_'. Invalid Network Name: %1$s. Network names cannot be blank, the first character must be a letter and the remaining characters must belong to the set alpha numeric or '-' and '_'. Invalid Communication Interface Name or Service IP Label: %1$s. Communication interface names and service IP labels cannot be blank, the first character must be a letter and the remaining characters must belong to the set alpha numeric or '-' and '_'. /.rhosts file does not exist on node: %1$s. No nodes were found for conc. RG %s The network module %1$s does not have a name defined. One Network Interface Module name (%1$s) is not valid. NIM names must be constructed from the following character set alphanumeric characters '.', '-' and '_'. Network Interface Module named %1$s does not have a description. Address type for %1$s Network Interface Module is not valid, current value is %2$s (must be greater than or equal to zero). Grace period for %1$s Network Interface Module is not valid, current value is %2$s (must be greater than or equal to zero). Heart beat rate for %1$s Network Interface Module is not valid, current value is %2$s (must be greater than or equal to zero). Failure cycle for %1$s Network Interface Module is not valid, current value is %2$s (must be greater than or equal to zero). Invalid baud rate (%1$s) for the rs232 nim, valid values are 9600, 19200, 38400. On network %1$s, the service IP label associated with (%2$s) is shared, while the service IP label (%3$s) associated with network %4$s is non-shared. Only one type of service IP label may be configured per network. Network %1$s contains a service IP label. Networks with service IP labels may not be included in global networks. Node %1$s has communication interfaces on network (%2$s) and (%3$s). Both of these networks are part of global network (%4$s). No node can have adapters on more than one network that is part of the same global network. Address: %1$s for communication interface: %2$s is not configured properly. Detected communication interfaces on multiple subnets for network %s. Please ensure the routes for the subnets on this network are configured. IP label: %1$s cannot be resolved on node: %2$s. This could be caused by an improperly configured /etc/resolv.conf, or a missing entry in /etc/hosts. Could not contact node: %1$s. IP label: %1$s resolves to address: %2$s on node: %3$s, but resolves to address on %4$s on the local node. IP name resolution must be consistent across the cluster. 
Persistent node IP label %1$s cannot be defined on ATM network. Persistent node IP label %1$s cannot be defined on HPS network. There is more than one persistent node IP label defined on node: %1$s for network: %2$s. Persistent node IP label %1$s should not have an alternate hardware address defined. The hardware address will be ignored. Persistent node IP label %1$s cannot have the same subnet as the interface or label %2$s on network %3$s. Invalid loopback or localhost address on node: %1$s. Broadcast address for interface: %1$s on node: %2$s not found. Communication interfaces on network: %1$s do not all have the same netmask. An alternate hardware address is defined for interface %1$s on a network that is configured to use IP aliasing. Interface: %1$s is defined on a network that is configured to use IP aliasing. Serial network %1$s is configured to use IP aliasing. Communication interfaces on node: %1$s, network: %2$s are not on different subnets. Service IP label %1$s on network %2$s is on the same subnet as at least one of the communication interfaces on this network. Service IP labels must be on a different subnet when the network is configured to use IP aliasing. Problems encountered while verifying cluster topology for IPAT using IP aliasing. Volume group %s is defined as enhanced concurrent and the LVM level allows use of fast disk takeover on node %s, but not on node %s. This inconsistent definition cannot be supported. PowerHA SystemMirror TTY device: %1$s does not exist on node: %2$s. PowerHA SystemMirror TMSSA device: %1$s does not exist on node: %2$s. PowerHA SystemMirror TMSCSI device: %1$s does not exist on node: %2$s. More than 2 serial networks of type %1$s on node: %2$s. Netmask on %1$s does not match the netmask on %2$s on node: %3$s. All interfaces on the same network must use the same netmask. Netmask for %1$s on node: %2$s does not match the netmask for %3$s on node: %4$s. All interfaces on the same network must use the same netmask. Communication interface %1$s on node: %2$s, network: %3$s is not of type %4$s. Communication interface %1$s on node: %2$s, network: %3$s is not of type %4$s. Network: %1$s requires a service IP label. For network: %1$s, there are service IP labels bound to some, but not all, nodes that participate on that network. Either a separate service IP label must be bound to each node on that network, or one service IP label bound to multiple nodes. Communication interface on node: %1$s, network: %2$s is not on the proper subnet. The communication interface is configured on a different subnet from other adapters on the same node and network. Communication interface %1$s on node: %2$s is not on its default address as specified in the CuDv. Service IP label %1$s on node: %2$s, network: %3$s is not of type %4$s. Service IP label %1$s on node: %2$s, network: %3$s, is not on the proper subnet. Service IP label: %s is improperly configured on node: %s. Node: %1$s does not have a service IP label configured on network: %2$s. Communication interface: %1$s is configured as a standby interface on network: %2$s. Standby interfaces cannot be used on this network because it is configured to use IP aliasing. Please resync the topology to correct this situation. Communication interface on node: %1$s, network: %2$s is not on the proper subnet. Communication interface(s): %1$s are improperly configured, or not available on node: %2$s. 
Please check to ensure the interface is defined and available by running "smitty chinet", and if necessary re-define the interface. Network: %1$s only has one node defined. A network must include at least 2 nodes. Failure reading ODMs for HA communications adapter verification. No HA communication adapters specified for link %1$s. HA communication adapter %1$s specified for link %2$s is not defined. HA communication adapter %1$s is not a multilink adapter, but it is present in more than one HA communication link: %2$s and %3$s. HA communication link %1$s specified in resource group %2$s contains no adapters associated with node %3$s. Failure collecting WAN device information on node %1$s. Driver %1$s defined as HA communication adapter %2$s does not exist on node %3$s. Node %1$s specified for HA communication adapter %2$s does not exist. Service principal %1$s is not present in the file /.klogin on the local node. Can't open the file containing the reserved words: %1$s. Reserved word is used for calling network: %1$s. Reserved word is used for calling communication interface or service IP label: %1$s. Reserved word is used for calling node: %1$s. Could not find FS %1$s in CuAt ODM on the local node. Can't odmget CuDv on local node. Could not find LV %1$s for FS %2$s in CuDv ODM on local node. Volume Group %1$s is not found on node %2$s. The major number for VG %1$s is not the same on all nodes. Unable to retrieve list of node names from resource group: %1$s. Node: %1$s, participating in resource group: %2$s, is not configured. Node: %s is missing entry '%s %s' in the /etc/hosts configuration file. Node: %s is missing entry '%s %s' in file %s. clverify detected that Automatic Error Notification stanzas need to be reset. This means that clverify is running in synchronization mode, or the previously made request has not been completed. PowerHA SystemMirror will automatically add Error Notification Stanzas for all devices considered to be Single Points of Failure after cluster verification is complete. IP Label %1$s associated with IP address '%2$s' in the PowerHA SystemMirror configuration has conflicting IP Addresses specified in /etc/hosts file: Node %3$s has IP Address '%4$s' /etc/hosts on node %s contains IP address '%s', but it does not map to IP label '%s'. The type "%1$s" of communication interface "%2$s" does not match the type "%3$s" of the associated network "%4$s".Make sure that the network type matches the type of the associated communication interfaces. The type "%1$s" of communication interface "%2$s" associated with the network "%3$s" does not match the type %4$s of the NIC "%5$s" on which it is defined. Hardware changes may have been performed after PowerHA SystemMirror was configured. Please use the PowerHA SystemMirror SMIT screen with the fastpath "cm_extended_config_menu_dmn" to re-discover the current hardware configuration, and ensure it is consistent with the PowerHA SystemMirror configuration. Resource group %1$s contains service IP-label %2$s, which is part of network %3$s. This network is configured to have a Service IP Labels/Address Distribution Preference policy with Persistent Label. This requires each node that is part of the resource group to have a persistent IP label configured on this network. Cluster verification detected that not all nodes of this resource group satisfy this requirement. 
Please either change the distribution policy for this network or ensure that each of the following node(s) has persistent IP label configured on network %3$s: The LVM time stamp for shared volume group: %s is inconsistent with the time stamp in the VGDA for the following nodes: %s Function: %s invoked snprintf in file %s, line %s failed: ret = %s Unable to allocate memory for ca_packed_data file = %s, line = %d Setting PowerHA SystemMirror timestamp for volume group: %s to %s on node: %s: PASS Setting PowerHA SystemMirror timestamp for volume group: %s to %s on node: %s: FAIL Unable to write to file: %s on node: %s File %s does not exist on node: %s Obtain a list of shared volume group time stamps.Check the HA volume group timestamps for accuracy.Verify Service IP labels configured with distribution dependencies.SSA Volume Group node number: %d for node: %s does not match the node identifier %d defined to PowerHA SystemMirror. Attempted to determine the node number for node: %s, unable to execute: /usr/sbin/lsattr -Elssar Attempted to change the SSA node number: %s on node: %s to %s failed. SSA node number %s changed successfully on node: %s. Node: %s has SSA node number %d defined. Changing %s SSA node number to an available node number [%s]. Invalid SSA node number %d on node: %s, node numbers must be non-zero. Please check the node number by running /usr/sbin/lsattr -Elssar. Backing up %s on node %s to file %s: PASS Backing up %s on node %s to file %s: FAIL Backup of %s on node %s, %s already exists: PASS Starting Corrective Action: %s. Adding entry '%s %s' to %s on node %s: PASS Adding entry '%s %s' to %s on node %s: FAILED Adding entry(s) '%s' to %s on node %s: PASS Adding entry(s) '%s' to %s on node %s: FAILED A corrective action has taken place, restarting data collection and verification checks. Verification will automatically correct verification errors. Verification will interactively correct verification errors. Update /etc/services with missing entries.Update /etc/hosts with missing entries.Update /usr/es/sbin/cluster/etc/clhosts with missing entries.Update /etc/filesystems with automount set to false.Disable the DB2 Fault Monitor Coordinator.No entries in /etc/hosts on node: %s. No entries in %s on node: %s. There are IP labels known to PowerHA SystemMirror and not listed in file %s on node: %s. Verification can automatically populate this file to be used on a client node, if executed in auto-corrective mode. Update /etc/snmpd(v3).conf and /etc/snmpd.peers with missing entries.Update SSA node numbers to be unique cluster wide.Update auto-varyon on this volume group.Setting inoperative cluster nodes interfaces to the boot time interfaces.Disabling auto-varyon for volume group: %s on node: %s: PASS Unable to disable auto-varyon for volume group: %s on node: %s: FAIL Update volume group definitions for this volume group.Updating volume group definitions of shared VG: %s participating in resource group %s on node: %s so that it will be consistent across all the nodes from this resource group: PASS Updating volume group definitions of shared VG: %s participating in resource group %s on node: %s so that it will be consistent across all the nodes from this resource group: FAIL Keep PowerHA SystemMirror volume group timestamps in sync with the VGDA.Auto import volume groups.Re-import volume groups with missing filesystems and mount points.Corrective actions are not supported for Geographically Mirrored volume groups. Please correct the error manually for GMVG %s. 
Cluster definition not found for the local node. Would you like to import shared VG: %s, in resource group: %s onto node: %s Would you like to re-import shared VG: %s in resource group %s to obtain the file system: %s on node: %s Would you like to update VG: %s timestamp on node(s): %s for resource group: %s Do you want to set autovaryon to "no" for the volume group: %1$s on node: %2$s Do you want to change volume group definitions for the volume group: %1$s participating in resource group %2$s on node: %3$s Would you like to remove the service IP alias: %1$s from interface: %2$s on node: %3$s Would you like to turn off auto-mount for the filesystem: %1$s on node: %2$s [Yes / No]: Invalid response '%s', please try again. Volume group: %s will not be imported onto node: %s Exporting shared VG: %s on node: %s: PASS Exporting shared VG: %s on node: %s: FAIL A corrective action is available for the condition reported below: To correct the above condition, run verification & synchronization with "Automatically correct errors found during verification?" set to either 'Yes' or 'Interactive'. The cluster must be down for the corrective action to run. Importing Volume group: %s onto node: %s: PASS Importing Volume group: %s onto node: %s: FAIL Volume group: %s is now available on node: %s %d problem(s) were resolved. INTERNAL ERROR: Invalid data was received, argument %s is empty. Removing RSCT entries from /etc/services on node: %s, calling "/usr/sbin/rsct/bin/topsvcsctrl -d": PASS Removing RSCT entries from /etc/services on node: %s, calling "/usr/sbin/rsct/bin/topsvcsctrl -d": FAIL Adding RSCT entries to /etc/services on node: %s, calling "/usr/sbin/rsct/bin/topsvcsctrl -a": PASS Adding RSCT entries to /etc/services on node: %s, calling "/usr/sbin/rsct/bin/topsvcsctrl -a": FAIL Adding entry "%s %s/%s" to /etc/services on node: %s: PASS Adding entry "%s %s/%s" to /etc/services on node: %s: FAIL Changing entry '%s' in /etc/services to '%s' on node: %s: PASS Changing entry '%s' in /etc/services to '%s' on node: %s: FAIL The port for entry '%s %d/%s' is already in use by service %s on node: %s Configuration file '%s' either does not exist, or is locked for writing. Adding entry "%s" to %s on node: %s: PASS Adding entry "%s" to %s on node: %s: FAIL Running command: %s on node: %s failed. Refreshing SNMP sub-system on node: %s. Corrective action '%s' failed to correct this condition, the corrective action will not be re-executed. Determine if one or more nodes in the cluster are active. Verifying inactive cluster components. Node: %s is not running cluster services. Node: %s is running cluster services. Unable to perform the corrective action for this condition, one or more nodes are running cluster services. Run cldare -a to determine if cluster services are running. List the filesystems and associated volume groups. Gather netstat -in output. Gather lssrc -ls clstrmgrES. Failed to obtain ODM %s from node: %s Verification will abort. Adapter %1$s on network %2$s does not have a valid subnet defined for PowerHA SystemMirror. Verification of Cluster Topology for RSCT failed. See "/var/ha/log/topsvcs.default" for detailed information. The PowerHA SystemMirror adapter %s is not available on node %s Hardware address %1$s is not unique between adapter %2$s on node %3$s and adapter %4$s on node %5$s The service IP label: %1$s associated with %2$s can not be found in the cluster topology. Node: %1$s has no available communication interface for takeover of service IP label: %2$s, and will never acquire its resource group. 
The service IP label associated with %1$s is defined without a communication interface on the same subnet, therefore it cannot be configured as part of a resource group. The service IP label: %1$s on node: %2$s is configured to be part of multiple resource groups. A service IP label may be part of only one resource group. The shared service IP label %1$s on node: %2$s is not associated with a distributed resource group. Shared service IP labels are allowed for use only in a distributed resource group configuration. The non-shared service IP label: %1$s on node: %2$s is not associated with a resource group. Non-shared service IP labels are allowed for use only in a resource group configuration. The following restriction pertains to Resource Groups that contain a Service IP Label and have a Startup Policy of "Online On Home Node". Multiple Resource Groups cannot have the same highest priority node and multiple Service IP Labels configured on a Network that does not have "Enable IP Address Takeover via IP Aliasing" set to "yes". The above restriction is violated by the following Resource Groups: Home Node Network Resource Groups (Service IP Labels) ------------------|------------------|----------------------------------- Either enable IP Address Takeover via IP aliasing on the above networks, or ensure that all Resource Groups with "Online On Home Node" have a unique highest priority node. The service IP label: %1$s, belonging to node: %2$s is associated with resource group: %3$s. Of the participating nodes in the resource group, the highest priority node must therefore be set to %4$s. (It is currently set to %5$s.) The Service IP label %1$s does not have an associated IP address in /etc/hosts or name services on the local node. The service IP Label in /etc/hosts or name services associated with the IP Address (%1$s) differs from the IP Label as configured in the HACMPadapter class. PowerHA SystemMirror site definitions do not match HAGEO site definitions. Please import the PowerHA SystemMirror site definitions to HAGEO. Site %1$s associated with ESS %2$s does not exist in HACMPsite. No service IP label is defined for the 'network' based distribution for group %1$s The distributed Group %s has 'Online On Either Site' site relationship. A disk with PVID %1$s is a part of the volume group %2$s participating in resource group %3$s on node %4$s. Node %5$s is also part of this resource group, but it does not have this PVID defined as a part of this volume group. The list of PVIDs per volume group should be consistent for all nodes that can require access to this volume group. The disk with PVID %1$s is a part of the volume group %2$s participating in resource group %3$s on node %4$s and site %5$s. Node %6$s on the same site is also part of this resource group, but it does not have this PVID defined as a part of this volume group. The list of PVIDs for a volume group must be consistent in the same site; nodes that belong to different sites must have different PVIDs for the same volume group. The disk with PVID %1$s is a part of the volume group %2$s which participates in resource group %3$s on node: %4$s and site: %5$s. This PVID is duplicated on node: %6$s, site: %7$s; the PVID should not be the same as on node %4$s. The list of PVIDs for a volume group must be consistent in the same site; nodes that belong to different sites must have different PVIDs for the same volume group. Shared VG %1$s not found on node %2$s. 
Volume group names conflict for VG %1$s on nodes %2$s and %3$s. Volume group major numbers conflict for VG: %1$s on nodes %2$s (%3$d) and %4$s (%5$d). Logical volume names conflict on nodes %1$s and %2$s for VG %3$s. Logical volume %1$s not found for VG %2$s on node %3$s. Logical volume %s on %s has %d physical volumes. Check if old unused volume groups have been exported. Filesystem '%s' from Resource Group '%s' shares PVID '%s' with Resource Group '%s'. Volume Group '%s' from Resource Group '%s' (%s) shares PVID '%s' with Resource Group '%s' (%s). PVID '%s' from Resource Group '%s' is also used in Resource Group '%s'. Filesystem %1$s is configured to auto-mount on node: %2$s. No volume group on node: %1$s is configured to auto-varyon. The root volume group on each node should auto-varyon at boot time. Multiple volume groups on node: %1$s are configured to auto-varyon. Volume Group %1$s used in resource group %2$s has automatic varyon configured to "yes" on node %3$s. This parameter needs to be changed to "no" in order for PowerHA SystemMirror to function. Multiple auto_on attribute entries for Volume Group: %1$s on node: %2$s. NFS Mount point: %s not found on node: '%s' NFS mountpoint: %s is not a directory. Resource group: %s contains GMD resources. Skipping filesystem consistency check. Logical Volumes, Volume Groups, Log Logical Volumes and/or PVIDs associated with filesystem %1$s are not equivalent on all Nodes. Log Logical Volumes and/or PVIDs associated with volume group '%s' are not equivalent on all Nodes. Filesystem %1$s on node %2$s does not exist. Filesystem %1$s on node %2$s does not exist for resource group: %3$s. Resource group %3$s is set to automatically import. Directory: %1$s was specified to be exported on node: %2$s. Since this is not a filesystem, make sure the filesystem in which this directory resides has been specified as a filesystem for the resource group. Resource group %1$s contains Filesystems/Directories to Export, but does not have a Service IP Label Specified. It is required that a Service IP Label be specified, or mounts could be lost if the resource group moves to another node. Duplicate Mountpoint: [%1$s] conflicts with [%2$s] in resource group [%3$s] NFS Mount point '%1$s' must use absolute path name. The Network For NFS Mount (%1$s) specified in resource group %2$s is not an IP network. Resource group "%1$s" contains a network for NFS mount point(%2$s), but does not have a Service IP Label specified on network: %2$s. Please add a service IP label to network: %2$s, or NFS mounts could be lost if the resource group moves to another PowerHA SystemMirror cluster node. Log Logical Volumes and/or PVIDs associated with volume group %1$s are not equivalent on all Nodes. Volume group: %1$s on node: %2$s does not exist. Volume group: %1$s on node: %2$s does not exist, but is set to auto import in resource group: %3$s. Disk %1$s on node %2$s does not exist. Connections Service %1$s is not defined properly on node %2$s. Connections service %1$s/%2$s is not defined on node %3$s. Application server %1$s not defined in the HACMPserver ODM. The application server must be defined before it can be used in a resource group. File %1$s used to start application %2$s does not exist or is not executable on node %3$s File %1$s used to stop application %2$s does not exist or is not executable on node %3$s. The application monitor script file: %1$s does not exist or is not executable on node: %2$s. Validating Application Monitors... 
%1$s Application Monitor belongs to the following resource groups %2$s. An application monitor: %s belongs to more than one resource group. Application monitors may only belong to one resource group. A Resource Group can not contain more than one Application Monitor. Resource Group: %s contains %d application monitors. A resource group can only contain one application monitor. Please remove all but one application monitor from resource group: %s. Highly Available Communication Link: %1$s not properly defined in the HACMPcommlink ODM. Communication Link %1$s is included in multiple resource groups %2$s and %3$s. The DLC %1$s is not defined to CS on node %2$s One of the ports (%1$s) is not defined to CS/AIX on node %2$s. One of the links (%1$s) is not defined to CS/AIX on node %2$s. The Application Service File %1$s cannot be found on node %2$s. The Application Service File %1$s is not executable on node %2$s. The Application Service File %1$s is not readable on node %2$s. Tape resources found in concurrent resource group '%1$s'. Tape resources are not supported for resource groups with 'Online On All Nodes' startup policy. NO nodes (none) found in resource group '%1$s'. Tape resource %1$s used in resource group %2$s has no definition in the ODM. Tape resource '%1$s', used in resource group '%2$s', has too many definitions in the ODM. Tape resource name [%s] must start with a letter. Path for start script file '%1$s' for tape resource '%2$s' is not absolute. Path for stop script file '%1$s' for tape resource '%2$s' is not absolute. Tape device %1$s for tape resource %2$s is not present on node %3$s. Device %1$s, specified as a highly available tape drive, is not a tape device. Tape device '%1$s' for tape resource '%2$s' is not available on node '%3$s'. Tape device '%1$s' for tape resource '%2$s' has SCSI LUN '%3$s' on node '%4$s', different from SCSI LUN ('%5$s') on other nodes. Start script file '%1$s' for resource '%2$s' does not exist or is not executable on node '%3$s'. Stop script file '%1$s' for resource '%2$s' does not exist or is not executable on node '%3$s'. TTY %s reserved for paging on node %s is either not found or not available TTY %1$s on node %2$s is reserved for heartbeating and can't be used for paging TTY %1$s on node %2$s is reserved for DBFS and can't be used for paging. File %1$s defined for paging method %2$s doesn't exist on node %3$s. Connections Services and Fast Connect Services coexist on this node: %1$s Node %1$s participates in more than one Fast Connect resource group Fast connect application does not exist on this node %1$s. The print queue: %1$s does not exist on node: %2$s. Node %1$s does not have fast connect services installed, or Fast Connect Services are improperly configured. Required event %1$s not in the HACMPevent database on node %2$s. Event %1$s has no command configured on node %2$s. Event %1$s's command %2$s does not exist or is not executable on node %3$s. Event %1$s's notify event (%2$s) does not exist or is not executable on node %3$s. Event %1$s's recovery event (%2$s) does not exist or is not executable on node %3$s. Event %1$s's pre event (%2$s) does not exist in ODM or is not executable on node %3$s. Event %1$s's post event (%2$s) does not exist in ODM or is not executable on node %3$s. Custom snapshot method: %1$s's file: %2$s does not exist or is not executable on node: %3$s. 
PPRC configuration contains errors ERCMF configuration contains errors SVC PPRC configuration contains errors SRDF configuration contains errors Group %s has a replicated resource and site relationship of IGNORE. If a network does not support IPAT via aliasing the number of distributed resource groups on the network must be one less than the number of nodes on the site which has lesser number of nodes, (network=%s) Recommended disk space required for logs (upper limit): The log directory %s does not exist or is not writable on node: %s. Please create this directory and run this command again. Note: You must create this directory locally on all nodes for proper functionality. The HACMPlogs ODM either does not exist, or is empty. The cluster log entry for %s could not be found in the HACMPlogs ODM. Defaulting to log directory %s for log file %s. The directory %s, specified for log file %s, is not an absolute path (does not begin with '/'). %s.0f MB disk space remaining on node %s. The directory %s, on node %s specified for log file %s, is part of the AFS-mounted filesystem %s. The directory %s, specified for log file %s, on node %s is part of the DFS-mounted filesystem %s. The directory %s, specified for log file %s, on node %s is part of the NFS-mounted filesystem %s. The directory %s, specified for log file %s, on node %s is part of the filesystem %s, which is managed by HACMP. Therefore, it cannot be used for this purpose.As a result, it could unexpectedly become unavailable. Not all nodes in the cluster are in the concurrent RG %1$s. Can't odmget HACMPresource Resource group: %1$s has specified that file system recovery is to process in parallel and it has nested file systems associated with it. Cannot mix ; type entries with standard entries The ; entry is not defined properly. Not all the NFS mount points for node %1$s are unique, mount point = %2$s NFS mount point %s is incorrect. The NFS mount point must be outside the directory tree of the local mount point. A communication interface is defined for node %1$s on the network %2$s, but no service IP label Multiple entries for a single filesystem or directory found in %1$s on node: %2$s. Filesystem or directory: %1$s is configured to be exported in PowerHA SystemMirror, but it does not have a corresponding entry in %2$s file on node: %3$s. PowerHA SystemMirror will use default behavior to export this filesystem or directory. Not using export file: /usr/es/sbin/cluster/etc/exports on all nodes in resource group: %s. No nodes were defined for the resource group %1$s. There are sites and resource groups with parallel processing defined. Sites only support serial processing. Resource group: %1$s has a site policy of non-ignore and is set for parallel processing. Resource groups with site policies can only have a serial processing order for acquisition and release of the resource group. Resource group: %1$s has a site policy of non-ignore and is set for parallel processing. Resource groups with site policies and replicated resources can only have a serial processing order for acquisition and release of the resource group. Persistent IP label %1$s on node %2$s on network %3$s has no communication interfaces defined on the same node and network. This persistent IP label will be ignored. Persistent IP label %1$s on node %2$s and network %3$s has no communication interfaces defined on the same node and network. This persistent IP label will be ignored. 
Node: %1$s communication interfaces: %2$s and %3$s both belong to the same cluster network, %4$s and are configured to use the same ATM device, atm%5$ld. An alternate hardware address was configured for service label: %1$s on an ATM network. Invalid ATM hardware address:[%s] Invalid hexadecimal digit in ATM hardware address:[%s] The selector byte of ATM hardware address is out of range:[%s] The selector byte of ATM hardware address is not unique:[%s] Node %1$s: More than one atm service IP label with Hardware Address Takeover is configured to potentially coexist on ATM device atm%2$ld. Multiple atm service IP labels with Hardware Address Takeover are not allowed to coexist on the same ATM device. The following service labels are in conflict: %3$s. Node %1$s: Adapter %2$s is configured on interface at%3$d. Resource group %4$s contains node %1$s and service label %5$s, which has %6$d as the selector byte in its hardware address. For all nodes belonging to %4$s the selector byte of the hardware address of any atm service IP label cannot coincide with any number of an ATM interface configured in the cluster. Node %1$s: IP address %2$s is configured on interface at%3$d. Resource group %4$s contains node %1$s and service label %5$s, which has %6$d as the selector byte in its hardware address. If %4$s is brought online on node %1$s, the configuration of IP address %2$s on interface at%3$d will be permanently lost. Node %1$s: More than one atm service IP label exists with %2$d as the selector byte in its hardware address, that belongs to a resource group containing node %1$s. Multiple atm service IP labels cannot use the same selector byte if they are configured in resource groups that have a node in common. The following service labels are in conflict: %3$s. Two ATM service labels (%1$s,%2$s) with HWAT have an identical MAC portion (first 6 bytes) of the hardware address, which may cause ATM switch problems. ATM service label with HWAT and ATM adapter (%1$s,%2$s) have an identical MAC portion (first 6 bytes) of the hardware address, which may cause ATM switch problems. Broadcast address: %s for IP interface: %s on node: %s conflicts with the calculated broadcast: %s based on netmask: %s. Please use smitty chinet to change the broadcast address for this interface. There may be an insufficient number of communication interfaces defined on node %1$s network %2$s. Multiple communication interfaces are recommended for networks that will use IP aliasing. Network: %1$s has only one participating node. %2$s network communication requires two nodes. Node: %1$s has %2$ld rs232s configured on tty: %3$s PowerHA SystemMirror fileset: %1$s is installed on one or more nodes, but is not installed on node: %2$s The version levels are different for PowerHA SystemMirror fileset: %1$s Cluster verification detected that some cluster components are inactive. Please use the matrix below to verify the status of inactive components: HATivoli is installed but the Tivoli oserv daemon is not running on node: %1$s The service IP label: %1$s is not configured to be part of a resource group. It will not be acquired and used as a service address by any node. The current resource group configuration may not support proper fallover behavior. On a network that does not use IP aliasing, fallover requires that the takeover node is not the highest priority node in any other resource group. Please check to ensure that this is the case. 
There are too many distributed resource groups: %1$s with service IP labels in network: %2$s, nodes: %3$s On a network that does not use IP aliasing, you may configure no more than N-1 distributed resource groups, where N is the total number of nodes on the network. Filesystem %1$s not found [%1$s] may conflict with [%2$s] in resource group [%3$s] Node: %1$s does not have a service IP label defined for the specified network For NFS mount: %2$s in resource group: %3$s. Dynamic Node Priority Policy is configured in a resource group with only two nodes. The priority calculation is irrelevant and will be ignored. Dynamic Node Priority is configured in a resource group defined over more than one site. The priority calculation may fail due to slow communication, in which case the default priority will be used. The file: %1$s on node: %2$s might start some AIX Connections services at boot time that could cause problems with PowerHA SystemMirror. Connections service %1$s/%2$s does not have %3$s '%4$s' defined on node: %5$s. Application Monitor: %1$s does not have an associated resource group or server. lsdev -Cl '%1$s' for tape resource '%2$s' failed on node '%3$s'. File share: %1$s is not on the shared filesystem, unless there exists a symbolic link to the share. File share: %1$s does not exist on the shared disk, unless there exists a symbolic link to the share. There are no shared filesystems defined, therefore your Fast Connect file shares may not be available during fallover and recovery. The node: %1$s has inconsistent netbios name with the other nodes participating in the non-concurrent resource group. The node %1$s has the same netbios name as other nodes participating in the concurrent resource group. Custom method: %1$s's file: %2$s does not exist or is not executable on node: %3$s. Custom disk method: %1$s's file: %2$s does not exist or is not executable on node: %3$s. Node: %1$s is a member of more resource groups with IPAT than it has communication interfaces available for takeover on network: %2$s. Only one node: %1$s was defined for resource group: %2$s. WLM class %1$s specified in resource group %2$s is not defined. %1$s resource group %2$s contains a secondary WLM class. Resource groups with 'Online On Home Node Only' or 'Online On First Available Node' may contain secondary classes. Resource group %1$s contains a secondary WLM class, but no primary class. WLM class(es) %1$s %2$s and no application servers are associated with the resource group: %3$s. Error parsing %s: too many classes. Error parsing %s: class name too long. WLM class: %1$s specified in %2$s has subclasses configured WLM subclasses are not supported by HACMP. '%s %s %s %s %s %s' is not present in the %s '%s %d/%s' is not present in the /etc/services on node %s '%s %d/%s' contains an invalid port number (%d) on node: %s '%s %d/%s' contains an invalid proto value (%s) on node: %s Node: %s State: %s Network: %s State: %s Label: %s Address: %s State: %s Resource Group: %s State: %s Verifying Configuration of Errnotify Stanzas -------------------------------------------- Error notification stanzas will be added during synchronization for the following: Node: %s en_label: %s en_resource: %s en_class: %s Additional stanzas exist in ODM errnotify for resources that are monitored for availability by means of Error Notification. The notify method configured by PowerHA SystemMirror ("/usr/es/sbin/cluster/diag/clreserror") is the most efficient method of recovery for dependent resource groups. 
Node: %s en_label: %s en_resource: %s en_class: %s Error labels are missing in the Error Report Template Repository of one or more cluster nodes. No recovery will be provided for a resource group if a corresponding error affects resources on which that group depends. Node: %s Error Label: %s Network %s has %d node(s) defined; for heartbeating over this network to occur, at least 2 nodes should be defined. The PowerHA SystemMirror network "%s" has only one network interface configured on the following nodes: %s For nodes with a single Network Interface Card per logical network configured, it is recommended to include the file '/usr/es/sbin/cluster/netmon.cf' with a "pingable" IP address as described in the 'PowerHA SystemMirror Planning Guide'. File 'netmon.cf' is missing or empty on the following nodes: %s File 'netmon.cf' has non-zero size, but does not contain information about IP labels on the following nodes: %s To prevent the IP stack from being a single-point-of-failure, PowerHA SystemMirror recommends a non-IP network topology in which each node in the cluster can use non-IP networks to reach all other nodes in the cluster, either directly or via other nodes. This topology should be in addition to any configured IP-based networks. The following node(s) or sets of "non-IP connected" nodes violate this recommendation: It is recommended that these nodes or sets of nodes be inter-connected via additional non-IP networks, such as RS-232, shared disks for disk heartbeating, etc. %s: Invalid Startup Preference %s: Invalid Fallover Preference %s: Invalid Fallback Preference Custom Resource groups do not support sites. Please set site policy to 'ignore' for %s Resource Group %s uses Dynamic Node Priority as Fallover Preference. Please configure a dynamic node priority to use for this group. The fallback timer policy %s, used by %s, does not exist. The fallback date in the timer policy '%1$s' is in the PAST. Resource Group '%2$s' will ignore this fallback timer policy. The Resource Group Settling time value is: %d secs. The Resource Group(s) affected by the settling time are: None of the Resource Groups are configured to use the settling time. Resource Group '%1$s' is configured to use '%2$s' fallback timer policy. Verifying Resource Groups. Custom Resource groups are supported only on 'aliased' networks. %s uses a service adapter from a non-aliased network. The network corresponding to %s NOT found. Adapter information not found for %s Forced Varyon option is specified for resource group %1$s. However, logical volume: %2$s in volume group: %3$s is not defined with super strict mirroring on node: %4$s. Forced varyon may not work as expected. One or more nodes must be configured before executing the following operation: cluster verification. One or more nodes must be configured before executing the following operation: IP harvesting. %1$s: Resource groups with 'Bring Offline (On Error Node Only)' as fallover policy when the startup policy is not 'Online On All Available Nodes' could allow resources to become unavailable during error conditions. Network option "%s" is set to %s on node %s. Please be aware that this setting will be changed to %s during PowerHA SystemMirror startup. Network option "%1$s" is set to %2$s on the following nodes: Please be aware that this setting will be changed to %1$s during PowerHA SystemMirror startup. Network option "%1$s" has different settings among cluster nodes. 
Please make sure that the command (no -o %2$s) provides the same output on all of the following cluster nodes: Out of node space for network options Network option "%s" has different settings between nodes "%s" and "%s". Please make sure that the command no -o "%s" provides the same output on all cluster nodes. Verifying the network options. Cross-site mirroring is configured for volume group %s defined in resource group %s, but the forced varyon option is not set. Change this option to "true" to configure cross-site mirroring. The following Resource Groups have nodes with an inconsistent PowerHA SystemMirror/XD product installation: The following PowerHA SystemMirror/XD products are currently installed on the cluster nodes:(no PowerHA SystemMirror/XD product is installed) All nodes of a Resource Group that should utilize PowerHA SystemMirror/XD Data Replication should have the same PowerHA SystemMirror/XD products installed. Resource group %1$s has its inter-site management policy set to %2$s, but there are no PowerHA SystemMirror sites defined. Make sure that two sites are defined on the cluster or change site policy to "ignore". The RSCT level is inconsistent in the cluster: To ensure PowerHA SystemMirror is working properly, it is recommended that the same level of RSCT software be installed on nodes with the same level of AIX. SNA Communication link %s is configured as part of resource group %s however, node %s does not have the appropriate SNA communication software installed. Using SNA communication links requires Communication Link V6.1 or higher installed on all nodes that could possibly acquire the link. X.25 Communication link %s is configured as part of resource group %s however, node %s does not have the appropriate X.25 software installed. Using X.25 communication links requires AIX Communication Server V2.0 or higher installed on all nodes that could possibly acquire the link. Cluster verification detected problems trying to reach disk with pvid %s on %s. This disk is defined as a part of volume group %s, which is a part of resource group %s supported on node %s. Check disk availability to ensure node %s can acquire this volume group. The MTU sizes do not match for communication interfaces %s and %s on network %s. The NIC %s on node %s has an MTU size of %s, and the NIC %s on node %s has an MTU size of %s. To correct this error, make sure that the MTU size is consistent across all NICs on the same PowerHA SystemMirror network. Setting network option.Cluster verification detected that some of the disks on the cluster use both hdisks and device paths on different nodes. To ensure correct device processing, please configure all nodes to use either hdisk or vpath devices for the following PVIDs: The user group "%s" does not have a matching group ID on node(s): %s The below are the values for the group "%s" %s Please set the group ID to the same value on the above node(s). Node: %s does not have the user group "%s" defined in /etc/group. Please add the user group "%s" to /etc/group. group: %s label: %s -> aliased -> replacement This service label belongs to IPAT via replacement network (network id = [%d]) group: %s: Resource group '%1$s' has a startup policy of '%2$s' and site relationship of '%3$s'. This configuration is not permitted. Resource group '%1$s' has a startup policy of '%2$s' and site relationship of '%3$s'. This configuration is not permitted when the global distribution policy is set to 'node'. 
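A minimal sketch of how the group ID mismatch reported above might be inspected and aligned on AIX; the group name 'haadmin' and GID 205 are illustrative only:

    # Compare the GID reported on each node
    lsgroup -a id haadmin
    # On the node that differs, set the GID to the agreed value (as root)
    chgroup id=205 haadmin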
ERROR: Resource group '%1$s' has a startup policy of '%2$s' and site policy of '%3$s'. This configuration is only permitted when the global distribution policy is set to 'network' Unable to open the parameterized file %s. (internal) Unable to find parser module %s in data structure pcsModules. (internal malloc) Unable to allocate %d bytes, out of memory! @(file = %s, line = %d) Directory %1$s contains file %2$s whose name exceeds the maximum allowed file name length. Loading verification parameterized file: %1$s Invalid directory specified as the root directory for parameterization files: %s Missing specifier for field Component.Nodes, valid values are 'ALL', or 'S=:[C=]' Too many lines (exceeded %d lines) in verification parameterization file %s Collect the Oracle Smart Assist OPMN configurationVerifying /etc/inittab configuration's settings for clinit, and pst_client entriesVerify parameterized verification files in /usr/es/sbin/cluster/etc/config/verifyCheck the Oracle AFC/CFC smart assist application's OPMN configuration for changesThe Oracle Application Server: %s OPMN configuration has changed. Please check to ensure the OPMN components defined to this application have not been removed or modified from the original configuration. PowerHA SystemMirror requires /etc/inittab entry %s to have action %s. This ensures PowerHA SystemMirror cluster services are started at node reboot. The entry should read: %s:%s:%s:%s Presently, node: %s has entry: %s set to action: %s. Please change this entry's action, then re-run verification. It may also be necessary to run 'init q' after changing the entry in order to start PowerHA SystemMirror background processes. PowerHA SystemMirror requires /etc/inittab entry: %s to run at runlevel %s. This ensures PowerHA SystemMirror cluster services are started at node reboot. The entry should read: %s:%s:%s:%s Presently, node: %s has entry: %s set to runlevel(s): %s. Please change this entry's run level, then re-run verification. It may be necessary to run 'init q' after changing the entry in order to start PowerHA SystemMirror background processes. PowerHA SystemMirror requires /etc/inittab entry: %s:%s:%s:%s PowerHA SystemMirror cluster node: %s is missing this entry. Please add this entry to /etc/inittab, then re-run verification. It will be necessary after adding the entry to run 'init q'. Running 'init q' will start PowerHA SystemMirror background processes. The following tty devices are enabled in /etc/inittab on the following cluster node(s): Node Name Device Identifier Action -------------------------------- ---------- ---------- ------- RSCT will be unable to start until these entries are removed or disabled within /etc/inittab. Please remove these entries, or change the action to 'off', then run 'init q' to stop the getty or terminal processes. Checking required fileset: %s (%s) Checking required APAR: %s Checking required user: %s (UID %d) Checking required user: %s Checking required group: %s (GID %d) Checking required group: %s Checking AIX swapspace levels, required minimum: %s, freespace: %s Checking filesystem %s for a minimum of %s freespace. Checking for required file: %s Validating on nodes: PowerHA SystemMirror Verification Parameterization ModuleVerifies sets of files exist over a set of cluster nodes. Checks for fileset consistency across multiple cluster nodes. Checks that sufficient diskspace is available on chosen volumes. Checks that sufficient swapspace is available on chosen volumes. 
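A hedged sketch of checking and refreshing the /etc/inittab entries discussed above; 'clinit' is one of the identifiers named in these messages, and the exact runlevel, action, and command are whatever verification reports as required:

    # Display the current entry for the flagged identifier
    lsitab clinit
    # After editing the entry into the reported identifier:runlevel:action:command form,
    # re-read /etc/inittab so the change takes effect
    init q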
Checks that certain APARs are installed during cluster verification. Checks that specific users are present and have the same UID on specified cluster nodes. Checks that specific groups are present and have the same GID on specified cluster nodes. Parameter modules collectorThe %s requires filesystem %s; node %s does not have this filesystem defined. The %s requires %s of free disk space on filesystem %s; node %s does not meet this requirement. The %s requires %s of free swapspace; node %s does not meet this requirement. The %s requires %s of available swapspace; node %s does not meet this requirement. The %s requires file %s; node %s does not have a copy of this file. The %s requires fileset %s; node %s does not have this fileset installed. The %s requires fileset %s version %s; node %s has version %s installed. The %s requires APAR %s; node %s does not have this APAR installed. The %s requires user %s; node %s does not meet this requirement. The %s requires user %s to have a UID of %d; node %s has a UID of %d. The %s requires group %s; node %s does not meet this requirement. The %s requires group %s to have a GID of %d; node %s has a GID of %d. GPFS cluster is not configured. GPFS cluster is using '%s' network. Delete GPFS Cluster before modifying this network. Failed to read HACMPnode ODM or No nodes defined for PowerHA SystemMirror Cluster. GPFS cluster is using '%s' network. There is no adapter defined for this network on node '%s' GPFS configuration data does not match. You must manually fix the error before attempting to reconfigure HACMPcluster. '%s' has been added to the PowerHA SystemMirror cluster configuration. This node will be added to the GPFS cluster as well. hdisks that belong to '%s' are either not configured or not available. Local hdisk information NOT FOUND!! Remote hdisk information NOT FOUND!! Unable to detect the hdisk used by the GPFS filesystem. This may be because the hdisk that is being used by GPFS is no longer available on this machine. Unable to detect the local PVID for '%s' The PVID for '%s' hdisk is 'None'! This disk is configured for GPFS. Unable to detect the REMOTE PVID for '%s' The PVID for '%s' hdisk is None! This disk is configured for GPFS. The PVID for '%s' hdisk does not match the one on the remote node! '%s' has been deleted from the configuration. This node will be deleted from the GPFS cluster as well. popen failed to execute command '%s'. Failed to retrieve GPFS filesystems Popen failed to execute command '%s'. Failed to retrieve GPFS Logical Volumes Failed to read data from command '%s' Failed to retrieve GPFS Logical Volumes popen failed to execute command '%s'. Failed to retrieve GPFS hdisks Popen failed to read data from command (fence_valid_params). Invalid hdisk read. Data read in is: '%s' '%s' resource group is using '%s' GPFS volume group. '%s' resource group is using '%s' GPFS Filesystem. Verifying GPFS configuration Verifying '%1$s' filesystem on '%2$s' (node being added) '%s' consists of %d hdisks COMPARED Filesystems: %1$s with %2$s COMPARED CVG: %1$s with %2$s COMPARED VG: %1$s with %2$s Comparing: %1$s and %2$s Memory allocation failed. File:%s, Line:%d GPFS cluster is using '%1$s' network. None of the adapters used by the GPFS network exists in the current configuration. Please delete GPFS cluster before modifying %2$s network. 
Connected to node %s, IP = %s VERIFICATION NAME: %s %s %s Data Collectors: No data collectors ID: %d Description: %s Collector on node %s returned the following error: Collector on node %s returned the following warning: Collector succeeded on node %s (%d bytes) Collector returned successfully on node %s (no data was returned) BEGIN CHECK END CHECK Check: PASSED Check: FAILED Total elapse time: %d second(s) PASS FAIL TRUE FALSE unknown Nothing to verify for this check Nothing to verify for node '%s' Nothing to verify for group '%s' Nothing to verify for resource '%s' Executing external command %s No relative path specified Verifying that GEOsite count equals HACMPsite count Verifying that all sites in GEOsite are in HACMPsite with the same nodelist Verifying that resource group is non-concurrent Skipping cluster/etc/exports file verification on cluster with sites Checking for duplicate entries in cluster/etc/exports on node %s Checking for exported FS resources in cluster/etc/exports on node %s Verifying that cluster/etc/exports exists on all nodes, or none in resource group %s Verifying that log dirs are valid There is %d MB of available space remaining for path %s on node: %s This is below the recommended filesystem space of %d MB for log file %s. File: %s Type: %s Permission: %s Verifying that all nodes in resource groups exist in cluster Verifying that service IP label is in topology: %s Verifying that each node in RG has interface for service IP label Verifying that all service IP labels are configured correctly Node: %s Service IP Label: %s Service label with no interface is not in RG: %s Is the label in only one RG Shared label in distributed RG Non-shared label in cascading RG Node highest priority in only one RG Highest priority node owns labelGathering data about all disks, volume groups, logical volumes, and filesystems configured across the cluster. Finished gathering data Checking resource configuration for all disks, volume groups, logical volumes, filesystems, and node priority policies. No mixing of NFS mount methods in one RG Mount point is properly specified Mount point is not a duplicate Filesystem not set to automount Volume group is set to autovaryon Mountpoint is a directory on each node Filesystem %s is correctly configured on each node Filesystem is accessibleFilesystem uses the same mountpoint, logical volume, volume group, log logical volume, and disks on all nodes Export filesystem %s is correctly configured on each node Volume group %s is correctly configured on each node Volume group is accessible Volume group uses the same disks on all nodes Disk %s is correctly configured on each node Disk is accessible Verifying that logical volumes have disks on node %s Verifying that file systems use same disk as another resource group Verifying that volume groups use same disk as another resource group Verifying that resource groups share same disks Verifying that application servers specified as resource are in HACMP config Verifying that application server start, stop, and monitor scripts exist on all nodes in the resource group Verifying that network %s will properly support NFS Network must use IP Network should have a service label Network must use IP address takeover Verifying that all required events exist on node %s Verifying that all pre-and-post event scripts exist on node %s Verifying that all required event scripts exist on node %s Verifying that all resource groups with replicated resources have non-ignore site policies. 
Verifying that all rotating resource groups have service labels Verifying that rotating resource group configuration supports fallover Interdependent resource group set (%d) Group set is on an IP aliasing network Group set contains fewer groups than the number of nodes in the smallest group. Verifying that rotating resource group has concurrent site policy. Verifying that VGs for all PowerHA SystemMirror filesystems are found in the system ODM Verifying that volume group and logical volume configuration is consistent on all nodes. Comparing volume group configuration on node %s and node %s. Comparing logical volume configuration on node %s and node %s for volume group %s Verifying that all communication link resources are defined in PowerHA SystemMirror Communication link in only one resource group All SNA DLCs, ports, and link stations for link %s exist Application service file for link %s has correct permissions Verifying that no node must host Fast Connect services and AIX Connections services at the same time. Verifying that no node must host Fast Connect services from more than one resource group at the same time. Verifying that Fast Connect fileshares are on a filesystem defined in the same resource group. Verifying that Fast Connect configuration is consistent on all nodes. Fast Connect application exists Fast Connect print queues are consistent netbios nodename is correct monitor is in one group Resource Groups have only one monitor Verifying that all resource groups with NFS exports have service IP labels. Resource group is non-concurrent Tape resource is correctly configured Tape resource defined Tape resource well named Start script is absolute path Stop script is absolute path Device exists Device correct type Device available LUN matches Start script is executable Stop script is executable Verifying that all remote notification ports are available. Verifying that all remote notification message files exist. Verifying node %s %s: Local cluster services are not active. No inactive cluster components detected. Verifying that tivoli oserv process is running on all nodes in cluster with cluster.hativoli installed. Verifying that fileset '%s' is installed on all nodes Verifying that all nodes are present in concurrent resource groups with fencing enabled and no replicated resources. No secondary class without primary Secondary class in cascading RG only WLM resource group has application server WLM class %s exists WLM class %s has no subclass Verifying that all NFS mount points are unique. Verifying that rotating resource groups on non-aliased networks support fallover for sites Network requires sites Group count < (nodes at site - 1) Network supports fallover Verifying that network, interface, IP label, and node names are not reserved words. RESERVED WORD: %s Verifying exported filesystems exist on local node. Verifying participating nodes for resource group '%s' have volume group %s Major numbers match Verifying that resource groups using serial filesystem recovery do not have any nested filesystems. Verifying adapters on network '%s' are all of nim type '%s'. No interfaces to verify for this network. Verifying cluster nodes /etc/inetd.conf for '%s' daemon entry. Verifying /etc/services entry '%s %d/%s' Verifying that networks with takeover interfaces also have service IP labels. Verifying that resource groups have more than one node. Verifying that nodes on resource groups have enough interfaces for proper fallover. 
Verifying that communication adapters specified for use in highly available communication links exist, and are configured correctly. Link has communication adapters Adapter %s exists Adapter %s is in multiple links Node %s has communication adapter Verifying that all communication links specified in HACMPcommadapter exist Verifying that all node/driver pairs specified in HACMPcommadapter exist Verifying that principal for service %s is in /.klogin Verifying errnotify on node %s Currently configured errnotify entries: Target list of errnotify entries to configure: Entries: en_resource: %s, en_class: %s, en_label: %s Verifying that all communication interfaces have valid subnets. Verifying that all global networks are properly configured. Network does not use address takeover Each node has only one interface on net Obtaining adapter and tty information Verifying that no networks have both dynamic and static service IP labels Verifying that serial networks all have exactly two nodes. Verifying that all NIMs are correctly configured Verifying that all IP labels resolve to the correct IP address on all nodes. Verifying that loopback entries exist, and broadcast addresses are correct on all nodes. Loopback is present Broadcast addresses are correct Verifying that all hardware addresses are unique Hardware addresses to compare: Enhanced security is not enabled. Kerberos Realm: %s Verifying Kerberos remote access setting for PowerHA SystemMirror IP addresses Verifying that node %s does not have multiple RS232 interfaces defined on the same TTY device. Verifying that node %s does not have multiple PowerHA SystemMirror RS232 interfaces configured to use the same TTY device. Verifying cluster name Verifying node names Verifying network names Verifying resource group names Verifying communication interface and service IP label names Verifying that multiple ATM interfaces are not configured to use the same ATM device (atm%ld). Verifying that multiple service IP labels with alternate hardware addresses are not configured to fallover to the same ATM device on node %s. Verifying that all PowerHA SystemMirror networks contain a service IP label or a communication interface marked as service. Verifying that all persistent labels are correctly configured. Network supports persistent labels One persistent label per network/node Persistent label has no hardware address Network contains a boot-time IP address or service interface Subnet doesn't contain standby interfaces Verifying that all serial network communication interfaces exist. Verifying that all PowerHA SystemMirror interfaces on the same network use the same netmask. Verifying that all service IP labels and communication interfaces are correctly configured. BOOT INTERFACES SERVICE INTERFACES STANDBY INTERFACES Interface exists Interface of correct type: %s Interface on good subnet Local node name is %s Security mode is %s Node %s configured on network Network %s: Verifying IP network %s has at least two nodes. Verifying superstrict mirroring in volume groups with forced varyon Verifying Disk Heartbeat Networks Either no diskhb networks or adapters defined Checking network %s Endpoint1 - Node: %s, Net: %s, device: %s Endpoint2 - Node: %s, Net: %s, device: %s Network: %s, Node: %s, Device: %s, PVID: %s, Mode: %s Check of network %s Passed Verifying that fileset '%s' is installed on all nodes and that the same software level is installed on nodes with the same level of AIX. 
WARNING Initialize Topology Configuration...Checks the validity of user defined topology names for cluster, node, network, adapter, and groupVerify HACMPnim is validVerify no mixed static / dynamic service adapters appear on one networkCheck for global networks defined to PowerHA SystemMirror, if there are any verify the constituent networks are not IPAT capable.Obtain interface and tty adapters for HA.Check the standby adapters to ensure all standby adapters on the same network are on the same subnet. Generates a warning for each standby not on the same network subnet.Performs the host command on each adapter specified in HACMPadapter to determine if the adapter is available on that node.Obtain the AIX ATM configuration for use in other checksObtain the atm interface topology information.Print the ATM Configuration headerChecks if two adapters of the same cluster network are configured on interfaces over the same ATM device, and issues a warning for each pair found.Verify that all persistent labels are configured correctly.Checks that hardware address for each adapter is unique on all nodes in the cluster. Checks loopback entry in /etc/hosts, and validity of broadcast addressesVerifying IP aliasing is configured correctly.Verifying Kerberos security settings.Verify serial networks against actual tty/tmssa/tmscsi devices on the specified nodes.Verify point-to-point serial networks have two defined points.Verifying correct network netmasks exist on all HA adapters.Verifying labels and interfaces are properly defined for all cluster nodes.Verifying RSCT Topology configuration.Verifying contents of /etc/snmpd(v3).conf and /etc/snmpd.peersCheck all atm networks are private.Check installed cluster filesets on all nodes.Check /etc/services for cluster service ports.Check /etc/inetd.conf for cluster daemons.Check adapters, networks and node names against a list of reserved words in /usr/es/sbin/cluster/etc/reserved_words.Check that every IP capable network has at least two nodes defined on the network.Deallocate memory used in topology initialization routine.Check /etc/hosts contains the same entries as defined to PowerHA SystemMirror.Check /usr/es/sbin/cluster/etc/clhosts.client contains the information about IP labels known to PowerHA SystemMirror.Check that all nodes have a unique SSA node number.Verifying single-adapter networksVerify the boot time interfaces are configured properly prior to starting cluster services.Verifying Automatic Cluster Configuration Monitoring settingsNode %s selected to automatically verify the cluster configuration, is not a part of the cluster. Please change the node name to one of the cluster nodes in Automatic Cluster Configuration Monitoring settings. Verifying XD networksVerifying the network optionsVerifying installed rsct filesets on all nodes.Checking for missing WAN-related software. 
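A sketch of the network option consistency check referred to above; the tunable name nonlocsrcroute is only an example:

    # Run on every cluster node and compare the output
    no -o nonlocsrcroute
    # Align a node that differs by setting the expected value
    no -o nonlocsrcroute=1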
Check all service label resources.Check HAGEO configuration.Check all PPRC resources.Check all ERCMF resources.Check all SVC PPRC resources.Check all SRDF resources.Check all replicated resources.Check node participation in rotating resource groups.Set up general data structures for resource checks.Check logical volume configuration consistency.Verify all disk, VG, and filesystem configurations are consistent on all nodes.Verify AIX Connections exist and are consistently configured on all nodes.Verify all application server start, stop, and monitor scripts are executable on all nodes.Check that application monitor is not included in more than one RG, and that only one monitor is configured per RG.Verify that number of application monitors does not exceed limit.Verify that SNA DLCs, ports, and link stations exist on all nodes, and that HA commlink app_svc_file exists.Checking that all NFS resources have a service labelVerify tape resources are correctly configured, and that tape start/stop scripts are executable.Verify remote message file exists and is readable on all nodes.Verify all fast connect resources are consistent on all nodes.Verify all event command, recovery, and notification scripts are executable on all nodes.Verify all pre/post event scripts are executable on all nodes.Verify all custom verification scripts are executable on all nodes.Verify all custom snapshot scripts are executable on all nodes.Verify cluster logs are configured correctly on all remote nodes.Verify all custom disk scripts are executable on all nodes.Checks the integrity of the cluster's WLM configuration settings, and verifies the configuration on the local node.Verify the exports file is the same on all cluster nodes.Verifying resource groups processed in parallel do not have nested filesystems.Verifying exported filesystems and logical volumes exist.Verifying resource groups with filesystem(s) to NFS export, check the major number of the owning volume group is the same on all nodes in the resource group.Verifying NFS mount points for all filesystems in all resource groups are unique.Verifying all cluster nodes belong to concurrent resource groups.Verifying the number of distributed resource groups is one less than the number of nodes in the cluster.Verifying at least one node belongs to a cascading resource group.Verifying that nodes that belong to one or more cascading resource groups have the at least the same number of standby adapters.Verifying there are no parallel resource groups, if there are sites defined.Verifying the resource group sticky bit is consistent across all cluster nodes.Verifying configured WAN communication links are available on all cluster nodes.Verifying configuration of error notification is consistent across all cluster nodes.Harvest IP data from all cluster nodes.Harvest Disk Heartbeat data from all cluster nodes.Verifying files listed in PowerHA SystemMirror File Collections.Verifying Resource Group dependenciesVerifying Geographically Mirrored Volume GroupsVerifying user-defined XD utilitiesUser-defined XD verification utility %s failed.The %1$s filesets are installed on following node(s): %2$s. The current PowerHA SystemMirror/XD configuration does not include %3$s resources in any of the existing PowerHA SystemMirror resource groups where node(s) %2$s participate. 
Please add a %4$s resource to a resource group to make use of the %1$s functionality.Cannot locate file %s to verify %s configuration.Node %1$s does not have all filesets required to support %2$s PowerHA SystemMirror resources. The following filesets need to be installed on this node: %3$s Retrieve the output from ver_get_ipaddr_map for each nodeRetrieve the output from ver_get_addr_info for each nodeRetrieve a list of kerberos principals.Retrieve filesystem configuration settings.Retrieve list of configured SNA DLCs.Retrieve list of configured SNA ports.Retrieve list of configured SNA link stations.Retrieve a list of running processes.Retrieve a list of WAN adapters.Determine log information from all cluster nodes.Execute the command /bin/df -k to determine filesystem space on all nodes.Determine netbios server name on node.Retrieve contents of /usr/es/sbin/cluster/etc/exports.Retrieve the .klogin file from each cluster nodeRetrieve the file /etc/snmpd.confRetrieve the file /etc/snmpdv3.confRetrieve the link /usr/sbin/snmpdRetrieve the file /etc/snmpd.peersRetrieve the file /etc/inetd.confRetrieve the file /etc/servicesRetrieve the file /etc/hostsRetrieve the file /usr/es/sbin/cluster/etc/clhosts.clientRetrieve the file /usr/es/sbin/cluster/netmon.cfRetrieve a copy of ODM CuAt from each nodeRetrieve a copy of ODM CuPv from each nodeRetrieve a copy of ODM CuDv from each nodeRetrieve a copy of ODM CuDvDr from each nodeRetrieve a copy of ODM PdAt from each nodeRetrieve a copy of ODM PdDv from each nodeRetrieve a copy of ODM product from each nodeCollect stats for specified file.Collect basic AIX connection configuration data.Collect AIX connection resource configuration data.Retrieve the SSA node number.Retrieve list of SSA devices.Collect AIX connections auto-init data.Collect volume group and major number data.Collect the resource groups with sticky bit set.Collect the errnotify current and target rgpa structures.Harvest IP information from remote nodes.Collect fastconnect print queue status.Collect the mode and PVIDs of disks used for disk heartbeat.Verifying the modes and PVIDs of disks defined in diskhb serial networks.Harvest Disk Heartbeat information from remote nodes.Obtain the list of network options.Obtain the oslevel.Retrieve the list of cluster,CAA,rsct,sna and sx25 filesets installed on all nodesCollect OEM method file information. FATAL ERROR: no space remaining in filesystem. Unable to write to %s/clverify.log. %s/current will be removed to free space. Existing directory %s must be renamed or deleted to recover from previous run of clverify that was unexpectedly terminated. Failed creating directory %s. Failed removing directory %s. Failed renaming directory %s. Node %s: File does not exist, unable to open file. Unable to write response xml file %s Discovered [%d] interfaces The cluster is currently inactive. Modified mode verification can only proceed when cluster services are running. Verification will proceed in normal mode. Unable to determine ODM ACD/DCD directories for modified verification mode. Modified mode flag -V will be ignored. Running cluster verification in modified mode verifies only those HA components that have changed. If you have made system level changes to your cluster nodes, please re-execute verification in normal mode. INTERNAL ERROR: Unable to run in modified mode, execution of command '%s' failed. Modified mode flag -V will be ignored. 
Disk Heartbeat networks with fewer than 2 nodes Not even # of adapters or network w/o adapters Unable to read collected data from node %s Devices are not in Enhanced Concurrent Mode VG Communication device %s on node %s and %s on node %s in the disk heartbeat network %s do not have the same PVID. Both communication devices in a given disk heartbeat network must be the same physical disk. Disk Heartbeat Networks have been defined, but no Disk Heartbeat devices. You must configure one device for each node in order for a Disk Heartbeat network to function. Node %1$s: Read on disk %2$s failed. Check cables and connections. A reserve may be set on that disk by another node. Node %1$s: PVID could not be determined for disk %2$s, used in disk heartbeating network %s. The PVIDs of disks used in disk heartbeating network %1$s do not coincide. Node %2$s: %3$s %4$s Node %5$s: %6$s %7$s Node %1$s: Volume group %2$s is not in enhanced concurrent mode. %3$s contains %4$s, which is used in a disk heartbeating network. Disk heartbeating network %1$s contains only one adapter. Node %2$s: %3$s The maximum number of resource groups %d has been exceeded. The current cluster configuration has %d resource groups defined. Please remove %d resource group(s). The maximum number of interfaces configured (%d) in PowerHA SystemMirror has been exceeded. The current cluster configuration has %d PowerHA SystemMirror interfaces defined. Please remove %d interface(s). Checking /etc/snmpd(v3).conf and /etc/snmpd.peers for HA entries Cannot perform verification in snapshot mode.Sticky attribute is not consistent across the cluster for RG %s Elapsed Time: %d second(s) Run user-defined custom verification routines.Adapters on network %s do not all have the same netmask. A hardware address is defined for interface %s on a network that is configured to use IP aliasing Standby interface %s is defined on a network that is configured to use IP aliasing Serial network %s is configured to use IP aliasing. Communication interfaces on node %1$s and network %2$s are not on different subnets. Verifying cluster topology for IP aliasing. There may be an insufficient number of communication interfaces defined on node: %s, network: %s. Multiple communication interfaces are recommended for networks that will use IP aliasing. Service adapter %1$s on network %2$s is on the same subnet as at least one of the communication interfaces on this network. Service labels must be on a different subnet when the network is configured to use IP aliasing for IP address takeover. Only one infiniband interface per network per node is supported at this time. Node: %s network: %s has more than one defined. Check resource groups with service labels on the same network.Service label %1$s is part of %2$s resource group %3$s and is on network %4$s. Service label %5$s is on the same network, but resource group %6$s (which contains %7$s) is of type %8$s. All resource groups with service labels on the same network must be of the same type. Retrieve a copy of ODM errnotify from each nodeVerifying File Collection files File %s in collection %s does not exist on node %s. File %s in collection %s is empty on node %s. File %s in collection %s is not a regular file on node %s. File collection %s does not contain any files. 
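A sketch of how the disk heartbeat PVID comparison above can be made by hand; the hdisk name is illustrative:

    # On each endpoint node, list the PVID of the heartbeat disk
    lspv | grep hdisk3
    # The PVID column must be identical on both nodes for the diskhb network to be valid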
Cross-Site LVM Mirroring was configured, but no sites or only one site was defined %1$s: Incorrect data returned from node %2$s Number of physical volumes exceeded limit of %d The volume group %s, which has Cross-Site LVM Mirroring enabled, has a configuration error. The disks are assigned to one site (that is, there is not a copy on the other site), or not assigned to any site. A volume group that has Cross-Site LVM Mirroring enabled should have the disks assigned to both sites. To view information about the disks assigned to sites, go to "Change/Show Disk/Site Definition for Cross-Site LVM Mirroring". The logical volume %s in %s configured for Cross-Site LVM Mirroring is improperly configured. It does not have a copy of each logical partition on both sites. Volume group %s was configured for Cross-Site LVM Mirroring. The information about logical volumes belonging to this VG cannot be retrieved from the cluster nodes. Check VG configuration. Resource Group "%1$s" contains service IP labels that use both IP Address Takeover via Aliasing and IP Address Takeover via Replacement. All service IP labels in a Resource Group must use the same method for IP Address Takeover. Resource Group "%1$s" contains multiple service IP labels from the same IP Address Takeover via Replacement network. A resource group may contain multiple service IP labels that use IP Address Takeover via Replacement only if the service IP labels are configured on different PowerHA SystemMirror networks. Resource group serial processing acquisition order: (%s..%s) contradicts the group dependencies Group dependencies take precedence and RG '%s' will be acquired before RG '%s' Dependencies are configured for resource group '%s'; its site config must be set to 'ignore' Parent resource group %s is not present in the HACMPgroup ODM Parent resource group '%s' has dependencies specified, but it does not have a monitor specified for its application server '%s' Child resource group %s is not present in the HACMPgroup ODM A fallback timer is specified for child RG '%s'; however, the child may bounce when the parent RG '%s' falls back If "Preferred Primary Site" is specified for groups having the same "Online on Same Site" policy, then the home nodes must be on the same site for the groups to be able to start. One or more resource groups from the list below will not start due to the above conflict: %s Please change the node list, change the Site Policy on the Resource Group, or change the groups participating in the "Online on the Same Site" policy. Monitor %1$s will not be used until it is associated with an application server. Resource Group: %1$s contains %2$d application servers which are configured with application monitors. 
Gather /etc/inittab from each cluster nodeGathering DB2 instance environment variablesGathering /etc/passwd from each cluster nodeGather the DB2 software levelsGathering DB2 rhosts informationVerify the Fault Monitor Coordinator is disabled on all nodes participating in instance resource groups.Verify the .rhosts for the instance owner contains an entry for the service IP label.Verify DB2 instances are not set to auto start on node rebootEnsure all nodes that participate in an instance resource group are running the same level of DB2Verify DB2 instance owners and groups both exist on all participating nodes, and have the same UIDs and GIDs.Verify DB2 /etc/services entries on participating nodes of an instance resource group.Verify DB2 instance db2nodes.cfg file contains the PowerHA SystemMirror service IP label.The Fault Monitor Coordinator on node: %s is enabled. Please disable the FMC by running the chitab command for the fmc service and specifying off for the action. The DB2 Instance %s is set to restart at system reboot on node: %s. Please turn off the autostart feature by setting the DB2 instance environment variable DB2AUTOSTART=NO. The variable can be set by running the command: db2set -i %s DB2AUTOSTART=NO as the instance owner. The DB2 Instance %s metadata in HACMPsa_metadata is improperly configured. Please remove, then re-add the DB2 instance to the cluster configuration. When re-adding the instance ensure that the instance home filesystem is mounted. The DB2 instance user: %s with uid: %s and gid: %s does not exist on node: %s. Please create the user with the same uid and gid as above, then re-run verification. The DB2 instance user: %s on node %s has settings different from those of the primary node. The user should have a UID of %s and GID of %s. The current UID is %s and GID %s. Please change the user settings on node %s and re-run verification. Validating instance: %s with user: %s(%s), group: %s(%s) on node: %s: Validating instances on node: %s are not set to autostart: Validating node: %s does not have the fault monitor coordinator enabled in /etc/inittab: Validating DB2 software levels on node: %s: The software level of DB2 UDB on node: %s is at %d.%d (%s), expecting software level %d.%d. Please ensure every node that participates in an instance resource group is running the same level of DB2 UDB software as all other nodes in the cluster. The software level of DB2 UDB on node: %s is at %d.%d (%s). PowerHA SystemMirror only supports DB2 UDB 8.1 or 8.2. The service ports for DB2 UDB instance: %s are not defined in /etc/services on node: %s. Please add the /etc/services entries for instance %s to /etc/services. Node %s and node %s have unequal port definitions in /etc/services for DB2 UDB instance: %s Node: %s has service entry %s %d/%s Node: %s has service entry %s %d/%s. Please change the entries on one of the nodes to be the same as the other node. Node %s is missing an entry in /etc/services for DB2 instance: %s, Entry: %s %d/%s The /etc/services port numbers for DB2 UDB Instance: %s on node: %s and DB2 UDB Instance: %s on node: %s conflict and may prevent PowerHA SystemMirror from properly starting over one of the DB2 instances. Node: %s has entry %s %d/%s Node: %s has entry %s %d/%s Please correct the instance service entries to ensure each instance uses a unique set of port numbers. The service IP label %s for DB2 UDB instance %s is not in the .rhosts file in the instance home directory %s. 
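A hedged sketch of the DB2AUTOSTART correction described above; the instance name db2inst1 is a placeholder:

    # As the instance owner, disable autostart so PowerHA SystemMirror controls startup
    su - db2inst1 -c "db2set -i db2inst1 DB2AUTOSTART=NO"
    # Confirm the setting
    su - db2inst1 -c "db2set -i db2inst1 DB2AUTOSTART"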
Please add the .rhosts entry '%s %s' in the DB2 UDB instance home directory, then re-run verification. A service IP label is not defined for DB2 instance %s in resource group: %s. Please use the Extended Configuration Panel to add a service IP label for this instance. Alternatively remove the instance resource group, then re-add the instance to the PowerHA SystemMirror cluster configuration. The .rhosts file for DB2 UDB instance: %s has the entry '+ +', which is a security risk allowing all users from any host to rsh into the node where the DB2 instance is running. It is advised that this entry be removed, and replaced with the following entry: '%s %s'. The db2nodes.cfg file for DB2 UDB instance: %s is missing the service IP label: %s. Please change the service label in db2nodes.cfg to the service label: %s then re-run verification. Interface %s (%s) is not configured in AIX on node: %s Please check to ensure the interface is properly defined to AIX. If the interface is not defined to AIX then add the interface by running 'smitty chinet' and change network interface: %s to the above IP definition. The DB2 Smart Assistant for HACMP only supports DB2 UDB Enterprise Edition. Node: %s does not have the DB2 Enterprise Edition signature fileset installed which is either db2_08_01.essg, or db2_08_02.essg for versions 8.1 and 8.2 respectively. Failed to varyon the volume group: %s. Unable to varyoff the volume group: %s Unable to change the /etc/inittab entry to disable the DB2 fault monitor coordinator (FMC). Please manually change the entry by running the command chitab "%s" Stopping the fault monitor coordinator process: %s Failed to stop the fault monitor coordinator on node: %s Please run the command: %s -d to stop the fault monitor coordinator. Would you like to disable the fault monitor coordinator on node: %1$sTurning off the FMC in /etc/inittab on node: %s The DB2 UDB instance home directory %s is not mounted on node: %s Unable to set DB2AUTOSTART for instance: %s on node: %s to OFF. The instance will continue to startup on node startup. To manually disable autostart run the db2iset -i %s DB2AUTOSTART=OFF command as the instance owner, or the root user. DB2 instance: %s autostart was successfully disabled on node: %s Disable the DB2 autostart for each PowerHA SystemMirror monitored instance.Would you like to disable the DB2 autostart for instance: %s on node: %sChange the DB2 UDB instance owners .rhosts file.Would you like to add the entry: '%s' to be added to the file %s/.rhosts for DB2 UDB instance: %sPowerHA SystemMirror is unable to determine if the .rhosts file for instance: %s is properly configured. The DB2 instance home directory: %s is not mounted on any of the participating nodes in the instance resource group. Please check to ensure the entry '%s %s' is added to the .rhosts file Then re-run verification and synchronization with the instance home directory mounted on one of the participating nodes of the instance resource group. Adding entry: '%s' to DB2 UDB instance: %s ~/.rhosts file on node: %s Unable to open file: %s on node: %s Unable to allocate additional memory, aborting corrective action. PowerHA SystemMirror is unable to determine if the file %s/sqllib/db2nodes.cfg for instance: %s is properly configured. The DB2 instance home directory: %s is not mounted on any of the accessible participating nodes in the instance resource group. Please check to ensure the service IP label '%s' is configured as the service IP label for DB2 UDB partition 0 in db2nodes.cfg. 
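An illustrative .rhosts entry of the '%s %s' form requested above; the service IP label and instance owner shown are placeholders:

    # Appended to <instance home directory>/.rhosts: service IP label, then instance owner
    db2_svc1 db2inst1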
Then re-run verification and synchronization with the instance home directory mounted on one of the participating nodes of the instance resource group. The service IP label is missing for DB2 UDB instance: %s for resource group: %s. Without the service IP label defined to PowerHA SystemMirror, the instance will be unable to communicate with external clients. DB2 will not start if the service IP label defined in db2nodes.cfg is not configured on the node starting the instance. Please either remove the DB2 instance from the PowerHA SystemMirror configuration and re-add it, or add a service IP label to the instance resource group. The volume groups that are required for DB2 UDB instance: %s for PowerHA SystemMirror resource group: %s are missing from the resource group configuration. Without the volume groups defined to PowerHA SystemMirror, the DB2 instance volume groups will not be varied on, nor will the filesystems be mounted, prior to PowerHA SystemMirror starting the DB2 instance. Please add the missing volume groups to the resource group configuration, or alternatively remove the existing DB2 instance configuration from PowerHA SystemMirror and re-add it using the Smart Assistant tool. The application server required for DB2 UDB instance: %s for PowerHA SystemMirror resource group: %s is missing from the resource group configuration. Without the application server, PowerHA SystemMirror will be unable to start or stop the DB2 instance. Please add the DB2 UDB application server to the resource group configuration, or alternatively remove the existing DB2 instance configuration from PowerHA SystemMirror and re-add it using the Smart Assistant tool. Validate the DB2 instance resource group contains SA resources.Varying on volume group: %s on node: %s Unable to vary on volume group: %s on node: %s Please check to ensure it is not varied on on another node in the PowerHA SystemMirror cluster. Corrective actions can be enabled for Verification and Synchronization in the PowerHA SystemMirror extended Verification and Synchronization SMIT fastpath "cl_sync". Alternatively use the Initialization and Standard Configuration -> Verification and Synchronization path where corrective actions are always executed in interactive mode. Update DB2 UDB /etc/services entries for configured instances.Do you want the missing/updated DB2 entries to be added to /etc/services on node: %sUnable to add /etc/services entry '%s %s/%s' on node: %s: FAILED Please check to ensure there are no other service entries with the same port. Adding /etc/services entry '%s %s/%s' to node: %s: PASSED The /etc/services entries for DB2 instance: %s conflict with other instance entries on other PowerHA SystemMirror cluster nodes. Node: %1$s has the aliased service IP label: %3$s attached to physical interface: %2$s. Please use ifconfig to remove the service IP label: %3$s from interface %2$s prior to starting cluster services, by running the command: ifconfig %2$s delete %3$s Successfully removed the service IP label: %s from interface: %s on node: %s The discovery information obtained on the instance owner for DB2 UDB instance: %s is incomplete. Verification will skip this user. Please ensure the instance owner uses the same user name, group, uid and gid as the primary node on all takeover node(s). The discovery information obtained on the DAS administrator for DB2 UDB instance: %s is incomplete. Verification will skip this user. Please ensure the DAS administrator uses the same user name, group, uid and gid as the primary node on all takeover node(s). 
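A sketch of removing a stray aliased service address as instructed above; the interface and label are placeholders:

    # Remove the aliased service IP label from the interface before starting cluster services
    ifconfig en1 delete app_svc1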
The discovery information obtained on the fence user for DB2 UDB instance: %s is incomplete. Verification will skip this user. Please ensure the fence user uses the same user name, group, uid and gid as the primary node on all takeover node(s). The DB2 instance owner home directory: %s for instance: %s, and user: %s is incorrectly set to path: %s on node: %s. Please change the instance owner's home directory on node: %s to match the instance owner home directory. The node: %s is missing an /etc/services entry for DB2 UDB instance: %s, '%s %d/%s'. Please add this entry to /etc/services. Retrieve the list of potential single points of failure without Auto Error Notify stanzas configured on them. Verifying the resources with single points of failure. Cluster verification detected devices that could be considered a single point of failure on node %s. The request has been made to run an automatic error notification stanza update after cluster verification is complete. Could not create file for SPOF. Successfully set network option %s="%s" on node %s SUMMARY REPORT Results of: %s NodeErrors LoggedErrors Auto-CorrectedRemaining Errors requiring manual interventionTotalsGathering listen directive and end point host for WebSphere Components. Gathering installation information for WebSphere Components. Gathering WebSphere Cluster information. Gathering Transaction Log information for WebSphere Clusters. Verify the end points and listen directives for WebSphere match resource group service IP labels. Verify the WebSphere Components are still installed. Verify the nodes in the WebSphere Cluster match the nodes in the resource group. Verify the transaction logs for the members of the WebSphere Cluster point to the shared memory location. The service IP label in resource group %s does not match the listen directive for the IBM Http Server originally installed on node %s. The service IP label in resource group %s does not match the end point host for the WebSphere Application Server %s in Cell %s, Node %s. The service IP label in resource group %s does not match the end point host for the Deployment Manager %s for Cell %s. The resource group %s contains node %s that does not host a member for WebSphere Cluster %s. Please remove the node from the resource group or use Smart Assist for WebSphere to remove transaction log recovery for this cluster. The %s originally installed on node %s at %s has been de-installed. Please use the WebSphere Smart Assist to remove the component. The WebSphere Cluster %s has a member hosted on node %s and that node is not part of the resource group %s. Please remove the member from the WebSphere Cluster or use Smart Assist for WebSphere to remove transaction log recovery for this cluster. The transaction log directory (%s) for member %s in WebSphere Cluster %s does not match the original value. Validating the listen directive for the IBM Http Server originally installed on node %s. Validating the end point host for the WebSphere Application Server %s in Cell %s, Node %s. Validating the end point host for the Deployment Manager %s for Cell %s. Validating the installation of %s on node %s. Validating the nodes hosting the members of WebSphere Cluster %s. Validating the transaction log directory for member %s in WebSphere Cluster %s. Unable to validate listen directive for the IBM Http Server originally installed on node %s. Installation may not be reachable. Unable to validate end point host for WebSphere Application Server %s in Cell %s, Node %s. 
Installation may not be reachable. Unable to validate end point host for Deployment Manager %s for Cell %s. Installation may not be reachable. Unable to validate the nodes hosting members of WebSphere Cluster %s. Deployment Manager is not reachable. Unable to validate the transaction log directory for member %s in WebSphere Cluster %s. Deployment Manager is not reachable. Setting the transaction log directory for member %s in WebSphere Cluster %s. Failed to set the transaction log directory for member %s in WebSphere Cluster %s. Would you like to correct the transaction log directory for member %s in WebSphere Cluster %sThe WebSphere Cluster %s has transaction logs managed by PowerHA SystemMirror resource group %s, but was not found. Please use Smart Assist for WebSphere to remove transaction log recovery for this cluster. Verifying OEM Volume Group and Filesystems support. This could take a few minutes... OEM Filesystem method %s on Node %s has incorrect permissions. Please change the permissions on this file to allow execution by root. The following OEM Filesystem methods do not exist on Node %s. Please copy these methods to Node %s, or alternatively use the PowerHA SystemMirror File Collection feature to ensure up-to-date copies of the methods exist on all Nodes. %s The following OEM Filesystem methods on Node %s currently exist on shared volume group %s. Methods must always be accessible by all nodes participating in a resource group. Please move the methods to a non-shared (local) volume group on each node that may need to execute them. %s The following OEM Filesystem methods are resources in resource group %s which contains replicated resources. PowerHA SystemMirror does not support OEM Filesystems as resources in a resource group that contains replicated resources. Please either move the OEM Filesystems to another resource group, or change the inter-site Management Policy of this resource group to "ignore". %s OEM Filesystem method differs on the following nodes: %s OEM Volume Group method %s on Node %s has incorrect permissions. Please change the permissions on this file to allow execution by root. The following OEM Volume Group methods do not exist on Node %s. Please copy these methods to Node %s, or alternatively use the PowerHA SystemMirror File Collection feature to ensure up-to-date copies of the methods exist on all Nodes. %s The following OEM Volume Group methods on Node %s currently exist on shared volume group %s. Methods must always be accessible by all nodes participating in a resource group. Please move the methods to a non-shared (local) volume group on each node that may need to execute them. %s The following OEM Volume Group methods are resources in resource group %s which contains replicated resources. PowerHA SystemMirror does not support OEM Volume Groups as resources in a resource group that contains replicated resources. Please either move the OEM Volume Groups to another resource group, or change the inter-site Management Policy of this resource group to "ignore". %s OEM Volume Group method differs on the following nodes: %s A resource group with OEM resources should be configured with serial processing order. Run smitty cm_processing_order to update the processing order for resource group %s, or remove the OEM Filesystems from this resource group. A resource group with OEM resources should be configured with serial processing order. Run smitty cm_processing_order to update the processing order for resource group %s, or remove the OEM Volume Groups from this resource group. 
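A minimal sketch of correcting the OEM method permission problem flagged above; the method path is illustrative:

    # Allow execution by root on every node that may run the method
    chmod u+x /usr/local/cluster/oem_fs_method
    ls -l /usr/local/cluster/oem_fs_method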
Resource Group '%s' (%s) and Resource Group '%s' (%s) share same Filesystem '%s' Resource Group '%s' (%s) and Resource Group '%s' (%s) share same Volume Group '%s' Verifying Resource Groups Verifying group %1$s Verifying Resource Group Policies Verifying site support Verifying resource group policy dependencies Verifying fallback timer policy Verifying Network used Verifying Settling time Verifying GPFS network Verifying GPFS Adapters Verifying GPFS filesystems Verifying hdisks used in GPFS filesystem on node %1$s Communication interface %1$s on node %2$s, network %3$s, is on the same subnet as one or more interfaces on the same node. Communication interface %1$s on node %2$s, network%3$s, is on the same subnet as one or more interfaces on the same node. This arrangement can lead to multiple routes to the same subnet which can adversely affect your application after a failure. This configuration will not affect PowerHA SystemMirror because the network is using heartbeat via IP aliases. ERROR: Dynamic LPAR resources are configured, but the DLPAR callback scripts are either not installed or do not have the appropriate permissions on node %1$s. Please refer to the Administration and Troubleshooting Guide for more information on the DLPAR callback scripts. ERROR: The HMC (IP address: %1$s) configured on node %2$s is not reachable. Make sure the HMC IP address is correct and the HMC is turned on and connected to the network. ERROR: The minimum resource configuration for the application server %1$s is greater than the LPAR maximum. The minimum number of CPUs configured for the application server is %2$d. The participating node %3$s has LPAR maximum set to %4$d. Either increase the maximum CPU value for this LPAR, or decrease the minimum number of CPUs required for this application server. ERROR: The minimum resource configuration for the application server %1$s is greater than the LPAR maximum. The minimum memory configured for application server is %2$d (in 256MB blocks). The participating node %3$s has LPAR maximum set to %4$d (in 256MB blocks). Either increase the maximum memory value for this LPAR, or decrease the minimum amount of memory required for this application server. ERROR: An HMC has been configured for node %1$s, but the node does not appear to be DLPAR capable. WARNING: Capacity Upgrade on Demand is enabled. This requires an activation key be set on the HMC and may result in extra charges. WARNING: Node %1$s does not have an associated HMC configuration. The application server %2$s in resource group %3$s has Dynamic LPAR resources configured. The Dynamic LPAR resource configuration requires an HMC association on each node in which the resource group participates. WARNING: The minimum amount of CPU resources required by all application servers that can coexist on node %1$s is greater than the CPU LPAR maximum. In certain cases, this may cause a resource group failure. The CPU LPAR Maximum for node %1$s should be increased, or the minimum CPU amounts for the application servers should be decreased. WARNING: The minimum amount of memory resources required by all application servers that can coexist on node %1$s is greater than the memory LPAR maximum. In certain cases, this may cause a resource group failure. The memory LPAR Maximum for node %1$s should be increased, or the minimum memory amounts for the application servers should be decreased. An XD_data network has been defined, but no additional XD heartbeat network is defined. 
It is strongly recommended that an XD_ip or an XD_rs232 network be configured in order to help prevent cluster partitioning if the XD_data network fails. Cluster partitioning may lead to data corruption for your replicated resources. %s network %s is configured for IP address takeover. IP address takeover is not supported on %s networks Geographically Mirrored Volume Group verification failed. The Inter-Site Management Policy for Resource Group %1$s is `%2$s'. Only `Prefer Primary Site' and `Online On Either Site' are supported as the Inter-Site Management Policy for Resource Groups that contain Geographically Mirrored Volume Groups. The Startup Policy for Resource Group %1$s is `Online On All Available Nodes'. This Startup Policy is not supported for Resource Groups that contain Geographically Mirrored Volume Groups. Resource Group %1$s contains Geographically Mirrored Volume Group(s), but it does not include at least one node at each PowerHA SystemMirror site. There must be one active node at each site in order for PowerHA SystemMirror to make the Remote Physical Volume(s) available. Warning: monitor %1$s has a failure action of 'notify' but has no notify method configured on %2$s. Could not read CuAt ODM. Could not read CuDv ODM. Could not read HACMPresource ODM. Memory allocation failed. Disabling auto-varyon for volume group: %s on nodes in resource group: %s: PASS Unable to disable auto-varyon for volume group: %s on nodes in resource group: %s: FAIL Do you want to set autovaryon to "no" for the volume group: %1$s on all nodes in resource group: %2$sThere are %1$d application monitors defined for application server "%2$s". Only %3$d application monitors per application server are allowed. Would you like to bring the resources of this resource group: %1$s offline, so that you can then move the resource group to a node other than node:%2$sActive Resources found. Bringing active resources offline. Issued resource offline event to clean resources that belong to resource group %1$s on node %2$s. The output of the resource_offline event is logged in /tmp/hacmp.out. To ensure PowerHA SystemMirror brings the resource group online on this node: Once cluster services are running on this node, then use SMIT to bring the resource group ONLINE on this node. To bring the resource group online on another node, you must: 1. Manually bring the resources offline. 2. Using SMIT, move the resource group to the node of your choosing. %1$s has volume group %2$s varied on. This volume group is part of resource group %3$s, which is in the %4$s state on node %5$s. %1$s has an active aliased service IP label %2$s attached to physical interface: %3$s. This service IP label is part of resource group %4$s, which is in the %5$s state on node %6$s. To ensure PowerHA SystemMirror brings the resource group online on this node: 1. It is recommended that you start cluster services on this node using the SMIT option to "Manage Resource Groups: Manually". 2. Once cluster services are running on this node, then use SMIT to bring the resource group ONLINE on this node. To bring the resource group online on another node, you must: 1. Manually bring the resources offline. 2. Using SMIT, move the resource group to the node of your choosing. Application monitors are required to detect application failures in order for PowerHA SystemMirror to recover from them. The application %s, configured in %s, does not have an application monitor configured.
This application will be started by PowerHA SystemMirror when the resource group is activated. Verifying that the application server %s has monitors configured. Checking for active resources. Failed: resource offline event to clean resources that belong to resource group %1$s on node %2$s Failed to change the startup option from Automatically to Manually. Changing the "Manage Resource Groups" option from "Automatically" to "Manually" due to active resources found. ver_check_active_resources: Invalid Corrective Action option ver_check_active_resources: Invalid "Manage Resource Groups" option HACMP is changing the "Manage Resource Groups" from "Automatically" to "Manually". PowerHA SystemMirror will not activate resource groups when starting cluster services because active resources were found. Checking for active resources Updating volume group definitions of shared VG: %s participating in resource group %s so that it will be consistent across all the nodes in this resource group: PASS Updating volume group definitions of shared VG: %s participating in resource group %s so that it will be consistent across all the nodes in this resource group: FAIL Data collection complete Start data collection on node %1$s Collector on node %1$s completed Waiting on node %1$s data collection, %2$d seconds elapsed Completed %1$d percent of the verification checks There is %1$d MB of available space remaining on filesystem: %2$s on node: %3$s. This is less than the recommended available space of %4$d MB for the log file(s) on that filesystem: %5$sStable Storage Path is changed from the previous configuration. This change should be attempted only when the resource group is offline. The cluster.es.nfs fileset is not installed on at least one node in the cluster. NFSv4 functionality can not be used until this fileset is installed on all the nodes in the cluster. At least one of the cross mounts was mounted on NFSv4 exports of '%s'. This configuration change takes it away, causing problems for the cross mount. This change will be allowed only when the resource group is offline. At least one of the cross mounts was mounted on NFSv2/NFSv3 exports of '%s'. This configuration change takes it away, causing problems for the cross mount. This change will be allowed only when the resource group is offline. Checking WPAR related configuration for WPAR-enabled Resource Groups Collecting WPAR related data from all nodes Corrective Action for creating a WPAR Corrective Action for Cloning an existing WPAR from one node to another Do you want to create a WPAR of name '%1$s' (with default configuration) on node '%2$s' to serve resource group '%3$s'? (This operation can take several minutes).Do you want to clone WPAR '%1$s' from node '%2$s' to node '%3$s'? (This operation can take several minutes).Resource Group '%1$s' has the WPAR enabled property set but none of its nodes are WPAR capable. Resource Group '%1$s' is WPAR enabled but node '%2$s', which is a part of this resource group, is WPAR capable and does not have a WPAR named '%3$s'. Resource Group '%1$s' is WPAR enabled but node '%2$s', which is a part of this resource group, is WPAR capable and does not have a WPAR named '%3$s'. Node '%4$s' has a WPAR which is correctly configured for this resource group. Cloning this WPAR's configuration on node '%5$s' could fix this error. Application Server '%1$s' belongs to resource group '%2$s', which is WPAR-Enabled. PowerHA SystemMirror will not be checking for the access permissions for the scripts that reside in the WPAR.
Please ensure that all the scripts associated with this application server have correct access permission in the corresponding WPAR. Failed to collect or organize WPAR related configuration data. Cannot proceed with further verification checks for WPAR Configuration. Failed to create the WPAR config directory. Command '%1$s' failed with rc='%2$d' Could not open file '%1$s' in '%2$s' mode. errno=%3$d Could not get WPAR related configuration data from node '%1$s' of the cluster Failed to allocate '%1$d' bytes of memory for organizing WPAR related configuration data Could not write line '%1$s' to '%2$s' file. rc=%3$d Failed to get list of nodes for resource group '%1$s' Failed to get the size of wpar config file '%1$s'. errno=%2$d Memory allocation of '%1$d' bytes failed while trying to convert the configuration of the WPAR to string format. Could not read the WPAR configuration file '%1$s'. Read for '%2$d' bytes failed with errno '%3$d' Could not format WPAR configuration data collected for verification. Cmd '%1$s' failed with rc '%2$d' Failed to capture the output for the cmd '%1$s'. Could not open file '%2$s' in write mode. Check that the underlying directory has appropriate permission and sufficient disk storage. Failed to create default WPAR '%1$s' on node '%2$s'. cmd '%3$s' failed with rc '%4$s'. Please try creating the WPAR using standard AIX commands or contact the IBM Technical Support team for further help Successfully created the WPAR '%1$s' on node '%2$s' with default configuration Failed to clone WPAR '%1$s' on node '%2$s' Cmd '%3$s' failed with rc '%4$s' Successfully cloned the WPAR '%1$s' on node '%2$s' Network: %1$s has multiple boot IP labels defined for a single node. Multiple boot IP labels per node are only supported on networks using IPAT via IP aliasing. Either change the network definition to use IPAT via IP aliasing or eliminate the multiple boot IP labels. Resource group %1$s is WPAR enabled and has a collocation dependency. Resource groups that are collocated with it will not be able to share resources with it such as files and shared memory. Verifying Resource Group collocation dependencies for WPAR-enabled resource groupsThe HMC with IP label %1$s configured on node %2$s is not reachable. Make sure the HMC IP address is correct, the HMC is turned on and connected to the network, and the HMC has OpenSSH installed and set up with the public key of node %3$s. Node %1$s does not have an associated HMC configuration. The application server %2$s in resource group %3$s has Dynamic LPAR resources configured. This configuration requires an HMC association to each DLPAR-capable node in which the resource group participates. It is allowed to not define an HMC for some nodes, but PowerHA SystemMirror will not perform DLPAR operations on those nodes. The minimum amount of CPU resources required by all application servers that can coexist on node %1$s is greater than the CPU LPAR maximum of %2$d. In certain cases, this may cause a resource group failure. The CPU LPAR maximum for node %3$s should be increased, or the minimum CPU amounts for the application servers should be decreased. The minimum amount of memory resources required by all application servers that can coexist on node %1$s is greater than the memory LPAR maximum of %2$d (in %3$dMB blocks). In certain cases, this may cause a resource group failure. The memory LPAR maximum for node %4$s should be increased, or the minimum memory amounts for the application servers should be decreased.
The minimum resource configuration for the application server %1$s is greater than the LPAR maximum. The minimum number of CPUs configured for the application server is %2$d. The node %3$s has its CPU LPAR maximum set to %4$d. Either increase the maximum CPU value for this LPAR, or decrease the minimum number of CPUs required for this application server. The minimum resource configuration for the application server %1$s is greater than the LPAR maximum. The minimum memory configured for the application server is %2$d (in %3$dMB blocks). The node %4$s has its memory LPAR maximum set to %5$d (in %6$dMB blocks). Either increase the maximum memory value for this LPAR, or decrease the minimum amount of memory required for this application server. An HMC has been configured for node %1$s, but the node does not appear to be DLPAR capable. Capacity Upgrade on Demand is enabled. This requires an activation key be set on the HMC and may result in extra charges. The following resource group(s) have NFS exports defined and have the resource group attribute 'filesystems mounted before IP configured' set to false: %1$s It is recommended that the resource group attribute 'filesystems mounted before IP configured' be set to true when NFS exports are defined to a resource group. Remember to redo automatic error notification if configuration has changed. Not all cluster nodes have the same set of PowerHA SystemMirror filesets installed. The following is a list of fileset(s) missing, and the node where the fileset is missing: Fileset: Node: -------------------------------- -------------------------------- Application monitors are required for detecting application failures in order for PowerHA SystemMirror to recover from them. Application monitors are started by PowerHA SystemMirror when the resource group in which they participate is activated. The following application(s), shown with their associated resource group, do not have an application monitor configured: Application Server Resource Group -------------------------------- --------------------------------- The following application server start/stop script(s) are either not executable, or do not exist on node(s): Node: Application Server (Script) --------------------------------- ------------------------------------ No NFS version consistency check is done for this group %1$s, the group has replicated resources. Verifying NFS mount point %1$s version entry in /etc/filesystems is consistent across the nodes. /etc/filesystems for node %1$s is not found. Bad data in the /etc/filesystems on node %1$s. Bad data format for the mount point entry %1$s in the /etc/filesystems on node %2$s. The mount point entry %1$s in the /etc/filesystems on node %2$s is not NFS The NFS version of the mount point %1$s in /etc/filesystems is inconsistent in one or more of the following nodes. Please correct the inconsistent entries. Node: %1$s NFS Version: %2$s Volume group not varied-onThe netmask applied to interfaces defined within network %1$s is inconsistent: Interface Node IP Address Netmask --------- -------------------------------- ---------------- ------------- All interfaces defined within a network are required to use the same netmask. Multiple communication interfaces are recommended for networks that use IP aliasing in order to prevent the communication interface from becoming a single point of failure. 
There are fewer than the recommended number of communication interfaces defined on the following node(s) for the given network(s): Node: Network: ---------------------------------- ---------------------------------- Checking whether NFS mount point %s is in a shared VG. NFS mount point %1$s is in VG %2$s in resource group %3$s. The NFS mount point must not be in a shared VG. VG not found for filesystem %1$s Cannot open file %1$s to verify %2$s configuration.Cannot write to file %1$s:%2$s Disk Heartbeat Network: %1$s has been defined with no Disk Heartbeat devices. You must configure one device for each node in order for a Disk Heartbeat network to function. %1$s node is part of Disk Heartbeat Network: %2$s, however the node's data could not be collected for verification. The Disk Heartbeat on the node will not be checked. Make sure the node is accessible. Disk %1$s is configured to be used on node %2$s, however the node's data could not be collected for verification. The disk status on the node will not be checked. Make sure the node is accessible. Interface %1$s(%2$s) is configured to be used on node %3$s, however the node's data could not be collected for verification. The interface status on the node will not be checked. Make sure the node is accessible. %1$s device %2$s is configured to be used on node %3$s, however the node's data could not be collected for verification. The device status on the node will not be checked. Make sure the node is accessible. %1$s XD resources have been defined, however installed fileset information could not be collected from node %2$s. Verification will not be able to check whether all required filesets are installed on the node. The NFS version of the exported filesystem %1$s in /usr/es/sbin/cluster/etc/exports is inconsistent in one or more of the following nodes. Please correct the inconsistent entries: Bad data format for the exported filesystem entry %1$s in the /usr/es/sbin/cluster/etc/exports on node %2$s. Verifying all atm networks are defined as "private" networks. Validating ATM network %1$s is private: PASSED FAILED ATM network %1$s is not defined as 'private' File /.rhosts does not exist Can not define the permissions on file /.rhosts File /.rhosts is not readable by root. It should have permissions -rw-------- and should be owned by user root and group system File /.rhosts has permissions %1$s. It should have permissions -rw------- and should be owned by user root and group system File /.klogin does not exist There are %1$d networks configured while the maximum recommended limit for the networks is %2$d. The cluster may not handle networks properly if the number of networks configured is greater than the maximum recommended limit. Node %1$s participates in network %1$s Could not find any nodes with adapters on network %1$s Collector for disk heartbeat did not obtain the device name on node: %1$s. Collector for disk heartbeat did not obtain the device (%1$s) pvid on node: %2$s. Collector for disk heartbeat did not obtain the device (%1$s, %2$s) mode on node: %3$s. Mode is [%1$s] on node %2$s (should be 32 for Concurrent) %1$d Disk Heartbeat Networks have been defined, but no Disk Heartbeat devices. You must configure one device for each node in order for a Disk Heartbeat network to function. An error was encountered when updating the automatic error notification stanzas.Updated automatic error notification stanzas.Failed to write the wpar config to a temporary file, errno=%1$s. 
Could not clone WPAR '%2$s' on node '%3$s' Updating automatic error notification stanzas.Verify "PowerHA SystemMirror" user group exists on all nodes in /etc/groupChecks to ensure a non-concurrent / concurrent resource group has not been configured.Verifying NFSv4 grace period settingsVerifying NFSv4 domain nameFor multi-node disk heartbeat, all filesets listed must be at the minimum level noted above. Update the filesets on all nodes to the required level(s), then run Verification again. Retrieve a list of running processes and arguments.Retrieve the NFSv4 domain name.Gathering /etc/group from each cluster nodeRetrieve a copy of ODM SRCsubsys from each nodeCollect Active volume groups listFor PowerHA SystemMirror to perform monitoring over a network, at least two interfaces that reside on separate nodes must be defined to a network. The following network(s) only contain interface(s) from a single node: Network: Single Node: -------------------------------- -------------------------------- Verifying NFS version for the exported filesystem %1$s entry in /usr/es/sbin/cluster/etc/exports is consistent across the nodes. Node %1$s has cluster.es.nfs.rte installed, however grace periods are not fully enabled on this node. Grace periods must be enabled before NFSv4 stable storage can be used. PowerHA SystemMirror will attempt to fix this opportunistically when acquiring NFS resources on this node, however the change won't take effect until the next time that nfsd is started. If this warning persists, the administrator should perform the following steps to enable grace periods on %2$s at the next planned downtime: 1. stopsrc -s nfsd 2. smitty nfsgrcperiod 3. startsrc -s nfsd Grace periods are not fully enabled on node %1$s, however NFSv4 stable storage has been configured. The NFSv4 stable storage will not be used on this node if grace periods are disabled. Resource group %1$s has NFSv4 exports and crossmounts, however the NFS domain has not been set on node %2$s. The mount operation will fail for NFSv4 exports if the NFS domain is not set. Use the command 'chnfsdom ' to set the domain name. The nodes in resource group %1$s are configured with more than one NFS domain. All nodes in a resource group must use the same NFS domain. Use the command 'chnfsdom ' to set the domain name. Cluster Node NFS Domain ------------------------------ ------------------------------ The managed system %1$s with HMC IP label %2$s configured on node %3$s experienced an error while trying to retrieve data. Make sure the managed system name is correct. No IPv6 service IP on non-aliased PowerHA SystemMirror network No IPv6 boot interface No IPv6 service IP on non-aliased PowerHA SystemMirror network. No IPv6 node bound service addressesVerifying ndpd-host daemon status and LL address configuration on IPv6 hosts. Verifying that the configured IPv6 service IP addresses are valid. Verifying PowerHA SystemMirror network topology for IPv6 service IP. Verifying that no IPv6 service IP label is configured as a resource in a WPAR-capable resource group. Verifying that no NFS(v2/v3) filesystem resources are exported via IPv6 service IP label. Verifying that no IPv6 address is configured for communication with HMC. PowerHA SystemMirror network %1$s on Node %2$s has its communication interface configured with IPv6 label (%3$s). IPv6 label configuration on boot adapters is not supported. IPv6 node bound service label (%1$s) is configured on PowerHA SystemMirror network (%2$s). IPv6 node bound service labels are not supported.
IPv6 service label (%1$s) is configured on non-aliased PowerHA SystemMirror network (%2$s) on node %3$s. IPv6 service IP configuration is supported only on an aliased PowerHA SystemMirror network. IPv6 address (%1$s) for Hardware Management Console (HMC) communication with PowerHA SystemMirror nodes is not supported. Invalid/Unsupported Service IP %1$s (%2$s) is configured as a resource in RG %3$s. Link local address '%1$s' is configured as resource '%2$s' in RG '%3$s'. Configuration of Link local address as service IP is not supported. Resource group "%1$s" has IPv6 service label "%2$s" configured on an unsupported network type. IPv6 service IP can only be configured on a network of type "%3$s". Verifying that no WPAR-enabled RG has IPv6 service IP. Verifying NFS v2/v3 filesystem (%1$s) for export via IPv6 service IPVerifying HMC address "%1$s"No Link local addresses detected on interface %1$s of node %2$s. "ndpd-host" daemon is inactive on node %1$s. The daemon manages the Neighbor Discovery Protocol (NDP) for non-kernel activities: Router Discovery, Prefix Discovery, Parameter Discovery and Redirects. Validating IPv6 addressVerifying no loopback/unspecified address configuration for service IPVerifying that IPv6 service IP is not a link local addressDuplicate Address Detection (DAD)IPv6 service IP address ("%1$s") is configured as a resource in a WPAR-enabled resource group ("%2$s"). This is not a supported configuration. Resource group ("%1$s") has filesystem "%2$s", exported via IPv6 service IP (%3$s). "%2$s" is an NFS v2/v3 exported filesystem. Export via IPv6 service IP is only supported for filesystems which are exported via NFSv4. Failed (return code %1$s) to create socket of Address Family(%2$s), Type(%3$s) and protocol(%4$s). IOCTL operation, %1$s, failed (errno = %2$s). setsockopt() failed (errno = %1$s) to set %2$s option for socket level %3$s. Neighbor discovery protocol (via Duplicate Address Detection) has detected the non-uniqueness of Unicast IPv6 service IP address (%1$s/%2$s) on link %3$s. A unicast IPv6 address must be unique on a given link. Neighbor discovery protocol failed to perform Duplicate Address Detection for %1$s/%2$s on link %3$s due to some internal error. Collecting ndpd-host daemon status. Corrective action of starting ndpd daemons. Corrective action of assigning link-local addresses to ND-capable network interfaces. Do you want to start %1$s daemon on node %2$s ?Do you want to auto-configure link-local address on interface %1$s of %2$s ?Started daemon %1$s on node %2$s. Command %1$s failed (return code %2$d) to start daemon %3$s on node %4$s. Configured a link local address on interface %1$s of node %2$s. Command (%1$s) failed (return code %2$s) to configure link local address on interface %3$s of node %4$s. Valid network type for IPv6 Persistent addresses Validating IPv6 persistent address No IPv6 persistent IP on non-aliased PowerHA SystemMirror network Skipping all subnet validations for IPv6 persistent label (%1$s) defined for node %2$s on aliased PowerHA SystemMirror network %3$s. Invalid/Unsupported Persistent IP %1$s (%2$s) is configured for node %3$s. Link local address '%1$s' is configured as persistent IP '%2$s' for node '%3$s'. Configuration of Link local address as persistent IP is not supported. IPv6 Persistent label (%1$s) for Node %2$s is configured on a network of type "%3$s", which is not supported. IPv6 persistent IP can only be configured on a network of type "%4$s".
IPv6 persistent label (%1$s) for node %2$s is configured on non-aliased PowerHA SystemMirror network (%3$s). IPv6 service IP configuration is supported only on an aliased PowerHA SystemMirror network. PowerHA SystemMirror /etc/inittab entry exists: %1$s:%2$s:%3$s:%4$s telinit mode is disabled on PowerHA SystemMirror cluster node: %5$s. The entry should not exist. Please remove this entry from /etc/inittab, then re-run verification. It will be necessary after removing the entry to run 'init q'. The MTU sizes do not match for communication interfaces on network %s. To correct this error, make sure that the MTU size is consistent across all NICs on the same PowerHA SystemMirror network. Node Interface MTU IP_label %-16s %s %s %s NFS server for filesystem mounted on %1$s on node %2$s appears to be down. popen failed to execute command '%s'. Failed to retrieve NFS mountpoint. Volume group %1$s is not varied-on on node %2$s Service IP label %1$s is not online on node %2$sIPv6 loopback address is not present. It is recommended to have the IPv6 loopback address ::1 if you are using IPv6-only network(s) across the nodes in the cluster. No WPAR-enabled Resource Group is configured. Internal Error: No data collected for the node while collecting OS Level for the node: %1$s. The AIX Technology Release level on the node %1$s is less than 6100-02. For the node to support an IPv6 boot address, the AIX Technology Release level must be at least 6100-02. Check the AIX OS level while adding a service IPv6 address to a Resource Group. Obtain the RSCT level. Verifying IPv6 HACMPnetwork for nimname. Verifying AIX and RSCT level for IPv6 HACMPnetworks. Network %1$s has prefix length %2$d. The prefix length for an IPv6 address must be in the range of 1 to 128. PowerHA SystemMirror network %1$s has network type %2$s. IPv6 networks only support the network types ether, XD_DATA and XD_IP. The AIX Technology Release level on the node %1$s is less than 6100-02. For the node to support an IPv6 boot address, the AIX Technology Release level must be at least 6100-02. Internal Error: No data collected for the node while collecting RSCT Level for the node: %1$s. On the node %1$s the RSCT level is %2$s. For IPv6 networks the RSCT level should be at least 2.5.3.0 on all the nodes in the cluster. Verifying that the configured boot IPv6 addresses are valid. Invalid/Unsupported BOOT IP %1$s is configured for network %2$s on the node %3$s. Retrieve the output from ver_get_addr6_info for each node. Internal Error: No IPv6 ifconfig data collected from node: %1$s Neighbor discovery protocol (via Duplicate Address Detection) has detected the non-uniqueness of Unicast IPv6 boot IP address (%1$s/%2$s) on link %3$s. A unicast IPv6 address must be unique on a given link. Node: %s Network: %s AIX version could not be determined on node %1$s. Please check whether the oslevel command is working on node %2$s. Checks to ensure a non-concurrent / concurrent resource group has not been configured.Verifying that User defined resources specified as cluster resources are defined in the cluster configuration. User defined resource %1$s is not defined in the HACMPudresource ODM. The user defined resource must be defined before it can be used in a resource group. Verifying User defined resource type scripts exist on all nodes in the resource group User defined resource '%1$s' belongs to resource group '%2$s', which is WPAR-Enabled. PowerHA SystemMirror will not be checking for the access permissions for the scripts that reside in the WPAR.
Please ensure that all the scripts associated with this resource type have correct access permission in the corresponding WPAR. Verifying User defined resources: %1$s on Node %2$s Resource monitors are required for detecting resource failures in order for PowerHA SystemMirror to recover from them. Resource monitors are started by PowerHA SystemMirror when the resource group in which they participate is activated. The following user defined resource(s), shown with their associated resource group, do not have a monitor configured: User Defined Resource Resource Group -------------------------------- --------------------------------- The following user defined resource type management script(s) are either not executable, or do not exist on node(s): Node: Resource Type(Script) --------------------------------- ------------------------------------ Verify all user defined resource type methods are executable on all nodes. The minimum amount of Processor Units required by all application servers that can coexist on node %s is greater than the CPU LPAR maximum of %.2f. In certain cases, this may cause a resource group failure. The Processor Units LPAR maximum for node %s should be increased, or the minimum Processor Units amounts for the application servers should be decreased. The SDNP script file: %1$s does not exist or is not an executable on node: %2$s. An SDNP script must be provided to use the failover policy specified for the resource group %1$s Multiple SDNP scripts are defined for the resource group %1$s An SDNP script timeout value must be provided to use the failover policy specified for the resource group %1$s Invalid SDNP timeout value specified for the resource group %1$s. It should be less than %2$f seconds. Verifying SDNP scripts and timeout values for SDNP enabled resource groupsUnable to communicate with the remote node: %1$s. Please check that the cluster is configured and the clcomd subsystem is running. Disable the automatic start of Process Engine Update the required entries of Process Engine in /etc/services Turning off the automatic start of Process Engine in /etc/inittab on node: %1$s The automatic start of Process Engine couldn't be disabled in /etc/inittab on Node: %1$s. Please manually disable the automatic start of Process Engine Node %1$s and Node %2$s have unequal port definitions in /etc/services for Process Engine Node: %3$s has service entry %4$s %5$d/%6$s Node: %7$s has service entry %8$s %9$d/%10$s. Please change the entries on one of the nodes to be the same as the other node. Validating node: %1$s does not have the automatic start of Process Engine enabled in /etc/inittab The Process Engine automatic start is enabled on node: %1$s. Please disable the automatic start of Process Engine in /etc/inittab Retrieve the file /etc/rc.dt Verify the Process Engine automatic start is disabled on all nodes participating in Process Engine resource groups.
Verify required entries of Process Engine in /etc/services on participating nodes of Process Engine resource group Turning off the automatic start of daemon %1$s in /etc/inittab on node: %2$s, as part of %3$s Print Subsystem The automatic start of the %1$s daemon couldn't be disabled in /etc/inittab on Node: %2$s, as part of the %3$s Print Subsystem. Please manually disable the automatic start of daemon %4$s Disable the automatic start of Print Subsystem Retrieve the PowerPC print subsystem fileset installed information from each cluster node Retrieve the System V print subsystem fileset installed information from each cluster node Validating node: %1$s does not have the automatic start of %2$s Print Subsystem enabled in /etc/inittab The relevant daemons "lpd", "qdaemon", and "writesrv" shouldn't be enabled for automatic start in /etc/inittab as part of the PowerPC print subsystem The %1$s daemon is enabled for automatic start on node: %2$s. Please disable the automatic start of the daemon in /etc/inittab as part of the %3$s Print Subsystem The relevant daemon "lpsched" shouldn't be enabled for automatic start in /etc/inittab as part of the System V print subsystem The %1$s fileset level of the %2$s print subsystem on node: %3$s is at %4$d.%5$d, expecting level %6$d.%7$d. Please ensure every node that participates in an %8$s print subsystem resource group is running the same level of %9$s print subsystem filesets as all other nodes in the cluster. The node: %1$s doesn't have the %2$s fileset installed as part of the %3$s print subsystem Verify the PowerPC print subsystem automatic start is disabled Verify the System V print subsystem automatic start is disabled Verify the required filesets for the PowerPC print subsystem are installed and the version is the same on all nodes participating in a PowerPC print subsystem resource group. Verify the required filesets for the System V print subsystem are installed and the version is the same on all nodes participating in a System V print subsystem resource group. Filesystem %1$s is configured to auto-mount on node: %2$s. The command 'chfs -A no ' can be used to change the auto-mount setting to the default configuration. Verifying LDAP versionVerifying LDAP client configurationLDAP is configured in the cluster and the TDS version on node %1$s is earlier than 6.2. It should be upgraded to 6.2 or above. LDAP is configured in the cluster and the LDAP client configuration doesn't match on node %1$s. Collecting TDS Version. Collecting TDS client Configuration. Verifying EFS enableCollecting EFS enable details. EFS is configured in the cluster and EFS is not enabled on node %1$s. The 'efsenable -a' command should be executed Corrective action of reconfiguring LDAP client Do you want to reconfigure the LDAP client on %1$s to reflect the correct LDAP server list ?Command (%1$s) failed (return code %2$s) to configure LDAP client on node %3$s with %4$s. Configured an LDAP client on node %1$s. Invalid site merge option detected. Please use SMIT to correct the settings. Invalid site merge option detected. Only one site can be configured with the "continue" option. Please use SMIT to correct the settings. Invalid site merge option detected. Unrecognized value in ODM. Please use SMIT to correct the settings. Verifying whether the variable CL_PARRALLEL_PROCESSING has the same value on all nodes. Collect the value of CL_PARRALLEL_PROCESSING from each node. The value of the variable CL_PARRALLEL_PROCESSING is inconsistent in one or more of the following nodes. Please correct the entries.
Node: %1$s Value: %2$s Node %1$s contains an invalid value for the variable CL_PARRALLEL_PROCESSING in /etc/environment. The value can either be TRUE or FALSE. Validating multicast communication using mping Unable to communicate using multicast messages TRUCOPY configuration contains errors GENERIC XD configuration contains errors Check all TRUCOPY resources.Check all GENERIC XD resources.Verifying not all networks are marked as privateAll networks are defined as 'private'. This may slow down cluster operations and event processing Verify HACMPsircol is validCollect the PCM fileset level on each node to check the hyperswap supported level. Collect the LSS details of disks which are part of Mirror groups added to the cluster. Collect the repository disk accessibility status from each node. Collect, from each node, the accessibility of disks which are part of a Resource group having a particular Mirror group. Verifying PCM Fileset LEVEL Verifying MirrorGroup disk accessibility information ResourceGroup: %1$s and MirrorGroup: %2$s differ in the disks configured Resource Group: %1$s and MirrorGroup: %2$s differ in the set of volume groups configured Resource Groups %1$s %2$s, which have a site collocation relationship, are in different hyperswap states No PPRC path configured on the node: %1$s It is recommended not to have two disks based on the same LSS for two different MirrorGroups All the secondary disks in a MirrorGroup must be part of the same site All the primary disks in a MirrorGroup must be part of the same site Recovery action should be "AUTO" in case of a Hyperswap enabled MG Disk %1$s in the MirrorGroup: %2$s is not accessible on node: %3$s MirrorGroup %s is configured with more than one ResourceGroup Installed PCM level is: %1$s, which is not supported for Hyperswap on the node: %2$s DS8K Inband supported AIX level is: %1$s and Installed AIX level on the node %2$s is: %3$s Multiple Mirror Groups are included in %1$s. A Resource Group should include only one Mirror Group. The disk with UUID %1$s is a part of the volume. UUID '%s' from Resource Group '%s' is also used in Resource Group '%s'. Verifying UUID: %s Duplicate IP address found on node %1$s in /etc/hosts: %2$s Duplicate host found on node %1$s in /etc/hosts: %2$s The version levels are different for CAA fileset: %1$s Verifying that CAA fileset '%s' is at the same level on all nodes Check installed CAA filesets on all nodes. Since CAA fileset '%s' is at different versions between nodes, the check is skipped since this is acceptable. IPv6 Loopback is presentNetwork %1$s is a single adapter network - subnet checks bypassed for persistent label %2$s An XD_data network has been defined, but no additional XD heartbeat network is defined. It is strongly recommended that an XD_ip network be configured in order to help prevent cluster partitioning if the XD_data network fails. Cluster partitioning may lead to data corruption for your replicated resources. ERROR: User defined resource name '%1$s' is not valid. ERROR: An internal error occurred with odm_set_path. Verifying User Preferences for Resource Groups Verifying Resource Group: %1$s ERROR: service label %1$s is defined as a resource in resource group %2$s but was not found in the adapter configuration database. If this problem persists, contact IBM support. The cluster is defined as a Linked cluster and must have sites defined. The %1$s dependency pair %2$s cannot be used with the %3$s %4$s type dependency. Parent resource group '%1$s' has dependencies specified and includes application controller %2$s.
An application monitor is recommended for application controllers when dependencies are configured. A fallback timer is specified for the child resource group '%1$s'. Please note that this group may also bounce when the parent resource group '%2$s' falls back WARNING: To make your applications highly available in case of complete site down, define a site-specific service IP label for the following sites: A problem occurred retrieving resource group state for group %1$s on node %2$s If this problem persists, please contact IBM support. An error occurred with the sendto() subroutine. errno was %d. Please report this problem to IBM support. Hyperswap enabled mirror group %1$s is included in more than one resource group. This cluster uses Unicast heartbeat Verifying user supplied application monitor names Invalid Application Monitor Name: %1$s. Application monitor names cannot be blank, the first character must be a letter and the remaining characters must be alphanumeric or '_'. Issued resource offline event to clean resources that belong to resource group %1$s on node %2$s. The output of the resource_offline event is logged in hacmp.out. Check for adequate filesystem free space before verification. Node %1$s has only one adapter on network %2$s. Skipping subnet checks for this node and network. Unable to communicate using multicast messages. Please review /var/hacmp/clverify/ver_mping.log for details. Discovery was not able to run on all nodes in the cluster, skipping verify. %s %2.2f MB disk space remaining on node %s. There are %1$d nodes configured which exceeds the supported limit of %2$d nodes. Please reduce the configuration. [Network:%1$s/Node:%2$s/IP:%3$s/Intf:%4$s]: Netmask in AIX CuAt ODM[%5$s] does not match netmask in HACMPnetwork ODM [%6$s] PowerHA SystemMirror requires /etc/inittab entry: %1$s PowerHA SystemMirror cluster node: %2$s is missing this entry. Please add this entry to /etc/inittab, then re-run verification. It will be necessary after adding the entry to reboot the system. The system reboot will start CAA and PowerHA SystemMirror background processes. Verifying sircol configuration. 'lscluster -c' indicates there is no CAA cluster yet. Verification of sircol configuration detected %1$d error(s) HACMPsircol ODM is empty/not readable. Linked cluster detected. A linked cluster has to have sites, but no sites are defined. Verifying node %1$s. Node %1$s has not been added to caa yet, skipping this node. When sites are defined, all nodes must be part of a site. Node %1$s does not belong to a site. In a linked cluster each site has to have a HACMPsircol ODM stanza. No HACMPsircol stanza found for site %1$s. No backup repository disk has been defined for site %1$s. No backup repository disk has been defined. Backup repository disk with PVID %1$s is not known at node %2$s. Backup repository disk with PVID %1$s is assigned to VG %2$s at node %3$s. Probably repository disk has been replaced with this disk. In that case you need to sync PowerHA configuration at the node where the replacement was executed. A disk can not be used as a backup repository if it is part of a VG. Either remove this disk from the VG or delete it from list of repository disks. caavg_private is not imported (has no disk assigned) on node %1$s. VG caavg_private at node %1$s contains multiple disks: Remove all disks from VG caavg_private, which have been added manually. PVID of VG caavg_private at node %1$s [%2$s] does not match PVID for repository disk in HACMPsircol ODM [%3$s]. Collect DLPAR information. 
An internal error occurred while checking %1$s. Please report this problem to IBM support. HMC versions do not match. %1$d HMC has been configured on node [ %2$s ]. Should be at least %3$d. %1$d (out of %2$d) HMC can be pinged on node [ %3$s ]. Should be at least %4$d. %1$d (out of %2$d) HMC can be reached using password-less SSH on node [ %3$s ]. Should be at least %4$d. %1$d (out of %2$d) HMC are configured consistently on node [ %3$s ]. Should be at least %4$d. The PVID value mismatches between the ODM value %1$s and the Disk value %2$s for %3$s on the node %4$s The parameter value %1$s is not the same on the node %2$s with value %3$s and on node %4$s with value %5$s The disk reserve policy of %1$s on the node %2$s is %3$s, which is not a recommended value The kernel parameter %1$s is not the same on node %2$s with value %3$s and on node %4$s with value %5$s The security parameter %1$s is not the same on node %2$s with value %3$s and on node %4$s with value %5$s Percentage of packets dropped from adapter %1$s obtained from transmit statistics is %2$d, which is more than %3$d percent on node %4$s Percentage of packets dropped from adapter %1$s obtained from receive statistics is %2$d, which is more than %3$d percent on node %4$s Reserve policy of disk %1$s is not retrieved on node %2$s Ifix with label %1$s is installed on node %2$s but not on node %3$s pbufs count(aio_cache_pbuf_count) from ODM is not retrieved for Volume Group %1$s on node %2$s pbufs count(aio_cache_pbuf_count) for volume group %1$s is %2$d, which is less than the recommended value %3$d on node %4$s pbufs count(aio_cache_pbuf_count) for volume group %1$s mismatches between ODM value %2$d and LVM value %3$d on the node %4$s pbufs count(aio_cache_pbuf_count) for volume group %1$s is not the same on node %2$s with value %3$d and node %4$s with value %5$d Either Configured reserve policy, Effective reserve policy or Reservation status of disk %1$s on node %2$s is not as needed to support SCSIPR Disk Fencing. It must be as follows: Configured Reserve Policy : PR_shared Effective Reserve Policy : PR_shared Reservation Status : SCSI PR reservation - Write_Exclusive_All_Registrants Critical Resource Group is not selected. It is required to select a Resource Group as a Critical Resource Group Critical Resource Group %1$s does not exist. Critical Resource Group %1$s must have all the cluster nodes included as participating nodes. Critical Resource Group can not be included as a child in any dependency relationship. Startup policy for Critical Resource Group can not be Online On All Available Nodes. Volume Group %1$s is not SCSI Persistent Reserve Type 7H capable on node %2$s. Failed retrieving the SCSI Persistent Reserve capability of Volume Group %1$s on node %2$s. Disk Fencing is not supported on Hyperswap disks. Hyperswap is enabled in the cluster. Please disable it in order to enable Disk fencing. Critical Resource Group can not be Med or Low priority in an ANTICOLOCATION dependency relationship. It can only be the first one in the highest priority list.
Verifying mismatch of PVID values between ODM and disk Verifying AIX values obtained from runtime expertVerifying entstat statistics for all adapters on each node Verifying the reserve policy value of each disk Verifying the values of security parameters across cluster Verifying the values of kernel parameters across cluster Verifying the value of pbufs count of all GMVGs Verifying ifix labels installed on each node Verifying the SCSI Persistent Reserve policy of each disk Verifying SCSI Persistent Reserve capabilities of Volume Groups Verifying properties of Critical Resource Group Verifying if Hyperswap is enabled Collect PVID values of each disk from LVM and ODM Collect the AIX values from AIX RunTime Expert for each nodeCollect entstat statistics for all adapters on each node Collect reserve policy of each disk Collect the values of security parameters across cluster Collect the values of kernel parameters across cluster Collect the value of pbufs count of all GMVGs Collect the list of ifix labels installed on each node Collect SCSI Persistent Reserve policy of each disk Collect SCSI Persistent Reserve capability of Volume Groups The xml file %1$s does not exist on the node %2$s AIX Runtime Expert failed to retrieve LVM profile values on the node %1$s AIX Runtime Expert failed to retrieve NFS profile values on the node %1$s AIX Runtime Expert failed to retrieve Device Driver profile values on the node %1$s AIX Runtime Expert is not installed on the node %1$s Node %1$s : Failed to obtain complete information about On/Off CoD processors. On/Off CoD processors will not be taken into account for this node during verification. Node %1$s : Failed to obtain complete information about On/Off CoD memory. On/Off CoD memory will not be taken into account for this node during verification. Node %1$s : Failed to obtain complete information about Enterprise Pool CoD. Enterprise Pool CoD will not be taken into account for this node during verification. Unexpected value in HACMPcustom ODM. If this problem persists, please use standard problem reporting procedures. An error occurred while checking if Resource Group(s) %1$s is able to run on node %2$s. If this problem persists, please use standard problem reporting procedures. An error occurred while checking if all Resource Groups together are able to run on node %1$s. If this problem persists, please use standard problem reporting procedures. Take care that you currently are using parameter ALWAYS_START_RG=1. There is no guarantee that the Resource Groups are running with optimal resources and no resource verification will be done. If you want that guarantee, please run command : clmgr manage cluster roha ALWAYS_START_RG=0 At the time of verification, node %1$s would not have been able to acquire sufficient resources to run Resource Group(s) %2$s (multiple Resource Groups in case of node collocation). Please note that the amount of resources and CoD resources available at the time of verification may be different from the amount available at the time of an actual acquisition of resources. Reason : %3$s At the time of verification, node %1$s would not have been able to acquire sufficient resources to run all Resource Groups associated with it together. Please note that the amount of CoD resources available at the time of verification may be different from the amount available at the time of an actual acquisition of resources. 
Reason : %2$s The Enterprise Pool CoD %1$s is configured for this node, but at the time of verification, its available memory would not have been enough to be able to acquire sufficient resources to run the Resource Group(s). The Enterprise Pool CoD %1$s is configured for this node, but at the time of verification, its available processors would not have been enough to be able to acquire sufficient resources to run the Resource Group(s). You are agreeing with On/Off CoD costs, but at the time of verification, the memory provision on Managed System %1$s (%2$d memory days) would not have been enough to be able to acquire sufficient resources to run the Resource Group(s) on this node. You are agreeing with On/Off CoD costs, but at the time of verification, the processors provision on Managed System %1$s (%2$d processors days) would not have been enough to be able to acquire sufficient resources to run the Resource Group(s) on this node. You are agreeing with On/Off CoD costs, but you have no valid license for memory on Managed System %1$s. On/Off CoD memory will not be used to try to keep the Resource Group(s) highly available on this node. You are agreeing with On/Off CoD costs, but you have no valid license for processors on Managed System %1$s. On/Off CoD processors will not be used to try to keep the Resource Group(s) highly available on this node. You are not agreeing with On/Off CoD costs, but you currently have a valid license for it with %1$d memory days remaining on Managed System %2$s. On/Off CoD memory will not be used to try to keep the Resource Group(s) highly available on this node. You are not agreeing with On/Off CoD costs, but you currently have a valid license for it with %1$d processors days remaining on Managed System %2$s. On/Off CoD processors will not be used to try to keep the Resource Group(s) highly available on this node. At the time of verification, only one node (out of %1$d) was able to acquire sufficient resources to run Resource Group(s) %2$s. It might not have been recovered if it failed over. At the time of verification, no node (out of %1$d) was able to acquire sufficient resources to run Resource Group(s) %2$s. As there are Resource Group(s) for which at most one node was able to acquire sufficient resources to run it at the time of verification, if you want to ensure high availability for these Resource Group(s), you should : - Update your configuration according to above warnings - Release resources on the related nodes - Associate the Resource Group(s) with other nodes - In case of node collocation : reduce the size of the node collocation - Modify the Application Controllers to fit the available resources If you want to be able to run the Resource Group(s) even without enough resources or CoD resources, you still can execute this command : clmgr manage cluster roha ALWAYS_START_RG=1. Resource Group %1$s : Only one node is configured for this Resource Group. It will not be possible to recover the Resource Group if it fails over. Resource Group %1$s : Multiple nodes on same Managed System : - %2$s List of nodes : %3$s Resource Group %1$s : All nodes related to a Resource Group should be hosted by different Managed Systems. For this Resource Group, %2$d Managed Systems are hosting %3$d nodes. Depending on source and destination nodes, when moving the Resource Group, the releasing of CoD resources may be synchronous and consume more time (than asynchronous mode). 
Resource Group %1$s : All nodes related to a Resource Group should be hosted by different Managed Systems. Here, all nodes are hosted on the same Managed System. When moving the Resource Group, the releasing of CoD resources will always be synchronous and consume more time (than asynchronous mode). The backup repository disk %1$s is down on node %2$s. Failed to check consistency for HMC %1$s : %2$s. Failed to get local managed system name for HMC %1$s : %2$s. Getting status of backup repository disksOn site %1$s, backup repository disk with PVID %2$s is : On cluster %2$.%1$s, backup repository disk with PVID %3$s is : - DOWN on nodes : %1$s - UNSEEN on nodes : %1$s Unseen disks are either unreachable or down for a long time. Please check disk status on listed nodes. No backup repository disk is UP for nodes : monitor is in one controller%1$s Application Monitor belongs to the following application controllers %2$s. Monitor %1$s will not be used until the associated application controller is linked with a resource group. %1$s Application Controller belongs to the following resource groups %2$s. Application Controller: %1$s does not have an associated resource group. controller is in one resource groupThe primary HMC %1$s is not declared as an HMC in the PowerHA SystemMirror configuration for node %2$s. As a result, EPCoD resources cannot be used. The secondary HMC %1$s is not declared as an HMC in the PowerHA SystemMirror configuration for node %2$s. Node %1$s : The secondary HMC is not configured for EPCoD pool %2$s. It is mandatory to configure a secondary HMC to avoid the need to recreate the EPCoD pool in case of HMC failure. In order to establish a secondary HMC, please execute this command on the primary HMC : chcodpool -p [EPCoD_pool_name] -o update -a "backup_master_mc_name=[Secondary_HMC_name]" - UP but already part of a VG. Local disknames : %1$s No backup repository disk is UP and not already part of a VG for nodes : More than 30 seconds of clock drift detected between %1$s and the local node Directory %1$s is not a valid file system - neither the file system to be exported nor any of its lower qualified paths appear in /etc/filesystems on node %2$s. Checking for clock drift between nodes %1$s: ERROR: An incomplete cluster configuration has been detected. Please complete the basic setup of nodes, networks and adapters before running verification. This can result in the inability to collect log files for problem determination which may lead to delays in identifying problems and providing fixes. You should carefully consider the potential side effects of using a remote filesystem for cluster log files. The major number for VG %1$s is not the same on all nodes. This may cause trouble in the future if NFS exports are added to a resource group. To fix it, export and re-import the volume group on all nodes with the same major number or use C-SPOC. INFO: The default value for the parameter ALWAYS_START_RG is 1. With this default value, PowerHA SystemMirror brings up Resource Groups even if sufficient ROHA resources are not available. If you want to bring up Resource Groups only if sufficient ROHA resources are available, run the command: clmgr manage cluster roha ALWAYS_START_RG=0 and then synchronize the cluster. More than 30 seconds of clock drift detected between %1$s and the local node HMC versions do not match: there is at least one HMC running on version %1$s or below and at least one HMC running on version %2$s or above. This is not a supported configuration.
Skipping kernel parameters check for the node %1$s, since the operating system level is different for the %2$s and %3$s nodes Resource group %1$s has NFSv4 exports, however the NFS domain has not been set on node %2$s. The mount operation will fail for NFSv4 exports if the NFS domain is not set. Use the command 'chnfsdom ' to set the domain name. HMC %1$s is running on an UNKNOWN HMC version. %1$d (out of %2$d) HMC can be reached using REST-API or password-less SSH on node [ %3$s ]. Should be at least %4$d. %1$d NovaLink has been configured on node [ %2$s ]. Should be at least %3$d. %1$d (out of %2$d) NovaLink can be pinged on node [ %3$s ]. Should be at least %4$d. %1$d (out of %2$d) NovaLink can be reached using password-less SSH on node [ %3$s ]. Should be at least %4$d. %1$d (out of %2$d) NovaLink are configured consistently on node [ %3$s ]. Should be at least %4$d. Failed to check consistency for NovaLink %1$s : %2$s. Failed to get local managed system name for NovaLink %1$s : %2$s. NovaLink %1$s is running on an UNKNOWN NovaLink version. The log file size for %1$s is currently set to %2$d Megabytes. Based on your cluster configuration, the recommended size for this log is %3$d Megabytes. You can change this value using smit or the clmgr command. Failed to retrieve interfaces from ODM on node %1$s for an unknown reason. You are agreeing with On/Off CoD costs, but either the HMC is not added or it is not configured properly for this node. NovaLink does not support On/Off CoD. The resource group %1$s is configured as a critical RG, hence RG [%1$s] should be online to use the quarantine policy effectively. Failed to get HMC version for %1$s. Check SSH or REST API connectivity with the HMC. On cluster %1$s, backup repository disk with PVID %2$s is : The storage %1$s is not reachable using password-less SSH on the node %2$s. IP address %1$s is not reachable using password-less SSH on the node %2$s. AWS bucket %1$s is not reachable from the node %2$s. Please check firewall settings and network connectivity. Consistency group %1$s is defined in resource group %2$s. You cannot use consistency groups defined in a resource group for backup. Resource group %1$s configured for backup is not defined in the cluster. Volume group %1$s configured for backup is not defined in the resource group %2$s. Volume groups should be defined in a resource group before being used for backup. The script cl_cbm_storage_check failed to execute on node %1$s. The following storage(s) configured for backup %1$s do not appear to exist. %2$sPlease configure or remove them from the backup profile. Invalid master vdisk name %1$s in relationship %2$s which is configured for the backup profile %3$s. Invalid source vdisk name %1$s in flashcopy map %2$s which is configured for the backup profile %3$s. Invalid target vdisk name %1$s in flashcopy map %2$s which is configured for the backup profile %3$s. Consistency group(s) %1$s are not found in any of the following storage(s) %2$s. Please make sure the storage is reachable and the consistency group is available in the storage configured for backup profile %3$s. No mappings found in the following consistency group(s) %1$s which are configured for backup profile %2$s. Failed to get information of consistency group %1$s. Please make sure the storage is reachable and the consistency group is available in the storage configured for backup profile %2$s. The specified volume group disk(s) %1$s do not appear to exist in any of the following replicated resource(s) %2$s configured for backup profile %3$s.
The disks in the replicated resource(s) %1$s do not exist in any of the following volume group(s) %2$s configured for backup profile %3$s. Target disk %1$s configured for backup profile %2$s is not found on node %3$s. The target disk should be shareable by all the cluster nodes. A volume group exists on the target disk %1$s. A volume group should not exist on a disk which is used as the target disk for the cloud backup method. Notify method "%1$s" defined for backup is not a valid executable on node %2$s. The path or file "%1$s" which is defined as the notify method for backup does not appear to exist on node %2$s. - UP but has a VGDA on it after automatic repository replacement. This situation may occur when the active repository disk goes offline due to a power outage or storage failure: automatic repository replacement activates the backup repository and converts the primary repository disk to a backup repository disk, and this backup repository becomes active. Failure in execution of the lspv command on disk %1$s. This failure may occur when the lspv command executes on a down node. The following storage(s) configured for the backup profile %1$s do not appear to exist. %2$sPlease configure or remove them from the backup profile. You have configured a recovery script for event %1$s. Recovery scripts are no longer supported and will be ignored. Cloud bucket %1$s is not reachable from the node %2$s. Please check firewall settings and network connectivity. Resource group "%1$s" configured for backup is concurrent. Please provide a resource group which is non-concurrent. State of replicated resource %1$s is %2$s. It is recommended to bring the resources offline for the resource group: %1$s on the node: %2$s before starting the cluster services. Would you like to bring the resources offline and then start the cluster services? Warning: the value of the variable %1$s in /etc/environment is different between nodes. Please verify this is correct. The storage %1$s is not reachable using password-less SSH on the node %2$s. Please check storage connectivity and make sure there is storage connectivity on all the cluster nodes to proceed with backup. IP address %1$s is not reachable using password-less SSH on the node %2$s. Please check network connectivity and make sure that the IP address is pingable on all the cluster nodes to proceed with backup. Cloud bucket %1$s is not reachable from the node %2$s. Please check firewall settings and network connectivity and make sure that the bucket is reachable from all the cluster nodes to proceed with backup. Target disk %1$s configured for backup profile %2$s is not found on the node %3$s. Make sure that the target disk is shareable between all the cluster nodes to proceed with backup. The path or file "%1$s" which is defined as the notify method for backup does not appear to exist on the node %2$s. Make sure that the notify script configured for backup is created on all the cluster nodes to proceed with backup. Notify method "%1$s" defined for backup is not a valid executable on node %2$s. Please make sure it is an executable file on all the cluster nodes to proceed with backup. Invalid master vdisk name %1$s in relationship %2$s which is configured for the backup profile %3$s. To proceed with backup, correct the master vdisk name in the storage. Invalid source vdisk name %1$s in flashcopy mapping %2$s which is configured for the backup profile %3$s. To proceed with backup, correct the source vdisk name in the storage.
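Illustrative note: several of the repository and target disk messages above can be investigated with lspv; the sketch below uses hdisk2 as a placeholder disk name.

    # List physical volumes with their PVIDs and volume group membership;
    # a disk used as a backup repository or cloud backup target should not
    # belong to a volume group.
    lspv
    # Show details for one disk (placeholder name)
    lspv hdisk2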
Invalid target vdisk name %1$s in flashcopy mapping %2$s which is configured for the backup profile %3$s. To proceed with backup, correct the target vdisk name in the storage. Consistency group(s) %1$s are not found in any of the following storage(s) %2$s. To proceed with backup, make sure the storage is reachable and the consistency group is available in the storage configured for backup profile %3$s. No mappings found in the following consistency group(s) which are configured for backup profile %1$s %2$s. To proceed with backup, create mappings in the storage and associate them with the consistency group. Failed to get details of consistency group %1$s. To proceed with backup, make sure the storage is reachable and the consistency group is available in the storage configured for backup profile %2$s. The specified volume group disk(s) %1$s do not appear to exist in any of the following replicated resource(s) configured for backup profile %2$s %3$s. To proceed with backup, make sure the disk(s) in the volume group belong to the configured replicated resource. Disk(s) in replicated resource(s) %1$s configured for backup profile %2$s do not exist in any of the following volume group(s) %3$s. To proceed with backup, make sure the disk(s) exist in the volume group. A volume group exists on the target disk %1$s. To proceed with backup, make sure the target disk configured for the cloud backup method is not part of any volume group. Resource group %1$s configured for backup is not defined in the cluster. Please define it and perform verify and sync. Resource group "%1$s" configured for backup is concurrent. Please provide a resource group which is non-concurrent and perform verify and sync to proceed with backup. Volume group %1$s configured for backup is not part of the resource group %2$s. The volume group should be defined in a resource group before being used for backup. To proceed with backup, please define it and perform verify and sync. Consistency group %1$s is defined in the resource group %2$s. You cannot use a consistency group defined in a resource group for backup. To proceed with backup, please remove it from the resource group and perform verify and sync. The following storage(s) configured for the backup profile %1$s do not appear to exist. %2$sTo proceed with backup, please configure or remove them from the backup profile and perform verify and sync. There are %1$s warnings. To proceed with backup, please clear all warnings and perform verify and sync if there is any change in the configuration. If the disk is not listed as part of a volume group, you may have to re-import the volume group to clear up any stale VGDA information. The interface device statistics for interface %1$s on node %2$s indicate that the ratio of dropped to transmitted packets was %3$d percent. The interface device statistics for interface %1$s on node %2$s indicate that the ratio of dropped to received packets was %3$d percent. It is recommended that you check the operational status of this interface. You may also want to reset the statistics with the "entstat -r %1$s" command. Verify LVM Preferred Read and storage location. LVM Preferred Read for volume group %1$s is set to %2$s, but the associated storage location, %3$s, is not configured for any mirror pool copy. Hence the LVM Preferred Read setting will be overridden to roundrobin so that AIX decides which copy is used when reading the data. There are %1$d warnings for the cloud backup management configuration. Please fix all warnings and perform verify and sync again.
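Illustrative note: the dropped packet messages above suggest checking and then resetting interface statistics with entstat; en0 below is a placeholder interface name.

    # Show detailed device statistics, including dropped packet counts
    entstat -d en0
    # Reset the statistics after the interface has been checked
    entstat -r en0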
The maximum number of applications configured (%1$d) in a PowerHA SystemMirror resource group has been exceeded. Resource group "%2$s" contains %3$d PowerHA SystemMirror applications defined. Please remove %4$d application controllers from %2$s. Filesystem %1$s used in resource group %2$s is configured with rootvg on node %3$s. This filesystem needs to be removed from the Resource Group. Filesystem %1$s used in resource group %2$s is configured with rootvg only on node %3$s. Remove the filesystem from the Resource Group and perform verify and sync again. Python is not installed on the following nodes: %1$s. The Python version is different across the nodes in the cluster. The version should be the same on all the nodes. Python version installed on node %1$s is %2$s. Comparing the Python version on all nodes in the cluster. The Python version is the same on all the nodes in the cluster. The maximum number of volume groups configured (%1$d) in a PowerHA SystemMirror resource group has been exceeded. Resource Group "%2$s" contains %3$d PowerHA SystemMirror volume groups defined. Please remove %4$d volume groups from %2$s. The maximum number of application monitors configured (%1$d) in a PowerHA SystemMirror resource group has been exceeded. Resource group "%2$s" contains %3$d PowerHA SystemMirror application monitors defined. Please remove %4$d application monitors from %2$s. Line %1$d of netmon.cf from node %2$s is not properly formed and will be ignored. The line is: %3$s Each non-blank line must start with either a '#' (pound sign) indicating a comment, or the keyword '!REQD'. netmon.cf on node %1$s has more than %2$d lines. Only %2$d lines are supported in netmon.cf. Each non-blank line must have exactly three tokens. Extra characters were found after the third token and will be ignored. The second token must be one of the following: the keyword '!ALL', an IP address configured on node %1$s, or an interface name (like en0) configured on node %1$s. The third token must be an IP address. The IP address must be pingable and outside of any virtual network used by this cluster. The CAA log file %1$s on node %2$s indicates that CAA encountered problems interpreting the file on that node, or that one or more IP addresses could not be resolved. Please check the CAA log file %1$s on node %2$s and the netmon.cf file on that node for errors. Verify there are no single adapter networks. Checking that netmon.cf is correct and consistent across the cluster. netmon.cf has different content on different nodes. Please verify this is intentional. There are %1$d nodes in the cluster. Of those, %2$d nodes do not have a netmon.cf file. The maximum number of resources configured (%1$d) in a PowerHA SystemMirror resource group has been exceeded. Please remove %2$d resources from resource group %3$s. Verify RSCT critical daemon restart grace period. The critical daemon restart grace period is configured in the RSCT configuration file /etc/ctfile.cfg as %1$d seconds on node %2$s, and this value takes the highest priority. Hence, the cluster and node level RSCT critical daemon restart grace period values are ignored. The cluster level critical daemon restart grace period is configured as %1$d seconds. The recommended value is less than %2$d seconds. Application monitor %1$s is configured with Action on Application Failure as the Fallover policy and the resource group %2$s Startup policy is configured as 'Online On All Available Nodes'. Application monitor %1$s cannot be included in resource group %2$s.
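Illustrative note: the netmon.cf rules described above (a '#' comment or a '!REQD' line with exactly three tokens) allow entries of the following form; the file path is an assumption to verify on your system and the addresses are placeholders from the documentation range, not values from this document.

    # /usr/es/sbin/cluster/netmon.cf  -- assumed location, verify on your system
    # !REQD <second token: interface name, IP address on this node, or !ALL> <target IP to ping>
    !REQD en0 192.0.2.1
    !REQD !ALL 192.0.2.254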
Change Action on Application Failure of %1$s to 'notify' or remove %1$s from %2$s. A node level critical daemon restart grace period is configured for the following nodes: %1$s. Node level configuration has higher priority than cluster level, so the cluster level attribute is ignored for the above nodes. A node level critical daemon restart grace period is configured for the following nodes: %1$s with %2$s seconds respectively. The recommended value is less than %3$d seconds. The RSCT configuration file /etc/ctfile.cfg has an invalid entry for the critical daemon restart grace period. To make your applications highly available in case of a complete site outage, define a site-specific service IP label for the following site: %1$s. Verify skip_events_on_manage flag. The cluster tunable "skip_events_on_manage" is enabled; this will bypass any cluster resource processing when cluster services are restarted from the unmanage state. Reset this flag to have cluster resources processed normally on startup. Verify cloud connection for cloud tiebreaker. The split merge policy is configured as cloud tiebreaker and cloud verification checks failed on the following nodes: %1$s. Refer to the %2$s file for error details. Failed to fetch the split merge configuration for the cloud tiebreaker on the following nodes: %1$s. Hence, cloud connectivity cannot be checked. The split merge policy is configured as cloud tiebreaker and cloud verification checks passed on all nodes of the cluster. There is no split and merge policy set. You may also configure a quarantine policy using disk fencing or the active node halt policy. The quarantine policy can be used instead of a split and merge policy, or both can be used at the same time for added protection. Checking for split and merge policies. Checking cloud configuration and access on all nodes. Split and merge policies are not set to Cloud. The split merge policy is configured as cloud tiebreaker and it requires the boto3 module; however, the following nodes do not have the boto3 module: %1$s. The split merge policy is configured as cloud tiebreaker and it requires Python; however, the following nodes do not have Python: %1$s. LVM encryption is enabled for some of the volume groups configured in PowerHA; however, it is not supported on the following nodes: %1$s. Please ensure LVM encryption is supported across all the nodes in the cluster. LVM encryption is enabled for some of the logical volumes with the %1$s authentication method; however, it is not properly configured on the following nodes: %2$s. Please ensure %3$s is configured across all the nodes in the cluster. Verify LVM encryption support. LVM encryption is enabled for some of the logical volumes; however, an authentication method is not added for the following logical volumes: %1$s. The number of installed ifixes is not the same on all the cluster nodes. Please check the list of ifixes below for each node in the cluster.
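Illustrative note: the cloud tiebreaker messages above require Python and the boto3 module on every node; one hedged way to confirm that is shown below (the interpreter path is an assumption and may differ on your nodes).

    # Confirm Python is present and boto3 can be imported on this node
    /usr/bin/python3 -c 'import boto3; print(boto3.__version__)'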