2632-001 Attribute "%1$s" cannot be specified when defining a new resource. 2632-002 Attribute %1$s appears in the request more than once. 2632-003 Class name %1$s is not recognized by this resource manager. 2632-004 Could not initialize control point for class %1$s. 2632-005 Attribute "%1$s" must be specified when defining a new resource. 2632-006 Domain name "%1$s" is already defined. 2632-007 The specified domain name is not valid. The name cannot be the word 'IW' or contain the character '/'. 2632-008 The specified domain name "%1$s" is too long. 2632-009 Internal error, buffer is too small to hold /var/ct/cfg/clusters. 2632-010 Input to a class action is not valid. 2632-011 Port %1$d is in use. 2632-012 Operation is not available since the node is not online in a domain. 2632-013 (unused)2632-014 The value specified for %1$s is not valid. 
Valid range is between 1024 and 65000.2632-015 The following error was returned from the RMC subsystem while attempting to initialize a new domain configuration on node %1$s: %2$s 2632-016 The following error was returned from the RMC subsystem while attempting to remove the domain definition from node %1$s: %2$s 2632-017 The domain definition cannot be removed from node %1$s by the RemoveConfig action while the domain is not offline. 2632-018 Error %1$d was returned from %2$s while attempting to commit a domain configuration change. 2632-019 The following error was returned from the RMC subsystem while attempting to put the domain configuration on node %1$s: %2$s 2632-020 The online operation cannot be performed because the node is already online in domain %1$s. 2632-021 The online or offline operation cannot be performed because a state change is already in progress. 2632-022 Input to action "PropagateConfig" is not valid. 2632-023 The specified domain does not exist. 2632-024 The following error was returned from the RMC subsystem while attempting to contact node %1$s during a start domain operation: %2$s 2632-025 A response was received from a "PropagateConfig" action for node %1$s that does not have a valid format. 2632-026 Operation is not available since the node %1$s is not offline. 2632-027 The request to bring the node online in domain %1$s cannot be completed because its configuration is down level and attempts to synchronize it with other nodes have failed. 2632-028 The communication group cannot be defined because there is an existing communication group with name "%1$s". 2632-029 The communication group "%1$s" cannot be removed since it is still referenced by one or more network or serial interfaces. 2632-030 Attribute "%1$s" has an invalid or out of range data value. 2632-031 The communication group "%1$s" is undefined. 
2632-032 The communication group "%1$s" cannot be assigned to the network interface because another network interface from the same node already belongs to "%1$s". 2632-033 The node is offline in the domain %1$s. The force option must be specified to remove the domain definition from the offline node. 2632-034 The communication group "%1$s" cannot be assigned to the point-to-point network or serial interface because there are already two or more members in the communication group. 2632-035 The communication group "%1$s" cannot be assigned to the point-to-point network or serial interface because the existing members of the communication group are not point-to-point interfaces or are incompatible. 2632-036 The communication group "%1$s" cannot be assigned to the point-to-point network interface because the IP address and Destination Address do not match the existing member of the communication group. 2632-037 The NIM path "%1$s" is undefined or not accessible. 2632-038 The NIM path "%1$s" is not executable by the user. 2632-039 The domain %1$s cannot be removed. The domain may be in a transition pending state, or was just removed by another command. 2632-040 Node %1$s cannot be pinged and is therefore not reachable. 2632-041 One or more node names must be specified when defining a new domain. 2632-042 The file %1$s does not have a valid format. 2632-043 The string "%1$s" could not be resolved into an IP address or name. 2632-044 The domain cannot be created due to the following errors that were detected while harvesting information from the target nodes: 2632-045 The domain cannot be created due to the following errors that were detected while placing the domain configuration on the target nodes: 2632-046 The following errors were detected while attempting to find the latest configuration for the domain. The domain cannot be brought online. 2632-047 The following errors were detected while attempting to find the latest configuration for the domain. 
However, these errors do not prevent the domain from being brought online. 2632-048 The following errors were detected while attempting to send the start node operation to each node defined to the domain. 2632-049 Invalid Online Criteria %1$d. 2632-050 The target node cannot be brought online because it is online in another domain. 2632-051 %1$s2632-052 The following error occurred on the resource class %1$s when the originator node attempted to get agreement from all online nodes on the requested operation. One or more nodes might have just been taken offline or might not have been functioning properly at the time. Error message: %2$s 2632-053 The requested operation failed because it was rejected by a subsystem. 2632-054 The requested operation failed because it was rejected by the resource class %1$s. Message returned from the resource class: %2$s. 2632-055 The requested operation failed because a subsystem that required configuration coordination returned an invalid response code %1$d. 2632-056 The following error was returned from the RMC subsystem while attempting to retrieve a list of classes that require configuration coordination: %1$s 2632-057 The requested configuration change is not allowed since it would result in network partition(s) that would prevent some nodes from being brought online. 2632-058 An exit code of %1$d was returned from the command that builds the configuration file for topology services. The error output from this command is: %2$s2632-059 The domain was created successfully, but the following problems were detected. Nodes that could not be harvested will not be included in the new domain. 2632-060 The requested operation failed because it was rejected by the resource class %1$s. 2632-061 Invalid configuration coordination response %1$d was returned by the resource class %2$s. 2632-062 The input to the internal action %1$s is not valid. 
2632-063 An exit code of %1$d was returned from the command that builds the configuration file for the Resource Monitoring and Control subsystem. The error output from this command is: %2$s2632-064 The specified value of the force option is not valid or has the wrong data type. 2632-065 The specified list of node numbers is not valid. 2632-066 The specified list of node numbers contains out of range values or duplicate values. 2632-067 This node is a duplicate of node %1$s and will not be included in the domain definition. 2632-068 This node has the same internal identifier as %1$s and cannot be included in the domain definition. 2632-069 The operation cannot be completed because the node is going offline or is already offline. 2632-070 The operation cannot be completed because an error %1$s occurred when the node was joining the domain. 2632-071 The node cannot be added to the domain because the version of RSCT on the node is earlier than the version that is active in the domain. 2632-072 The operation cannot be performed because a majority of quorum nodes or configuration daemons is not currently active in the domain, or because the quorum of the domain is not currently satisfied. 2632-073 The specified node number is not valid or is in use. 2632-074 Nodes were successfully added to the domain, but the following problems were detected. Nodes that could not be harvested were not added to the domain. 2632-075 This node has a duplicate IP address (%1$s) with node %2$s. The duplicate address will not be used. Correct the network configuration on one of the nodes or use the -c option to perform the operation. 2632-076 This node has a duplicate IP address (%1$s). The duplicate address will not be used. Correct the network configuration on one of the nodes or use the -c option to perform the operation. 2632-077 The following problems were detected while adding nodes to the domain. As a result, no nodes will be added to the domain. 
2632-078 The following error was returned from the RMC subsystem while attempting to update the IP configuration: %1$s 2632-079 This node is already defined to a cluster that is not compatible with a Peer Domain, so it will not be included in the cluster definition. 2632-080 A new critical resource owner cannot be accepted because the sub-domain does not have operational quorum. 2632-081 The installed version of RSCT on the node is older than the active version in the cluster. 2632-082 An operation to complete the migration to a new version of RSCT is already running. 2632-083 The minimum version in the cluster is the same as the active version, so there is no migration to complete. 2632-084 The request to bring the domain online using the local configuration cannot be completed since the indicated node is already online with a different configuration version. 2632-085 The ownership of the tie breaker must be set to either 0 (Deny ownership) or 1 (Grant ownership). 2632-086 The operation to specify ownership of the tie breaker device cannot be performed since the current tie breaker is not of type "Operator". 2632-087 The operation to specify ownership of the tie breaker device cannot be performed when the domain is not in a tie situation. 2632-088 The specified tie breaker "%1$s" is not valid or is unavailable. 2632-089 The value specified for the OpQuorumOverride attribute is not valid. 2632-090 The value specified for the CritRsrcProtMethod attribute is not valid. 2632-091 The predefined tie breaker "%1$s" cannot be removed. 2632-092 The active tie breaker cannot be removed. 2632-093 The tie breaker type "%1$s" does not exist. 2632-094 The type of tie breaker specified does not support heartbeating, so the non-zero value specified for the heartbeat period is not valid. 2632-095 The time value specified for attribute "%1$s" is larger than the maximum period allowed of %2$d seconds. 2632-096 A tie breaker may not be modified while it is active. 
2632-097 The tie breaker cannot be defined because there is an existing tie breaker with name "%1$s". 2632-098 The node name "%1$s" that is specified in the value for the NodeInfo attribute does not exist. 2632-099 The node is not network accessible. 2632-100 The node is network accessible but the subsystems on it cannot be reached. 2632-101 The communication group cannot be defined because the name specified is reserved for internal use. 2632-102 The following error was returned from the RMC subsystem while attempting to retrieve the state from node %1$s of another domain during the process of merging two sub-domains: %2$s2632-103 The operation cannot be completed because it requires at least RSCT version %1$s but only version %2$s is active in the domain. 2632-104 An exit code of %1$d was returned from the command that defines or undefines the topology services subsystem. The error output from this command is: %2$s2632-105 An exit code of %1$d was returned from the command that defines or undefines the group services subsystem. The error output from this command is: %2$s2632-106 The fields DeviceInfo(%1$s) or NodeInfo(%2$s) of IBM.TieBreaker are not configured correctly. 2632-107 Required memory could not be allocated during the tie-breaker operation. 2632-108 Tie-breaker device %1$s cannot be found or accessed (status %2$d). 2632-109 Unable to create the RSCT tie-breaker device file %1$s (error %2$d). 2632-110 The operation was rejected by one or more nodes, probably because one or more resources are online or an error was encountered in determining whether any resources are online. 2632-111 The operation was rejected by one or more nodes because the updates could not be applied. 2632-112 An error code of %1$d was returned from the topology services subsystem while enabling or disabling the deadman switch for critical resource activation or deactivation. 
2632-113 The target node %1$s is not defined in domain %2$s. 2632-114 The operation cannot be performed because a majority of nodes or configuration daemons cannot currently be contacted. 2632-115 The following errors were detected while attempting to synchronize the latest configuration for the domain. The domain cannot be synchronized. 2632-116 The specified QuorumType is not valid. 2632-117 The specified QuorumType cannot be supported because the version of RSCT on a node is earlier than "%1$s". 2632-118 A mkrpdomain operation is already executing on node %1$llx. 2632-119 The target node %1$llx cannot be brought online because it is not defined in the domain. 2632-120 The target node id (%1$llx) is not the same as the defined node id (%2$llx) in the peer domain. 2632-121 The action "SetQuorumState" is not available for the given domain. 2632-122 %1$s: The following error was returned from the TieBreaker subsystem: %2$s2632-123 The fanout value must be in the range %1$d to %2$d. 2632-124 Node "%1$s" is not defined to domain "%2$s". 2632-125 Node "%1$s" is not online in domain "%2$s". 2632-126 Node "%1$s" is not offline in domain "%2$s". 2632-127 Node "%1$s" is duplicated in the node list. 2632-128 The specified domain name is not valid. The name cannot be the words "IW", "." or ".." or contain the character '/' or whitespace. 2632-129 The batch operation failed while replicating attribute changes. 2632-130 The value for HeartbeatActive must be 0 or 1. 2632-131 An unknown error was detected while attempting to find the latest configuration for the domain. The domain cannot be brought online. 2632-132 An internal error occurred during the current operation. 2632-133 The value for CSSKRefreshInterval must be 0 for no refresh or between 30 seconds and 30 days. 2632-134 The value for CSSKType must be one of CSSKTYPE_None, CSSKTYPE_DES_MD5, CSSKTYPE_3DES_MD5, CSSKTYPE_AES256_MD5, CSSKTYPE_AES128_SHA256, CSSKTYPE_AES128_SHA512, CSSKTYPE_AES256_SHA256 or CSSKTYPE_AES256_SHA512. 
2632-135 A cluster shared secret key refresh interval was specified, but no key type was specified.2632-136 The installed RSCT version %1$s on node "%2$s" does not support cluster shared secret keys. 2632-137 Node "%1$s" does not support the current domain's cluster shared secret key type "%2$s". 2632-138 The nodes specified do not share a common security configuration. 2632-139 Node "%1$s" does not support a security configuration common with the nodes currently in the domain. 2632-140 A domain must be online to update its shared secret key. 2632-141 The cluster shared secret key type %1$s specified by the domain configuration is not supported on this node. 2632-142 The cluster shared secret key file %1$s is invalid. 2632-143 Round %1$d of attempts to synchronize the cluster shared secret key for domain %2$s produced the following errors. 2632-144 The cluster shared secret key configuration (type %1$s, refresh interval %2$d) in class IBM.RSCTParameters for the domain being brought online is invalid. 2632-145 Attempting to verify a cluster shared secret key signed by node %1$s failed. 2632-146 The cluster shared secret key update cannot be run because no key is enabled. 2632-147 The active version of the peer domain does not support shared secret keys. 2632-148 The value of the Harvest Interval must be between 5 seconds and 4 hours. 2632-149 The active version of the peer domain does not support a change in the Harvest Interval. 2632-150 An exit code of %1$d was returned from the command that retrieves the default priority value for the topology services subsystem. The error output from this command is: %2$s2632-151 At least one node must be specified as a member of the quorum set for the domain. 2632-152 At least one node must be specified as preferred to be the Group Services group leader. 2632-153 The quorum node and/or preferred node list is invalid. 2632-154 There must be at least one quorum node and one preferred Group Services group leader node. 
2632-155 Specifying non-quorum nodes cannot be supported because the version of RSCT on a node is earlier than "%1$s". 2632-156 Specifying non-preferred nodes cannot be supported because the version of RSCT on a node is earlier than "%1$s". 2632-157 The requested operation failed because a subsystem that required quorum coordination returned an invalid response code %1$d. 2632-158 The local node's configuration version is different from that of the active domain. 2632-159 The configuration version on the local node is older than the proposed configuration version. 2632-160 Error code %1$d was returned while attempting to run a CSSK update/enable/disable operation. 2632-161 The value %1$d for IsQuorumNode is invalid. 2632-162 The value %1$d for IsPreferredGSGL is invalid. 2632-163 Specifying non-tiebreaker-accessible nodes cannot be supported because the version of RSCT on a node is earlier than "%1$s". 2632-164 There must be at least one quorum node with tiebreaker access. 2632-165 Attribute "%1$s" requires the version of RSCT active in the domain to be at least %2$s or equivalent. 2632-166 The node cannot join or come online in a peer domain because a Management Domain is active using a configured IPv6 interface. 2632-167 The fence agent cannot be defined because there is an existing fence agent with name "%1$s". 2632-168 The value specified for the "Type" attribute is not valid. 2632-169 The value specified for the "Timeout" attribute is less than the minimum allowed value %1$d. 2632-170 The value specified for the "HealthcheckInterval" attribute is less than the minimum allowed value %1$d. 2632-171 The fence agent cannot be undefined because it is currently active in the domain. 2632-172 The fence agent could not be activated because the supporting module type was not found. 2632-173 The current active fence group "%1$s" is busy. 2632-174 Attributes of the fence agent cannot be changed because the agent is currently active in the domain. 
2632-175 The fence group cannot be defined because there is an existing fence group with name "%1$s". 2632-176 The specified value for the "Type" attribute is not valid. 2632-177 The "ExecutionList" attribute must contain at least one fence agent. 2632-178 The specified fence agent "%1$s" does not exist. 2632-179 The "%1$s" fence module function "%2$s" returned the following error for node %3$s: %4$s 2632-180 The value specified for the "FailureAction" attribute is not valid. 2632-181 The value specified for the "NodeCriteria" attribute is not valid. 2632-182 The fence group cannot be undefined because it is currently active in the domain. 2632-183 Attributes of the fence group cannot be changed because the group is currently active in the domain. 2632-184 The fence agent "%1$s" is specified more than once in the "ExecutionList" attribute. 2632-185 The specified "ExecutionList" attribute contains more than the maximum number of agents allowed. 2632-186 Node name "%1$s" in the specified "NodeInfo" attribute is not a valid node name in the domain. 2632-187 The "%1$s" fence module function "%2$s" returned error number %3$d. 2632-188 The "%1$s" fence module function "%2$s" returned error: %3$s 2632-189 The "%1$s" fence module failed %2$d health check request(s) with error number %3$d. 2632-190 The "%1$s" fence module failed %2$d health check request(s) with error: %3$s 2632-191 The "%1$s" fence module function "%2$s" returned error number %3$d for node %4$s. 2632-192 The fence agent module for type "%1$s" does not support all required operations. 2632-193 The specified fence group "%1$s" is not valid or is unavailable. 2632-194 The "StateValue" value was not provided or is not a valid value. 2632-195 The value "%1$s" for EnableIPv6Support is invalid, specify 0 or 1. 2632-196 The installed RSCT version %1$s on node %2$s does not support IP version 6. 2632-197 Local interface detection is currently in progress. 
2632-198 The address "%1$s" is not valid for this interface's address family. 2632-199 The value "%1$s" is not a valid subnet mask. 2632-200 Both an IP address and a subnet mask must be specified. 2632-201 The subnet mask %1$s is incorrect. 2632-202 The address %1$s is not configured on the node. 2632-203 The current active RSCT version of the peer domain does not support IPv6. 2632-204 The node is not online in a domain, or the domain does not have an operational quorum. 2632-205 This action requires the RSCT active version of the peer domain to be at least %1$s or equivalent. 2632-206 The value of the "NodeCleanupCriteria" attribute "RetryCount" keyword must be between 0 and %1$d. 2632-207 The value of the "NodeCleanupCriteria" attribute "RetryInterval" keyword must be between 0 and %1$d. 2632-208 Syntax error detected at position %1$d in the "NodeCleanupCriteria" attribute value. 2632-209 The command specified by the "NodeCleanupCommand" attribute value does not exist or is not executable. 2632-210 The operation was rejected because one or more nodes are currently being fenced. 2632-211 Group Services threads have exited but offline processing has not completed. 2632-212 Unrecoverable error from Network Interface harvesting. 2632-213 Quorum coordination for class %1$s failed. 2632-214 Remote Command Execution error: "%1$s" 2632-300 Only 1 or 2 nodes may be specified in NodeNameList when making this type of HeartbeatInterface.2632-301 Completion of the command would result in more than 2 HeartbeatInterfaces with the same name.2632-302 A HeartbeatInterface of Name(%1$s) and NodeId(%2$llx) already exists.2632-303 Completion of the command would result in more than 2 HeartbeatInterfaces with the same CommGroup.2632-304 The NodeNameList nodes must differ.2632-305 At least one specified HeartbeatInterface resource is active.2632-306 The communication group "%1$s" cannot be removed since it is still referenced by one or more heartbeat interfaces. 
2632-307 HeartbeatInterface %1$s is currently actively heartbeating, so its attributes cannot be changed. 2632-308 Invalid DeviceInfo syntax.2632-309 DeviceInfo does not exist.2632-310 Another HeartbeatInterface resource with the same DeviceInfo and node already exists.2632-311 Invalid MediaType.2632-312 CommGroup '%1$s' does not exist.2632-313 The Name attribute cannot contain spaces.2632-314 The Name attribute cannot be longer than 36 bytes.2632-315 The PingGracePeriodMilliSec attribute cannot be changed for this MediaType.2632-316 Only 1 node may be specified when changing a HeartbeatInterface's NodeNameList.2632-317 Different HeartbeatInterface devices on the same node cannot be in the same CommGroup.2632-318 MediaType %1$d for CommGroup %2$s is invalid. 2632-319 IPv6 addresses or node names that resolve to them cannot be used in this peer domain because IPv6Support is false, or the -6 option to mkrpdomain has not been specified. 2632-320 The specified time limit has been exceeded on '%1$s' from the resource class '%2$s'. 2632-321 Command %1$s returned exit status %2$d. 2632-322 The RSCT version installed on this node does not support this operation. 2632-323 ct_caa_set_disabled_for_migration failed with return code %1$d. Migration to CAA cannot continue. 2632-324 Migration to CAA failed at function (%1$s) with result=%2$d. 2632-325 The node (%1$s) cannot be added to the domain because it is already a member of a different cluster type. 2632-326 A node (numbered %1$d) tried to join the cluster but was not found in the CAA configuration. 2632-327 Function '%1$s' failed with rc(%2$d). 2632-328 Command (%1$s) for domain (%2$s) failed with exit_code(%3$d) stderr:%4$s stdout:%5$s 2632-329 WARNING: The %1$s action is not valid or not needed in this environment. 2632-215 The Mapped ID "%1$s" is not root and does not match User ID "%2$s". 2632-330 At least one Communication Group in the peer domain must have UseForNodeMembership set to 1. 
2632-331 Interfaces cannot be removed from communication group "%1$s" because it is the only one with UseForNodeMembership set to 1. 2632-332 Cluster-Aware AIX is not supported on this platform. 2632-333 The request to bring the domain "%1$s" online cannot be completed because the specified time limit has been exceeded. 2632-334 The value %1$d is not valid for the IBM.PeerNode CriticalMode class attribute. 2632-335 The command specified by the "NotifyQuorumChangedCommand" attribute value does not exist or is not executable. 2632-336 The value specified for the "NamePolicy" attribute is not valid. 2632-337 The Name field can be modified only if NamePolicy is 0. 2632-338 The specified NamePolicy cannot be supported because the version of RSCT on a node is earlier than "%1$s". 2632-339 HostName values should not be specified for nodes in a CAA cluster. 2632-340 In a CAA cluster, NamePolicy must not be zero at define time (zero means the host name and node name are not synchronized). It can be changed later via chrsrc -c IBM.PeerDomain NamePolicy=<value>. 2632-341 A mix of NodeName and HostName values is not supported because the active version of RPD is earlier than "%1$s". 2632-342 The target IPv6 addresses or node names are not reachable due to an IPv6 communication failure. 2632-343 A peer domain with security mode "nist_sp800_131a" cannot be created because the RSCT version on the node is earlier than 3.1.6.0. 2632-344 A peer domain with the compliance mode '%1$s' cannot be created because the node '%2$s' does not have a compliant host key. 2632-345 A NIST-compliant peer domain cannot be created because the security API returned an error and the current security mode of the node cannot be determined. 2632-346 Security mode "nist_sp800_131a" is specified for node addition but the current peer domain is not in "nist_sp800_131a" mode, so the node cannot be added. 2632-347 The node cannot be added to the peer domain, which is in security mode "nist_sp800_131a", because the target node does not have a "nist_sp800_131a" compliant HBA key. 
2632-348 The node cannot be added because the target node is not in "nist_sp800_131a" compliance mode. 2632-349 The NIST security mode option is not supported because the active version of RPD is earlier than "%1$s". 2632-350 The cluster security mode option is not valid in a CAA cluster. 2632-351 Invalid security mode specified. 2632-352 The CSSKType specified, '%1$s', is not a "nist_sp800_131a" compliant CSSKType. 2632-353 The node online request cannot be completed because LiveUpdateCoordination is in progress. 2632-354 Another LKU CHECK phase cannot be initiated because LKU is already in progress. 2632-355 The LKU operation cannot be initiated on the current node because a critical resource is active on it. 2632-356 stoprpnode in CAA is supported only if LKU is in progress for that node.2632-357 The value of the "LiveUpdateOptions" attribute is not valid. 2632-358 The value of the "LiveUpdateOptions" attribute "OfflineMethod" keyword must be between 0 and %1$d. 2632-359 The operation cannot be performed because it either would result in the domain losing operational quorum, or could result in the domain losing quorum and safe checking has been enabled. 2632-360 The TBPriority of the node cannot be updated if the active tie breaker of the cluster is Operator, FAIL, or SUCCESS. 2632-361 The TBPriority of the node cannot be updated if the active tie breaker of the cluster is non-PR (not persistent across reboot) and the allow time gap is not specified. 2632-362 The TBmove protocol failed. The TBPriority of the target nodes cannot be updated.2632-363 The TBmove protocol was rejected by the current TBGL, so the node join will be rejected.2632-364 The tie breaker acquisition attempt by the joiner node failed, so the node join will be rejected.2632-365 The action cannot be performed because the sub-cluster is already operational or has quorum.2632-366 The HBA key type (%1$s) is not supported. 
2632-367 Syntax error at character %1$d in the "MaintenanceModeConfig" attribute value specified.2632-368 The specified value "%1$d" for "CritDaemonRestartGracePeriod" attribute is not valid.The Communication Group resource class is used to define and control how liveness (heartbeating) checks are performed between the communication resources within the domain.Communication GroupEach time a resource is created either explicitly or implicitly, this dynamic attribute will be asserted.Resource DefinedBasic (Group 0)Internal (Group 255)Each time a resource is deleted either explicitly or implicitly, this dynamic attribute will be asserted.Resource UndefinedAn event is generated each time a resource is deleted.An event is generated each time a new resource is created or discovered.Whenever a persistent attribute of the Communication Group resource class changes, this dynamic attribute will be asserted.Configuration ChangedAn event will be generated when the value of a persistent class attribute changes.Identifies which of the defined class attributes and actions apply to this version of the resource class.VarietyThe dynamic attribute is asserted whenever one or more persistent attribute values change.Configuration ChangedNo configuration is changed.One or more persistent attribute values are changed.Basic (Group 0)Internal (Group 255)The name of the communication group.NameAn internally assigned handle that uniquely identifies a communication group.Resource HandleIdentifies which of the defined resource attributes and actions apply to the resource.VarietyIdentifies the number of missed heartbeats that constitutes a failure.SensitivityThe number of seconds between heartbeats.PeriodIndicates whether broadcast should be used if it is supported by the underlying media.Use BroadcastFalseTrueIndicates whether the source routing should be used.Use Source RoutingFalseTrueA positive integer starting at 1 which is the highest priority.PriorityThe pathname to the Network Interface 
Module (NIM) that supports the type of adapters in the communication group.NIM PathThe parameters that are passed to the NIM when starting it. If not provided, the parameter values predefined by HATS will be used.NIM ParametersThe type of the media over which the communication group flows.Media TypeUserDefinedIPDiskThe number of milliseconds between heartbeats.PeriodMilliSecThe value of the grace period when heartbeats are no longer received, in milliseconds.PingGracePeriodMilliSecUse this communication group for node membership calculationUseForNodeMembershipNot used for node membership calculationUsed for node membership calculationDefault value for UseForNodeMembershipDefaultUseForNodeMembershipRoCEMultiNodeDiskThe Network Interface resource class contains fixed resources in which each resource corresponds to an IP network interface. Each node may have one or more network interfaces, and one or more IP addresses may be assigned to a network interface.Network InterfaceEach time a resource is created either explicitly or implicitly, this dynamic attribute will be asserted.Resource DefinedBasic (Group 0)Internal (Group 255)Each time a resource is deleted either explicitly or implicitly, this dynamic attribute will be asserted.Resource UndefinedAn event is generated each time a resource is deleted.An event is generated each time a new resource is created or discovered.Whenever a persistent attribute of the resource class changes, this dynamic attribute will be asserted.Configuration ChangedAn event will be generated when the value of a persistent class attribute changes.Identifies which of the defined class attributes and actions apply to this version of the resource class.VarietyRepresents the current state of the network interface.Operational StateUnknownOnlineOfflineFailed OfflineStuck OnlinePending OnlinePending OfflineMixedAn event will be generated when the network interface goes down.A rearm event will be generated when the network interface goes up.Basic
(Group 0)Internal (Group 255)This attribute is asserted whenever one or more persistent attribute values of a network interface change.Configuration ChangedNo configuration is changed.One or more persistent attribute values are changed.An internally assigned handle that uniquely identifies a NetworkInterface.Resource HandleThe name of the network interface, e.g. eth0 on Linux, en0 on AIX.NameIdentifies which of the defined attributes and actions apply to the resource.VarietyA unique identifier for the node.Node IdentifiersIdentifies the network device that hosts the network interface.Device NameIdentifies the base IP address for the network interface.IP AddressIdentifies the base subnet mask for the network interface.Subnet MaskIdentifies a network segment to which the network interface belongs. This is derived from IPAddress and SubnetMask.SubnetIdentifies the name of the communication group with which the network interface is associated.Communication GroupSpecifies whether the heartbeat is active: 0 if inactive, 1 if active (default).Heartbeat ActiveInactiveActiveIdentifies all additional addresses that have been assigned to the interface.
Assumes the base address is the first one in the list.AliasThe IP address of the alias.IPAddressThe subnet mask for the alias.SubnetMaskThe subnet of the alias.SubnetThis attribute identifies the destination address for point-to-point connections.Destination AddressThis internal action is used to update the domain configuration when the characteristics of an IP interface change.Update IP ConfigurationThis field defines the type of update to be performed and indirectly the fields to follow.Update IP Config Request CodeThis field defines the type of response and indirectly any fields to follow.Update IP Config Response CodeDevice-specific Adapter SubTypeDeviceSubTypeDevice-specific Switch Adapter Logical IDLogicalIDSwitch Network ID (deprecated)NetworkIDSwitch Network IDNetworkID64Adapter Port IDPortIDGlobal Identifier / Hardware AddressHardware AddressSpecifies the absolute path of the network device that hosts the network interface.Absolute Device Path NameSeconds between collection of IP interface configuration on each node in the peer domain.
Harvest IntervalIndicates whether IP version 6 interfaces are supported for use in display, heartbeating and peer domain management.IPv6 SupportIPv6 interfaces are not supportedIPv6 interfaces are supportedInterface address IP versionIP VersionIP Version 4IP Version 6Interface RoleRolePrimaryNormalDeprecatedUsed to set and query the operational characteristics of the RSCT subsystem.RSCT ParametersA change in any RSCT attribute will cause a ConfigChanged event to be generated.Configuration ChangedBasic (Group 0)Internal (Group 255)An event will be generated when the value of a persistent class attribute changes.Identifies which of the defined class attributes and actions apply to this version of the resource class.VarietyIdentifies the maximum number of lines that can be written to the log file by the TS daemon.Topology Services log file size.Indicates if the TS daemon should run with a fixed priority to avoid resource starvation and, if so, the priority it should run with.Topology Services Fixed PriorityValue 0 - do not use fixed priorityValue -1 - do not use fixed priorityIdentifies which regions of the topology services daemon are pinned in memory.Topology Services Pinned RegionsValue 0 implies 0x0000 - pin no regionValue 1 implies 0x0001 - pin TEXT regionsValue 2 implies 0x0002 - pin DATA regionsValue 3 implies 0x0003 - pin both TEXT and DATA regionsValue 4 implies 0x0004 - pin STACK regionsValue 5 implies 0x0005 - pin both TEXT and STACK regionsValue 6 implies 0x0006 - pin both DATA and STACK regionsValue 7 implies 0x0007 - pin all TEXT, DATA and STACK regionsIdentifies the maximum number of lines that can be written to the log file by the Group Services daemon.Group Services Log SizeIndicates the Group Services maximum directory size in kilobytes.Group Services maximum directory sizeThe time in seconds since the epoch when the cluster shared secret key was last updated.Cluster Shared Secret Key Last Update TimeAn event will be generated when the time of last update of the
cluster shared secret key matches an expressionIdentifies the type of the cluster shared secret key.CSSKTYPE_None: use no cluster shared secret keyCSSKTYPE_DES_MD5: use a DES cluster shared secret key with MD5 digestCSSKTYPE_3DES_MD5: use a 3DES cluster shared secret key with MD5 digestCSSKTYPE_AES256_MD5: use an AES256 cluster shared secret key with MD5 digestThe refresh interval for the cluster shared secret key, in seconds.Cluster shared secret key refresh intervalCluster Shared Secret Key TypeThe domains the node is defined to.Shared Resource DomainIdentifies the names of the nodes to be defined to the domain.Node NamesIdentifies the node number to be assigned to each node.Node NumbersThis field is used to specify the criteria for determining the latest configuration to use for bringing the domain online. Two options are provided: Quorum and All Nodes. Quorum is the default option.Online CriteriaThis field is used to specify a timeout value in seconds that determines how long to spend locating accessible nodes in the domain and determining the latest configuration based on the online criteria. The timeout value is used only if the online criteria is not met. The default timeout value is 120 seconds.TimeoutQuorum Option: This is the default option. It requires that a quorum of nodes be accessible to determine the latest configuration.All Nodes Option: All the nodes defined in the domain must be contacted to locate the latest configuration in the domain.
This option is useful if quorum has been overridden and it is not certain which node has the latest configuration.Description of Force Flag in Undefine option for resource class PeerDomain.ForceDo Not ForceForceDescription of in resource class PeerDomain.Each time a resource is created either explicitly or implicitly, this dynamic attribute will be asserted.Resource DefinedBasic (Group 0)Internal (Group 255)Basic (Group 0)Internal (Group 255)An event is generated each time a new resource is created or discovered.Each time a resource is deleted either explicitly or implicitly, this dynamic attribute will be asserted.Resource UndefinedAn event is generated each time a resource is deleted.Whenever a persistent attribute of the resource class changes, this dynamic attribute will be asserted.Configuration ChangedIdentifies which of the defined class attributes and actions apply to this version of the resource class.VarietyIdentifies the name of the domain the node is currently online in. A null string if the node is not presently online in any domain.Online DomainThe current state of the resource.Operational StateUnknownOnlineOfflineFailed OfflineStuck OnlinePending OnlinePending OfflineMixedAn event will be generated when the domain goes offline.A rearm event will be generated when the domain comes back online.The dynamic attribute is asserted whenever one or more persistent attribute values change.Configuration ChangedNo configuration is changed.One or more persistent attribute values are changed.An event will be generated when the value of a persistent class attribute changes.An event will be generated when the value of a persistent attribute changes.The name of the domain.
A name is required at the domain definition time, usually provided by a system administrator.Domain NameAn internally assigned handle that uniquely identifies this resource.Resource HandleIdentifies which of the defined resource attributes and actions apply to the resource.VarietyIdentifies the version of the RSCT software that is active in the domain. Some nodes may have a later version installed but functionally they will operate at the minimum level existing on any defined nodes in the domain. This value is only updated once all nodes in the domain have the later version installed.RSCT Active VersionIndicates whether there are different versions of the RSCT software installed on the nodes of the domain. If True (1), then there are at least two different versions on the nodes of the domain. If False (0), then all nodes are at the same level. Refer to the IBM.PeerNode class for the RSCT version installed on each node of the domain.Mixed VersionsFalseTrueIdentifies the UDP port number that will be used by Topology Services for daemon to daemon communications within the domain.Topology Services Port NumberIdentifies the UDP port number that will be used by Group Services for daemon to daemon communications within the domain.Group Services Port NumberIdentifies the UDP port number that will be used by RMC for daemon to daemon communications within the domain.RMC Port Number657The list of resource classes in the domain and their minimum level of version numbers.Resource ClassesThe name of the resource class.Class nameThe id of the resource class.Class IdThe minimum level of the version of the resource class in the domain.VersionDescription of PropagateConfig in resource class PeerDomain.Propagate ConfigurationDescription of PropagateConfig in resource class PeerDomain.Propagate Configuration Request CodeDescription of PropagateConfig in resource class PeerDomain.Propagate Configuration Response CodeAll the nodes in a domain share a unique CSSK (Domain Shared Secret 
Key). This field allows a system administrator to choose a key type for the CSSK that best suits the applications in terms of the degree of data protection, overhead, and performance. The longer the key, the stronger the encryption algorithm; the stronger the algorithm, the slower the performance.Domain Shared Secret Key TypeSEC_C_KEYTYPE_DES_MD5: The default key type to use. The digest is calculated using the MD5 hash and the encryption is performed using DES. The length of the signature is 16 bytes. Recommended if a high degree of data protection is not required and good performance with less data overhead is desired.SEC_C_KEYTYPE_3DES_MD5: The digest is MD5 and the encryption is triple DES. The length of the signature is 16 bytes. Compared with the default SEC_C_KEYTYPE_DES_MD5, this type provides added data protection and the same data overhead, but slower performance.SEC_C_KEYTYPE_AES256_SHA: The digest is SHA and the encryption is AES 256 bit. The length of the signature is 24 bytes. This type provides the most data protection but lower performance and more data overhead.Indicates the interval at which ConfigRM will refresh the CSSK in the domain.
Default is 30 days.Domain Shared Secret Key Refresh IntervalIdentifies the user ID that will be granted read and write authorization to all resource classes on all nodes.Administrator IDIndicates the time when the CSSK in the domain was last updated, in seconds since the Unix epoch.Domain Shared Secret Key Last UpdateAn event will be generated when the time since the last key update is greater than 30 days.This internal class action is used to pull the latest configuration from a remote node.Get Configuration DeltasDescription of the DomainName field in GetConfigDeltas resource class action.Domain NameDescription of the ResourceHandle field in GetConfigDeltas resource class action.Resource HandleDescription of the VersionInfo field in GetConfigDeltas resource class action.Version InformationDescription of ConfigData field in output of GetConfigDeltas action.Configuration DataDescription of InformConfigDeltas in resource class PeerDomain.Inform Configuration DeltasDescription of the IPAddress field in resource class PeerDomain.IP AddressDescription of the Result field in resource class PeerDomain.ResultAn internal class action used to harvest information for creating a domainHarvestDescription of Harvest in resource class PeerDomain.HarvestDescription of Harvest in resource class PeerDomain.HarvestThis internal class action is used to plant a new domain configuration on a nodeInitiate ConfigurationDescription of DomainName field in input SD for InitConfig action.Initialize ConfigurationDescription of ResourceHandle field in input SD for InitConfig action.Resource HandleDescription of NodeNumber field in input SD for InitConfig action.Node NumberDescription of RSCTVersion field in input SD for InitConfig action.RSCT Active VersionDescription of MixedVersions field in input SD for InitConfig action.Mixed VersionsDescription of TSPort field in input SD for InitConfig action.TS PortDescription of GSPort field in input SD for InitConfig action.GS PortDescription of RMC Port
field in input SD for InitConfig action.RMC PortDescription of Configuration Data field in input SD for InitConfig action.Configuration DataDescription of Result field in output SD for InitConfig action.ResultDescription of MsgId field in output SD for InitConfig action.Message IdAn internal class action used to forcibly remove a domain configuration from a node.RemoveConfigDescription of DomainName field in input SD for RemoveConfig action.Domain NameDescription of ResourceHandle field in input SD for RemoveConfig action.Resource HandleDescription of Result field in output SD for RemoveConfig action.ResultDescription of MsgId field in output SD for RemoveConfig action.Message IdThis internal class action is used to determine whether there is connectivity between the subnets of two communication groups. Communication groups cannot be combined if there is no connectivity between them.Test ConnectivityThis field of the action input contains a list of IP addresses in the source communication group that is being tested. The destination address in the ping response must match one of these addresses.Source IP AddressesThis field and all subsequent fields of the action input contain a list of IP addresses within a subset of the target communication group.Destination IP AddressesThis field in the action response indicates whether there is direct connectivity between the source subnet and all the subnets of the target CG.Connect StatusThis private action is intended for internal use and is used to bring a node online in the specified domain. Start NodeThis input parameter for the StartNode action specifies the name of the domain that the receiving node is to be brought online into. Domain NameThis input parameter for the StartNode action specifies the resource handle of the domain that the receiving node is to be brought online into. Resource HandleThis output parameter from the StartNode action indicates a non-error result from the target node.
Result CodeSpecifies that the define resource operation should continue even if errors are detected, such as when some specified nodes are not network accessible. Continue If ErrorThis attribute reflects whether the associated domain has operational quorum. Operational quorum is achieved whenever a majority of the nodes in the domain are active or exactly half of the nodes are active and a "tie breaker" is held by the domain. The administrator may also disable quorum determination, in which case this attribute will always indicate that the domain has quorum. Operational Quorum StateHAS_QUORUMPENDING_QUORUMNO_QUORUMAn event will be generated whenever the operational quorum state changes.This action is used by a software component, typically a resource manager, to inform the quorum manager on the local node that there are critical resources active or no longer active. Critical resources are those that are shared with another node, such as a twin-tailed disk, and cannot be accessed by two sub-domains without risking data corruption. Multiple sub-domains may result from a failure or network partition. Special actions such as halting the operating system are taken when critical resources are active and the sub-domain loses quorum. Inform Critical ResourcesThis input parameter indicates an entity that owns critical resources. OwnerThis input parameter indicates whether the associated owner has critical resources that are active (1), has no critical resources that are active (0), or is about to activate critical resources (2).Critical Resources PresentThis output parameter indicates whether the critical resource can safely be brought online. A value of 0 indicates that the action completed successfully and it is safe to bring the critical resource(s) online. SafeThis action is used to complete the migration to a new version of RSCT in the peer domain.
As a new version of RSCT is installed on each node of the peer domain, the peer domain will continue to operate at the current active version until this action is invoked with a majority of nodes online in the peer domain. This action will coordinate the transition to a new active version which will be reflected in the RSCTActiveVersion dynamic attribute. Complete MigrationThis parameter is used to pass flags that control the operation of the CompleteMigration action. At this time, no options are defined and this value must be set to zero. OptionsThis action is used by an operator or administrator to resolve a tie situation when the current QuorumTieBreaker for the domain is set to "Operator". An error will be generated if this action is used when QuorumTieBreaker is not set to "Operator" or the node is in IW mode or the domain is not in a tie situation which occurs when exactly half the nodes of the domain are online. The input parameter to this action conveys whether the receiving sub-domain is granted ownership of the tie-breaker or not. The OpQuorumState of the sub-domain that is granted ownership will change to HAS_QUORUM from PENDING_QUORUM. If the sub-domain is not granted ownership, its OpQuorumState will change to NO_QUORUM from PENDING_QUORUM which may in turn cause the CritRsrcProtMethod to be invoked on any nodes that have critical resources active. Exactly one subdomain can be granted ownership of the tie-breaker. Resolve Operational Quorum TieThis input parameter conveys whether the sub-domain should be granted ownership of the tie-breaker or not. If ownership is granted then the OpQuorumState of the sub-domain will change to "HAS_QUORUM". If ownership is denied then OpQuorumState will change to "NO_QUORUM". OwnershipThis input parameter conveys the event sequence number associated with this class action. EventSeqNumThis output parameter is not currently used and will always have a value of zero. 
ResultUse Local Option: The domain configuration on the node on which the command is run is used to start the cluster. This is useful when changes have been made to the domain definition while a majority of nodes were not online. This internal action is used during domain merge to retrieve the state of the other sub-domain. The returned information is then used to determine which sub-domain should survive.Get Domain StateThis input parameter is used to control the operation of the action. The only valid value for this parameter at this time is 0. OptionsThis output parameter conveys the result of the operation and the set of parameters that follow. The only value defined for this parameter is 0. ResultThis parameter is used to select whether all resource managers are to perform the completeMigration action. MigrateAllThis parameter is used to specify a particular new active version. At this time, ConfigRM does not support specification of a particular new active version, and this parameter must be empty. NewActiveVersionThis parameter specifies a timeout for the completeMigration action. TimeOutThis parameter specifies the origin of the completeMigration action. OriginThis output parameter indicates whether the action was successful or not. A value of 0 indicates success and a non-zero value indicates that the migration could not be completed. ResultThis field defines the type of InitConfig Action to be performed.InitConfig Request CodeThis class action is used to synchronize the domain configuration between nodesSynchronize ConfigurationThe active domain name that is being synchronized.Domain NameSpecifies the resource handle of the domain name that is being synchronized.Resource HandleIdentifies the set of target node(s) being synchronized.Target NodesSpecifies that the SyncDomain operation should continue even if errors are detected, such as when some specified nodes are not network accessible.
Continue If ErrorThis field is used to control whether SyncDomain should take place even if a majority of nodes cannot be contacted. ForceDescription of Result field in output SD for SyncConfig action.ResultDescription of MsgId field in output SD for SyncConfig action.Message IdNormalQuickOverrideSANFSLists the quorum types that are available for use. It is a list of two elements: Type Name and Value. Available QuorumTypesThe name of the quorum type can be one of the following values: 0) "Normal": Quorum is determined based on the majority of the nodes. 1) "Quick": A node can come online in a peer domain without a majority of the nodes. 2) "Override": Quorum will be controlled via the SetQuorumState action (OS400 only). 3) "SANFS": SANFS only Type NameThe numeric value for the quorum type name. Type ValueQuorumTypeThis field is used to specify the quorum type that determines the peer domain quorum. The default QuorumType is 0 ("Normal").This field is used to specify the quorum group name that determines the peer domain quorum. This is valid only with "SANFS" QuorumType.QuorumGroupNameThis field is used to determine the startup quorum.StartQuorumTypeThis action is used by an external entity that manages the quorum. The OpQuorumState of the sub-domain that is granted ownership will change to HAS_QUORUM from PENDING_QUORUM. If the sub-domain is not granted ownership, its OpQuorumState will change to NO_QUORUM from PENDING_QUORUM, which may in turn cause the CritRsrcProtMethod to be invoked on any nodes that have critical resources active. Exactly one subdomain can be granted ownership of the tie-breaker. Set Quorum StateThis input parameter conveys the sub-domain OpQuorumState. OpQuorumThis output parameter is not currently used and will always have a value of zero. ResultOptional thread fanout for parallel operations.FanoutOptional thread fanout for parallel operations.FanoutCluster shared secret key type.
One of: CSSKTYPE_None, CSSKTYPE_DES_MD5, CSSKTYPE_3DES_MD5, CSSKTYPE_AES256_SHA.CSSKTypeCluster shared secret key refresh interval (in seconds). 0 for none or 1 second to 30 days. Default is 1 day (86400 seconds).CSSK Refresh IntervalThis action updates the cluster shared secret key.Update KeyThe result of the UpdateKey action.ResultOptions for the UpdateKey action.OptionsThis action selects a cluster shared secret key value.Select CSSKSelect CSSK request typeRequest TypeEncrypted proposed key valueEncrypted KeySignature on encrypted proposed key valueSignatureProposed key's versionKey VersionMap of nodes selecting proposed key valueNodeMapResource handle of peer domain being brought onlineResource HandleIP address of originating nodeOrigin IP AddressNode number of originating nodeOrigin Node NumberResponse to select CSSK requestResponse TypeEncrypted response key valueEncrypted KeySignature on encrypted response key valueSignatureResponse key's versionKey VersionMap of nodes selecting response key valueNodeMapArray of nodes that are quorum nodesQuorum Node ArrayArray of nodes that are preferred Group Services group leadersPreferred Group Services Group Leader ArrayEnable use of IPv6 interfaces for display, heartbeating and peer domain managementEnable IPv6 SupportThis action will verify that the peer domain has operational quorumVerify Strong Quorum StateReturn code of VerifyStrongQuorum action executionResult CodeThis resource action enables an authorized user to invoke a command.RunCommandThe Command handle of the command to be runCommandHandleUser Name used to run the commandUserNameCommand options used to run the commandCommandOptionsCommandNameFull path of the command to be runCommand arguments of the command to be runCommandArgumentsCommand run timeout of the command to be runRunTimeoutEnvironment parameters used to run the commandEnvironmentParametersStandard output of the run commandStdoutStandard error of the run commandStderrExit code of the run
commandExitCodeStatus of the run commandStatusThis resource action enables an authorized user to run an action on a command.ActOnCommandCommand Handle of the submitted commandCommandHandleAction IDActionIDStatus of the run commandStatusClient Platform IDPlatform IDTarget Platform IDPlatform IDCanonical Exit CodeCanonical Exit CodeVersionClient RCE VersionVersionServer RCE VersionVersionClient RCE VersionVersionServer RCE VersionTarget Node NameNode NameTarget Node NameNode NameVersionServer RCE versionDomain TypeDomain TypeTraditionalCAADisks to be used for CAA cluster definitionCAA Cluster Disk(s)Disks to be used for CAA repository definitionCAA Repository DiskQuorumLessNamePolicy provided at RPD/CAA cluster definitionNamePolicyThis action activates Cluster-Aware AIX for the currently running peer domain.MigrateToCAARepository disks for Cluster-Aware AIXReposDisksShared disks that will be used in Cluster-Aware AIXSharedDisksQuorumType that will be used after migrationQuorumTypeNamePolicy that will be used after migrationNamePolicyThis action forcibly reserves the tie breaker (TB) and provides operational quorum to a subdomain.ForceReservationAction TypeActionReturn code of ForceReservation action executionResultThe PeerNode resource class contains fixed resources, one per node within the domain. A node is defined to be an instance of an operating system and is not necessarily tied to hardware boundaries. The set of nodes comprising the domain will be managed consistently across all nodes so that they all see the same class attributes and node resource attributes.PeerNodeThis field is used to control whether normal behavior should be overridden.
The offline operation can be rejected by other subsystems such as the Recovery Manager, unless the force option is specified.ForceDo Not ForceForceEach time a resource is created either explicitly or implicitly, this dynamic attribute will be asserted.ResourceDefinedBasic (Group 0)Internal (Group 255)An event is generated each time a new resource is created or discovered.Each time a resource is deleted either explicitly or implicitly, this dynamic attribute will be asserted.ResourceUndefinedAn event is generated each time a resource is deleted.Whenever a persistent attribute of the resource class changes, this dynamic attribute will be asserted.Configuration ChangedAn event will be generated when the value of a persistent attribute changes.VarietyIdentifies which of the defined class attributes and actions apply to this version of the resource class.Maximum Node Number AllocatedIdentifies the maximum node number that has been assigned to a node since the domain was created. Last Node Number AssignedIdentifies the last node number that was assigned to a node. The current state of the resource.Operational StateUnknownOnlineOfflineFailed OfflineStuck OnlinePending OnlinePending OfflineMixedAn event will be generated when the node goes offline in the online domain.A rearm event will be generated when the node goes online in the online domain.Basic (Group 0)Internal (Group 255)The dynamic attribute is asserted whenever one or more persistent attribute values change.Configuration ChangedNo configuration is changed.One or more persistent attribute values are changed.The name of the node. It may be specified on define as either an IP address or a DNS name.
If a DNS name is specified, it must be resolvable to an IP address.Node NameAn internally assigned handle that uniquely identifies this resource.Resource HandleIdentifies which of the defined resource attributes and actions apply to the resource.VarietyThis array contains one element that identifies the node on which the resource exists. The maximum value is 2047 due to a restriction of HATS. Node 0 is not used because it may have special meaning for some subsystems that may migrate from SP.Node ListA unique identifier for the node.Node IdentifiersIdentifies the version of the RSCT software that is installed on the node. Some nodes may have a later version installed but functionally they will operate at the minimum level existing on any defined nodes in the domain.RSCT VersionAn array indexed by resource class ID for the version of the classes installed in the node.Class VersionsThe current public key associated with the node, which is used to provide security for remote operations to that node. Public KeyThe value of this attribute reflects the list of names by which the node may be referred to within a Peer Domain through the NodeNameList attribute of any resource class.Node NamesSpecifies that the define resource operation should continue even if errors are detected, such as when some specified nodes are not network accessible. Continue If ErrorIdentifies the names of the nodes to be defined to the domain.Node NamesIdentifies the node number to be assigned to each node.Node NumbersThis action is used internally by ConfigRM to update the RSCT version of a node. Update RSCT VersionThis input parameter identifies the node whose version is to be updated. Node IdThis input parameter contains the new version of the node identified by the NodeId parameter. VersionThis attribute identifies the latest RSCT version that has been committed in the domain configuration.
There is a period of time when the committed version is being activated, which is reflected by the ActiveVersionChanging attribute. When the version has been activated, the RSCTActiveVersion attribute of the PeerDomain class will match the value of this attribute. Committed RSCT VersionThis attribute indicates when a transition to a new RSCT version is in process. Active Version ChangingThis attribute determines whether operational quorum for the domain should be determined from the state of the nodes in the domain or whether it should be assumed to always have quorum. Operational Quorum OverrideDetermine QuorumForce QuorumThis attribute determines how critical resources are guaranteed to be stopped or protected when quorum is lost in the domain. In order for a sub-domain resulting from a network partition that still has quorum to take control of shared resources, those resources must be terminated in any sub-domain without quorum, or else data corruption could result. The methods typically used cannot allow a graceful shutdown of the resource since this could take an arbitrary amount of time and the surviving domain could never know when it is safe to assume control of the shared resources. Critical Resource Protection MethodInheritHard reset & reboot systemHalt systemSync, hard reset & reboot systemSync, halt systemNoneLeave & Rejoin DomainThis attribute determines which tie breaker from the IBM.TieBreaker resource class is active in the domain. Operational Quorum Tie BreakerThis attribute determines how critical resources are guaranteed to be stopped or protected when quorum is lost in the domain. In order for a sub-domain resulting from a network partition that still has quorum to take control of shared resources, those resources must be terminated in any sub-domain without quorum, or else data corruption could result.
The methods typically used cannot allow a graceful shutdown of the resource, since this could take an arbitrary amount of time and the surviving domain could never know when it is safe to assume control of the shared resources. This attribute is specific to a node but can be inherited from the global value for the domain. Critical Resource Protection MethodThis attribute indicates whether or not critical resources are active on the node. Critical Resources ActiveAn event will be generated whenever the state of critical resources on the node changes. This option indicates whether the operation should continue if a majority of nodes are not online in the domain. ForceDo Not ForceForceQuorumTypeThis attribute determines how the following quorums should be determined: startup quorum, configuration change quorum, and operational quorum. NormalQuickOverrideSANFSThis field is used to specify the quorum group name that determines the peer domain quorum. This is valid only with the "SANFS" QuorumType.QuorumGroupNameThis input parameter contains the new public key of the node identified by the NodeId parameter. PublicKeyThread fanout for parallel operationsFanoutThis action is used internally by ConfigRM to start (bring online) multiple nodes. Start multiple nodesThis input parameter identifies the set of nodes to be started. List of nodesThis action is used internally by ConfigRM to stop (bring offline) multiple nodes. Stop multiple nodesThis input parameter identifies the set of nodes to be stopped. List of nodesConsult resource managers before bringing nodes offline if this value is 0; force them offline without such consultation if this value is 1. 'Forced' flagThis action is used internally by ConfigRM to delete multiple nodes. Delete multiple nodesThis input parameter identifies the set of nodes to be deleted. List of nodesConsult resource managers before removing nodes if this value is 0; force them to be removed without such consultation if this value is 1. 
'Forced' flagExitTurn OffWhether this node is a quorum nodeQuorum NodeWhether this node is preferred to be the Group Services group leaderPreferred Group Services Group LeaderName of the active fence group. Operational Fence GroupUsability state of the node. Operational Usability StateNode is not usableNode is usableNode is pending a state changeAn event is generated each time the attribute value changes. Action to set the IBM.PeerNode OpUsabilityState dynamic attribute value.SetOpUsabilityStateValue to set an IBM.PeerNode resource OpUsabilityState attributeState ValueNode Cleanup CommandNode fence cleanup commandNode Cleanup CriteriaNode fence cleanup criteriaCAA node ID (64-bit, deprecated)CAA node IDNode UUIDNode UUIDQuorumLessMaximum startup delay for the node to come online when QuorumType is configured as QuorumLessQuorumLessStartupTimeoutCritical resource protection mode.Critical ModeForce disable. Critical resource protection will be disabled.Normal. Critical resource protection will be based on active resources and the protection method.Force enable. All nodes will be treated as if they have critical resources active.Path of a command to be invoked when OpQuorumState changes.Notify Quorum Changed CommandControls whether the Name field is updated when the HostName changes.Name PolicyName field is not updated with HostName.Name field will be changed to HostName whenever HostName is changed.Hostname of the node. It may be specified on define as either an IP address or a DNS-resolvable node name. HostnameHostnames of nodes defined to the domainHostNamesSecurity Mode of Peer DomainSecurityModeThis input parameter contains the new host name of the node identified by the NodeId parameter. HostNameThis input parameter contains the new name of the node identified by the NodeId parameter. NameThis input parameter contains the new NodeNames of the node identified by the NodeId parameter. 
NodeNamesOptions for live update operations.Live Update OptionsThis attribute defines the maintenance mode configuration of the cluster that may be applied to peer nodes. MaintenanceModeConfigThis input parameter manually controls whether the maintenance mode will be applied to the node(s) being taken offline. If the value is 1, then the maintenance mode configuration will be applied and the node(s) will be put into maintenance state. If the value is 0, then the maintenance mode configuration will not be applied. The parameter value overrides the maintenance mode configuration keyword "EnterMaintenanceOnStop".'MaintenanceMode' FlagThis attribute reflects whether the node is in maintenance mode or not. MaintenanceStateThe node is not in maintenance mode.The node is in maintenance mode.Action to change the IBM.PeerNode MaintenanceState dynamic attribute value.ChangeMaintenanceStateNew value to be set for the IBM.PeerNode resource MaintenanceState attributeNew maintenance state valueThis attribute determines how much grace period time is allowed for the critical daemon to reconnect to HAGS after restart. If the value is -1 (the default value), then the predefined timeout or behavior determined by the code is used. If the value is 0, no delay is allowed. If the value is > 0, RMC/ConfigRM will be allowed to reconnect within this grace period after disconnection.Critical daemon restart grace period class valueThis attribute determines how much grace period time is allowed for the critical daemon to reconnect to HAGS after restart, and this value overrides the class attribute "CritDaemonRestartGracePeriod". If the value is -1 (the default value), then the grace period of the class attribute "CritDaemonRestartGracePeriod" is used. If the value is 0, no delay is allowed. 
If the value is > 0, this value overrides the class attribute "CritDaemonRestartGracePeriod"; RMC/ConfigRM will be allowed to reconnect within this grace period after disconnection.Critical daemon restart grace period resource value startrpdomain [-h] [-f] [-T] [-V] domain stoprpdomain [-h] [-f] [-T] [-V] domain 2632-900 %1$s: The specified domain "%2$s" does not exist. 2632-901 %1$s: The following error was detected when issuing the RMC API function %2$s: %3$s 2632-902 %1$s: The following error was returned from the RMC subsystem: %2$s 2632-903 %1$s: Required memory could not be allocated. The tie breaker resources configured for the peer domain. Tie BreakerEach time a resource is created either explicitly or implicitly, this dynamic attribute will be asserted. Resource DefinedBasic (Group 0)An event is generated each time a new resource is created or discovered. Each time a resource is deleted either explicitly or implicitly, this dynamic attribute will be asserted. Resource UndefinedAn event is generated each time a resource is deleted. Whenever a persistent attribute of the resource class changes, this dynamic attribute will be asserted. Configuration ChangedAn event is generated each time a persistent class attribute changes. Identifies which of the defined class attributes and actions apply to this version of the resource class. VarietyLists the types and the paths of the tie breakers that are available for use. It is a list of two elements: Type Name and Path. Available TypesThe name of the tie breaker type with one of the following values: 1) "Operator": This tie breaker asks for a decision from the system operator or administrator. The operator executes this decision by invoking the ResolveOpQuorumTie action. 2) "Fail": This pseudo tie breaker always fails to reserve the tie breaker. 3) "ECKD": The tie breaker assumes that an ECKD-DASD is shared by all nodes of the peer domain. Tie breaker reservation is done by the ECKD reserve command. 
This tie breaker is specific to Linux for zSeries. 4) "SCSI": This tie breaker assumes that a SCSI disk is shared by all nodes of the peer domain. Tie breaker reservation is done by the SCSI reserve or persistent reserve in command. This tie breaker is specific to Linux for xSeries. 5) "Success": This pseudo tie breaker always succeeds in reserving the tie breaker. Type NameThe path to the loadable module for the code that implements that type of tie breaker. Up to this release, the path will always be a NULL string since all existing types are built into ConfigRM. The code is set up to allow additional types of tie breakers to be added without changing ConfigRM in a later release. PathInternal (Group 255)The dynamic attribute is asserted whenever one or more persistent attribute values change. An event will be generated when a persistent attribute changes.Configuration ChangedNo configuration is changed.One or more persistent attribute values are changed.Basic (Group 0)The name of the tie breaker resource, assigned by a system administrator. This is the value used to set the OpQuorumTieBreaker attribute, which activates a tie breaker. NameAn internally assigned handle that uniquely identifies this resource.Resource HandleInternal (Group 255)Identifies which of the defined resource attributes and actions apply to the resource. All existing tie breaker resources have a Variety value of 1. VarietyIdentifies the type of the tie breaker resource. It must be one of the types listed in the AvailableTypes class attribute. TypeTie breaker specific information used to identify the tie breaker device. Tie breakers of the types "Operator", "Fail" and "Success" do not use this attribute and the value is NULL. Device InformationTie breaker specific information used to reprobe for the tie breaker device. Tie breakers of the types "Operator", "Fail" and "Success" do not use this attribute and the value is NULL. Reprobe DataDefines the period to retry if the release of a tie breaker fails. 
Release Retry PeriodSome tie breakers need to be re-reserved periodically to hold the reservation. This attribute defines how often to retry. If the associated tie breaker type does not support heartbeating, this value will be restricted to 0. Heartbeat PeriodThe amount of time to wait after a tie situation has been determined until an attempt is made to reserve the tie breaker. This applies to all tie breaker types. Pre-Reserve Wait TimeThe amount of time to wait after the ownership of a tie breaker has been determined until the OpQuorumState is updated to reflect this change. Post-Reserve Wait TimeTie breaker specific information on a per node basis. This attribute is an array in which each element corresponds to a node and contains two strings: (1) a node name and (2) a string of information that the tie breaker understands. Tie breakers of the types "Operator", "Fail" and "Success" do not use this attribute and the value is NULL. Node InformationThe name of the node. Node NameA piece of information that the tie breaker understands. InformationIBM.ConfigRM daemon has started. The RSCT Configuration Manager daemon (IBM.ConfigRMd) has been started.IBM.ConfigRM daemon has been stopped.The RSCT Configuration Manager daemon (IBM.ConfigRMd) has been stopped.The stopsrc -s IBM.ConfigRM command has been executed.Confirm that the daemon should be stopped. Normally, this daemon should not be stopped explicitly by the user.NonePeer Domain NameError CodeMessage Catalog NameMessage SetMessage IdentifierMessage InsertsDomain Configuration VersionNode Configuration VersionThe operational quorum state of the active peer domain has changed to NO_QUORUM. 
This indicates that recovery of cluster resources can no longer occur and that the node may be rebooted or halted in order to ensure that critical resources are released so that they can be recovered by another sub-domain that may have operational quorum.The operational quorum state of the active peer domain has changed to PENDING_QUORUM. This state usually indicates that exactly half of the nodes that are defined in the peer domain are online. In this state cluster resources cannot be recovered, although none will be stopped explicitly.The operational quorum state of the active peer domain has changed to HAS_QUORUM. In this state, cluster resources may be recovered and controlled as needed by management applications.One or more nodes in the active peer domain have failed.One or more nodes in the active peer domain have been taken offline by the user.A network failure has disrupted communication between the cluster nodes.One or more nodes have come online in the peer domain.Ensure that more than half of the nodes of the domain are online.Ensure that the network that is used for communication between the nodes is functioning correctly.Ensure that the active tie breaker device is operational, and if it is set to 'Operator' then resolve the tie situation by granting ownership to one of the active sub-domains.The operating system is being rebooted to ensure that critical resources are stopped so that another sub-domain that has operational quorum may recover these resources without causing corruption or conflict.The operating system is being halted to ensure that critical resources are stopped so that another sub-domain that has operational quorum may recover these resources without causing corruption or conflict.The cluster software will be forced to recycle the node through an offline/online transition to recover from an error. 
Note that this will not guarantee that critical cluster resources are stopped and therefore does not prevent corruption or conflict if another sub-domain attempts to recover these resources.Critical resources are active and the active sub-domain does not have operational quorum.After the node finishes rebooting, resolve the problems that caused the operational quorum to be lost.Boot the operating system and resolve any problems that caused the operational quorum to be lost.Manually stop any critical resources so that another sub-domain may recover them.Resolve any problems preventing other nodes of the cluster from being brought online, or resolve any network problems preventing the cluster nodes from communicating.The peer domain configuration manager daemon (IBM.ConfigRMd) is exiting due to the local node's configuration version being different from that of the active domain. The daemon will be restarted automatically and the configuration of the local node will be synchronized with the domain.The domain configuration changed while the node was coming online.A configuration change was applied but could not be committed, so the node will be taken offline and back online. During the online processing the configuration will be synchronized if the problem has been cleared.Insufficient free space in the /var filesystem.Ensure there is sufficient free space in the /var filesystem.The peer domain configuration manager daemon (IBM.ConfigRMd) is exiting due to the Group Services subsystem terminating. The configuration manager daemon will restart automatically, synchronize the node's configuration with the domain and rejoin the domain if possible.The Group Services subsystem detected another sub-domain and is attempting to merge with it.The Group Services subsystem has failed.No action is necessary since recovery should be automatic.The sub-domain containing the local node is being dissolved because another sub-domain has been detected that takes precedence over it. 
Group Services will be ended on each node of the local sub-domain, which will cause the configuration manager daemon (IBM.ConfigRMd) to force the node offline and then bring it back online in the surviving domain.A merge of two sub-domains is probably caused by a network outage being repaired so that the nodes of the two sub-domains can now communicate.No action is necessary since the nodes will be automatically synchronized and brought online in the surviving domain.The node is online in the domain indicated in the detail data.A user ran the 'startrpdomain' or 'startrpnode' commands.The node rebooted while the node was online.The configuration manager recycled the node through an offline/online transition to resynchronize the domain configuration or to recover from some other failure.An error was encountered while the node was being brought online. The configuration manager daemon (IBM.ConfigRMd) will attempt to return the node to an offline state.Failure in a dependent subsystem such as RMC. See the detailed error fields for the specific error.Resolve the problem indicated in the detailed data fields and try bringing the node online via the 'startrpnode' or 'startrpdomain' command.The node is offline.A user ran the 'stoprpdomain' or 'stoprpnode' commands.There was a failure while attempting to bring the node online.If the node is offline due to a failure, attempt to resolve the failure and then run the 'startrpnode' or 'startrpdomain' commands to bring the node online.An error was encountered while the node was being taken offline. The configuration manager daemon (IBM.ConfigRMd) will exit and restart in an attempt to recover from this error.Failure in a dependent subsystem such as RMC. 
See the detailed error fields for the specific error.If the configuration manager daemon (IBM.ConfigRMd) fails to restart after attempting to recover from this error, contact your software service organization.An internal error was encountered in the configuration manager daemon (IBM.ConfigRMd).Failure can occur for various reasons. See the detailed error fields for the specific error.Resolve the problem indicated in the detailed data fields. Try bringing the node online via the 'startrpnode' or 'startrpdomain' command.The peer domain configuration manager daemon (IBM.ConfigRMd) is exiting due to encountering an error in the process of bringing a domain online. The configuration manager daemon will restart automatically, synchronize the node configuration with the domain and rejoin the domain if possible.A problem exists with the Group Services or Topology Services subsystem.A problem exists with the System Resource Controller.No action is necessary since recovery should be automatic.The peer domain configuration manager daemon (IBM.ConfigRMd) is exiting due to encountering an error in the process of bringing a domain online. 
The configuration manager daemon will be stopped.Correct the error situation and restart the IBM.ConfigRM subsystem.The system is not healthy because the root disk is not accessible, and the operating system is being halted or the cluster software will be restarted to ensure that critical resources are stopped so that another sub-domain that has operational quorum may recover these resources without causing corruption or conflict.The root disk is no longer accessible.Check and fix the root disk.The system is unable to fork a new process to execute a critical user-requested command, and the operating system is being halted or the cluster software will be restarted to ensure that critical resources are stopped so that another sub-domain that has operational quorum may recover these resources without causing corruption or conflict.Unable to fork a new process.Check whether the system process table is full.A system service registered to receive quorum changes detected an error while processing a quorum change. The configuration manager daemon (IBM.ConfigRM) will recycle to initiate recovery of cluster operations.A system service detected an error while processing a quorum change.Review the error information for required actions. If the problem recurs or the configuration manager daemon (IBM.ConfigRMd) fails to restart after attempting to recover from this error, contact your software service organization.Fence Error CodeFence Error NameFence Agent TypeNode NameA system service required to perform a critical fence operation is not available on the local node. The configuration manager daemon (IBM.ConfigRM) will recycle to initiate recovery of cluster operations.A system service required for node fencing is unavailable on the local node.Review the error information for required actions. 
If the problem recurs or the configuration manager daemon (IBM.ConfigRMd) fails to restart after attempting to recover from this error, contact your software service organization.An attempt to fence a cluster node failed. The usability state of the node will be set to UNUSABLE.The fence group configured to perform cluster fence operations failed to fence a node.Review the error information for the failure reason and node information.ConfigRM informational messageInformational messageInformational messageNONEDIAGNOSTIC EXPLANATIONConfigRM received a Site Split event notificationNetworks between sites may have been disconnectedNetworks between sites may have been disconnectedCheck the network connectivity between sitesDIAGNOSTIC EXPLANATIONConfigRM received a Site Merge event notificationNetworks between sites may have been reconnectedNetworks between sites may have been reconnectedVerify the network connection between sitesDIAGNOSTIC EXPLANATIONConfigRM received a Subcluster Split event notificationNetworks between subclusters may have been disconnectedNetworks between subclusters may have been disconnectedCheck the network connectivity between clustersDIAGNOSTIC EXPLANATIONConfigRM received a Subcluster Merge event notificationNetworks between subclusters may have been reconnectedNetworks between subclusters may have been reconnectedVerify the network connection between subclustersDIAGNOSTIC EXPLANATIONAn attempt to update the Peer Domain Cluster table was not successful and RMC will remain in IW mode.Failure is probably due to RMC being inoperative or having connection issues.Resolve the problem indicated in the detailed data fields. 
Try bringing the node online again via the 'startrpnode' or 'startrpdomain' command.ConfigRM is extending the VAR FileSystemThe VAR FileSystem does not have free space to create the PeerDomainThe VAR FileSystem does not have free space to create the PeerDomainReview the error information for required actionsThe node ID changed and differs from the PeerDomain Cluster Table (SRCNTBL).A fresh installation of the operating system took place on the node.The node was brought up using an operating system cloned from another system or an alternate disk.The node ID files were changed manually.The RSCT subsystems were reconfigured.The node ID has to be recovered as per the PeerDomain Cluster Table (SRCNTBL).The fence agent resources configured for the peer domain. Fence AgentEach time a resource is created either explicitly or implicitly, this dynamic attribute will be asserted. Resource DefinedBasic (Group 0)An event is generated each time a new resource is created or discovered. Each time a resource is deleted either explicitly or implicitly, this dynamic attribute will be asserted. Resource UndefinedAn event is generated each time a resource is deleted. Whenever a persistent attribute of the resource class changes, this dynamic attribute will be asserted. Configuration ChangedAn event is generated each time a persistent class attribute changes. Identifies which of the defined class attributes and actions apply to this version of the resource class. VarietyInternal (Group 255)List of unique names and the locations of fence agent modules. Available TypesUnique name representing a fence agent load module. Available types are used to associate fence agent resources with an executable module that provides the node fencing functionality. TypePath and file name of the fence agent module. PathThe dynamic attribute is asserted whenever one or more persistent attribute values change. Configuration ChangedInternal (Group 255)Identifies which of the defined resource attributes and actions apply to the resource. 
All existing fence agent resources have a Variety value of 1. VarietyAn internally assigned handle that uniquely identifies a resource. Resource HandleThe name of the fence agent resource, assigned by a system administrator. NameBasic (Group 0)This attribute associates the fence agent resource with a fence module executable. The value must match one of the "Type" fields of the IBM.FenceAgent class "AvailableTypes" attribute. TypeThe maximum time in milliseconds that the agent is allowed to execute a fence operation against an errant node. If the fence agent has not completed within the timeout value, the agent is considered to have failed to fence the target node. TimeoutThe time interval in milliseconds between requests to the fence agent module to perform periodic health checks. Healthcheck IntervalAgent specific information required to configure the agent module for operation in the peer domain. Agent InformationNode specific information required to configure the agent module for operation in the peer domain. Node InformationName of the peer node the information applies to. The node name must match the name of an IBM.PeerNode resource in the peer domain. Node NameComma-separated list of node specific configuration data as defined by the agent module. Node InformationA unique identifier for the node. Node IdentifiersOption to control the level of checking performed when defining fence agent resources. ForcePerform default checking, including verification of agent operability. Restrict checking to validation of attribute value syntax. The fence group resources configured for the peer domain. Fence GroupEach time a resource is created either explicitly or implicitly, this dynamic attribute will be asserted. Resource DefinedBasic (Group 0)An event is generated each time a new resource is created or discovered. Each time a resource is deleted either explicitly or implicitly, this dynamic attribute will be asserted. 
Resource UndefinedAn event is generated each time a resource is deleted. Whenever a persistent attribute of the resource class changes, this dynamic attribute will be asserted. Configuration ChangedAn event is generated each time a persistent class attribute changes. Identifies which of the defined class attributes and actions apply to this version of the resource class.VarietyInternal (Group 255)The "ExecutionTypes" attribute contains the valid values for the "Type" attribute of IBM.FenceGroup resources. Execution TypesThe dynamic attribute is asserted whenever one or more persistent attribute values change. Configuration ChangedInternal (Group 255)Identifies which of the defined resource attributes and actions apply to the resource. All existing fence group resources have a Variety value of 1. VarietyAn internally assigned handle that uniquely identifies a resource. Resource HandleThe name of the fence group resource, assigned by a system administrator. This is the value used to set the "OpFenceGroup" attribute of the IBM.PeerNode class to activate node fencing. NameBasic (Group 0)Execution type of the fence group resource. The type determines whether fence groups containing multiple agents should be executed serially or in parallel, and how the results of multiple agents should be evaluated to determine if a fence attempt has succeeded. The value must match one of the values of the "ExecutionTypes" attribute of IBM.FenceGroup. TypeList of IBM.FenceAgent resources to be executed when a node is fenced. Agents are executed in list order and as defined by the "Type" attribute of the fence group resource. Execution ListAction to be performed when a node cannot be fenced. Possible values are: "USER" The "OpUsabilityState" attribute of the IBM.PeerNode resource will be reset manually by invoking the "SetOpUsabilityState" resource action. Failure ActionFilter used to control which nodes are to be fenced when a problem is detected. 
Valid values are: "ANY" Any errant node in the peer domain should be fenced. "CRIT" Only nodes with active critical resources should be fenced. Node CriteriaA unique identifier for the node. Node IdentifiersStatus of the peer domain operational fence group ("OpFenceGroup" attribute of the IBM.PeerNode class) on the group leader node. Fence Group StatusNode fencing is not enabled.Node fencing is active.One or more active fence agents failed a healthcheck.An event is generated each time the attribute value changes. The Heartbeat Interface resource class contains fixed resources in which each resource corresponds to a non-IP heartbeat interface.Heartbeat InterfaceEach time a resource is created either explicitly or implicitly, this dynamic attribute will be asserted.Resource DefinedAn event is generated each time a new resource is created or discovered.Basic (Group 0)Internal (Group 255)Each time a resource is deleted either explicitly or implicitly, this dynamic attribute will be asserted.Resource UndefinedAn event is generated each time a resource is deleted.Whenever a persistent attribute of the resource class changes, this dynamic attribute will be asserted.Configuration ChangedAn event will be generated when the value of a persistent class attribute changes.Identifies which of the defined class attributes and actions apply to this version of the resource class.VarietyRepresents the current state of the heartbeat interface.Operational StateUnknownOnlineOfflineFailed OfflineStuck OnlinePending OnlinePending OfflineAn event will be generated when the heartbeat interface goes down.A rearm event will be generated when the heartbeat interface goes up.Basic (Group 0)Internal (Group 255)This attribute is asserted whenever one or more persistent attribute values of a heartbeat interface change.Configuration ChangedNo configuration is changed.One or more persistent attribute values are changed.An internally assigned handle that uniquely identifies a 
HeartbeatInterface.Resource HandleIdentifies which of the defined attributes and actions apply to the resource.VarietyThe name of the heartbeat interface.NameUnique heartbeat device-identifying information.Device InfoSpecifies whether the heartbeat is active or not: 0 if inactive, 1 if active (default).Heartbeat ActiveInactiveActiveIdentifies the name of the communication group with which the heartbeat interface is associated.Communication GroupOptional heartbeat interface device controls.QualifierA unique identifier for the node.Node IdentifiersDescription of the force flag in the define option for the resource class HeartbeatInterface.ForceDescription of the force flag in the undefine option for the resource class HeartbeatInterface.ForceDo Not ForceForceThe type of media underlying the HeartbeatInterfaceMedia TypeUserDefinedIPDiskThe name used to refer to the node in the peer domain.SecurityMode of the PeerDomain.SecurityModePriority of the node to execute TB operationsTBPriorityMaximum wait time for the quorum notification response from the client.QuorumNotificationRespWaitTimeThis class action is used by the user to specify priorities for the nodes. ChangeTBPriorityThis input parameter contains the TargetNodes on which the TBPriority will be changed. TargetNodesThis input parameter contains the value of the priority to be set for the TargetNodes. PriorityResult code of the ChangeTBPriority class actionResultDescription of the MsgId field in the output SD for the ChangeTBPriority action.Message IdRoCEMultiNodeDisk
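The startrpdomain/stoprpdomain usage strings and the IBM.TieBreaker and OpQuorumTieBreaker descriptions above correspond to the following command-line flow. This is a hedged sketch only: the domain name mydom and tie breaker name mytb are illustrative, the commands require a live RSCT peer domain environment, and exact RMC CLI quoting may vary by release.

```shell
# Bring a previously defined peer domain online, per the usage string
# "startrpdomain [-h] [-f] [-T] [-V] domain" (-V requests verbose output).
startrpdomain -V mydom

# Define an Operator tie breaker; Type must be one of the values listed
# in the IBM.TieBreaker AvailableTypes class attribute.
mkrsrc IBM.TieBreaker Name="mytb" Type="Operator"

# Activate it by setting the OpQuorumTieBreaker attribute described above.
chrsrc -c IBM.PeerDomain OpQuorumTieBreaker="mytb"

# Take the domain offline again ("stoprpdomain [-h] [-f] [-T] [-V] domain").
stoprpdomain mydom
```

If quorum is lost while the Operator tie breaker is active, an administrator would resolve the tie by invoking the ResolveOpQuorumTie action, as the AvailableTypes description notes.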