Highly Available Cluster Multi-Processing/6000 allows two or more RISC System 6000 systems to share resources in a highly available environment.Configure the High Availability Cluster Multi-Processing Version cluster. Allows the user to configure, save, and restore the Cluster Topology and Resources.Start, stop, refresh, or show cluster services. The cluster services are the Cluster Manager (clstrmgr), Cluster SMUX Peer (clsmuxpd), Cluster Information (clinfo), and Cluster Lock Manager (cllockd).Utilities to help recover from cluster failures. Allows the following two options: i) Recovery from script failure ii) Clear SSA Disk Fencing RegistersPowerHA SystemMirror Trace, Error Notification, and Log File facilities.Configure the cluster, node, network, and adapter topology, and synchronize all nodes from one centralized location.Manage users, groups, and LVM components across cluster nodes.Configure application servers, resource groups, volume groups, filesystems, disks, etc., as part of a PowerHA SystemMirror cluster. Also, set certain run-time parameters, change the Cluster Lock Manager Resource Allocation parameters, and synchronize the PowerHA SystemMirror resource-related ODM entries to all cluster nodes.Manage the High Availability Cluster Multi-Processing Version node environment for all nodes. Resource variables used by PowerHA SystemMirror event scripts are placed into the ODM whenever a node environment is configured, and are removed whenever a node environment is deleted.The Cluster Snapshot facility allows the user to save and restore the Cluster Topology and Resource ODM classes.This is the symbolic name for an application server.A unique name for the application up to 64 alphanumeric characters in length. Valid PowerHA SystemMirror names must be at least one character long and can contain letters ([A-Z, a-z]), numbers ([0-9]) and an '_' (underscore). A name cannot begin with a number, and a PowerHA SystemMirror reserved word cannot be a valid name.The full path to the script or executable associated with starting the application you wish to keep highly available. This executable will be run whenever the required resources become available, such as when a node joins the cluster.The full path to the script or executable associated with stopping the application you wish to keep highly available. This executable will be run whenever the required resources are released, such as when a node leaves the cluster.Manage Application Servers for all defined nodes. Define the name of the application server as known to PowerHA SystemMirror. Define the scripts which are called to start and stop this application server.Show cluster, node, network, and adapter topology.Show the High Availability Cluster Multi-Processing Version cluster and node environment for all nodes.Show the cluster, node, network, and adapter topology.Show the cluster ID and name.Show node, network, and adapter topology sorted by node.Show node, network, and adapter topology sorted by network.Show node, network, and adapter topology sorted by adapter.Verify cluster topology, resources, and custom defined verification methods.
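The naming rule quoted above (one to 64 characters drawn from letters, digits, and underscore, not beginning with a digit, and not a PowerHA SystemMirror reserved word) can be checked mechanically. The following is a minimal Python sketch, assuming an illustrative and incomplete reserved-word set; the authoritative list is the one shipped with PowerHA SystemMirror:

    import re

    # Incomplete, hypothetical sample of reserved words; consult the PowerHA
    # SystemMirror documentation for the authoritative list.
    RESERVED_WORDS = {"ALL", "true", "false", "root", "grep"}

    def is_valid_powerha_name(name: str) -> bool:
        """1-64 chars, letters/digits/underscore, no leading digit, not reserved."""
        if not 1 <= len(name) <= 64:
            return False
        if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
            return False
        return name not in RESERVED_WORDS

    print(is_valid_powerha_name("app_server_1"))  # True
    print(is_valid_powerha_name("1st_server"))    # False: begins with a digit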
Also maintains custom defined verification methods.Show resource configuration among nodes.Show resource configuration associated with the node name.Show resource configuration associated with the group name.Recover from a script failure. This is useful if the Cluster Manager is in reconfiguration due to a failed event script. This command should be run after having manually fixed the error condition.Add resource variables used by PowerHA SystemMirror event scripts to the ODM. These are variables read into the environment by the script at script run-time.Delete resource variables used by PowerHA SystemMirror event scripts from the ODM.Add, Change, Show, or Remove a Cluster Definition. Also, reset cluster tunablesAdd, Change, Show, or Remove a Cluster Node.Add, Change, Show, or Remove an Adapter.Add, Change, Show, or Remove a Network Module.Start the PowerHA SystemMirror daemons (clstrmgr and clinfo).Change, or Show Configuration Parameters for the Topology Services and Group Services daemons.Stop the PowerHA SystemMirror daemons (clstrmgr and clinfo).Refresh the PowerHA SystemMirror daemons (clstrmgr, cllockd, clsmuxpd, and clinfo). This will also call the Dynamic Reconfiguration utility which will run the necessary routines to reconfigure the cluster using the configuration currently defined in the PowerHA SystemMirror ODM classes.Displays information about (in)active cluster services.Used to Change parameters the Lock Manager uses while allocating resources.This is the IP Label (hostname) associated with the service adapter of the node on which the script failed.This is the full-path name of the command to execute. This may be '/usr/es/sbin/cluster/topchng.rc', '/usr/es/sbin/cluster/netchng.rc', '/bin/true' or any other script command which will return success (0). Note: Any arguments to the command must be placed in the 'Arguments' field.These are any arguments to the command in the 'Command' field.Remove network adapter and associated parameters from the node's cluster configuration. This adapter will no longer take part in cluster services.Synchronize cluster topology information on all cluster nodes defined in the local topology database. If the Cluster Manager is currently active on the local node, a Dynamic Reconfiguration event will automatically be activated. All nodes joining a cluster must have the same cluster topology and resource information.Add a Cluster ID to the PowerHA SystemMirror environment. This ID must be a unique non-negative integer (0 - 9999999999).Associate a Cluster Name with a Cluster ID. The Cluster Name is a unique alpha-numeric string. Spaces are not allowed.Change the Cluster ID for the cluster. The Cluster ID must be a unique non-negative integer (0 - 9999999999).Change the Cluster Name for the cluster. The Cluster Name is a unique alpha-numeric string. Spaces are not allowed.Node ID of the node with the adapter to be changed.Adapter on this node to be changed.Node within the cluster to list resource configuration.ID of the node to add to the cluster topology.Node within the cluster to allow further configuration.The IP label (hostname) of this adapter, associated with the numeric IP address in the /etc/hosts file.Shared IP label within the cluster to allow further configuration.Assign the adapter function: service, standby, boot, or shared.Node ID of node which owns the resources.A symbolic name of the logical network associated with this adapter. 
This name has no meaning outside PowerHA SystemMirror, but is used within PowerHA SystemMirror to differentiate between network resources.Node ID of node which takes over the resources.Assign the attribute for this network: public, private, or serial.Assign the standard IP Address for this adapter. This is only required if the host's address information can not be obtained from either the domain name server or the local /etc/hosts file.Path to PowerHA SystemMirror Trace Facility. Enables low-level tracing of PowerHA SystemMirror daemons. Useful for problem solving.Path to PowerHA SystemMirror Notify Method. Allows an application to be notified whenever a selected error or group of errors are recorded in the system error log.Path to PowerHA SystemMirror Log File Viewing. Enables viewing of PowerHA SystemMirror script output files and error log files. Either scanning the file or real-time viewing of events is allowed.Enable/Disable tracing of clstrmgr, clinfo or clsmuxpd daemons. These daemons must be 'told' to generate trace events via the System Resource Controller. To record these events, use: 'Start/Stop/Report Tracing of PowerHA SystemMirror Services'. A short trace monitors vital information, while a long trace monitors detailed information.Start or stop tracing of the clstrmgr, clinfo, and clsmuxpd daemons, plus the Cluster Lock Manager kernel extension, as well as generate trace reports. The clstrmgr, clinfo, and clsmuxpd daemons must first be enabled for trace. The Cluster Lock Manager kernel extension will be traced automatically.Add an Error Notification Method to the ODM. This allows applications to be notified whenever a particular error or group of errors is logged in the system log.Change or Show an existing Error Notification Method.Delete an existing Error Notification Method.Allows the user to scan the PowerHA SystemMirror scripts log files: 'hacmp.out(.x)'.Allows the user to watch the PowerHA SystemMirror script log file 'hacmp.out' as events are appended to it.Choosing 'now' will start the daemons immediately. Choosing 'system restart' will start the daemons after a system reboot by adding an entry to the /etc/inittab file. Choosing 'both' will start the daemons immediately AND after a system reboot. The default daemons are the clstrmgr and clsmuxpd daemons. The cllockd and/or clinfo daemons are started only if so set via 'Startup Cluster Lock Services?' and/or 'Startup Cluster Information Daemon?'.Assign the hardware address for this adapter. This address will be used whenever another adapter assumes the IP address of this adapter. This eliminates the need for arp-cache updates, but increases the time for address swapping. This is not applicable for IPv6 either network or address.Uses the 'wall' command to broadcast startup.Starts the Cluster Lock Manager. Used in conjunction with the 'now, system restart, or both' field, which applies to this daemon as well as clsmuxpd and clstrmgr.Starts the Cluster Information daemon. Used in conjunction with the 'now, system restart, or both' field.Stops clstrmgr, clsmuxpd, clinfo, and cllockd. Choosing 'system restart' will stop the daemons after a system reboot. This will remove an entry from the /etc/inittab file. Choosing 'both' will stop the daemons immediately AND remove an entry from /etc/inittab. The daemons are the clstrmgr and clsmuxpd daemons, as well as the cllockd and clinfo daemon if so set via 'Startup Cluster Lock Services?' 
or 'Startup Cluster Information Daemon?'.Uses the 'wall' command to broadcast the stop.Graceful - Local machine shuts itself down gracefully. Remote machine interprets this as a graceful down and does NOT takeover resources. Takeover - Local machine shuts itself down gracefully. Remote machine interprets this as a non-graceful down and DOES takeover resources. Useful for system maintenance.Number of lock structures per segment. The default is 16384. Increasing this value initializes and grows the segment in larger blocks, thus reducing the frequency of memory allocation calls.Number of resource structures per segment. The default is 8192. Increasing this value initializes and grows the segment in larger blocks, thus reducing the frequency of memory allocation calls.The number of resource segments to allocate at Lock Manager startup. The maximum value is 256. If the approximate numbers of resources needed is known, setting this number could improve performance.The recalculation rate expresses the number of times a resource has to be referenced (lock, unlock, convert, etc.) before it will be considered for migration. The reference count is cumulative, but no calculations or comparisons are done to determine if the resource should migrate until the reference count reaches the 'recalculation rate'.The decay rate, expressed as a decimal number between 0.0 and 1.0, inclusive, controls how much of the historical reference data is saved for each resource. If the decay rate is 0.0, then only the references that happened during the last 'recalculation rate' period are taken into account when determining if and where a resource should migrate. If the decay rate is 1.0, then all references are cumulative - every reference that has happened since a resource was created is taken into account. If the decay rate is 0.5, then the history count is halved (multiplied by 0.5) and added to the total in the previous cycle. This becomes the history count for the next time the recalculation rate is reached.Allows the user to scan the PowerHA SystemMirror system log file: 'cluster.log' [default location: /usr/es/adm/cluster.log].Allows the user to watch the PowerHA SystemMirror system log file: 'cluster.log' as events are being appended to it. The default location of this log file is /usr/es/adm/cluster.log.Add/Change/Show custom-defined eventsSelect the supported type of resources for the cluster. For more information on the supported configurations, see 'Hardware Configurations' in the 'High Availability Cluster Multi-Processing Version System Overview'Choosing 'Active Node' will set up the environment for the node which owns all shared resources upon integration into the cluster. Choosing 'Standby Node' will set up the environment for the node which owns no shared resources upon integration into the cluster, but will acquire resources upon failure of the 'active' node.Add/Change/Show/Remove custom-defined options like verification methods, events etc.Verify cluster topology, resources, and custom-defined verification methods.Choosing 'Initially Active Node' will set up the environment for the first node to start the Cluster Manager. Choosing 'Standby Node' will set up the environment for the node which joins the cluster while an Active Server exists. 
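The decay-rate arithmetic described above can be made concrete with a short worked example. This is a sketch under one plausible reading of that description (the previous history count is multiplied by the decay rate each recalculation period and the new period's references are added); the function name and sample counts are illustrative only, not part of the Cluster Lock Manager:

    def decayed_history(period_reference_counts, decay_rate):
        """Accumulate a resource's reference history across recalculation
        periods, scaling the previous history by the decay rate each time."""
        history = 0.0
        for refs in period_reference_counts:
            history = history * decay_rate + refs
        return history

    periods = [100, 40, 10]                # made-up references per period
    print(decayed_history(periods, 0.0))   # 10.0  - last period only counts
    print(decayed_history(periods, 1.0))   # 150.0 - fully cumulative
    print(decayed_history(periods, 0.5))   # 55.0  - history halved each cycle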
This is symbolic (for setup) only, since either node may be the Active Server, but a way to differentiate between nodes is needed.Add/Change/Show/Remove custom-defined verification methods.Add a custom-defined verification method to the ODM.All previously defined custom verification method names will be displayed. After selecting a particular verification method, the method name, description, and filename for the specified method will be displayed. Changes to the selected method can be made.All information about the selected custom-defined verification method will be removed from the ODM.Choosing 'Primary Active Node' will set up the environment for the node which may own some/all shared resources upon integration into the cluster. This node WILL NOT attempt to acquire resources upon failure of its peer. Choosing 'Secondary Active/Standby Node' will set up the environment for the node which may own some/all shared resources upon integration into the cluster. This node WILL attempt to acquire resources upon failure of its peer.Add/Change/Show/Remove custom-defined events.Add a custom-defined event to the ODM.All previously defined custom event names will be displayed. After selecting a particular event, the event name, description, and filename for the specified event will be displayed. Changes to the selected events can be made.All information about the selected custom-defined event will be removed from the ODM.Choosing 'Primary Active/Standby Node' will set up the environment for one node which may own some/all shared resources upon integration into the cluster. This node WILL attempt to acquire resources upon failure of its peer. Choosing 'Secondary Active/Standby Node' will set up the environment for the other node which may own some/all shared resources upon integration into the cluster. This node WILL also attempt to acquire resources upon failure of its peer. Essentially, each node has the same role, but must somehow be distinguishable.Create/Change/Show/Remove custom-defined snapshot methods.Create a custom-defined snapshot method.All previously defined custom snapshot method names will be displayed. After selecting a particular snapshot method, the method name, description, and filename for the specified method will be displayed. Changes to the selected method can be made.All information about the selected custom-defined snapshot method will be removed from the ODM.Choosing 'Primary Active Node' will set up the environment for one node which may own some/all shared resources upon integration into the cluster. This node WILL NOT attempt to acquire resources upon failure of its peer. Choosing 'Secondary Active/Standby Node' will set up the environment for the other node which may own some/all shared resources upon integration into the cluster. This node also WILL NOT attempt to acquire resources upon failure of its peer. Essentially, each node has the same role, but must somehow be distinguishable. Choosing 'Standby Node' will set up the environment for the node which owns no shared resources upon integration into the cluster. This node WILL attempt to acquire resources upon failure of either/both of its peers.Manage, view, and collect PowerHA SystemMirror log files and event summaries.Change/Show a Cluster Log Redirection ODM entry.Enter the full-path name of the new directory that the output log will be redirected to. (Maximum length is 1024 bytes) To avoid problems, logs should only be redirected to filesystems on local disks. 
Logs redirected to shared filesystems or NFS filesystems might cause unexpected behavior during a fallover.All defined cluster log directory ODM entries will be displayed. After selecting a particular cluster log directory, the log name, description, and directory for the specified entry will be displayed. Changes to the log directory can be made.A value of true is required if the specified directory is on a remotely mounted (e.g. AFS, DFS, or NFS) file system. This choice should be used with extreme care since use of a non-local filesystem for PowerHA SystemMirror logs will prevent log information from being recorded if the filesystem is not available.Links the 'node ID' found in cluster.cf with the role played by the local node. Must be a positive integer between 1 and 4, inclusive.Links the 'node ID' found in cluster.cf with the role played by the remote node. Must be a positive integer between 1 and 4, inclusive.Links the 'node ID' found in cluster.cf with the role played by the remote primary active node. (The primary active node plays the same role as the secondary active node, but must be made distinguishable.) Must be a positive integer between 1 and 4, inclusive.Links the 'node ID' found in cluster.cf with the role played by the remote secondary active node. (The secondary active node plays the same role as the primary active node, but must be made distinguishable.) Must be a positive integer between 1 and 4, inclusive.Links the 'node ID' found in cluster.cf with the role played by the remote standby node. Must be a positive integer between 1 and 4, inclusive.This is the SHARED IP Label (hostname) which is used by the Active Server when performing cluster services. This name must be available in /etc/hosts or a nameserver.The IP Label (hostname) given to the service adapter on the local node when performing cluster services. This name must be available in /etc/hosts or a nameserver.The names of the fixed disks which are initially made available by the owner node.The names of the fixed disks which are initially made available by the local node.The names of the fixed disks which are initially owned by the remote node. If this field is filled out, then the cited disks will be made available by the local node upon failure of the remote. If it is left blank, these disks will not be made available by the local node upon failure of the remote.The names of the fixed disks which are initially owned by the active server.The application server(s) which will be started by the owner node.The names of the volume groups which may be accessed by two or more nodes concurrently.The names of the fixed disks which are initially owned by the active primary node. If this field is filled out, then the cited disks will be made available by the local node upon failure of the active primary node. If it is left blank, these disks will not be made available by the local node.The names of the fixed disks which are initially owned by the active secondary node. If this field is filled out, then the cited disks will be made available by the local node upon failure of the active secondary node. 
If it is left blank, these disks will not be made available by the local node.The names of the fixed disks which are initially owned by the active node sharing the address.The names of the fixed disks to be made available when the node which owns the fixed disks leaves the cluster.The names of the volume groups which are initially varied on by the owner node.The names of the volume groups which are initially varied on by the active node sharing the address.The names of the volume groups to be varied on when the node which owns the volume groups leaves the cluster.The names of the volume groups which are initially varied on by the active server.The names of the concurrent volume groups which are initially varied on by the owner node.The names of the volume groups which are initially varied on by the active primary node. If this field is filled out, then the cited volume groups will be made available by the local node upon failure of the active primary. If it is left blank, these volume groups will not be made available by the local node.The names of the volume groups which are initially varied on by the active secondary node. If this field is filled out, then the cited volume groups will be made available by the local node upon failure of the active secondary. If it is left blank, these volume groups will not be made available by the local node.The names of the filesystems which are initially mounted by the owner node.The names of the filesystems which are initially mounted by the active node sharing the address.The names of the filesystems which are initially mounted by the remote node. If this field is filled out, then the cited filesystems will be made available by the local node upon failure of the remote. If it is left blank, these filesystems will not be made available by the local node upon failure of the remote.The names of the filesystems which are initially mounted by the active server.The names of the filesystems which are initially mounted by the active primary node. If this field is filled out, then the cited filesystems will be made available by the local node upon failure of the active primary. If it is left blank, these filesystems will not be made available by the local node.The names of the filesystems which are initially mounted by the active secondary node. If this field is filled out, then the cited filesystems will be made available by the local node upon failure of the active secondary. If it is left blank, these filesystems will not be made available by the local node.The names of the filesystems which are initially exported by the owner node.The names of the filesystems which are initially exported by the active node sharing the address.The names of the filesystems which are initially exported by the remote node. If this field is filled out, then the cited filesystems will be exported by the local node upon failure of the remote. This implies the local node must be set to mount the filesystem upon failure of the remote. If it is left blank, these filesystems will not be exported by the local node.The names of the filesystems which are initially exported by the active server.The names of the filesystems which are initially exported by the active primary node. If this field is filled out, then the cited filesystems will be exported by the local node upon failure of the active primary. This implies the local node must be set to mount the filesystem upon failure of the active primary. 
If it is left blank, these filesystems will not be exported by the local node.The names of the filesystems which are initially exported by the active secondary node. If this field is filled out, then the cited filesystems will be exported by the local node upon failure of the active secondary. This implies the local node must be set to mount the filesystem upon failure of the active secondary. If it is left blank, these filesystems will not be exported by the local node.The service adapter IP label to be taken over when the node which owns the IP label leaves the cluster.The names of the filesystems to be mounted when the node which owns the filesystems leaves the cluster.The names of the filesystems to be mounted via NFS. These filesystems must be exported by the node which owns them.The names of the filesystems to be mounted and exported when the node which owns the filesystems leaves the cluster.The names of the application servers to be started when the node which owns them leaves the cluster.The names of the application servers to be initially started by the active node sharing the address.The names of the filesystems which are mounted via NFS by the active primary node after having been exported by the active secondary node.The names of the filesystems which are mounted via NFS by the active secondary node after having been exported by the active primary node.Number of times to attempt an NFS mount of the filesystems listed in the above 'Filesystems to mount via NFS...' list.The name of the network interface associated with the service adapter on the local node.The IP Label (hostname) given to the service adapter on the local node when this node is booting. This name must be available in /etc/hosts or a nameserver.The IP Label (hostname) given to the service adapter on the remote node. This name must be available in /etc/hosts or a nameserver.The IP Label (hostname) given to the service adapter on the remote standby node. This name must be available in /etc/hosts or a nameserver.The IP Label (hostname) given to the service adapter on the remote active primary node. This name must be available in /etc/hosts or a nameserver.The IP Label (hostname) given to the service adapter on the remote active secondary node. This name must be available in /etc/hosts or a nameserver.The IP Label (hostname) given to the standby adapter on the local node which is used to masquerade as the remote node. This name must be available in /etc/hosts or a nameserver.The IP Label (hostname) given to the standby adapter on the local node which is used to masquerade as the remote active primary node. This name must be available in /etc/hosts or a nameserver.The IP Label (hostname) given to the standby adapter on the local node which is used to masquerade as the remote active secondary node. This name must be available in /etc/hosts or a nameserver.The name of the network interface associated with the standby adapter on the local node which is used to masquerade as the remote node.The name of the network interface associated with the standby adapter on the local node which is used to masquerade as the remote active primary node.The name of the network interface associated with the standby adapter on the local node which is used to masquerade as the remote active secondary node.The network mask associated with the service adapter on the local node. This is assumed to be the same throughout the cluster.Choose 'true' if this node will participate in IP Address Takeover. 
Participation means this node will either 'masquerade' (take over the identity, specifically, the IP address/hostname) as or be masqueraded by another node.Choose 'true' if you want to exercise the logic for disk-fencing. This is useful if there is only one physical network between the active nodes. If a network failure occurs and the nodes remain active but isolated, each will NOT attempt to take over their peers' resources, even if configured to do so.Choose the level of verbose output in the log file: high - Turns on every debug option. standard - Turns on default debug options. These are useful for debugging purposes. The log file is: clstrmgr.debug. [default location: /tmp/clstrmgr.debug] These change will take effect immediately.Choose the level of verbose output in the log file: high - All script execution, warnings and errors are logged. low - Only errors are logged. These are useful for debugging purposes. The log file is: hacmp.out. [default location: /tmp/hacmp.out]Set to true to enable a joining node to takeover resources of any node which has not yet joined the cluster. These resources must be configured as takeover resources by the joining node.If the cluster uses NIS or a Name Server, this field should be set to true. This will allow the scripts to turn off/on these services when they would interfere with reconfiguration.Choose 'Yes' if you want to start the Cluster Lock Manager (cllockd) whenever this node joins the cluster.Choose 'Yes' if you want to start the High Availability Cluster Multi-Processing Version software demo (the Image Server, imserv) whenever this node joins the cluster.The directory containing the images for the High Availability Cluster Multi-Processing Version Image Server (imserv) demo.Enter one or more physical volume identifiers whose fence registers are to be cleared.Select one of these options to tell PowerHA SystemMirror how you want your applications to be monitored and managed by PowerHA SystemMirror: Automatically: If you select this option, PowerHA SystemMirror will bring resource groups online according to the resource groups' configuration settings and the current cluster state. PowerHA SystemMirror will then monitor the resource group(s) and application availability. -Note that PowerHA SystemMirror relies on application monitors to determine the state of any applications that belong to the resource group(s) as follows: If applications are running, the application monitors return an "application is running" return code, PowerHA SystemMirror will just monitor the applications(and not start them again). If there are NO application monitors configured, or if they return an "application is not running" return code, PowerHA SystemMirror will run the start_server script for any configured application servers. In this case you will need to ensure the start_server script correctly handles the case when an application server is already running, depending how you want multiple application instances to be handled. Manually: If you select this option, PowerHA SystemMirror will not take any action on resource groups based on the fact that this node is being brought online. After you start cluster services, you can bring any resource group(s) online or offline, as desired, using PowerHA SystemMirror Resource Group and Application Management SMIT menu.The size of the file system in units of 512-byte blocks. If the value begins with a "+", it is interpreted as a request to increase the file system size by the specified amount. 
If the specified size cannot be evenly divided by the physical partition size, it is rounded up to the closest number that can be evenly divided.The size of the file system in units of 512-byte blocks, Megabytes, or Gigabytes. If the size has an M suffix, it is interpreted to be in Megabytes. If the size has a G suffix, it is interpreted to be in Gigabytes. A size without a suffix is interpreted as 512-byte blocks. If the specified size cannot be evenly divided by the physical partition size, it is rounded up to the closest number that can be evenly divided.A name assigned by the creator of the Error Notification object to uniquely identify the object. The creator uses this unique name to modify or remove the object later.Choose 'No' to ensure obsolete error notification objects are deleted when the system is restarted. Choose 'Yes' to allow the object to persist after the system is restarted.Select a Process ID which can be used by the Notify Method, or leave it blank. Objects which have a PID specified should choose 'No' for 'Persist across system restart?'.Choose the appropriate error class. If an error of a matching class occurs, the selected notify method will be run. Valid classes are 'All', 'Hardware', 'Software', and 'Errlogger'. Choose 'None' to ignore this entry.Choose the appropriate error type. If an error of a matching type occurs, the selected notify method will be run. Valid types are 'All', 'PENDing', 'PERManent', 'PERFormance', 'TEMPorary', and 'UNKNown'. Choose 'None' to ignore this entry.Select an error ID from the '/usr/include/sys/errids.h' file. If this particular error occurs, the selected Notify Method will be run.Choose 'Yes' to match alertable errors. Choose 'No' to match non-alertable errors. Choose 'All' to match both. Choose 'None' to ignore this entry.Select an error label from the '/usr/include/sys/errids.h' file. If this particular error occurs, the selected Notify Method will be run.The name of the failing resource. For the hardware error class, a resource name is the device name. For the software error class, a resource name is the name of a failing executable.The class of the failing resource. For the hardware error class, the resource class is the device class. The resource class is not applicable for the software error class.The type of the failing resource. For the hardware error class, a resource type is the device type by which a resource is known in the devices object.Enter the full-path name of an executable file to be run whenever an error is logged which matches any of the defined criteria.List all configured application servers.Add a new application server to the configuration.Change an existing application server.Delete an existing application server from the configuration.Stop monitoring an application; or start monitoring an application whose monitor was previously stopped.The Cluster Verify procedure is used to verify the resources of a cluster. The verify procedure will try to ensure that all PowerHA SystemMirror-used resources will be system-configured at boot time.Specifies a log file to be used. By default, the clverify program uses the current configuration as defined in the ODM.Select a custom-defined verification method name to be removedSpecifies a log file to store verification output. By default, the output is stored in the smit.log file.Toggle whether pre-installed verification methods should be run. 
The choices are: - Pre-installed: All verification methods shipped with PowerHA SystemMirror and with PowerHA SystemMirror/XD verification (if applicable or user-provided) will be run. - None: Pre-installed PowerHA SystemMirror verification methods will not be run. Choose this option to run Custom Verification Methods only.Verify Custom-Defined Verification Method. Enter method names in the order in which they are run. Selecting "All' verifies all custom-defined verification methods.By default, the program will continue to run after an error is encountered, allowing a full list of errors to be generated. If an Error Count is specified, the program will terminate after encountering Error Count number of errors.Enter the name of a custom-defined verification method (Maximum length is 64 bytes)Enter the description of a custom-defined verification method (Maximum length is 1024 bytes)Enter the full-path name of an executable file to be run to verify the custom-defined verification method (Maximum length is 1024 bytes)Enter a new custom-defined verification method name (Maximum length is 64 bytes)This is an optional command to run both before and after the event command is executed. The main purpose for this command is to notify the system administrator that a certain event has happened.This is an optional command to run before the event command is executed. This is used to provide pre-event processing by the PowerHA SystemMirror/6000 administrator. Multiple commands are allowed and must be delimited by a comma.This is an optional command to run after the event command is executed. This is used to provide post-event processing by the PowerHA SystemMirror/6000 administrator. Multiple commands are allowed and must be delimited by a comma.This is an optional command to run to attempt to recover from any event command failure. If the retry count is greater than zero and the recovery command succeeds, then the event command is rerun.The number of times to run the recovery command, this field can be used to have the event commands rerun if the recovery command executes without error. The retry count should be set to zero if no recovery command is specified, and set to greater than zero if a recovery command is specified.This is the command or script to be executed when the event happens. It is a required field for each event.Select the event name to allow further configuration.Select the event name to remove.Enter a new custom-defined event name (Maximum length is 64 bytes)Enter the description of a custom-defined event (Maximum length is 1024 bytes)Enter the full-path name of an executable file to be run as the custom-defined event (Maximum length is 1024 bytes)Enter the name of a custom-defined event (Maximum length is 64 bytes)Execute Custom-Defined Snapshot Method. Enter method names in the order in which they are run. Selecting "All" executes all custom-defined snapshot methods.Configure cluster resources (e.g. filesystems, volume groups, etc.) to be owned by, taken over by, or rotated amongst cluster nodes.Remove cluster resource configuration for one or more nodes.Configure PowerHA SystemMirror events for one or more nodes.Configure debug output level, name server support, and takeover for inactive node.Configure the resources to be owned by the specified node.Configure the resources to be shared by the cluster nodes which share the specified IP label.Configure the resources to be taken over by the specified node when the node which owns them leaves the cluster. 
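The recovery-command and retry-count behavior described above for custom events amounts to a small control loop. A minimal Python sketch, with hypothetical run_event and run_recovery callables standing in for the configured commands; this is an illustration of the described semantics, not PowerHA SystemMirror code:

    def run_with_recovery(run_event, run_recovery, retry_count):
        """Run the event command; on failure, run the recovery command up to
        retry_count times, rerunning the event after each successful recovery."""
        if run_event() == 0:              # 0 = success, as for a shell script
            return True
        for _ in range(retry_count):
            if run_recovery() != 0:       # recovery itself failed: give up
                return False
            if run_event() == 0:          # recovery worked and the rerun succeeded
                return True
        return False

    # With a retry count of zero, a failing event command is never retried.
    print(run_with_recovery(lambda: 1, lambda: 0, retry_count=0))  # False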
The resources must have been previously defined to be owned by another node.Every highly-available resource (disk, filesystem, etc.) must belong to a resource group. A single resource may not belong to more than one resource group. Use this selection to manage resource groups.Display or modify PowerHA SystemMirror Events for one or more nodes.Synchronize cluster resource information on all cluster nodes defined in the local topology database. If the Cluster Manager is currently active on the local node, a Dynamic Reconfiguration event will automatically be activated. All nodes joining a cluster must have the same cluster topology and resource information.Use this option to add/change/show/remove a resource group and/or its relation to cluster nodes.Add a Resource Group and configure its Node Relationship and Resource Chain.Enter the desired name. Use no more than 64 alphanumeric characters or underscores; do not use a leading numeric. Do not use reserved words. Duplicate entries are not allowed.Toggle between the following options and select a value from the list that defines the startup policy of a resource group: ONLINE ON HOME NODE ONLY: The resource group will be brought online ONLY on its home node (the first node in the participating node list) during the resource group startup. This requires the highest priority node to be available. ONLINE ON FIRST AVAILABLE NODE: The resource group activates on the first participating node that becomes available. If you have configured the settling time for the resource groups, it will only be used for the resource groups with this startup policy. ONLINE USING DISTRIBUTION POLICY: The resource groups with this startup policy will be distributed at resource group startup. Based on the resource group distribution policy, resource groups will be distributed such that only one resource group is activated by a participating node ('node' based resource group distribution) or one resource group per node and per network ('network' based distribution.) NOTE: The distribution policy can be changed under 'Extended Configuration -> Extended Resource Configuration -> Configure Resource Group Run-Time Policies -> Configure Resource Group Distribution' The 'network' distribution policy is deprecated and will be removed in future PowerHA SystemMirror releases. ONLINE ON ALL AVAILABLE NODES: The resource group is brought online on ALL participating nodes. If you select this option for the resource group, ensure that resources in this group can be brought online on multiple nodes simultaneously.Toggle between Ignore/Prefer Primary Site/Online On Either Site/Online On Both Sites. Prefer Primary Site resources are resources which may be assigned to be taken over by multiple sites in a prioritized manner. When a sites fails, the active site with the highest priority acquires the resource. When the failed site rejoins, the site with the highest priority acquires the resource. Online On Either Site resources are resources which may be acquired by any site in its resource chain. When a site fails, the resource will be acquired by the highest priority standby site. When the failed site rejoins, the resource remains with its new owner. Online On Both Site resources are acquired by both sites. These are the concurrent capable resources. 
Ignore should be used if sites and replicated resources are not defined or being used.Toggle between the following options and select a value from the list that defines the fallover policy of the resource group: FALLOVER TO NEXT PRIORITY NODE IN THE LIST. In the case of fallover, the resource group that is online on only one node at a time follows the default node priority order specified in the resource group's nodelist. FALLOVER USING DYNAMIC NODE PRIORITY. Before selecting this option, make sure that you have already configured a dynamic node priority policy that you want to use. If you did not configure a dynamic node priority policy to use, and select this option, you will receive an error during the cluster verification process. BRING OFFLINE (ON ERROR NODE ONLY). Select this option to bring a resource group offline on a node during an error condition. This option is most suitable when you want to imitate the behavior of a concurrent resource group and want to ensure that if a particular node fails, the resource group goes offline on that node only but remains online on other nodes. Selecting this option as the fallover preference when the startup preference is not Online On All Available Nodes may allow resources to become unavailable during error conditions. If you do so, PowerHA SystemMirror issues a warning.All previously defined resource names will be displayed. After selecting a particular resource group, the group name, node relationship, and participating nodes will be displayed, and changes may be made.Toggle between the following options and select a value from the list that defines the fallback policy of the resource group: NEVER FALLBACK. A resource group does NOT fall back when a higher priority node joins the cluster. FALLBACK TO HIGHER PRIORITY NODE IN THE LIST. A resource group falls back when a higher priority node joins the cluster. If you select this option, then you can use the delayed fallback timer that you previously specified in the Configure Resource Group Run-time Policies SMIT menu. If you do not configure the delayed fallback policy, the resource group falls back immediately when a higher priority node joins the cluster.Present resource group name. Not modifiable.Enter a new resource group name. Use no more than 64 alphanumeric characters or underscores; do not use a leading numeric. Do not use reserved words. Duplicate entries are not allowed.All information about the selected resource group will be removed from the ODM.Add resources such as filesystems and volume groups to a resource group. These resources will always be acquired and released as a single entity. If it is desired for a set of resources to be acquired by one node and another set acquired by a different node, create separate resource groups for each.Selected resource group name. Not modifiable.Node relationship of selected resource group. Not modifiable.Dynamic Adapter Labels are those which are not associated with a particular node, but may instead be associated with several nodes. For instance, addresses which rotate or may be taken over are considered "dynamic". Enter Adapter Labels which are always associated with the resources in this group.Enter the tool to be used for consistency check of filesystems in the resource group. i.e. fsck or logredo.Enter the mount points of the filesystems which should be mounted when the resource group is acquired. 
If this field is left blank and volume groups have been specified in the "Volume Groups" field in this SMIT panel, then all filesystems that exist on the volume groups will be mounted when the resource group goes online. This default behavior of mounting all filesystems is only assumed if volume groups have been explicitly specified as resources for this resource group. For any filesystem that is specified in this field, the volume group on which it resides will be mounted when the resource group is acquired.Enter the recovery method to be used for acquiring and releasing the file systems, i.e. sequential or parallel.Enter the mount points of the filesystems and/or directories which are exported to all nodes in the resource chain when the resource is initially acquired.Enter the preferred network to NFS-mount the filesystems specified.All nodes in the resource chain will attempt to NFS-mount these filesystems. To specify an NFS cross mount, use the following syntax: NFS_Mount_Point;Local_Filesystem. Example: NFS Mount Point: /nfsmount/share1 (location where NFS is to be mounted) Filesystem: /mnt/fs1 (filesystem to export) Specify: /nfsmount/share1;/mnt/fs1 Enter the names of the volume groups containing raw logical volumes or raw volume groups that are varied on when the resource is initially acquired.Enter the Physical Volume IDs of raw disks.Enter the name of the Geo-Mirror-Device (GMD).Enter application servers that will be started. These are the servers defined in the "Configure Application Servers" section.Miscellaneous Data is a string placed into the environment along with the resource group information and is accessible by scripts.Enter the name of the PPRC pair.Inactive Takeover, if set to "true", allows the highest prioritized active node to acquire an inactive resource when any node joins. If set to "false", the highest prioritized node will acquire the resource only when it itself joins.Enter the name of the ERCMF VolumeSet.Set to "true" to activate SSA Disk Fencing for the disks in this resource group. Set to "false" to disable. SSA Disk Fencing helps prevent partitioned clusters from forcing inappropriate takeover of resources.Enter the Delayed Fallback Timer to use for this Resource Group.Configure event script output level and name server support.Enter the name of the SVC PPRC pair.Choose the level of verbose output in the log file: high - All script execution, warnings and errors are logged. medium - Only warnings and errors are logged. low - Only errors are logged. These are useful for debugging purposes. The log file is: hacmp.out [default location: /tmp/hacmp.out].Enter the name of the SRDF Device Group.If the cluster uses NIS or a Name Server, this field should be set to "true". This will allow the scripts to turn on/off these services when they would interfere with reconfiguration.This option sets the format of the Event Summary in hacmp.out. Options include Default (no special format), Standard (include search strings), Html 1 (limited html formatting) and Html 2 (full html format). Note that choosing Default (None) will not produce any Event Summaries.Set this field to "Kerberos" to use authenticated security. WARNING: The /usr/es/sbin/cluster/etc/rhosts file must be removed from ALL cluster nodes when the security mode is set to "Kerberos". Failure to do so allows for the opportunity to compromise the authentication server. Once compromised, all authentication passwords must be changed. 
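As an aside, the NFS cross-mount syntax shown above, 'NFS_Mount_Point;Local_Filesystem', is simply a semicolon-separated pair. A tiny Python sketch of how such an entry splits apart; the helper name is hypothetical, not part of PowerHA SystemMirror:

    def parse_cross_mount(entry: str):
        """Split an 'NFS_Mount_Point;Local_Filesystem' entry into its parts."""
        nfs_mount_point, local_filesystem = entry.split(";", 1)
        return nfs_mount_point, local_filesystem

    print(parse_cross_mount("/nfsmount/share1;/mnt/fs1"))
    # ('/nfsmount/share1', '/mnt/fs1')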
Note: If the security mode is changed, Verification and Synchronization must be run to commit the change.Enter the name of the TRUCOPY Device Group.This should only be used in an emergency situation. If SSA Disk Fencing is enabled, and a situation has occurred in which the physical disks are inaccessible by a node or a group of nodes which need access to a disk, clearing the fence registers will allow access. Once this is done, the SSA Disk Fencing algorithm will be disabled unless PowerHA SystemMirror is restarted from all nodes.This field determines the priority order of nodes to be used in determining the destination of a Resource Group during an event which causes the Resource Group to either move or be placed online. The default value, (As listed), uses the static priority order as is listed in the Participating Node Names field. A list of preset and user-defined dynamic node priority policies, in which the destination node is determined at run-time, is available by pressing F4. The following preset dynamic node priority policies have been provided: cl_highest_mem_free by most available memory cl_lowest_disk_busy by least disk activity cl_highest_idle_cpu by most available CPU cycles cl_highest_udscript_rc by highest return value of the user defined script cl_lowest_nonzero_udscript_rc by lowest nonzero return value of the user defined script Please refer to the PowerHA SystemMirror manuals for more information on Dynamic Node Priority Policies.Enter the name of the TRUECOPY Resource.Enter the name of the GENERIC XD Resource.Enter Cluster ID and Cluster Name to define a cluster.Retrieve a cluster definition. Overwrite the cluster ID, or cluster name, or both to change a cluster definition.This will remove all user-entered cluster configuration information from the local node.Node name is a unique alpha-numeric string, up to 64 characters long.Enter Node Names of the nodes forming the cluster.Retrieve / Modify a Node Name.Detach a cluster node from the cluster.Enter Adapter attributes such as Adapter Label, Network Type, Network Name, Network Attribute, Adapter Function, etc. Associate the adapter to a particular node or none.Retrieve / Modify Adapter attributes such as Adapter Label, Network Type, Network Name, Network Attribute, Adapter Function, etc.Remove an adapter from the node's cluster configuration. This adapter will no longer take part in cluster services.Enter Network Module attributes such as Module Name, Description, Identifier Type, Path, etc. This will define a network module.Retrieve / Modify Network Module attributes such as Module Name, Description, Identifier Type, Path, etc. This will change the definition of a network module.This will delete a definition of a network module.Enter a Cluster ID (non-negative integer 0 - 9999999999).Enter a character string up to 64 characters. The Cluster Name is a unique alpha-numeric string. Spaces are not allowed.Names of all the nodes comprising the PowerHA SystemMirror cluster.Node name selected. This field is not modifiable.Enter a character string, different from the listed Node Name, up to 64 characters.List of persistent IP labels that belong to this node. This field is not modifiable.The IP label (hostname) of the adapter, associated with the numeric IP address in the /etc/hosts file (if the address type is ip address).Type of Network e.g. ether, token ring, rs232, etc.A symbolic name of the logical network associated with this adapter. 
This name has no meaning outside PowerHA SystemMirror, but is used within PowerHA SystemMirror to differentiate between network resources.Toggle between Public, Private and Serial. PUBLIC: A public network connects from two to eight nodes and allows a client to access cluster nodes. Ethernet, FDDI, Token Ring and SLIP are considered Public Networks PRIVATE: A private network provides point-to-point communication between two nodes; it does not allow client access. SERIAL: A serial network reduces the occurrence of node isolation (and thus spurious disk takeovers) by allowing Cluster Managers to exchange 'keep-alives' even after all TCP/IP based networks have failed. A serial network can be a SCSI-2 Differential bus using target mode SCSI or a raw RS232 line.Toggle between Service, Standby and Boot. SERVICE: Active Adapter; i.e. which furnishes all requests. STANDBY: Adapter that takes over the Service Adapter in case of a fallover. BOOT: Adapter on which the node comes up at Boot time. PERSISTENT: A persistent node-bound IP alias address for administration convenience.This field will be either an IP typed address (if the address type of this adapter is ip address) or the name of a device file (e.g. /dev/tty2).Assign the hardware address for this adapter. This address will be used whenever another adapter assumes the IP address of this adapter. This eliminates the need for arp-cache updates, but increases the time for address swapping. For the case of ATM networks, the adapters must be connected to the same switch because hardware address takeover between ATM switches is not supported. ATM hardware addresses must be 14 hexadecimal digits with the last two digits (selector byte) in the range of 00-06. Furthermore, the last two digits must be unique for each ATM "service" adapter in the cluster.Node to which the Adapter belongs. If no node name is specified here, the adapter must be a service adapter.Enter a character string, different from the adapter label listed, up to 64 characters.Selected Network Module. This field is not modifiable.Enter a character string up to 64 characters.This describes the nature of the network.Toggle between Device and Address DEVICE: Specifies that the adapter (associated with this network) uses a device file. ADDRESS: Specifies that the adapter (associated with this network) uses an IP based address.This specifies the path to the network executable file.This specifies the parameters required when running the network executable.Time period in which, if a network error occurred, it would not be detectedRate at which the Cluster Manager sends a 'keep alive' message to a node in the cluster. This allows the user some control over the time within which a failure of a node will be detected. The heartbeat rate and the failure cycle determine how soon a failure can be detected. The time needed to detect a failure can be calculated using this formula: (heartbeat rate) * (failure cycle) * 2 secondsRate at which to send heart beat packets to other nodes in the cluster.Enter a character string, different from Network Module Name listed, up to 64 characters.Lists complete information for all nodes of the cluster.Lists complete information about the selected node of the cluster.Displays all defined networks along with their attributes.Displays the selected network along with its attributes.Displays all defined adapters along with their attributes.Displays the selected adapter along with its attribute.This should only be used in an emergency situation. 
If SSA Disk Fencing is enabled, and a situation has occurred in which the physical disks are inaccessible by a node or a group of nodes which need access to a disk, clearing the fence registers will allow access. Once this is done, the SSA Disk Fencing algorithm will be disabled unless PowerHA SystemMirror is restarted from all nodes.Set to "true" to activate SSA Disk Fencing for the disks in this resource group. Set to "false" to disable. SSA Disk Fencing helps prevent partitioned clusters from forcing inappropriate takeover of resources.PowerHA SystemMirror handles node failure by taking over the failed node's IP address(es) and then taking over its filesystems. This results in "Missing File or Filesystem" errors for NFS clients since the clients can communicate with the backup server before the filesystems are available. Set to "true" to take over Filesystems before taking over IP address(es), which will prevent the above error. Set to "false" to keep the default order.Save the current PowerHA SystemMirror Cluster Topology and Resource information to file.Change or show a cluster snapshot name and description.Remove a cluster snapshot file.The Cluster Snapshot information will be verified, written to the PowerHA SystemMirror ODMs on the local node, then synchronized to the nodes defined therein.Convert an existing Cluster Snapshot file to a PowerHA SystemMirror Definition file that can be edited with your favorite XML editor or Online Planning Worksheets.The name of the snapshot. Two files will be created from this snapshot name. A file with an extension of ".odm" will contain all of the PowerHA SystemMirror ODM entries. A file with an extension of ".info" will contain information which may be useful in problem determination.A brief description of the snapshot configuration. Must be less than 256 characters in length.Allows the user to view and modify the snapshot name and description.Allows the user to remove a snapshot and associated .odm and .info files.Allows the user to change the current PowerHA SystemMirror Cluster Topology and Resource configuration to that described in the specified snapshot.The name of the snapshot (up to 64 characters). Two files will be generated using this name: a file containing all relevant PowerHA SystemMirror ODM data (.odm), and a file containing pertinent general information which may be useful in problem determination (.info). A default path of /usr/es/sbin/cluster/snapshots is prepended to the file name. This path may be modified by setting the environment variable SNAPSHOTPATH.A brief description of up to 255 characters may be added to the snapshot.The new name for the cluster snapshot.During the process of applying the cluster snapshot, a verification utility will be run to verify the snapshot configuration against the current hardware settings. By default, the apply process will exit once the verification fails. If the user wishes to ignore the failure and force the apply, set this field to 'Yes'.Select 'No' if you do not want verification errors to be corrected; any verification errors that result will be reported and cause verification to fail. Select 'Interactive' to have verification prompt you to correct resulting verification errors. Selecting 'Yes' will correct reported verification errors without prompting. 
Only certain errors will be corrected during verification and synchronization, please refer to the PowerHA SystemMirror documentation guide: Administration and Troubleshooting Guide.Refresh System Default Configuration with Active Configuration.Restore System Default Configuration from Active Configuration.Enter the realm/service pairs that will be startedDisplay or modify the operational parameters of the Topology Services and Group Services daemons.Specifies the time interval, in seconds, between heartbeat messages. The heartbeat interval and the fibrillate count determine how soon a failure can be detected. The time needed to detect a failure can be calculated using this formula: (heartbeat interval) * (fibrillate count) * 2 secondsSpecifies the number of successive heartbeats that can be missed before the interface is considered to have failed. The fibrillate count and the heartbeat interval determine how soon a failure can be detected. The time needed to detect a failure can be calculated using this formula: (heartbeat interval) * (fibrillate count) * 2 secondsIndicates the maximum length of the Topology Services log file. The value in this field indicates the maximum number of entries, or lines, that can be recorded to the log file. When the log file reaches this limit, the log file is copied to another file, then the log file is cleared, and subsequent entries are recorded to the start of the file.Indicates the maximum length of the Group Services log file. The value in this field indicates the maximum number of entries, or lines, that can be recorded to the log file. When the log file reaches this limit, the log file is copied to another file, then the log file is cleared, and subsequent entries are recorded to the start of the file.Enter the name of a custom-defined snapshot method (Maximum length is 64 bytes)Enter the description of a custom-defined snapshot method (Maximum length is 1024 bytes)Enter the full-path name of an executable file to be run as the custom-defined snapshot method (Maximum length is 1024 bytes)Enter a new custom-defined snapshot method name (Maximum length is 64 bytes)Select a custom-defined snapshot method name to be removedRestore the configuration in the DCD with the current active configuration stored in the ACDConfigure a site for PowerHA SystemMirror.Enter Site Names.Retrieve / Modify a Site Name.Remove a site from the cluster.Enter a site name. Site name is a unique alphanumeric string, up to 64 characters long.Enter the names of the nodes that are at this site.Configure sites within a geographic cluster.Define this site as the Dominant site for site isolation shutdown.Define backup communications type for site isolation detection.Automatically Import Volume Groups, if set to "true", causes the definition of any volume groups entered on the Volume Groups line or the Concurrent Volume Groups line to be imported to any resource group nodes that don't already have it. If this line is set to "false", no importing or updating of volume group information is automatically done. When Automatically Import Volume Groups is set to "true", the final state of the volume group will depend on the initial state of the volume group (varied on or varied off) and the state of the resource group to which the volume group is to be added (online or offline). The possible scenarios and corresponding actions are listed below. a. 
Volume Group varied on, Resource Group Offline: In this case, the volume group will be imported to nodes in the resource group (if required) and the volume group will be left varied on (same as the initial state). b. Volume Group varied on, Resource Group Online: importvg will be run on nodes in the resource group (if required) and the volume group would be varied off at the end of this process. When the user synchronizes changes to the cluster, the DARE process will vary on the volume group on the node where the resource group is online. c. Volume Group varied off, Resource Group Offline: importvg will be run on nodes in the resource group (if required) and the volume group would be varied off at the end of this process. (same as the initial state). d. Volume Group varied off, Resource Group Online: importvg will be run on nodes in the resource group (if required) and the volume group would be varied off at the end of this process. When the user synchronizes changes to the cluster, the DARE process will vary on the volume group on the node where the resource group is online.Set this flag to true if this network supports gratuitous arp. This will enable the IPAT via IP aliasing feature for this network. This field specifies what kind of NIM to use for the adapter - either adapter card (for a NIM specific to an adapter card) or adapter type (for a NIM to use with a specific type of adapter). This field specifies the next type of NIM to try to use if a more suitable NIM cannot be found. This field specifies the next generic NIM to try to use if a more suitable NIM cannot be found. Set this flag to true if this network supports IP loose source routing. If cluster services were stopped with the forced option, PowerHA SystemMirror expects all cluster resources on this node to be in the same state when cluster services are restarted. If you have changed the state of any resources while cluster services were forced down, you can use this option to have PowerHA SystemMirror re-acquire resources during startup.If set to "true", PowerHA SystemMirror will use a forced varyon to bring the volume group on line in the event that a normal varyon fails due to lack of quorum, and there is at least one complete copy of every logical volume available. This option is only meaningful for volume groups in which every logical volume is mirrored. A super strict mirror allocation policy is recommended; it is unlikely to be successful for other choices of logical volume configuration. See the section "Logical Volume Storage Overview" in the AIX Operating System and Devices manual for a discussion of these concepts, and the PowerHA SystemMirror Administration Guide for a discussion of the appropriate use of this facility.This utility displays the cluster applications and status in an application centric viewThe name of the Cluster Snapshot that will be converted.Specify the path to the file where the HACMP definition will be written. If a relative path name is given, the path name will be relative to the directory /var/hacmp/log. Maximum length is 128 characters.This is an informational field to describe the current cluster. Notes that users specify here will be stored in the PowerHA SystemMirror definition file and will appear in the Cluster Notes panel within Online Planning Worksheets.By default, the verification process will exit if one or more failures is detected. 
For experienced users in extreme cases, it is possible to ignore the failure and synchronize the configuration by setting this value to 'Yes'.Select a site name to associate with the service IP label. This service IP label will only be activated on nodes belonging to the associated site. Site-specific service IP labels are only activated when their resource group is in the ONLINE PRIMARY state on the associated site.Show the Current State of Applications and Resource Groups.NFSv4 needs stable storage to save the state information. - The path should belong to a filesystem managed by the resource group. - Don't NFS export this path. This is to avoid accidental corruption of the NFSv4 state by the user. - Any change to this field will force the user to bounce the resource group. If this attribute is set, then all PowerHA SystemMirror Application Servers in the current resource group will run in the specified WPAR. In addition, all service label and filesystem resources that are part of this resource group will be assigned to the specified WPAR. Changes to the WPAR Name attribute will not take effect until the next time that this resource group is brought online. Leave the WPAR Name attribute blank if you do not want this resource group to be WPAR enabled.Select one of these options to tell PowerHA SystemMirror how you want your applications to be monitored and managed by PowerHA SystemMirror: Automatically: If you select this option, PowerHA SystemMirror will bring resource groups online according to the resource groups' configuration settings and the current cluster state. PowerHA SystemMirror will then monitor the resource group(s) and application availability. - Note that PowerHA SystemMirror relies on application monitors to determine the state of any applications that belong to the resource group(s) as follows: If applications are running and the application monitors return an "application is running" return code, PowerHA SystemMirror will just monitor the applications (and not start them again). If there are NO application monitors configured, or if they return an "application is not running" return code, PowerHA SystemMirror will run the start_server script for any configured application servers. In this case you will need to ensure the start_server script correctly handles the case when an application server is already running, depending on how you want multiple application instances to be handled. Manually: If you select this option, PowerHA SystemMirror will not take any action on resource groups based on the fact that this node is being brought online. After you start cluster services, you can bring any resource group(s) online or offline, as desired, using the PowerHA SystemMirror Resource Group and Application Management SMIT menu. Manual with NFS crossmounts: If you select this option, PowerHA SystemMirror will not take any action on resource groups, similar to the "Manually" option, but this option takes care of enabling NFS cross-mounts if the resource groups containing file systems are active on any remote node.Select one of the following options based on how the custom verification method is written: Script - if the verification method is a script; Library - if the verification method is a library built with the help of the verification API.The full pathname of a script to use for dynamic node priority.The maximum time, in seconds, that the DNP script execution is allowed to take to return. It must be less than the configTooLong time. 
If the script execution time exceeds the DNP timeout, the script will be killed.Toggle between the following options and select a value from the list that defines the fallover policy of the resource group: FALLOVER TO NEXT PRIORITY NODE IN THE LIST. In the case of fallover, the resource group that is online on only one node at a time follows the default node priority order specified in the resource group's nodelist. BRING OFFLINE (ON ERROR NODE ONLY). Select this option to bring a resource group offline on a node during an error condition. This option is most suitable when you want to imitate the behavior of a concurrent resource group and want to ensure that if a particular node fails, the resource group goes offline on that node only but remains online on other nodes. Selecting this option as the fallover preference when the startup preference is not Online On All Available Nodes may allow resources to become unavailable during error conditions. If you do so, PowerHA SystemMirror issues a warning.Defining a network as "public", which is the default behavior, enables SystemMirror to take full advantage of that network for cluster communications. Designating a network as "private" will prevent SystemMirror from using that network for heartbeat and cluster communications. This option may be useful for applications which need maximum network performance, or for networks where you simply do not want SystemMirror to provide recovery. Note that although a private network is not actively used by SystemMirror, you may still see network events generated.This is helpful when cluster services are running and a Resource Group is in error state due to SCSI Persistent Reserve failure.LVM Preferred read options Roundrobin - Default policy for LVM preferred read copy; LVM will decide which mirror copy it should read. Favourcopy - Choose if you would like to read from Flash storage irrespective of resource group location SiteAffinity - Choose if you would like to always read from the local storage path, based on the resource group location.Choose associated storage location for the selected physical volumes. Flash Storage - Choose if the associated disks are coming from Flash storage. Site name - Choose associated site location for the mirror pool.LVM Preferred read options Roundrobin - Default policy for LVM preferred read copy; LVM will decide which mirror copy it should read. Favour copy - Choose if you would like to read from Flash storage irrespective of resource group location Site Affinity - Choose if you would like to always read from the local storage path, based on the resource group location.site name - Choose associated site for the mirror pool. Flash storage - Choose if the associated physical volumes are from Flash storage. LVM will decide which mirror copy it should use for read operation.LVM Preferred read options Roundrobin - Default policy for LVM preferred read copy; LVM will decide which mirror copy should be used to read the data. Favorcopy - Choose if you would like to read from Flash storage irrespective of where the resource group is online Siteaffinity - Choose if you would like to read from storage at the local site where the resource group is online.site name - Choose associated site for the mirror pool. Flash storage - Choose if the associated physical volumes are from Flash storage. Default - Choose default to remove the previously selected storage location. 
LVM will decide which mirror copy it should use for read operation.LVM Preferred read options roundrobin - Default policy for LVM preferred read copy, LVM will decide which mirror copy should be used to read the data. favorcopy - Choose if you would like to read from Flash storage irrespective of where resource group is online siteaffinity - Choose if you would like to read from storage at the local site where resource group is online. siteaffinity option is available only for site based clusters.Toggle between the following options and select a value from the list that defines the startup policy of a resource group: ONLINE ON HOME NODE ONLY: The resource group will be brought online ONLY on its home node (the first node in the participating node list) during the resource group startup. This requires the highest priority node to be available. ONLINE ON FIRST AVAILABLE NODE: The resource group activates on the first participating node that becomes available. If you have configured the settling time for the resource groups, it will only be used for the resource groups with this startup policy. ONLINE USING DISTRIBUTION POLICY: The resource groups with this startup policy will be distributed at resource group startup. Resource groups will be distributed such that only one resource group is activated by a participating node. ONLINE ON ALL AVAILABLE NODES: The resource group is brought online on ALL participating nodes. If you select this option for the resource group, ensure that resources in this group can be brought online on multiple nodes simultaneously.When you synchronize cluster topology, PowerHA SystemMirror determines what configuration changes have taken place and checks for various errors before changing the configuration on any node. If you choose to emulate synchronization, PowerHA SystemMirror determines what configuration changes are going to take place and checks for errors, but the new configuration will not take effect.By default, when there has been a change to the cluster resources, the affected resources may be unconfigured and perhaps reconfigured during the period of synchronization via a set of scripts: reconfig_resource_release, reconfig_resource_acquire, and reconfig_resource_complete. However, if the cluster administrator so desires, setting this flag to 'No' will cause the affected resources to be removed from the PowerHA SystemMirror configuration, but will not cause any scripts to be run which would configure/unconfigure a resource.Issuing this command will force any locks set by a Dynamic Reconfiguration event to be released.Start and stop cluster services: Cluster Manager (clstrmgr), Cluster SMUX Peer (clsmuxpd), and Cluster Information (clinfo).Provides a menu for adding, removing, changing, and listing users in the cluster.Provides a menu for adding, removing, changing, and listing groups in the cluster.Allows viewing of the cspoc.log file. This file contains information about cluster-wide command execution and error reporting. Its default location is /tmp/cspoc.log.Allows the user to watch the cspoc.log file as events are appended to it. This log file contains information about cluster wide command execution and error reporting. Its default location is /tmp/cspoc.log.Creates a user account with the login name and other attributes that you specify on all nodes in the cluster. The specified user must not exist on any nodes in the cluster.Shows the attributes for a specific user. 
If you have the correct access privileges, you can change certain attribute values of the user on all nodes in the cluster.Removes a user account from the cluster. Removing a user account deletes the attributes defined for a user, but does not remove the user's home directory or files the user owns. By answering Yes in the Remove Authentication Information? option (in the displayed dialog), you can remove the user's password and other user authentication information.Displays users already existing in the cluster.Displays groups already existing in the cluster.Creates a new collection of users that can share access to specific protected resources. The group is created on all nodes in the cluster. The group must not exist on any nodes in the cluster.Shows the attributes for a specific group. If you have the correct access privileges, you can change certain attribute values of the group.Removes a group from the cluster. Removing a group deletes the group's attributes from the group files, but does not remove the users (who are members of the group) from the cluster. The group must already exist in the cluster and you must have the correct access privileges to remove groups from the cluster. If the group is the primary group for any user, it cannot be removed unless you first redefine the user's primary group (use the Change/Show Characteristics of a User option, which alters this information in the /etc/passwd file).Displays information about volume groups.Changes the characteristics of a logical volume.The cl_rmlv command removes a logical volume. The logical volume must be closed. For example, if the logical volume contains a file system, it must be unmounted. Removing the logical volume, however, does not notify the operating system that the file system residing on it has been destroyed. The command cl_rmfs updates the /etc/filesystems file.Provides menus to list, add, change, and remove file systems.Lists all logical volumes, sorted by volume group.Lists all file systems and displays characteristics of file systems: name, mount point, type, size, and automatic mounts.Lets you change or show characteristics of file systems.Lets you choose the nodes on which to stop cluster services.Lets you choose the nodes on which to start cluster services.Executes the command on all nodes participating in a resource group.The cl_updatevg command causes the remote node to export and import the LVM data, whether or not the time-stamp associated with the LVM component is the same.This command can be executed only on inactive volume groups. Lists only volume groups that are inactive everywhere in the cluster (not varied on). An inactive volume group is not available for use.Select this menu to move a resource group to the ONLINE state. PowerHA SystemMirror will activate all the resources associated with the resource group while attempting to bring the resource group online.Select this menu to move a resource group to the OFFLINE state. PowerHA SystemMirror will stop all the resources associated with the resource group while attempting to bring the resource group offline.Select this menu to move a resource group or a set of resource groups from one node to another node. 
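As a command-line illustration of the resource group move entries above: the same user-requested rg_move that these menus drive can also be requested with the clRGmove utility shipped with PowerHA SystemMirror. The sketch below is only an assumption-laden example, not a documented procedure: the install path, the flag meanings (-g for the resource group, -n for the destination node, -m to request a move) and the names rg_db01 and nodeB are placeholders or recollections that should be verified against the clRGmove documentation for your release.

    # Hypothetical sketch of a user-requested resource group move (ksh).
    # rg_db01 and nodeB are example names; flag meanings are assumptions.
    /usr/es/sbin/cluster/utilities/clRGmove -g rg_db01 -n nodeB -m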
The resource group that will be brought online via the User-Requested rg_move event.The resource group that will be brought offline via the User-Requested rg_move event.Resource group(s) selected to be moved to another node upon user's request.The node on which the resource group will be brought online.The node on which the resource group will be brought offline.The node to which the resource group will be moved.Select this menu to move a resource group or a set of resource groups from one site to another. Select this menu only if you have defined PowerHA SystemMirror sites and resource groups with non-ignore site policy. Select this menu to move a resource group or a set of resource groups from one node to another.Select this menu to move a resource group or a set of resource groups from one node to another. If PowerHA SystemMirror sites are defined, this selection will allow you to move a resource group or a set of resource groups within the site or to the peer site.Resource group(s) selected to be moved to another site upon user's request.The site to which the resource group will be moved.Displays the current states of applications and resource groups for each resource group. - For non-concurrent groups, PowerHA SystemMirror shows only the node on which they are online and the application states on this node - For concurrent groups, PowerHA SystemMirror shows ALL nodes on which they are online and the application states on the nodes - For groups that are offline on all nodes, node names are not listed; only the application states are displayed.Create a new filesystem on an existing volume group. The filesystem can be created on an existing logical volume, or a new logical volume can be created to hold it. Not supported when cluster services are active.Removes the volume group from all applicable nodes of the cluster. Deletes all logical volume data on the physical volumes before removing the volume group. Note that if any logical volume spans multiple physical volumes, the removal of any of the physical volumes may jeopardize the integrity of the entire logical volume.Path to run the Event Emulation. 
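Relating to the entry above that displays the current states of applications and resource groups: the same information can usually be checked from the command line before requesting any of the move, online, or offline operations described in this section. This is a minimal sketch under stated assumptions; it presumes the clRGinfo utility delivered with PowerHA SystemMirror is installed in its usual location, and rg_db01 is a hypothetical resource group name.

    # Minimal sketch (ksh): show resource group states before acting on them.
    # The install path is assumed; rg_db01 is an example group name.
    PATH=$PATH:/usr/es/sbin/cluster/utilities
    clRGinfo              # all resource groups and their per-node states
    clRGinfo rg_db01      # one (hypothetical) resource group only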
Emulate a cluster event without affecting the state of the cluster.Path to run the emulation of a node_up eventPath to run the emulation of a node_down eventPath to run the emulation of a network_up eventPath to run the emulation of a network_down eventPath to run the emulation of a fail_interface eventPath to run the emulation of a join_interface eventPath to run the emulation of a swap_adapter eventNode name on which to emulate the node_up eventNode name on which to emulate the node_down eventNetwork name on which to emulate the network_up eventThe node name for a network_up emulation must be available to the network entered.Network name on which to emulate the network_down eventThe node name for a network_down emulation must be available to the network entered.Node name for fail_interface eventIP Label for the fail interface event must be available on the node on which the emulation is taking place.Network name on which to emulate the fail_interface eventNode name on which to emulate the join_interface eventIP Label for the join_interface event must be available on the node on which the emulation is taking place.Network name on which to run the join_interface eventNode name on which to run the swap_adapter eventNetwork name on which to run the swap_adapter eventIP Label to swap must be available on the node on which the emulation is taking place.Service IP Label to swap must be on the same network and available to the same node as the boot-time IP Label.Allow the users to test error notification methods by emulating an error log entry. The OS will run the notification method once the error log entry is created.Add, Change, Show, or Remove a Global Network for a selected Local Network.Enter a new or existing Global Network name to add this Local Network to the specified Global Network. Or erase the Global Network name to remove this Local Network from the Global Network.Retrieve the current cluster security mode setting. You can overwrite the current value to change the authentication method used by the cluster commands.Set this field to Kerberos to use Kerberos authentication security methods. Set this field to Standard to use the following authentication methods: 1. HACMPadapter ODM entries. 2. PowerHA SystemMirror internal rhosts. This file is used for verification if the source IP address is not found in HACMPadapter ODM and resides in /usr/es/sbin/cluster/etc/ directory.Select this option to define and work with Highly Available Communications Links.Select this option to define the Communications Link.Select this option to define a pre-defined communications link to PowerHA SystemMirror, to change the characteristics of the communications link or to remove the pre-defined link from PowerHA SystemMirror's control.This option allows you to place a pre-defined CS DLC under the control of PowerHA SystemMirror.This option allows you to change the way in which the Communications Link is defined to PowerHA SystemMirror.This option removes the Communications Link from PowerHA SystemMirror control.The CS DLC name is used as the HA Communication Link name.Select the CS DLC to be made Highly Available.This is the symbolic name for the Communications Link. It also represents the name of a DLC from CS.If you want any defined CS Ports started automatically they can be defined here. This field is optional.If you want any defined CS Link Stations started automatically they can be defined here. This field is optional.If you want any defined CS Sessions started automatically they can be defined here. 
This field is optional.If you want any application layer processes to be started that will use the communications link, this field should contain the full path name of the application startup file.Select the symbolic name of the link you wish to change.Select the symbolic name of the link you wish to remove.Enter Communication Links to be started. These are the Communication Links defined in the "Configure Highly Available Communication Links" section.This selection provides you with screens to add, change/show, remove, and list highly available SNA-over-LAN, SNA-over-X.25, or X.25 communication links.This selection provides you with a menu of configuration screens for system level configuration of X.25 adapters and SNA links. You must configure X.25 adapters for the operating system before defining highly available X.25 links in PowerHA SystemMirror; you must configure SNA links in Communications Server for AIX before defining highly available SNA links in PowerHA SystemMirror.Select this option to define the Communications Link.This selection provides you with screens to manage highly available X.25, SNA-over-X.25, or SNA-over-LAN communication links and the adapters that they use.This selection provides you with screens to make your X.25 adapters known to PowerHA SystemMirror, change PowerHA SystemMirror's understanding of them, make them unknown to PowerHA SystemMirror, and to list those that are known to PowerHA SystemMirror. You must make your X.25 adapters known to PowerHA SystemMirror before you can use them in highly available X.25 or SNA-over-X.25 links.This selection provides you with a dialog to make your X.25 adapters, already configured and defined for the operating system, known to PowerHA SystemMirror.This selection provides you with a dialog to show PowerHA SystemMirror's knowledge of a particular X.25 adapter and, if you wish, to change it.This selection provides you with a dialog to remove an X.25 adapter as known to PowerHA SystemMirror.This selection provides you with a list of all the X.25 adapters known to PowerHA SystemMirror, including their characteristics.Enter the name by which you want the adapter to be known to PowerHA SystemMirror throughout the cluster, e.g. ADAP_A0. This name must be unique among all Communication Adapters within the cluster. (Maximum size is 64 characters.)Enter the name of the cluster node on which the adapter is installed and configured. The node must already be a member of the cluster.Enter the device-file name of the driver used by this adapter/port, e.g. hdlc0. The driver must exist on the specified node.Set this field to "true" if you want to be able to enter this adapter as available to more than one highly available communications link. Set this field to "false" if you know that only one communication link should be able to use this adapter.Set this field to one or more names of Highly-Available Communication Links (separated by blanks) if you know that this adapter should only be used by those Links. Otherwise, leave it blank.This is the current name by which PowerHA SystemMirror knows this adapter. This field cannot be edited. Use the "New Name" field to change the name.For cards which use a twd driver (such as the Artic960hx), enter the adapter port number. For cards which use an hdlc this field is not required and will be ignored.This is the name of the cluster node on which the adapter is installed and configured. This field cannot be edited. 
In order to change the node with which an adapter name is associated, you must first delete the existing adapter definition.This selection provides you with screens to add, change/show, remove, and list highly available SNA-over-LAN, SNA-over-X.25, or X.25 communication links.This selection provides you with a dialog to make your SNA-over-LAN, SNA-over-X.25, or X.25 communication link, already configured and defined for the operating system, highly available.This selection provides you with a dialog to show the characteristics of a particular highly available SNA-over-LAN, SNA-over-X.25, or X.25 communication link, and, if you wish, to change them.This selection provides you with a dialog to make a particular highly available SNA-over-LAN, SNA-over-X.25, or X.25 communication link no longer highly available.This selection provides you with a list of all your highly available SNA-over-LAN, SNA-over-X.25, or X.25 communication links, including their characteristics.This selection provides you with a dialog to make your X.25 communication link, already configured and defined for the operating system, highly available.Enter the name by which you want the link to be known to PowerHA SystemMirror throughout the cluster. This name must be unique among all Highly Available Communication Links, regardless of type, within the cluster. (Maximum size is 64 characters.)This is the service type of this communication link. This field is pre-selected and cannot be edited.Enter the PowerHA SystemMirror names (not the device names) of the Communication Adapters that you want this link to be able to use.Enter the name of the file that this link should use to perform customized operations when this link is started and/or stopped.This selection provides you with a dialog to make your SNA-over-LAN communication link, already configured and defined for the operating system, highly available.Enter the name of the SNA DLC that this link should use.Enter the name of any SNA Ports that this link should start automatically.Enter the name of any SNA Link Stations that this link should use.This selection provides you with a dialog to make your SNA-over-X.25 communication link, already configured and defined for the operating system, highly available.This selection provides you with a dialog to show the characteristics of a particular highly available X.25 communication link, and, if you wish, to change them.This is the current name by which this link is known to PowerHA SystemMirror throughout the cluster. This field cannot be edited.This selection provides you with a dialog to show the characteristics of a particular highly available SNA-over-LAN communication link, and, if you wish, to change them.Enter the name of any SNA Ports that this link should use.This selection provides you with a dialog to show the characteristics of a particular highly available SNA-over-X.25 communication link, and, if you wish, to change them.Select the symbolic name of the HA communication adapter to remove.Select the symbolic name of the HA communication adapter to change/show.Enter the X.25 Port designation you wish to use for this link, for example: sx25a0. The port name must be unique across the cluster.Enter the X.25 Address/NUA that will be used by this link.Enter the X.25 Network ID. The default value is 5, which will be used if this field is left blank.Enter the X.25 Country Code. 
The system default will be used if this field is left blank.Provides menus to list, create, set the characteristics of, extend, synchronize, reduce, import, mirror, or unmirror a Volume Group.Provides menus to list, add, show, change, or remove Logical Volumes.Synchronize all logical volume mirrors in a Volume Group.Provides menus for extending or reducing a Volume Group.Import a Volume Group on all nodes belonging to a resource group.Mirror all logical volumes in a Volume Group.Unmirror all logical volumes in a Volume Group.Create a new Logical Volume within a Volume Group.Make changes to an existing Logical Volume.Rename a logical volume.Increases the size of a logical volume by adding unallocated physical partitions from within the volume group.Increases the number of physical partitions per logical partition within a logical volume.Deallocates copies from each logical partition in a logical volume.Synchronize by specifying volume group name.Synchronize by specifying logical volume name.Displays information about concurrent volume groups.Provides menus to extend, reduce, import, mirror, or unmirror a Concurrent Volume Group.Provides menus for extending or reducing a Concurrent Volume Group.Import a Concurrent Volume Group on all nodes belonging to a resource group.Mirror all logical volumes in a Concurrent Volume Group.Unmirror all logical volumes in a Concurrent Volume Group.Provides menus to list, add, copy, uncopy, show, change, or remove Concurrent Logical Volumes.Lists all concurrent logical volumes, sorted by volume group.Create a new Concurrent Logical Volume within a Concurrent Volume Group.Make changes to an existing Concurrent Logical Volume.Increases the number of physical partitions per logical partition within a concurrent logical volume.Deallocates copies from each logical partition in a concurrent logical volume.Displays the characteristics and status of the Logical Volume or lists the logical volume allocation map for the physical partitions on the Physical Volume.Removes a Concurrent Logical Volume.Synchronize all logical volume mirrors in a Concurrent Volume Group.Synchronize by specifying concurrent volume group name.Synchronize by specifying logical volume name.Options for managing critical volume groups, including adding, listing, and removing them, and configuring failure actions.Select a volume group to mark it as Critical. This option modifies the volume group to be monitored for continuous access.Lists all the critical volume groups present in the cluster.Removes the volume group from the list of Critical Volume Groups and changes its characteristics back to those of a non-critical volume group.Actions that can be configured for a critical Volume Group if it is lost.Swap the IP address on a service adapter with the IP address on an available network interface on the same network.The name of the network on which the Swap Communication Interface will take place.The IP address that will be swapped to the available network interface adapter.The IP address of the network interface to which the IP address will be swapped.Fast Connect resources will be displayed as a pick list. This list will include all the file and print shares which are common across all nodes that participate in this resource group. Note: In order for Fast Connect file shares to be highly available, they must be configured as filesystem resources within a resource group.Create a Volume group by specifying a set of cluster nodes and disks. 
The cluster nodes and the disks shared by them will be displayed as a pick list.Create a Concurrent Volume group by specifying a set of cluster nodes and disks . The cluster nodes and the concurrent capable disks shared by them will be displayed as a pick list.Specifies the volume group name. The name must be unique across the cluster. The name must follow the rules for volume group names as specified in the documentation for the mkvg command. If you do not specify a name, HACMP will select a unique name of the form 'vgxx' where 'xx' are decimal digits.Specifies the number of megabytes in each physical partition, where the Size value is expressed in units of megabytes from 1 through 1024. The Size value must be equal to a power of 2 (example 1, 2, 4, 8). The default value is 4 megabytes. The default number of physical partitions per physical volume is 1016. Thus, a volume group that contains a physical volume larger than 4.064 gigabytes must have a physical partition size greater than 4 megabytes.Specifies the major number of the volume group. The system kernel accesses devices, including volume groups, through a major and minor number combination. Changing the volume group major number may result in the command not being able to execute successfully on a node that does not have that major number currently available. Please check for a commonly available major number on all nodes before changing this setting.Specifies the PVIDs of disks that are currently available for creating a volume groupSpecifies the PVIDs of concurrent disks that are currently available for creating a volume groupSpecifies the cluster nodes on which the volume group is to be created.Specifies the cluster nodes on which the file system is to be created.Creates a File System by specifying a set of cluster nodes and a logical volume. The cluster nodes and the logical volumes shared by them will be displayed as a pick list.Specifies the logical volume on which the file system is to be created.Add, manage and remove the disks, data path devices, and cross-site LVM mirroring for your PowerHA SystemMirror nodes.Adds a disk to a specified cluster node(s).Removes a disk from a specified cluster node(s). When deleting a disk, you can make the disk unavailable, but still defined, or remove the disk definition from the Customized database.Select the nodes in which you want to remove disks for. This will generate a list of PVIDs of the common disks for the selected nodes.The list of PVIDs of the disks available from the nodes selected on the previous screen.The volume group selected from the prior pick list of volume groups known across the clusterThe resource group (if any) that owns the selected volume groupThe list of cluster node names to which the disk is to be added and is physically attached.Type of device to be added. In this case, a disk.A list of node names and the associated parent device for this disk. The parent device identifies the logical name of the adapter device to which the disk is or will be attached. Each node name is separated from its associated parent adapter by a comma. Each node/parent adapter pair is separated with a space. For example: NodeA,scsi0 NodeB,scsi1 NodeC,vscsi0Automatically configure error notification for certain resources defined to PowerHA SystemMirror.List current error notify settings for resources defined to the PowerHA SystemMirror cluster, rootvg, and the SP Switch. 
Only those resources configured via automatic error notification are listed.Add error notify entries for resources defined to the PowerHA SystemMirror cluster, rootvg, and the SP Switch. A determination is made as to whether or not each resource is a single point of failure in the cluster. A notification method is selected based on this determination, and an error notify entry is added for this resource.Delete error notify entries for resources defined to the PowerHA SystemMirror cluster, rootvg, and the SP Switch. Only those error notify entries configured via automatic error notification are deleted.Choose this option to add, manage or remove the disks, data path devices and manage mirror pools.Rate at which to send heartbeat packets to other nodes in the cluster. Number of seconds between heartbeats ("keep alive" messages) sent out by the Cluster Manager. A changed value in this field will be ignored if the Failure Detection Rate is not set to "Custom". The minimum value is 10. If a value less than this is entered, the value 10 will be used instead. The adapter failure detection time in tenths of seconds will be computed as Failure Cycle * Heartbeat Rate * 2 Note: The Failure Detection Rates should be kept equal for all NIMs in a cluster. Note: When trying to eliminate deadman switch timeouts, the Failure Cycle should be tuned before changing the Heartbeat Rate.The system is represented in the customized device configuration database as a device named sys0 and is described as the System Object. Various parameters of the operating system appear as attributes to this device. The user can view and modify the I/O pacing high and low water marks in this panel.Change/Show the frequency with which I/O buffers are flushed.The number of seconds between I/O buffer flushes. The OS default is 60. For nodes in PowerHA SystemMirror clusters, the recommended frequency is 10.The default values are to be changed only after a system performance analysis indicates doing so would improve performance. General information: Indicates the high water mark for pending write-behind I/Os per file. This attribute, along with the low water mark for pending write-behind I/Os per file attribute, may be used in a multiprogramming environment to balance and control the I/O activity associated with file system write behind. The high water mark specifies the number of write-behind I/Os per file at which additional blocks write. The low water mark specifies the number of pending write-behind I/Os per file at which previously blocked writes are allowed to proceed. The possible values for the high water mark range from 0 to 32767. The possible values for the low water mark range from 0 to 32766. If set to a value other than zero, the low water mark value must be less than the high water mark value. If the high water mark is set at 0, the low water mark must also be 0. When both values are 0, no controls are placed on file system write-behind activity.The default values are to be changed only after a system performance analysis indicates doing so would improve performance. General information: Indicates the low water mark for pending write-behind I/Os per file. This attribute, along with the high water mark for pending write-behind I/Os per file attribute, may be used in a multiprogramming environment to balance and control the I/O activity associated with file system write behind. The high water mark specifies the number of write-behind I/Os per file at which additional blocks write. 
The low water mark specifies the number of pending write-behind I/Os per file at which previously blocked writes are allowed to proceed. The possible values for the high water mark range from 0 to 32767. The possible values for the low water mark range from 0 to 32766. If set to a value other than zero, the low water mark value must be less than the high water mark value. If the high water mark is set at 0, the low water mark must also be 0. When both values are 0, no controls are placed on file system write-behind activity.This is the symbolic name for an application server. The application monitor will have the same name.The application Server will launch one or more processes. Enter the names of the processes here. Each Process name must be unique.Monitor Mode: Select the mode in which the application monitor monitors the application: - STARTUP MONITORING. In this mode the application monitor checks that the application server has successfully started within the specified stabilization interval. Select this mode if you are configuring an application monitor for an application that is included in a parent resource group (in addition to other monitors that you may need for dependent resource groups). - LONG-RUNNING MONITORING. In this mode, the application monitor periodically checks that the application server is running. The checking starts after the specified stabilization interval has passed. This mode is the default. - BOTH. In this mode, the application monitor checks that within the stabilization interval the application server has started successfully, and periodically monitors that the application server is running after the stabilization interval has passed. The username of the owner of the processes (usually root).The number of instances of a process to monitor. Default is 1. If more than one process is listed in the Process to Monitor, this must be 1.Stabilization Interval: Specify the time (in seconds) in this field. Depending on which monitor mode is selected, PowerHA SystemMirror uses the stabilization period in different ways: - If you select the LONG-RUNNING mode for the monitor, the stabilization interval is the period during which PowerHA SystemMirror waits for the application to stabilize, before beginning to monitor that the application is running successfully. For instance, with a database application, you may wish to delay monitoring until after the start script and initial database search have been completed. Experiment with this value to balance performance with reliability. - If you select the STARTUP MONITORING mode, the stabilization interval is the period within which PowerHA SystemMirror monitors that the application has successfully started. When the time expires, PowerHA SystemMirror terminates the monitoring of the application startup and continues event processing. If the application fails to start within the stabilization interval, the resource group's acquisition fails on the node and PowerHA SystemMirror begins resource recovery actions to acquire the group on another node. The number of seconds you specify should be approximately equal to the period of time it takes for the application to start. This depends on the application you are using. - If you select BOTH as a monitor mode, the application monitor uses the stabilization interval to wait for the application to start successfully. It uses the same interval to wait until it starts checking periodically that the application is successfully running on the node. 
In most cases, the value should NOT be zero. The number of times to restart this application server (default is 3).The number of seconds (in addition to the Stabilization Interval) that the application must remain stable prior to resetting the Restart Count. It must be at least 10% longer than (Restart Count)*(Stabilization Interval).The action to take once all restart attempts have been exhausted. Choices are 'notify' and 'fallover'. If notify is selected, no further action will be taken after running the notify method. If fallover is selected, the resource group containing the monitored application will be moved to another node in the cluster.The full pathname of a user defined method to perform notification when a monitored application fails. This method will execute each time an application is restarted, fails completely, or falls over to the next node in the cluster. Configuring this method is strongly recommended.Name of the script to run to stop the application. Default is the application server stop method.Name of the script which starts the application. Default is the application server start method.The full pathname of a method to check the application status.The monitor method will be run periodically at this interval (in seconds). Also if a monitor method takes longer than this interval to determine the status of its application, the monitor method will be stopped.The signal sent to stop the Monitor Method if it doesn't return within Monitor Interval seconds. The default is SIGKILL(9).The number of seconds (in addition to the Stabilization Interval) that the application must remain stable prior to resetting the Restart Count. It must be at least 10% longer than (Restart Count)*(Monitor Interval + Stabilization Interval).Select an Application Monitor from the list.This is the symbolic name for an Application Monitor. The Application Monitor will have the same name.Choose an Application Monitor from the list to Remove.Choose an Application Monitor from the list to Resume.Choose an Application Monitor from the list to Suspend.Choose an Application Server from the list to Monitor.Choose this option to select an Application Monitor to remove.Choose this option to modify or view an Application Monitor.Choose this option to select an Application Server to Monitor. Each Application Server can have only one Monitor definedChoose this option to resume previously suspended Application Monitoring. Note that all configured Application Monitors will be listed, including those which have not been suspended. (It does no harm to Resume a Monitor which is not suspended)Choose this option to suspend Application Monitoring. Note that all configured Application Monitors will be listed, including those which are already suspended. (It does no harm to suspend a Monitor which is already suspended)Choose this option to add, change, view or remove user-defined Application Monitors. Custom monitoring lets you define your own method to test the status of your application. To monitor simply whether an application is running, use a Process Monitor.Choose this option to add, change, view or remove Process Application Monitors. A Process Monitor tests whether an application is running. If you need to test the status of your application in a different way, use a Custom Monitor. if one or more application monitors are configured to monitor an application server, they will also be used for monitoring the successful startup of the application server when the resource group is acquired. 
If any of the application monitors for an application server reports during resource group acquisition that the application server has not started correctly, then the resource group is marked to be in ERROR state, and selective fallover for it is started, unless selective fallover is disabled.Select this option to define and work with Shared Tape Resources.Add a new tape resource to the configuration.Change / show an existing tape resource.Remove a tape resource from the configuration.This is the symbolic name for the tape resource.Script to execute to start the tape resource.Script to execute to stop the tape resource.Description of the tape resource.The name of the tape special file. If the first character is not a '/', '/dev/' is prepended to the path.Should tape start processing be synchronous?Should tape stop processing be synchronous?Enter tape resources that will be started. These are the tape resources defined in the "Configure Tape Resources" section.Select 'no' if you want to run all verification checks that apply to the current PowerHA SystemMirror configuration. Selecting 'yes' will run only those checks that directly relate to the PowerHA SystemMirror configuration information that you have changed. If you have made changes to the Operating System configuration on your nodes, it is recommended that you select 'no' to ensure all changes that could potentially affect PowerHA SystemMirror are considered during verification. Selecting 'yes' has no effect on an inactive cluster.Verify and Synchronize the cluster configuration. Verify and Synchronize the cluster configuration while cluster services are running. Select the verification mode to use. Select 'Verify' if you only want to verify the configuration but not synchronize the configuration to other nodes in the cluster. Select 'Synchronize' if you have already verified the cluster configuration and want to propagate the unchanged configuration to all nodes in the cluster. Selecting 'Both' will perform both verification and synchronization. The Verbose Output option allows you to select whether PowerHA SystemMirror verification should provide verbose console output. Select 'Verbose' if you want to see the display of all output that normally goes to the clverify.log file.Select the Application Server with which this monitor is to be used.Choose this option to resume previously suspended Application Monitoring. Note that all Application Monitors for the selected Application will be resumed, including those which have not been suspended. (It does no harm to Resume a Monitor which is not suspended)Choose this option to suspend Application Monitoring. Note that all configured Application Monitors will be listed, including those which are already suspended. (It does no harm to suspend a Monitor which is already suspended)Choose this option to configure a "Process" type Application Monitor. Process Application monitors let you define a process and process owner. The monitor will detect when that process dies or goes below a (configurable) number of instances. Choose this option to configure a "Custom" type Application Monitor. Custom Application monitors let you define your own monitoring method - PowerHA SystemMirror invokes your method on a regular interval and when your method indicates a problem, PowerHA SystemMirror responds by recovering the application. Choose an Application Server from the list. All monitors for this Application server will be suspended. Choose an Application Server from the list. 
All monitors for this Application server will be resumed. Select the Application Server with which this monitor is to be used. You can leave this field empty (and specify the server later) or you can select one or more application servers from the list. Select the Application Monitors to use with this Application. Note that you can have more than one monitor for each application. Select only "None" to disable monitoring for the application controller. To Verify and Synchronize the Standard Configuration: After all resource groups have been configured, verify the cluster configuration on all nodes to ensure compatibility. If no errors are found, the configuration is then copied (synchronized) to each node of the cluster. If you synchronize from a node where Cluster Services are running, one or more resources may change state when the configuration changes take effect. It is possible to have multiple instances of an Application Monitor. Select the Resource Group associated with the Application Monitor that you wish to suspend. It is possible to have multiple instances of an Application Monitor. Select the Resource Group associated with the Application Monitor that you wish to resume. This option lets you control how your application controller start script is executed. The default is to execute the script in the background and ignore the exit status. You can change this option to use foreground execution. With foreground execution, your application controller start script will be called synchronously during the node up event, and execution of the event will stop until the controller start script completes. If the start script exits with a non-zero exit code, the node up event will fail. The instance count defines the minimum number of instances of the process that must be active. No action is taken if the number of running instances exceeds the configured instance count. Default is 1. If more than one process is listed in the Process to Monitor, the Instance Count should be given as 1. If you specify more than one process to monitor, the instance count will be ignored and will be limited to 1 instance.A set of checks related to disk management and to detecting inconsistencies in AIX level parameters across the cluster nodes. These checks might increase the time required for the verification process.The number of times to retry this application monitor before declaring the custom application monitor failed. The default value is 0.Use this option to enable monitoring of CPU usage for an application process. Valid values: yes, no. Default value: no. When the value is set to 'yes', monitoring of CPU usage will be enabled. When the value is set to 'no', monitoring of CPU usage will be disabled.Absolute path of the application process whose CPU usage is to be monitored.Interval at which CPU usage of an application process is monitored. Valid range is 1 minute to 120 minutes, and the default value is 10 minutes.Select the Application Monitors to use with this Application. Note that you can have more than one monitor for each application. Select only "None" to disable monitoring for the application controller. Enable Availability Metrics logging every time the monitor is executed. The default setting is to disable this logging. Choose "Yes" to enable Availability Metrics logging every time the monitor is executed. Choose "No" to disable Availability Metrics logging. The default is No. Creates a File System by specifying a volume group and a set of sharing cluster nodes.
The available volume groups and the sharing cluster nodes will be displayed as a pick list.Specifies the list of available volume groups and the sharing cluster nodes on which a file system can be created.Specifies the list of available volume groups to mirror, their owning resource group (if any) and the cluster nodes on which they are known.Specifies the list of available volume groups to unmirror, their owning resource group (if any) and the cluster nodes on which they are known.Provides a menu for changing a user's password in the cluster. A password is a string of characters used to gain access to a system. You must be the root user to change a user's password. When changing a user's password, you must follow the password restrictions and conventions for the system as specified in each user's stanza in the /etc/security/user configuration file.Provides a dialog for changing a user's password in the cluster. You must be the root user to change a user's password.Specifies the user whose password you want to change. A password is a string of characters used to gain access to a system. You must be the root user to change a user's password. When changing a user's password, you must follow the password restrictions and conventions for the system as specified in each user's stanza in the /etc/security/user configuration file. To change the password for a user, type in the user's name or use the List box and select a user from the choices displayed. You will be prompted to enter the user's new password and then prompted again to re-enter the new password.Specifies whether the "force change" flag should be set in the /etc/security/passwd file, which will require the user to change the password on each node at the next login. If set to 'true', the user will be required to change the password again. This is the operating system default behavior. If set to 'false', the user will not be required to change the password on the next login.The Custom Disk Methods path allows you to manage the methods used by PowerHA SystemMirror to take over disks for which there is no native support.Specify the custom disk methods to be used for a specific disk type. Show and change the custom disk methods to be used for a specific disk type. Remove the specification of custom disk methods to be used for a specific disk type. Enter the identifier for this particular disk type. This is the PdDvLn field of the CuDv entry for this disk. It can be retrieved from the CuDv ODM class with the command: odmget -q "name = " CuDv. Enter the full path name of a routine that PowerHA SystemMirror should use to identify ghost disks, or use F4 to select one of the built-in methods. Ghost disks are duplicate profiles created during configuration processing. They must be removed in order to allow the operating system to vary on the volume group containing the original disk profiles.Enter the full path name of a routine that PowerHA SystemMirror should use to determine if another node holds a reserve on this disk, or use F4 to select one of the built-in methods. A reserve restricts access to the disk to a specific node. HACMP must take special steps to remove that reserve, in the event that the node which established the reserve has failed.Enter the full path name of a routine that PowerHA SystemMirror should use to break a reserve on this disk, or use F4 to select one of the built-in methods. A reserve restricts access to the disk to a specific node.
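As a hedged illustration of how the PdDvLn disk type identifier mentioned above can be found; the disk name hdisk3 is an assumption chosen for the example, not a value from this help text.

# Sketch: querying the CuDv ODM class for an assumed disk, hdisk3
odmget -q "name = hdisk3" CuDv
# In the output, the PdDvLn field (for example, PdDvLn = "disk/scsi/scsd")
# is the identifier to enter for the disk type.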
HACMP must take special steps to remove that reserve, in the event that the node which established the reserve has failed.Some methods to manipulate disks may be safely run in parallel. This may provide a performance advantage in those configurations with a large number of disks.Enter the full path name of a routine that PowerHA SystemMirror should use to make this disk available, or use F4 to select one of the built-in methods. Once a disk is accessible - any reserve has been removed - it must be made available in order for the operating system to access that disk.Select the identifier for the particular disk type whose custom methods you want to remove. This identifier is the PdDvLn field of the CuDv entry for this disk. It can be retrieved from the CuDv ODM class with the command: odmget -q "name = " CuDv. Select the identifier for the particular disk type whose custom methods you want to show or change. This identifier is the PdDvLn field of the CuDv entry for this disk. It can be retrieved from the CuDv ODM class with the command: odmget -q "name = " CuDv. Dynamic Node Priority Policies set the priority order of nodes to be used in determining the destination of a Resource Group during an event which causes the Resource Group to either move or be placed online. A default value, (none), uses the static priority order as listed in the Participating Node Names resource field for a Resource Group. Dynamic node priority policies, in which the destination node is determined at run-time, are selected via these menus. Please refer to the PowerHA SystemMirror manuals for more information on Node Priority Policies.Use this selection to add a new Dynamic Node Priority Policy.Use this selection to show the configuration of an existing Dynamic Node Priority Policy or to change that configuration.Use this selection to remove Dynamic Node Priority Policies that were previously configured.Enter a descriptive name of the Dynamic Node Priority Policy being defined. This is the name that will appear in the Node Priority menu when resources for a resource group are configured. This is an alphanumeric string of up to 1024 characters.Enter a description of this Dynamic Node Priority Policy. This is an optional, alphanumeric string of up to 1024 characters.Select a Resource Variable from the list. If the cluster is running, all the available Resource Variables will be listed. Otherwise, only three will be shown. These three are expected to be particularly useful to most administrators. The command /usr/sbin/rsct/bin/haemqvar -H '' on a node with an active cluster manager will give you more detailed information about each variable.Select a condition to order the nodes based on the values of the Resource Variable returned from the nodes. The condition 'largest' will make the node with the highest value of the Resource Variable the most preferred node for a resource group using this policy. The condition 'smallest' will make the node with the lowest value of the Resource Variable the most preferred node for a resource group using this policy.This is the current name of the Dynamic Node Priority Policy. This is the new name of the Dynamic Node Priority Policy being changed. This is the name that will appear in the Node Priority menu when resources for a resource group are configured.
This is an alphanumeric string of up to 1024 characters.Configures ttys for nodes that will participate in paging. Remove node/port pair definition from the ODM. Adds the custom event notification method to the ODM. All previously defined custom event notification method names will be displayed. After selecting a particular event method, all the information stored in the ODM for the specified method will be displayed. Changes to the selected method can be made.All information about the selected custom-defined event method will be removed from the ODM.Choose the node which you want to define the paging port for. Nodename which you are defining the paging port for. tty that will be used for paging for this node. Name for the event notification method. Enter a description of the method (optional). Enter the nodenames that will send the remote notification for this method. Priority for nodes is from left to right. If the leftmost node is not active, then the next leftmost node will try to send the remote notification.For paging, enter the number of the pager, or the number of the paging company along with the pager number. If the pager is numeric, the input should look like '18007650102,,,,'. Trailing commas are required since there is always some pause between dialing and the time you can send the page by pressing the keys on the phone. If the pager is alphanumeric, the input should look like '18007654321;2119999', where 18007654321 is the paging company number and 2119999 is the number of the actual pager. For Cell Phone Text Messaging via email, enter the address of the cell phone. It is in the format phone_number@provider_address. Consult your provider for the provider_address. Multiple space-separated addresses can be used. It may look like 18007654321@provider.net. If you are using a GSM modem to send the message wirelessly, enter the phone number followed by a "#" as in 7654321#. Enter the full path for the file containing the information that will be sent to the pager or cell phone. For numeric paging, this file can contain only digits. For alphanumeric paging or cell phone messaging, this file can contain a text message. If during the event the file is not found, then the default message will be sent. See the sample file (/usr/es/sbin/cluster/samples/sample.txt) for more information.Enter event names for the remote method you define. During these events, the message will be sent to the remote device.Enter the number of attempts to send the remote notification; the default is 3.Enter the time to wait for a connection for alphanumeric paging, or the total phone-is-up time for numeric paging; the default is 45 seconds. Enter a new method name if you want to change the name of this method. This option allows you to test a remote notification method.This is a read-only field displaying the chosen remote event method. Choose an event name defined for this method from the list. This screen allows you to send a test remote notification for an already defined remote event method. Enter the name for this PowerHA SystemMirror network. Use alphanumeric characters and underscores; no more than 31 characters.Select one of the following options: *Bring Resource Groups Offline (Graceful). With this option, the local node shuts itself down gracefully. Remote node(s) interpret this as a graceful shutdown of the cluster and do NOT take over the resources. PowerHA SystemMirror will stop monitoring the applications on the selected node(s) and will also release cluster resources (such as applications) from PowerHA SystemMirror's control.
This means that the application will be offline on that node. *Move Resource Groups (Takeover). With this option, the local node shuts itself down gracefully. Remote node(s) interpret this as a non-graceful shutdown and take over the resources. This mode is useful for system maintenance. If you select this option, PowerHA SystemMirror will stop monitoring the applications on the selected node(s) and will attempt to recover the groups and applications to other active nodes in the cluster. *Unmanaged Resource Groups (Forced). The resources on the local node remain active on the node. PowerHA SystemMirror marks the state of such resource groups as UNMANAGED.Provides a dialog to replace a failed cluster disk in a volume group. The volume group must be in a resource group to be listed in the selection list. You must be the root user to perform this task.Select the disk that has failed on the cluster from the list. The first column is the volume group in which the disk is a member. The second column is the hdisk of the disk located on the cluster node. The third column is the PVID of the disk (which is the same on all cluster nodes that have the disk configured). The fourth column is the cluster node that either has the volume group varied on or is the best node to vary on the disk. The best node is either the local node if it is a member of the same resource group as the volume group or the first available node in the resource group list.Select a disk from the list. Each entry references an available sharable disk that has at least the size capacity of the failed disk. The first column is the hdisk name of a disk available for replacement on the cluster node that was referenced from the previous selection. The second column is the PVID of the disk. The third column is the reference cluster node for the hdisk and the node in which the disk replacement will be performed.Volume group which contains the failed disk.The hdisk name of the failed disk. The source disk name.The PVID of the failed disk. The source disk PVID.The cluster node of the failed hdisk and the node in which the disk replacement will be performed.Resource group that contains the failed disk.The hdisk name of the disk that will be used to replace the failed disk. The destination disk name.The PVID of the disk that will be used to replace the failed disk. The destination disk PVID.This is a list of PowerHA SystemMirror-managed PCI hot-pluggable network adapters. Choose the adapter that you wish to replace.This is the PCI hot-pluggable adapter device that will be replaced.In the case that you choose to hot replace the last alive heartbeat path on this node, you will have to shut PowerHA SystemMirror down on this node before continuing with the hot replacement, otherwise a partitioned cluster will occur. To have this utility automatically shut PowerHA SystemMirror down on this node when it is detected that a partitioned cluster will occur, select "true". Otherwise, select "false".Add a new user-defined event to the configuration.Show and change the characteristics of a user-defined event.Remove a user-defined event from the configuration.Enter the name of the event.Enter the full pathname of the recovery program.Enter the resource name. In RMC, this is a resource class as returned by the /usr/bin/lsrsrcdef command. Enter the selection string. In RMC, this is a SQL expression including resource attributes that identifies an instance of the resource in the system.Enter the expression.
In RMC, this is a relational expression involving dynamic resource attributes that, when true, generates an event.Enter the rearm expression. In RMC, an expression used to generate an event that alternates with an original event expression in the following way: the event expression is used until it is true, then the rearm expression is used until it is true, then the event expression is used, and so on. The rearm expression is commonly the inverse of the event expression (for example, a resource variable is on or off). It can also be used with the event expression to define an upper and lower boundary for a condition of interest.This field specifies the parameters passed to the network interface module (NIM) executable. For the rs232 NIM, this field specifies the baud rate. Allowable values are "9600", "19200" and "38400" (the default).Modify Network Module detection rate using pre-defined values such as Fast, Normal and Slow. This will change the definition of a network module.Modify all Network Module parameters including Module Name, Description, Parameters and Failure detection rate using custom values. This will change the definition of a network module.Show all values for the specified Network Module.Failure cycle is the number of missed heartbeat messages that are allowed before an adapter is declared down. The combination of heartbeat rate and failure cycle determines how quickly a failure can be detected and may be calculated using this formula: (heartbeat rate) * (failure cycle) * 2 seconds. Heartbeat rate is the rate at which cluster services sends 'keep alive' messages between adapters in the cluster. The combination of heartbeat rate and failure cycle determines how quickly a failure can be detected and may be calculated using this formula: (heartbeat rate) * (failure cycle) * 2 seconds. Select the type of network to configure. IP-based networks include Ethernet, ATM, FDDI, Token-Ring. An IP-based network is built up from subnets. A Non IP-based network does not have subnets. Non IP-based networks are serial networks: RS232, TMSCSI, TMSSA.Enter the network type of the PowerHA SystemMirror network you would like to create. You can either: o select a network type from the list of supported network types, by pressing F4, or o enter a network type directly. Enter the network type of the PowerHA SystemMirror network to create.The type of network, e.g. rs232, tmssa, tmscsi. If the network type is empty, the application will try to determine its type based on the device name. If it is not successful, the adapter definition will fail.Type in the device name in this field, e.g. /dev/tty1.Automatically Import Volume Groups is set to 'false' by default. Definitions of 'available for import volume groups' are presented here from the file created the last time the information was collected during Discovery of PowerHA SystemMirror-related Information. No updating of volume group information will be done automatically. If reset to 'true', the definition of any volume groups entered on the Volume Groups line or the Concurrent Volume Groups line will be imported to any resource group nodes that don't already have it. Automatic import of volume groups is not supported for volume groups containing Data Path Devices (VPATHs). When Automatically Import Volume Groups is set to 'true', the final state of the volume group will depend on the initial state of the volume group (varied on or varied off) and the state of the resource group to which the volume group is to be added (online or offline).
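As a hedged illustration of the RMC fields for user-defined events described earlier (the resource class, selection string, and thresholds below are assumptions chosen for the example, not values from this help text), a user-defined event watching space in the /var filesystem might be entered as:

# Resource name (an RMC resource class; classes can be listed with /usr/bin/lsrsrcdef):
#   IBM.FileSystem
# Selection string (identifies the instance of the resource to watch):
#   Name == "/var"
# Expression (generates the event):
#   PercentTotUsed > 90
# Rearm expression (re-enables the event once the condition clears):
#   PercentTotUsed < 80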
The possible scenarios and corresponding actions are listed below. a. Volume Group varied on, Resource Group Offline: In this case, the volume group will be imported to nodes in the resource group (if required) and the volume group will be left varied on (same as initial state). b. Volume Group varied on, Resource Group Online: importvg will be run on nodes in the resource group (if required) and the volume group will be varied off at the end of this process. When the user synchronizes changes to the cluster, the DARE process will vary on the volume group on the node where the resource group is online. c. Volume Group varied off, Resource Group Offline: importvg will be run on nodes in the resource group (if required) and the volume group will be varied off at the end of this process. (same as the initial state) d. Volume Group varied off, Resource Group Online: importvg will be run on nodes in the resource group (if required) and the volume group will be varied off at the end of this process. When the user synchronizes changes to the cluster, the DARE process will vary on the volume group on the node where the resource group is online. When Automatically Import Volume Groups is set to 'false', the final state of the volume group will depend on the current state of the volume group and the state of the resource group to which the volume group is to be added (as above).This selection determines whether PowerHA SystemMirror will try to use IP aliasing for IP Address takeover. This selection will normally be made automatically by PowerHA SystemMirror when it detects a network configuration capable of operating in this mode, but you can disable the feature by selecting 'No'. An IPv6 network supports only IPAT via aliasing.Used to change the WLM run-time parameters used by PowerHA SystemMirror.Enter the primary Workload Manager class associated with this Resource Group. For resource groups with startup policies 'Online Using Distribution Policy' and 'Online On All Available Nodes', all nodes in the resource group will be reconfigured to use this WLM class. For resource groups with startup policy 'Online On Home Node Only' and 'Online On First Available Node', if no secondary WLM class is specified, all nodes will use the primary WLM class. If a secondary WLM class is specified, only the primary node will use the primary WLM class.Enter the secondary Workload Manager class associated with this Resource Group. Only resource groups with startup policy 'Online On Home Node Only' and 'Online On First Available Node' are allowed to use secondary WLM classes. If no secondary WLM class is specified, all nodes in the Resource Group will use the primary WLM class. If a secondary class is specified, the primary node will use the primary WLM class and all other nodes will use the secondary WLM class.Specify the name of the HACMP WLM configuration. The configuration must be created and contain classes before it can be selected for use by PowerHA SystemMirror. Default is HA_WLM_config.Max. Event-only Duration (in seconds): Enter any positive integer in this field. Enter the maximum time (in seconds) it takes to execute a cluster event. For events that do not manipulate resource groups, the Event Duration Time is equal to the Max. Event-only Duration Time. Example: The Max. Event-only Duration value is 200 and the Resource Group Processing Time is 220. When a swap adapter event occurs, it does not manipulate resource groups and, therefore, is considered a fast event.
For a swap Adapter event, the Total Time Duration = Max. Event-only Duration Time, which is 200 seconds. Default Event-only Duration is 180 seconds.Max. Resource Group Processing Time: Enter the longest time (in seconds) it takes for a resource group to be acquired or released. You may enter any positive integer or zero in this field. Max. Resource Group Duration Time is added to the Max. Event-only Duration Time for those events that can manipulate resource groups. Example: The Max. Event-only Duration value is 200 and the Resource Group Processing Time is 220. When a node up event occurs, it acquires a resource group. It is considered a slow event. Therefore, the Total Time Duration = Max. Event-only Duration (200 seconds) + Max. Resource Group Processing Time (220 seconds) = 420 seconds. Default Resource Group Processing Time is 180 seconds.This is the total time the cluster manager waits before starting the config_too_long script.Selecting this field allows you to manage the display of event summaries. Displaying Event Summary extracts event summary information from hacmp.out and gives you the option to display it on the screen, or save it to a file.Select this option to display event summary information on the screen. Event summary information is obtained from the hacmp.out file. The event summary information can be in plain text or HTML format, depending on the format of the hacmp.out file.Select this option to save event summary information to a file. The event summary is obtained from hacmp.out. The format of the event summary is dependent on the format of the hacmp.out file.Select this option to clear event summary information from the event summary text file. You may choose to do this periodically, to clear out old information from the event summary, or before generating a new event summary file.Specify the pathname to store the event summary information.Remove one or more IP-based or non-IP-based communication Interfaces/IP labels.Show / Modify attributes of Interfaces / IP Labels, such as Network Type, Network Name, Network Attribute, etc.Toggle between Boot, Standby, Service or Persistent. All adapters/IP labels added to the PowerHA SystemMirror configuration in this operation will have the same value for Adapter Function. Use the Change / Show an IP-based Adapter SMIT screen to change the properties of individual adapters if required.Add one or more Service IP Labels to the cluster configuration. Use the Discover IP Topology function prior to using this function. You may modify service labels added through this screen via the Change / Show an Interface / IP Label function. Add IP Labels which require individual configuration. Enter the IP label which is bound to the interface when the operating system boots. Enter the name of the node on which the interface resides. Enter the netmask as "X.X.X.X", where X is in [0-255], to configure an IPv4 network. Enter a prefix length in [1-128] to configure an IPv6 network. A symbolic name of the logical network associated with this interface. This name has no meaning outside of PowerHA SystemMirror, but is used within PowerHA SystemMirror to differentiate between network resources. Select this menu item to enter a screen which allows you to view or modify the Resource Group Temporal Ordering. You can modify the Serial Acquisition Order and Serial Release Order of the Resource Groups.These Resource Groups will be acquired in parallel. These Resource Groups will be acquired in serial order as shown.
Note that the order in which these resource groups will be acquired might change during event processing: resource groups that are acquired during an event are processed before resource groups that perform only NFS-related processing. Enter serial acquisition resource group order. Note that the order in which these resource groups will be acquired might change during event processing: resource groups that are acquired during an event are processed before resource groups that perform only NFS-related processing. These Resource Groups will be released in parallel. These Resource Groups are released in serial order as shown. Note that the order in which these resource groups will be released might change during event processing: resource groups that are released during an event are processed before resource groups that perform only NFS-related processing. Enter serial release resource group order. Note that the order in which these resource groups will be released might change during event processing: resource groups that are released during an event are processed before resource groups that perform only NFS-related processing. Manage and analyze cluster applications.Analyze an application's availability over a period of time.The operating system network interface name currently associated with the selected adapter.Use this selection only if you have changed the network interfaces in relation to the operating system and now need to make the PowerHA SystemMirror cluster aware of these changes.Select the PowerHA SystemMirror communications interface whose network interface you wish to reset.Enter a 4-digit year within the specified range.Enter a 2-digit hour within the specified range.Enter a 2-digit month within the specified range.Enter a 2-digit day within the specified range.Enter a 2-digit minute within the specified range.Enter a 2-digit second within the specified range.Choose the application server that you wish to analyze.# The volume group will be controlled by this # resource group when cluster services are started. # There are currently no resource groups to select. # Press F3 to return and enter a name for a new resource group # Press F3 to return and enter a name for a new resource group # Or select a resource group from the following list: Display the Data Path Device Configuration for selected nodes in the cluster. For each PVID that is or can be a Data Path device, vpath, hdisk, volume group, and configuration information is displayed for each node selected.Display the Data Path Device Status for selected nodes in the cluster. For each PVID that is or can be a Data Path Device, status information is displayed for each node selected.Display the Data Path Device Adapter status for selected nodes in the cluster.Defines and configures all Data Path Devices for use by the operating system for selected nodes in the cluster.For selected nodes in the cluster, dynamically adds more paths to SDD devices while they are in the Available status. Also adds paths to Data Path devices belonging to active volume groups.Makes defined Data Path Devices available for use by the operating system for selected nodes in the cluster.For selected nodes in the cluster, a Data Path Device can be removed. The device can be made unavailable but still defined, or have its device definition removed from the Customized database.Converts a volume group from ESS hdisks to SDD vpaths by specifying a set of cluster nodes and a volume group.
The volume group must be offline on all nodes where conversions will occur.Converts a volume group from SDD vpaths to ESS hdisks by specifying a set of cluster nodes and a volume group. Use this when you want to configure your applications back to original ESS hdisks, or when you want to remove the SDD from your host system. The volume group must be offline on all nodes where conversions will occur.Provides a menu to manage cluster data path devices. If you select any command from this entry, please make sure that the appropriate level of SDD software is installed on all nodes.Create a Volume group by specifying a set of cluster nodes and data path devices (disks). The cluster nodes and the data path devices shared by them will be displayed as a pick list.Create a Concurrent Volume group by specifying a set of cluster nodes and data path devices (disks). The cluster nodes and the data path devices shared by them will be displayed as a pick list.Specifies the VPATH IDs of disks that are currently available for creating a volume group. Specifies the VPATH IDs of concurrent disks that are currently available for creating a volume group. Indicates whether the concurrent volume group will be created as an enhanced concurrent mode volume group. This field is displayed with true or false as its value. True indicates that an enhanced concurrent mode volume group will be created. To change this value, use the Tab key to toggle the True/False value. If the version of AIX is 5.2 or greater, then only enhanced concurrent mode volume groups can be created.Add a physical volume (hdisk or vpath) to a volume group. The volume group names listed are resources of a resource group.Remove a physical volume (hdisk or vpath) from a volume group. The volume group names listed are resources of a resource group.Add a physical volume (hdisk or vpath) to a concurrent volume group. The concurrent volume group names listed are resources of a resource group.Remove a physical volume (hdisk or vpath) from a concurrent volume group. The concurrent volume group names listed are resources of a resource group.Converts an SSA or RAID concurrent volume group to an enhanced concurrent mode volume group. For more information, please see the mkvg man page or Appendix D: OEM Disk Accommodation in the Administration Guide.Converts a volume group to an enhanced concurrent mode volume group. For more information, please see the chvg man page or Appendix D: OEM Disk Accommodation in the Administration Guide.Create an enhanced concurrent mode volume group. Enhanced concurrent mode can be used for fast disk takeover, or it can be used in a resource group that is online on all available nodes (OAAN). This field has a choice of three values: 'Fast disk takeover or Disk Heart Beat' indicates that an enhanced concurrent mode volume group will be created. This value must be chosen to allow the disks in the volume group to be used for disk heart beat. It also provides for fast disk takeover. If a resource group name is also given, and that resource group is created as part of this operation, the resource group will be started on only the home node (OHN). 'Concurrent access' indicates that an enhanced concurrent mode volume group will be created. If a resource group name is also given, and that resource group is created as part of this operation, the resource group will be started on all available nodes (OAAN). 'no' indicates that the volume group will not be enhanced concurrent mode.
If a resource group name is also given, and that resource group is created as part of this operation, the resource group will be started on only the home node (OHN). Once created, resource group policies can be adjusted through the Change/Show Attributes of a Resource Group SMIT panel. The volume group type is the previously selected value. 'Legacy' corresponds to the '-I' operand on the mkvg command, and creates a volume group that can be imported on older versions of AIX. 'Original' corresponds to standard volume groups. 'Big' corresponds to the '-B' operand on the mkvg command, and creates a volume group that can accommodate up to 128 physical volumes and 512 logical volumes. 'Scalable' corresponds to the '-S' operand on the mkvg command, and creates a volume group that can accommodate up to 1024 physical volumes, 256 logical volumes and 32768 physical partitions. Display the characteristics of - options specified when creating - an existing volume group. Some of these characteristics can also be changed.Change the volume group to Enhanced Concurrent Mode. Enhanced concurrent mode can be used for fast disk takeover, or it can be used in a resource group that is online on all available nodes (OAAN). Changes the volume group to Big VG format. This can accommodate up to 128 physical volumes and 512 logical volumes. See the documentation on the mkvg command for more information on the Big VG format.Changes the volume group to Scalable VG format. This can accommodate up to 1024 physical volumes and 4096 logical volumes. See the documentation on the mkvg command for more information on the Scalable VG format.Sets the sparing characteristics for the volume group specified by the VolumeGroup parameter. Either allows or prohibits the automatic migration of failed disks. See the documentation on the mkvg command for more information on hot sparing.Sets the synchronization characteristics for the volume group specified by the VolumeGroup parameter. Either permits or prohibits the automatic synchronization of stale partitions. See the documentation on the mkvg command for more information on synchronization of stale partitions.Increases the number of physical partitions a volume group can accommodate. The PhysicalPartitions variable is represented in units of 1024 partitions. Valid values are 64, 128, 256, 512, 768, 1024 and 2048. The value should be larger than the current value or no action is taken. This option is only valid with Scalable-type volume groups.Increases the number of logical volumes that can be created. Valid values are 512, 1024, 2048 and 4096. The value should be larger than the current value or no action is taken. This option is only valid with Scalable-type volume groups.File containing the exact physical partitions to allocate. See the documentation on the mklv command for more information on mapfiles.Turn on or off serialization of overlapping I/Os. If serialization is turned on, then overlapping IOs are not allowed on a block range, and only a single IO in a block range is processed at any one time. Most applications, like file systems and databases, do serialization, so serialization should be turned off.For big vg format volume groups, this option indicates that the logical volume control block will not occupy the first block of the logical volume.
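As a hedged sketch of the mkvg operands described above; the volume group name and disk names are assumptions chosen for the example, not values from this help text.

# Create a Scalable volume group named app_datavg on two assumed disks
mkvg -S -y app_datavg hdisk4 hdisk5
# A Big volume group would use -B in place of -S, and a volume group that can be
# imported on older AIX levels would use -I, as described above.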
Therefore, the space is available for application data.The size of the units in the 'number of units' field, allowing specification of the file system size in whichever unit is convenient: 512-byte blocks, megabytes or gigabytes. The number of units of the type selected above that determines the size of the file system.The volume group that owns the specified logical volume. The resource group - if any - that owns the volume group given in the previous line. The list of nodes on which the volume group is known. The volume group that owns the selected file system. Create an enhanced concurrent mode volume group. Enhanced concurrent mode can be used for fast disk takeover, or it can be used in a resource group that is online on all available nodes (OAAN). This field has a choice of three values: 'Fast disk takeover' indicates that an enhanced concurrent mode volume group will be created for fast disk takeover. If a resource group name is also given, and that resource group is created as part of this operation, the resource group will be started on the home node only (OHN). 'Concurrent access' indicates that an enhanced concurrent mode volume group will be created. If a resource group name is also given, and that resource group is created as part of this operation, the resource group will be started on all available nodes (OAAN). Once created, resource group policies can be adjusted through the Change/Show Attributes of a Resource Group SMIT panel. Use this option to select how the volume group will be accessed when it is included in a resource group. When a shared volume group is accessed serially by including it in a resource group with a startup policy of 'online on home node only' (OHN), enhanced concurrent mode enables fast takeover of the volume group. If the volume group is included in a resource group that is online on all available nodes (OAAN), enhanced concurrent mode allows the volume group to be accessed concurrently from all the nodes in the resource group. If a resource group name is provided and that group does not exist, then that resource group will be created with the startup policy that corresponds to the selection of fast disk takeover or concurrent access. Once created, resource group policies can be changed through the Change/Show Attributes of a Resource Group SMIT panel. Choose 'Yes' to change the specified volume group to a Critical Volume Group. This volume group is monitored for continuous access. Specifies the name of the Mirror pool. Use this option to select the disk in the list to add it to the mirror pool. Use this option to select the disk in the list to remove it from the mirror pool. Use this option to enable LVM encryption. Use this option to add the authentication type. It can be either keyserv or pks. Use this option to provide the key server ID which is configured as part of the key server. This field is not required for pks authentication. Use this option to provide the authentication method name. This name is used to differentiate when multiple authentication methods are configured for a logical volume. Use this option to remove the authentication method name. Configure and manage a GPFS cluster and GPFS filesystems.Create a GPFS cluster in the PowerHA SystemMirror cluster environment. NOTE: Before configuring a GPFS cluster, you must: 1. Configure communication paths to all nodes in the PowerHA SystemMirror cluster. 2.
Verify and synchronize the PowerHA SystemMirror cluster.List all GPFS filesystems configured in this environment. Select this option to create a new GPFS filesystem.Select this option to remove a previously created GPFS filesystem. Select this option to remove the GPFS cluster definition. Enter the name to be used for the GPFS filesystem. Use no more than eight alphanumeric characters and underscores. Enter a mount point for the GPFS filesystem. Select the hdisk(s) to be used for the GPFS filesystem. Force creation of GPFS volume group, logical volume and filesystem - overwrite any existing GPFS volume group, logical volume and filesystem. Default is false. Enter the name(s) of the GPFS filesystems to remove. Use F4 to list available GPFS filesystems. The Force option will force the removal of the filesystem, even if the disks are damaged. Selecting 'true' forces deletion of existing GPFS filesystems before deleting the GPFS cluster. Change / Show parameters for the Daily Fallback Timer Policy. Change / Show parameters for the Monthly Fallback Timer Policy. Change / Show parameters for the Specific Date Fallback Timer Policy. Change / Show parameters for the Weekly Fallback Timer Policy. Change / Show parameters for the Yearly Fallback Timer Policy. Configure parameters for a Daily Fallback Timer Policy. Configure parameters for a Monthly Fallback Timer Policy. Configure parameters for a Specific Date Fallback Timer Policy. Configure parameters for a Weekly Fallback Timer Policy. Configure parameters for a Yearly Fallback Timer Policy. Remove the specified Fallback Timer Policy. Select the Fallback Timer Policy to Change. Select the Fallback Timer Policy to Remove. Select the rate of recurrence for the Fallback Timer Policy. Enter a name for the Fallback Policy. The name of the Fallback Policy. Enter the Day of the Month on which fallback should occur. Enter the Week Day on which fallback should occur. Enter the Hour at which fallback should occur. Enter the Minute at which fallback should occur. Enter the Month during which fallback should occur. Enter the Year during which fallback should occur. Select this option to add a new Delayed Fallback Timer Policy. Select this option to change or show a Delayed Fallback Timer Policy. Select this option to configure Delayed Fallback Timer Policies. Select this option to remove a Delayed Fallback Timer Policy. Select this option to add/change/remove a Fallback Timer Policy. Select a fallback timer policy. (Hit F4 for a pick-list). If you add a fallback timer policy, this resource group falls back to its higher priority node only at the time configured in the policy. Leaving this field empty implies immediate fallback. To create a fallback timer policy, use the SMIT path 'SMIT PowerHA SystemMirror' -> 'Extended Configuration' -> 'Extended Resource Configuration' -> 'Configure Resource Group Run-Time Policies' -> 'Configure Delayed Fallback Timer Policies'.Define the name of the cluster, and the communication paths to each node in the cluster. PowerHA SystemMirror will attempt to communicate with each node over the defined path to discover the network and storage related components that may be managed, and add them to its configuration database (ODM) as applicable. To change or remove nodes and/or networks from the cluster, please use the PowerHA SystemMirror Extended Configuration SMIT menus.Resources that may be used by PowerHA SystemMirror must first be defined to the Operating System on each node.
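As a hedged aside on checking what is already defined to the operating system on a node before bringing it under PowerHA SystemMirror control, the following standard AIX queries can be used; none of the commands below are specific to this help text.

# List physical volumes and the volume groups that own them
lspv
# List the volume groups defined on this node, and those currently varied on
lsvg
lsvg -o
# List the network interfaces configured to the operating system
netstat -in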
This path brings you to a menu which allows you to configure various components to the operating system on all of the nodes defined to the cluster using a single interface.Once resources are ready for use by PowerHA SystemMirror, related resources may be assigned to resource groups to be more easily managed as single entities. For example, if an application server depends on a volume group and a Service IP Label/Address, these three resources may be placed into a single resource group, and managed (e.g., brought online, offline, or moved) as one unit.Performs verification of the cluster components and operating system configuration on all nodes to ensure compatibility. If no errors are found, the configuration is then copied to each node in the cluster. If Cluster Services are running on any node, the configuration changes will take effect, possibly causing one or more resources to change state.The current PowerHA SystemMirror configuration is displayed.A unique name for the cluster up to 64 alphanumeric characters in length. Valid PowerHA SystemMirror names must be at least one character long and can contain characters ([A-Z, a-z]), numbers ([0-9]) and underscores. A name cannot begin with a number and a PowerHA SystemMirror reserved word cannot be a valid name.The non-modifiable set of currently configured nodes in the cluster. To change or remove nodes from the cluster, please use the PowerHA SystemMirror Extended Configuration SMIT menus.Leads to the Operating System configuration menus for that particular node. Each network interface must be defined to the OS before it can be used by PowerHA SystemMirror.Leads to the Operating System configuration menus for that particular node. Each TTY device must be defined to the OS before it can be used by PowerHA SystemMirror.Leads to the Operating System configuration menus for that particular node. Each target-mode SCSI device must be defined to the OS before it can be used by PowerHA SystemMirror.Leads to the Operating System configuration menus for that particular node. Each target-mode SSA device must be defined to the OS before it can be used by PowerHA SystemMirror.This is the IP Label or IP Address over which your services are typically provided, and/or over which clients connect to an application running on a node. This is the IP Label/Address which is kept highly available by PowerHA SystemMirror.Leads to the PowerHA SystemMirror System Management (C-SPOC) configuration menus for the Cluster Logical Volume Manager. Each volume group, logical volume or filesystem must be configured to the Operating System before it can be used by PowerHA SystemMirror.Leads to the PowerHA SystemMirror System Management (C-SPOC) configuration menus for the PowerHA SystemMirror Concurrent Logical Volume Manager. Each concurrent volume group or logical volume must be configured to the Operating System before it can be used by PowerHA SystemMirror.Followed by the existing add/change/show/remove menu.With this selection you can select the communication interfaces and devices to be used by PowerHA SystemMirror. If discovery has been run, the discovered communication interfaces and devices displayed will be highlighted.
In order to discover interfaces, a communication path must exist between this node and the node on which the network interface is configured to the operating system.With this selection you can change the configuration of the communication interfaces or devices.With this selection you can select the communication interfaces and devices to be removed from use by PowerHA SystemMirror.This utility requires the cluster manager to be down to run. It is to be used whenever changes are manually made by the cluster administrator to the underlying operating system configuration of the mapping of a network interface to an IP Label/Address. This could happen when the nameserver or hosts file has been modified. This utility makes the change known to PowerHA SystemMirror.Will display a list of interfaces and devices that PowerHA SystemMirror has been able to determine are configured to the operating system on a node in the cluster.Will display a list of all communication interfaces and devices supported by PowerHA SystemMirror.Lists IP-based interfaces such as Ethernet and Token-Ring network interfaces.Lists serial devices such as RS232, Target-Mode SCSI and Target-Mode SSA.The network interface associated with the Communication Interface.The name of the node on which this network interface physically exists.The type of network media/protocol (e.g., ethernet, token-ring, fddi, etc.). A name for this logical network up to 64 alphanumeric characters in length. Valid PowerHA SystemMirror names must be at least one character long and can contain characters ([A-Z, a-z]), numbers ([0-9]) and underscores. A name cannot begin with a number and a PowerHA SystemMirror reserved word cannot be a valid name.The IP Label/Address associated with this Communication Interface; it will be configured on the Network Interface when the node boots.The Network Interface associated with the Communication Interface.The name of the node on which this network interface physically exists.The IP Address associated with this Communication Interface; it will be configured on the Network Interface when the node boots.The type of network media/protocol (e.g., ethernet, token-ring, fddi, etc.). A name for this logical network up to 64 alphanumeric characters in length. Valid PowerHA SystemMirror names must be at least one character long and can contain characters ([A-Z, a-z]), numbers ([0-9]) and underscores. A name cannot begin with a number and a PowerHA SystemMirror reserved word cannot be a valid name.The IP Label/Address associated with this Communication Interface; it will be configured on the Network Interface when the node boots.This is the IP Label or IP Address over which your services are typically provided, and/or over which clients connect to an application. This is the IP Label/Address which is kept highly available by PowerHA SystemMirror.Enter the name of the PowerHA SystemMirror network on which this Service IP Label/Address will be configured.By default, a Service IP Label/Address may be acquired by multiple nodes, although it will be configured on only one node at any given time. This is known as IP Address Takeover by a remote node. This will maintain the Service IP Label/Address even when the node which currently owns the Service IP Label/Address fails.By default, a Service IP Label/Address may be acquired by multiple nodes, although it will be configured on only one node at any given time. To maintain this Service IP Label/Address on the same node at all times, select this option.
The Service IP Label/Address will be kept highly available as long as there is an active network interface available on this node on the associated network. However, if the node itself fails, or no network interfaces are available, the Service IP Label/Address will become unavailable. Note: If this option is selected, and IP Aliasing is disabled for the network of which this interface is a part, the Service IP Label/Address MUST be configured to the operating system (in the CuAt ODM) of the node on which it is to be maintained.This is the IP Label or IP Address over which your services are typically provided, and/or over which clients connect to an application. This is the IP Label/Address which is kept highly available by PowerHA SystemMirror.Enter the name of the PowerHA SystemMirror network on which this Service IP Label/Address will be configured.The name of the node on which this Service IP Label/Address is to be bound. Note: If this option is selected, and IP Aliasing is disabled for the network of which this interface is a part, the Service IP Label/Address MUST be configured to the operating system (in the CuAt ODM) of the node on which it is to be maintained.This is the IP Label or IP Address over which your services are typically provided, and/or over which clients connect to an application. This is the IP Label/Address which is kept highly available by PowerHA SystemMirror.Enter the name of the PowerHA SystemMirror network on which this Service IP Label/Address will be configured.Retrieves current operating system configuration information from all cluster nodes. This information is displayed in picklists to help the user make accurate selections of existing components. The discovered components are highlighted as such. Pre-defined components (those that are supported but are not discovered) are also made available as selections.Configure the cluster, nodes, networks, and communication interfaces for use by PowerHA SystemMirror.Configure different types of resources that you want to make highly available; e.g. application servers, volume groups, concurrent volume groups, filesystems, disks, tape resources, WAN services, etc. You can also configure resource groups, and assign the resources to them.Extended Event Configuration allows the user to change/show PowerHA SystemMirror event-related entities.Verify and synchronize cluster topology, resources, and custom-defined verification methods. Also maintains custom-defined verification methods.Set the security mode for the cluster. The cluster security mode is used to identify the method of user/command authentication to use for all nodes in the cluster.The Cluster Snapshot facility allows the user to save and restore the Cluster Topology and Resource ODM classes.Menus to help you tune system high and low water marks, and the syncd frequency.Add, Change, Show, or Remove a Cluster Definition.Add, Change, Show, or Remove a Node from the cluster.Configure sites within a geographic cluster.Add, Change, Show, or Remove communication interfaces and devices.Configure an IP Label/Address that will remain bound to a particular node.Add, Change, Show, or Remove networks.Configure multiple networks into a global network.Add, Change, Show, or Remove a Network Module.Change or Show Configuration Parameters for the Topology Services and Group Services daemons.Show cluster, node, network, and adapter topology.Enter one resolvable IP label (this may be the hostname), IP address, or Fully Qualified Domain Name for the node.
This path will be taken to initiate communication with the node. Examples are: NodeA 10.11.12.13 NodeC.ibm.com.A unique name for the node up to 64 alphanumeric characters in length. Valid PowerHA SystemMirror names must be at least one character long and can contain characters ([A-Z, a-z]), numbers ([0-9]) and underscores. A name cannot begin with a number and a PowerHA SystemMirror reserved word cannot be a valid name. It is not required that the node name be the same as the hostname of the node.Enter (or add) one resolvable IP label (this may be the hostname), IP address, or Fully Qualified Domain Name for the node. This path will be taken to initiate communication with the node. Examples are: NodeA 10.11.12.13 NodeC.ibm.com.Node name selected. This field is not modifiable.Enter a character string, different from the listed Node Name, up to 64 alphanumeric characters in length. Valid PowerHA SystemMirror names must be at least one character long, and can contain characters ([A-Z, a-z]), numbers ([0-9]) and underscores. A name cannot begin with a number and a PowerHA SystemMirror reserved word cannot be a valid name. It is not required that the node name be the same as the hostname of the node. However, in certain circumstances (e.g., if the hostname might move to another node or change), the chosen node name may be different from the hostname.List of Persistent Node IP Labels/Addresses that remain configured on this node. This field is not modifiable.Configure one or more IP Aliases which will remain bound to a particular node.Change and/or display the configuration of an IP Alias which remains bound to a particular node.Remove one or more IP Aliases which are bound to a particular node.The name of the node on which the IP Label/Address will be bound.The name of the network on which the IP Label/Address will be bound.The IP Label/Address to keep bound to the specified node.The name of the node on which the IP Label/Address will be bound.The IP Label/Address currently bound to the specified node.The IP Label/Address to be bound to the specified node.The name of the network over which the Node IP Label/Address will be highly available.The Node IP Label/Address which is to be removed.Defaults to a Class 'C' network. Set the Netmask Class for this particular network. See your Network Administrator for this information.IP Address Offset for Heart-beating over Aliases.This selects the mechanism by which an IP Address will be configured onto a network interface. By default, if the network and selected configuration support adding an IP Alias to a network interface, it is set to True. Otherwise, it is False. You may also set this value to False if you want IP Replacing to be used instead.
IP Replacing is the mechanism by which one IP address is first removed from, and then another IP address is added to, the same network interface.Certain types of resources must be properly defined before they can be added to the Resource Group configuration.Certain types of extended resources must be properly configured before they can be added to a Resource Group.Allows the user to configure policies which will be evaluated at the time the Resource Group is being brought on or off-line.Enables the user to modify the current Events run by the Cluster Manager.The User-Defined Events path allows to specify the recovery program to be run in response to a specific cluster event.The Configure Pre/Post-Event Commands menu allows to configure Pre- and Post-Event Commands for cluster events.The User-Defined Event Notification Methods path allows you to define remote notification parameters for sending a message using a pager, email or SMS text message when events occur. A pager message will be sent through the attached modem. A cell phone message will be sent through email via the TCP/IP internet connection, or through an attached GSM modem.The Change/Show Time Until Warning menu allows to configure how long it is expected to take to run a cluster event. After the total duration time expires, config_too_long messages will be output to /tmp/hacmp.out and to the console, indicating that the event has taken longer time than expected. The config_too_long warnings are informational messages.Allows access to the SMIT interface on the node to be selected in the next step. This is simply a shortcut to the top-level SMIT screen on the specified node.Allows the user to open a SMIT session on a particular node, and use the SMIT fastpath for the type of network device chosen.This utility requires the cluster manager to be down to run. It is to be used whenever changes are manually made by the cluster administrator to the underlying operating system configuration of the mapping of a network interface to an IP Label/Address. This could happen when the nameserver or hosts file has been modified. This utility makes the change known to PowerHA SystemMirror.Allows the user to check the PowerHA SystemMirror configuration against the current state of the cluster and related devices. Configuration errors should be corrected before PowerHA SystemMirror Services are started. The verification facility may be extended, that is, the user may add verification routines to determine the correctness of customized hardware/software.Displays the state of the nodes, communication interfaces, and resource groups, then displays the local Event Summary for the last 5 events. This is especially useful if the cluster is in an error state in determining what component failed, and the effect of the failure on other components.Contains utilities to display or manage logs maintained by PowerHA SystemMirror. These include the log file named 'hacmp.out', which keeps a record of all of the local cluster events as performed by the PowerHA SystemMirror event scripts. These PowerHA SystemMirror event scripts automate many common system administration tasks, and in the event of a failure, will manage PowerHA SystemMirror and system resource to provide recovery.The Cluster Snapshot facility allows the user to restore the Cluster Configuration Database (ODM) with the values from the currently active configuration. 
This will automatically save any of your changes in the Configuration Database as a snapshot with the path: ./usr/es/sbin/cluster/snapshots/UserModifiedDB. before restoring the Configuration Database with the values actively being used by the Cluster Manager.This dialog allows you to change the selective fallover operation for certain cluster resources. Select the cluster resource to modify. This dialog allows you to change the selective fallover operation for certain cluster resources. This is the name of the resource which will be changed. This is the action PowerHA SystemMirror should take when this resource fails. Selecting notify will cause PowerHA SystemMirror to run the notification method but make no attempt to recover the resource. Selecting fallover will cause PowerHA SystemMirror to try and recover the resource by running a cluster event. The recovery operation and event run will depend on the type of resource and the availability and location of any backup resources. The full pathname of a user-defined method to perform notification when this resource fails. Configuring this method is strongly recommended when the failure action of "notify" is used.This dialog allows you to change the selective fallover operation for certain cluster resources. The next screen will prompt you for the name of the resource which will be changed.The PowerHA SystemMirror Extended Resources Configuration menu allows to configure resources for PowerHA SystemMirror. Resources that can be configured under this path include Service IP Labels, Application Servers, Application Monitors, Tape Resources, Communication Adapters and Links, Custom Disk Methods.The PowerHA SystemMirror Extended Resource Group Configuration menu allows to configure resource groups for PowerHA SystemMirror, and assign resources to configured resource groups.This option allows to open a "smitty" session on a PowerHA SystemMirror node.An PowerHA SystemMirror logical network can be added to the configuration with this option. An PowerHA SystemMirror network is a collection of PowerHA SystemMirror interfaces and Service IP Labels. It can be specified whether the new network should use IPAT via IP Aliasing or IP Replacement, the type of the network, and the netmask for the network.An PowerHA SystemMirror logical network can be changed with this option. An PowerHA SystemMirror network is a collection of PowerHA SystemMirror interfaces and Service IP Labels. It can be specified whether the new network should use IPAT via IP Aliasing or IP Replacement, the type of the network, and the netmask for the network.An PowerHA SystemMirror logical network can be deleted from the configuration with this option. Note that all PowerHA SystemMirror interfaces and Service IP Labels that that belong to the network to be deleted will be removed from the configuration, as well.The Add/Change/Show an PowerHA SystemMirror Cluster option allows a user to add an PowerHA SystemMirror cluster definition, and to change/show the name of an existing PowerHA SystemMirror cluster definition.The PowerHA SystemMirror Verification menu allows the user to perform an PowerHA SystemMirror verification on the current configuration, and to define custom verification methods.Change/Show the IP Label or IP Address over which your services are typically provided, and/or over which clients connect to an application. 
This is the IP Label/Address which is kept highly available by PowerHA SystemMirror.Remove the IP Label or IP Address over which your services are typically provided, and/or over which clients connect to an application. This is the IP Label/Address which is kept highly available by PowerHA SystemMirror.Allows you to create an PowerHA SystemMirror Definition File to be used by Online Planning Worksheets application.Select this menu to customize the resource groups fallover behavior in the event of a resource failure. This menu also allows you to customize resource groups' inter-site fallover if you have configured HACMPsites.Select this menu to change the resource group's inter-site fallover policy. By default, the resource groups fallover to the backup site when the primary site cannot host the resource group. This menu option allows you to change the default fallover behavior during a resource failure (such as a volume group loss) that affects the availability. NOTE: This change does not stop a resource group fallover to its peer site when the failure is caused due to node failure (node_down event.) This is the action PowerHA SystemMirror takes when a resource group cannot be brought to ONLINE state on its primary site. The default action is to fallover the resource group to its backup site. Select 'notify' if you would like PowerHA SystemMirror to run the notify method instead of inter-site fallover during a resource failure. Note that this option will put the resourcegroup into ERROR state during a failure.This menu allows to change the resource group's inter-site fallover policy. By default, the resource groups fallover to the backup site when the primary site cannot host the resource group. This menu option allows you to change the default fallover behavior during a resource failure (such as a volume group loss) that affects the availability. NOTE: This change does not stop a resource group fallover to its peer site when the failure is caused due to node failure (node_down event.) Provides the ability to configure a cluster based on a configuration file created via the Online Planning Worksheets application (a.k.a. a ".haw" file). *** THE EXISTING CLUSTER CONFIGURATION WILL BE DELETED. *** It is strongly recommended that a snapshot of the current cluster definition be taken *before* performing this operation. After the existing cluster definition has been deleted, the import operation will attempt to create a new cluster definition on the local node. This is accomplished by creating a cluster definition, then adding nodes, adapters, global networks, resource groups, and resource associations, finishing with modifying cluster events. All these operations are performed, as needed, in the indicated order. The information required to perform each operation comes from the provided Online Planning Worksheets output file. Once the cluster is fully defined on the local node, an attempt will be made to synchronize the cluster topology and cluster resources.The name of an existing site within the cluster, selected from the list presented when F4 is pressed. When a node is added to a linked cluster, a site name must be provided. change or show the maximum log file size for the group services daemon.This is the action PowerHA SystemMirror takes when a resource group cannot be brought to ONLINE state on its primary site. The default action is to fallover the resource group to its backup site. 
Select 'notify' if you would like PowerHA SystemMirror to run the notify method instead of inter-site fallover during a resource failure. Note that this option will not change the resource group state. The name of an existing site within the cluster, selected from the list presented when F4 is pressed. When a node is added to a stretched cluster, a site name must be provided. Select Node IP Label/AddressSelect a Node and IP Label/AddressThe IP Label/Address associated with this Communication Interface, and will be configured on the Network Interface when the node boots.The network type of the chosen network. This field cannot be modified.The name of the device, a 64 character alphanumeric cluster unique name.The name of the network on which the device will be bound.The full path to the device, as an example for an rs232 device this would be: /dev/tty0The name of the node on which the device will be bound.A unique name for the node up to 64 alphanumeric characters in length. It is not required that the node name be the same as the hostname of the node.Enter one resolvable IP Label/Address (may be the hostname), IP address, or Fully Qualified Domain Name for the node. This path will be taken to initiate communication with the node. Examples are: NodeA, 10.11.12.13, and NodeC.ibm.com.Select a Node to Change/ShowSelect one or more nodes to deleteA unique name for the resource group up to 64 alphanumeric characters in length.Select a Resource Group to Change/ShowChange/Show Resources and Attributes for a Resource GroupSelect a Communication Interface/Device to Change/ShowSelect a Dynamic Node Priority Policy to Change/ShowSelect a Dynamic Node Priority Policy to RemoveRemove a Dynamic Node Priority PolicySelect the cluster snapshot to change or showSelect the cluster snapshot to removeSelect a cluster snapshot to restore, this will overwrite your current configuration.Select a Custom Snapshot MethodSelect a Custom User-Defined EventSelect a Custom User-Defined Event to removeThis option allows the user to add the most typical components of a cluster to the PowerHA SystemMirror configuration database (ODM) in a few steps. Discovery of the components within the cluster is automatically performed once each of the nodes is properly identified and working communications paths exist. The user then configures the resources to be made highly available, and assigns resources that are to be managed together into resource groups. This path significantly automates the discovery and selection of configuration information, and chooses default behaviors. It is recommended that first-time users take this path. (For experienced users requiring more detailed or manual configuration, choose the "Extended Configuration" path.) The following are the typical, ordered steps to be taken when defining an PowerHA SystemMirror cluster: Add Nodes to an PowerHA SystemMirror Cluster lets PowerHA SystemMirror know how to communicate with the nodes that are participating in the cluster. Configure Resources to make Highly Available enables you to configure resources that are to be shared among the nodes in the cluster, such that if one component fail, another component will automatically take its place. Configure PowerHA SystemMirror Resource Groups enables you to collect related or dependent resources into easy-to-manage groups. 
Once you have configured PowerHA SystemMirror on one node, use the Verify and Synchronize PowerHA SystemMirror Configuration SMIT option to validate your configuration against the current software and hardware, and then to automatically commit and distribute your changes to all of the specified nodes.Allows the user to configure extended parameters related to the Topology, Resources, Events, Security, and Performance Tuning of an PowerHA SystemMirror cluster. This path assumes expertise with the operating system and familiarity with PowerHA SystemMirror System Administration.Allows the user to administer many aspects of the cluster and its components from one single point of control. Using these selections will ensure that each component is properly configured to work in an PowerHA SystemMirror environment. It greatly simplifies the process of configuring resources that are shared among nodes, such as users and groups, volume groups and filesystems, resource groups and applications, logs, etc.Gives the user a rich set of tool for trouble-shooting and recovering from problems which may arise in a cluster environment.This utility requires the cluster manager to be down to run. It is to be used whenever changes are manually made by the cluster administrator to the underlying operating system configuration of the mapping of a network interface to an IP Label/Address. This could happen when the nameserver or hosts file has been modified. This utility makes the change known to PowerHA SystemMirror.Manage the communication interfaces of existing cluster nodes using C-SPOC.Performs a user-requested resource group migration operation from either one node to another, or from one site to the other. It is possible to move the primary or the secondary instance of either one resource group, or of several resource groups. Also select this menu to analyze cluster applications.Provides a menu for adding, removing, changing, and listing users in the cluster, as well as changing the cluster security level.Manage the physical disks, volume groups, logical volumes and filesystems attached to your cluster nodes.Manage the concurrent logical volumes attached to your cluster nodes.Allows access to the SMIT interface on the node to be selected in the next step. This is simply a shortcut to the top-level SMIT screen on the specified node.Suspend or Resume cluster wide Application Monitoring.Allows the user to open a SMIT session on a particular node, and use the SMIT fastpath for the type of network device chosen.This utility requires the cluster manager to be down to run. It is to be used whenever changes are manually made by the cluster administrator to the underlying operating system configuration of the mapping of a network interface to an IP Label/Address. This could happen when the nameserver or hosts file has been modified. This utility makes the change known to PowerHA SystemMirror.This utility allows the user to hot replace a failed network interface cardView the PowerHA SystemMirror scripts, system and C-SPOC log files.Change/Show logging levels on a per cluster node basis.Enter one resolvable IP Label/Address. This IP Label/Address can be configured in a resource group to make the IP address highly available across multiple nodes. 
F4 provide a picklist of IP labels found in /etc/hosts.Allows to change or show the security mode used by PowerHA SystemMirror.Select 'Configurable on Multiple Nodes' if you want this Service IP Label/Address to be able to survive both a local interface failure and a complete node/network failure, by being automatically configured on a different, available node. This is known as IP Address Takeover. In this case, the Service IP Label/Address must be configured as a resource in a Resource Group, so that PowerHA SystemMirror knows on which nodes it may be configured if IP Address Takeover were to occur. Typically, this will be the same resource group as an associated application server. Select 'Bound to a Single Node' if you want this Service IP Label/Address to be able to survive only a local interface failure. It will be kept highly available on the node on which it is originally configured, as long as there is an appropriate and available network interface on which it can be configured. In this case, the Service IP Label/Address cannot be configured as a resource in a Resource Group if multiple nodes participate in that resource group. Typically, it will not be associated with any resource group.Either select a particular network, or select 'ALL' to display a list of all networks on which this communications interface may be configured.Lets you view the security settings for inter-node communications in the cluster. Cluster security provides two types of authentication: connection authentication mode and message authentication mode.Allows the user to select the directory in which an PowerHA SystemMirror log is maintained.Change/Show clstrmgr.debug logging level on this node.Allows the user to select the directory in which all PowerHA SystemMirror logs are maintained.Enter the netmask in range [1-32] or as "X.X.X.X", where X in [0-255] for IPv4 service IP, or enter prefix length in range [1-128] for IPv6 service IP. Please follow below for more information. a. With IPv4 service IP and underlying network IPv6, netmask is must. b. With IPv6 service IP and underlying network IPv4, prefix length is must. c. With service IP and underlying network both are IPv4, netmask can be blank, however, netmask will not be considered even though specified. d. With service IP and underlying network both are IPv6, prefix length can be blank, however, prefix length will not be considered even though specified.Enter a unique name for the resource group. This name can include alphanumeric characters. PowerHA SystemMirror manages all the resources in a resource group as a unit. This name uniquely identifies the group and will be used in the status displays and user interface for managing groups. Allows the user to enable/disable AIX Live Update operations.Allows the user to change Shutdown option for the Oracle Database instance. Choose any one of the available options: a. SHUTDOWN IMMEDIATE (default) Performs an immediate, consistent shutdown of the target database, with the following consequences: - Current client SQL statements being processed by the database are allowed to complete. - Uncommitted transactions are rolled back. - All connected users are disconnected. b. SHUTDOWN ABORT Performs an inconsistent shutdown of the target instance, with the following consequences: - All current client SQL statements are immediately terminated. - Uncommitted transactions are not rolled back until next startup. - All connected users are disconnected. 
- Instance recovery will be performed on the database at next startup. c. SHUTDOWN NORMAL Performs a consistent shutdown of the target database with normal priority, which means: - No new connections are allowed after the statement is issued. - Before shutting down, the database waits for currently connected users to disconnect. - The next startup of the database will not require instance recovery. d. SHUTDOWN TRANSACTIONAL Performs a consistent shutdown of the target database while minimizing interruption to clients, with the following consequences: - Clients currently conducting transactions are allowed to complete, that is, either commit or terminate before shutdown. - No client can start a new transaction on this instance; any client attempting to start a new transaction is disconnected. - After all transactions have either committed or terminated, any client still connected is disconnected. The reference node for any node specific value, such as hdisk device name. This node is chosen automatically and is usually either the first node in the resource group or the node that has the volume group varied on.Enter the names of the nodes that can own or take over this resource group. Enter the node with highest priority first, followed by the nodes with lower priorities, in the desired order. Leave a space between node names.The name of the resource group being changed/displayed.These are the participating nodes for this group, in the order of priority.The resource group typeInter-site Management Policy for this group.Startup policy for this resource group: ONLINE ON HOME NODE ONLY. The resource group should be brought online ONLY on its home (highest priority) node during the resource group startup. This requires the highest priority node to be available. ONLINE ON FIRST AVAILABLE NODE. The resource group activates on the first node that becomes available. If you have configured the settling time for resource groups, it will only be used for this resource group if you use this startup policy option. ONLINE USING DISTRIBUTION POLICY: The resource groups with this startup policy will be distributed at resource group startup. Based on the resource group distribution policy you select, groups will be distributed such that only one resource group is activated by a participating node ('node' based resource group distribution) or one resource group per node and per network ('network' based distribution.) NOTE: The distribution policy can be configured under 'Extended Configuration -> Extended Resource Configuration -> Configure Resource Group Run-Time Policies -> Configure Resource Group Distribution' The network-based distribution policy is deprecated and will be removed in future PowerHA SystemMirror releases. ONLINE ON ALL AVAILABLE NODES. The resource group is brought online on ALL nodes.The Fallover policy for this resource group: FALLOVER TO NEXT PRIORITY NODE IN THE LIST. In the case of fallover, the resource group that is online on only one node at a time follows the default node priority order specified in the resource group's nodelist. FALLOVER USING DYNAMIC NODE PRIORITY. Before selecting this option, make sure that you have already configured a dynamic node priority policy that you want to use. If you did not configure a dynamic node priority policy to use, and select this option, you will receive an error during the cluster verification process. BRING OFFLINE (ON ERROR NODE ONLY). Select this option to bring a resource group offline on a node during an error condition. 
This option is most suitable when you want to imitate the behavior of a concurrent resource group and want to ensure that if a particular node fails, the resource group goes offline on that node only but remains online on other nodes. The Fallback policy for this resource group: NEVER FALLBACK. A resource group does NOT fall back when a higher priority node joins the cluster. FALLBACK TO HIGHER PRIORITY NODE IN THE LIST. A resource group falls back when a higher priority node joins the cluster. If you select this option, then you can use the delayed fallback timer that you previously specified in the Configure Resource Group Run-time Policies SMIT menu. If you do not configure the delayed fallback policy, the resource group falls back immediately when a higher priority node joins the cluster.Enter the nodes from one of the sites that acts as the primary site for this resource group. All nodes that you enter must be from a single site and that site becomes the primary site for the resource group. Enter the node with highest priority first, followed by the nodes with lower priorities, in the desired order. Leave a space between node names.Enter the nodes from one of the sites that acts as the secondary site for this resource group. All nodes that you enter must be from a single site and that site becomes the secondary site for the resource group. Enter the node with highest priority first, followed by the nodes with lower priorities, in the desired order. Leave a space between node names.Configure Settling time for the resource groupsEnter or change the settling time in seconds. Enter a ZERO to remove the settling time. In this case, the resource group does not wait for any time interval before PowerHA SystemMirror activates it on a node. If you enter a positive number, then if the currently available node that integrated into the cluster is not the highest priority node, the resource group waits for settling time duration to see if a better priority node may join the cluster. NOTE: The Settling time is only valid for the resource groups that have the 'Online On First Available Node' startup policy.Select this option to add/change/remove the Settling Time parameter for the Resource Groups.Select this option to change the cluster-wide resource group distribution policy for the resource groups.Select a distribution policy (node- or network-type resource group distribution) from the picklist. The distribution policy that you select affects the startup behavior of the resource groups with startup policy 'Online Using Distribution Policy'. "node" - If you select the node distribution policy for a resource group, PowerHA SystemMirror distributes the resource groups so that only ONE resource group is activated on a participating node. This option is the default. "network" - If you select the network distribution policy for a resource group, only one resource group is activated on a node, per cluster network. Depending on the number of configured networks, PowerHA SystemMirror may activate more than one resource group on the same node. NOTE: Selecting "network" as a distribution policy imitates the repelling behavior of rotating resource groups in releases before PowerHA SystemMirror 5.2. If you had rotating resource groups configured, then upon upgrading to PowerHA SystemMirror 5.2, the network distribution policy is used for them. 
Once the migration is complete, it is recommended to switch to the node distribution policy, because the 'network' distribution policy will be removed from future PowerHA SystemMirror releases.Configure the cluster-wide resource group distribution policyEnter the base address of a range of addresses for hacmp to use for heartbeat. This address range must be unique and must not conflict with any other subnets on the network. Specify a valid IPv4 or IPv6 address for an IPv4 or IPv6 network respectively. The IPv6 address must be a valid unicast address.Use this option at the direction of IBM Support to collect cluster log files when reporting problems.Directory where cluster logs will be collected.Collection pass number: 1 = calculates the amount of space needed. 2 = collects the actual data.Comma separated nodenames to collect data from. Default is all nodesSkip collection of rsct dataTurn on DebuggingThis option will collect cluster log files from all nodes and save them in the snapshot. Saving log files can significantly increase the size of the snapshot.Use this option to reset all the tunables (customizations) made. This will return all options to their default values but will not change the configuration. The cluster will need to be synchronized after this operation. This option determines what messages are displayed during the reset function. Selecting "Verbose" displays all messages. Selecting "Standard" will cause only error messages to be displayed. This option will cause a cluster snapshot to be created before any values are changed. This option is highly recommended. This option will cause the cluster to be synchronized when the reset operation is complete. Synchronization is required after the reset function and before the cluster is restarted. Specify the path to the file where the HACMP definition will be written. If a relative path name is given, the path name will be relative to the directory /var/hacmp/log. Maximum length is 128 characters.This is an informational field to describe the current cluster. Notes users specify here will be stored in the HACMP definition file and will appear in the Cluster Notes panel within Online Planning Worksheets.Specify the path to the Online Planning Worksheets output file (a.k.a the ".haw" file) that the new cluster configuration is to be imported from. If a relative path name is given, the path name will be treated as relative to the "/var/hacmp/log" directory. The maximum length for this path is 128 characters.Enter the base address of a range of addresses for hacmp to use for heartbeat. This address range must be unique and must not conflict with any other subnets on the network. Specify a valid IPv4 or IPv6 address for an IPv4 or IPv6 network respectively. The IPv6 address must be a valid unicast address.The Two-Node Cluster Configuration Assistant will configure a two-node HACMP cluster. Before you start, complete the following task: - Connect and configure all IP network interfaces. - Install and configure the application to be made highly available. - Add the application's service IP label to /etc/hosts on all nodes. - Configure the volume groups that contain the application's shared data on disks that are attached to both nodes. Before you start, collect the following information: - An active communication path to the takeover node. - A unique name to identify the application to be made highly available. - The full path to the application's start and stop scripts. - The application's service IP label. 
The cluster created by this assistant will contain a single resource group configured to come online on the local node at cluster start, to fall over to the remote takeover node if the local node fails, and to remain on the remote takeover node when the local node rejoins the cluster.Specify one of the following for communication from the local node to the remote takeover node: - an IP address, - a fully-qualified domain name, or - a resolvable IP label. The communication path will be used for IP network discovery and automatic configuration of your HACMP topology. The local node will be configured as the primary owner of your highly available application server, and the takeover node will acquire the application server if the local node fails.Specify the name for the highly available application server. The name may be any label that uniquely identifies the application server, and will become part of both the cluster name and the resource group name.Specify the full path to the executable program that is used to start the application server, along with any additional arguments that should be passed to the program.Specify the full path to the executable program that is used to stop the application server, along with any additional arguments that should be passed to the program.Specify the service IP label to be used by the highly available application server. The service IP label must be defined in the /etc/hosts file on both cluster nodes and cannot be a Fully Qualified Domain Name or IP address. This service IP label will be configured in the HACMP resource group along with the highly available application server and all shareable volume groups.The HACMP Two-Node Cluster Configuration Assistant takes you through the cluster configuration process in four steps: 1) Topology Configuration 2) Application Server Configuration 3) Resource Configuration 4) Verification and SynchronizationBefore you start:Complete the following tasks: - Connect and configure all IP network interfaces. - Install and configure the application to be made highly available. - Add the application's service IP label to /etc/hosts on all nodes. - Configure the volume groups that contain the application's shared data on disks that are attached to both nodes. Collect the following information: - An active communication path to the takeover node. - A unique name to identify the application to be made highly available. - The full path to the application's start and stop scripts. - The application's service IP label. The cluster created by this assistant will contain a single resource group configured to come online on the local node at cluster start, to fall over to the remote takeover node if the local node fails, and to remain on the remote node if the local node rejoins the cluster.Specify one of the following for communication from the local node to the remote takeover node: - an IP address, - a fully-qualified domain name, or - a resolvable IP label. The communication path will be used for IP network discovery and automatic configuration of your HACMP topology. The local node will be configured as the primary owner of your highly available application server, and the takeover node will acquire the application server if the local node fails.Specify the details of the highly available application server. The server name may be any label that uniquely identifies the application server, and will become part of both the cluster name and the resource group name. 
The server start and stop scripts must provide the full path to the executable program that is used to start/stop the application, along with any arguments that should be passed to the program.Specify the service IP label to be used by the highly available application server. The service IP label must be defined in the /etc/hosts file on both cluster nodes and cannot be a Fully Qualified Domain Name or numeric IP address. This service IP label will be configured in the HACMP resource group along with the highly available application server and all shareable volume groups.Enter an active communication path to the takeover node.Enter the name of the highly available application server.Enter the full path to the server start script.Enter the full path to the server stop script.Enter the service IP label required by the application server.Output from the cluster verification and synchronization operations.Use this option to access HACMP Configuration Assistants and Smart Assists. If you have installed the Smart Assist software for DB2, Oracle, WebSphere, or other applications, you can access those Smart Assists from this menu as well.INFORMATION: ------------- This configuration assistant can be used to configure a two-site,two-node (one node per site) HACMP cluster and to setup GLVM (sync) mirroring for all non-rootvg Volume Groups that meet the following criteria: - Volume Group must be be varied-on on the Local node - Volume Group should be of 'Scalable' type - Volume Group should not already include any RPV disks IMPORTANT: ---------- This tool performs configuration changes to eligible Volume Groups. It is highly recommended that you take a backup of all non-rootvg Volume Groups before proceeding any further. Before you proceed further: --------------------------- Complete the following tasks: - Make sure that you have the right filesets installed on the local as well as the takeover (remote) node. This functionality requires the following filesets: - bos.rte (6.1.2.0 or higher) - glvm.rpv.client (6.1.0.0 or higher) - glvm.rpv.server (6.1.0.0 or higher) - cluster.xd.glvm (6.1.0.0 or higher) - Also ensure that the takeover node has sufficient number of free disks that can hold mirror data from the Volume Groups that will be configured for GLVM mirroring by this tool. - Connect and configure all IP network interfaces. - Install and configure the application to be made highly available. - Add the application's service IP label to /etc/hosts on all nodes. Collect the following information: - An active communication path to the takeover node. - A unique name to identify the application to be made highly available. - The full path to the application's start and stop scripts. - The application's service IP label. Note about Network Configuration: --------------------------------- This tool will automatically discover the various communication paths between the Local and Takeover node and configure them in the HACMP cluster configuration. If the Local and the Takeover node have IP aliases associated with them (and are reachable from the Local node) then they will be automatically configured as HACMP Persistent IP addresses. If you prefer to supply the values of the Persistent IP addresses for the Local and Takeover node then you will be able to do so. Press Enter to specify additional information or F3 to go back. Enter the Persistent IP Label for the Local node. A 'Persistent IP' is an alias-ed IP address that is bound to a node and is maintained as a highly available entity. 
Enter the Persistent IP Label for the Takeover (Remote) node. A 'Persistent IP' is an alias-ed IP address that is bound to a node and is maintained as a highly available entity. Specify the name of the remote node(s) to be included in the cluster. Specify - a resolvable IP label (e.g. a hostname) - a fully-qualified name (host plus domain name) or - an IP address HACMP will use this connection to discover information about the remote node and automatically configure the cluster. The local node will be configured as the primary owner of your highly available application server, and the takeover node will acquire the application server if the local node fails.Configure and manage a list of files to be kept in sync across the cluster. Updates to a file in a collection on one node can be automatically synchronized (if set to do so). Two file collections are configured by default: Configuration_files and HACMP_files.Add, change, show, or remove an HACMP file collection or adjust the time for automatic file collection updates.Add, view, or remove files from an HACMP file collection.Select one or more file collections to propagate immediately. Files in the selected file collections will be copied from this node and synchronized across the cluster.Create a new HACMP File Collection and specify its name and synchronization parameters.Change a previously defined HACMP File Collection's name and synchronization parameters. NOTE: Make changes to a file on only one node within the time limit set for automatic updates (default is 10 minutes).Delete an HACMP file collection (the files remain). NOTE: You cannot delete the HACMP_files file collection.Show or change the amount of time (minutes) HACMP should wait between checks on file collections. The default is 10 minutes; the maximum is 1440 minutes (24 hours). The timer affects all file collections that use automatic change propagation.Add files to an HACMP File Collection to be synchronized across the cluster.Remove files listed in an HACMP File Collection to be synchronized across the cluster.Specify a unique name for the new File Collection. Maximum length of 64 characters.This is an informational field to describe the defined File Collection.If this flag is set to 'yes', all files listed in this File Collection will be automatically copied from the node from which you run cluster synchronization to all other cluster nodes. The default is 'no'.If this flag is set to 'yes', each cluster node will periodically check if any file in the collection has been changed (the default is every 10 minutes). If a node detects that a change has been made to a file, it will copy the updated file to all the other cluster nodes. The default is 'no'.Current name of the File Collection.A list of files to keep in sync for this File Collection. This field cannot be modified. Use 'Add/Remove Files to File Collection' SMIT entries to perform any modifications. Use F4 to list the files.A user can enter the time (default is 10 minutes) for each node to look for updated files in each file collection that has automated propagation set to "yes". The minimum is 10 minutes and the maximum is 1440 minutes (24 hours).Add new files, one at a time. They should be valid file names and should not participate in any of the existing file collections. The filename must begin with a /, and must not be a file in /dev or /proc, one of /etc/objrepos/Cu*, a socket, a link, or pipe.This option allows a volume group to be enabled or disabled for Cross-Site LVM Mirroring. 
When a volume group is enabled for Cross-Site LVM Mirroring cluster verification will ensure that the volume group and logical volume structure is consistent and there is at least one mirror of each logical volume at each site. The volume group must also be configured as a resource in a resource group. Cross-Site LVM Mirroring supports two-site clusters where LVM mirroring through a Storage Area Network (SAN) replicates data between disk subsystems at geographically separated sites.This option if set to "true" will enable a volume group for Cross-Site LVM Mirroring. When a volume group is enabled for Cross-Site LVM Mirroring cluster verification will ensure that the volume group and logical volume structure is consistent and there is at least one mirror of each logical volume at each site. The volume group must also be configured as a resource in a resource group. Cross-Site LVM Mirroring supports two-site clusters where LVM mirroring through a Storage Area Network (SAN) replicates data between disk subsystems at geographically separated sites. If this option is set to "false" then cluster verification will not check the volume groups Cross-Site LVM Mirroring configuration. Changing a volume group from "true" to "false" will not change the characteristics of the volume group.This option provides management of Disk/Site mapping for Cross-Site LVM Mirroring. The disk information is retrieved from the disk discovery file, it may be necessary to do "Extended Configuration, Discover HACMP-related Information from Configured Nodes" to have an updated disk discovery file. Cross-Site LVM Mirroring supports two-site clusters where LVM mirroring through a Storage Area Network (SAN) replicates data between disk subsystems at geographically separated sites. This provides a method for manually associating disks to a site. When disks are properly mapped to a site the configuration of LVM mirrors is easier to manage because pick lists for disks will show the site location.This option provides definition of Disk/Site mapping to allow a volume group to be mirrored with at least one mirror copy on each site. Cluster verification will check for the correct configuration of volume groups that have "Enable for Cross-Site LVM Mirroring" attribute set to "true". Cross-Site LVM Mirroring supports two-site clusters where LVM mirroring through a Storage Area Network (SAN) replicates data between disk subsystems at geographically separated sites. This provides a method for manually associating disks to a site. When disks are properly mapped to a site the configuration of LVM mirrors is easier to manage because pick lists for disks will show the site location.View or modify the current mapping of disks to a Site for Cross-Site LVM Mirroring.This option allows a disk to be removed from Disk/Site mapping for Cross-Site LVM Mirroring. The characteristics of the volume group will not change as a result of removing disks from the Disk/Site mapping configuration.Site where the disks defined for Cross-Site LVM Mirroring are located.Using the list function select the PVIDs of the disks to be mapped to the site for Cross-Site LVM Mirroring.Specifies the volume group(s) to enable or disable for Cross-Site LVM Mirroring.Site where the disk defined for Cross-Site LVM Mirroring is located.PVID of the disk to be mapped to the site for Cross-Site LVM Mirroring.Select "Default AIX System Command" if you want to restore the AIX system command to the base AIX installation of /usr/bin/passwd. 
Select "Link to Cluster Password" if you want to utilize the cluster password facility in place of the AIX /usr/bin/passwd utility. The cluster password facility allows authorized users to change their password cluster wide. If using NIS or another facility, do not use the "Link to Cluster Password" setting.Select the nodes where the /bin/passwd command should be changed by selecting a cluster resource group. Only the participating nodes of that resource group will be changed. Entering no value will change the cluster password on all nodes in the cluster.The participating nodes of the resource group specified will be where the password is changed for the specified user. The user account whose password to change. This entry cannot be changed, if you wish to change another user's password use the SMIT menu Change a User's Password in the Cluster. Users whose names appear in this list will be allowed to change their passwords cluster wide. Select ALL_USERS only if you want to grant all cluster users the ability to change their password cluster wide. Allows the system administrator to change the password of any cluster user in the system. If authorized by the system administrator, allows the current user to change his or her password cluster wide. A user logged in as root may also use this to change the root password without prior authorization. As the administrator, add or remove users from the list of users who are allowed to change their password cluster wide. Change the system password utility to the cluster password utility, or restore the base AIX /bin/passwd utility. Not to be used with secondary password systems such as NIS. As the administrator, list the users who are allowed to change their password cluster wide. Path to HACMP Cluster Test Tool. Test the cluster configuration by emulating events. Run the automated process that will execute a pre-defined set of tests for the cluster and each one of the resource groups. Execute the tests specified by a custom Test Plan. Select the Cycle Log File option to use a new log file to store the Test Tool's execution details. Select the Verbose Logging option to include additional information in the log file that may be useful in judging the success or failure of some tests. Enter the full path to the Test Plan that specifies the events to be executed. Enter the full path to the Variables File that specifies the variable definitions required to process the event file. Select the Abort on Error option to terminate testing after the first failed test. Select this menu item to configure dependencies between resource groups.Select this menu item to configure dependency between two resource groups. You will be asked to first choose a parent resource group and then a child.Select this menu item if you want to show, or to change an already configured dependency between two resource groups.Select this menu item if you want to delete the dependency between two resource groups.Select this menu item if you want to display an overall view of all dependencies currently configured.The parent resource group provides services another resource group, the child, depends on. When acquiring, the parent resource group will be acquired before the child resource group is acquired. When releasing, the child resource group will be released before the parent resource group is released.The child resource group depends on services another resource group, the parent, provides. 
When acquiring, the parent resource group will be acquired before the child resource group is acquired. When releasing, the child resource group will be released before the parent resource group is released.Select the resource group in parent role. The parent resource group provides services another resource group depends on. When acquiring, the parent resource group is guaranteed to be acquired before the child resource group is acquired. When releasing, the child resource group will be released before the parent resource group is released.Select the resource group in child role. The child resource group depends on services another resource group provides. When acquiring, the parent resource group is guaranteed to be acquired before the child resource group is acquired. When releasing, the child resource group will be released before the parent resource group is released.Select a resource group dependency to change/show.Select a resource group dependency to delete. Deleting a dependency between two resource groups will not delete the resource groups themselves.Parent/child dependency establishes a processing order so that the parent is always ONLINE before the child. The parent contains resources necessary for the child.An Online on the Same Node dependency establishes a location policy so that the selected resource groups are started up and kept ONLINE on the same node.An Online on Different Nodes Dependency establishes a location policy so that the selected resource groups are started up and kept ONLINE on different nodes.An Online on the Same Site dependency establishes a location policy so that the selected resource groups are started up and kept ONLINE on the same site.When acquired, the resource groups will be brought ONLINE on the same node. On fallback and fallover, the resource groups will be processed simultaneously and brought ONLINE on the same node.Select this menu item if you want to show, or to change an already configured location dependency between resource groups.Select this menu item if you want to delete the location dependency between resource groups.Add resource groups to be ONLINE on the same site.Add resource groups to be ONLINE on the same node.Selected list of Resource Groups to be ONLINE on the same node.High Priority resource group is brought ONLINE before lower priority resource group and they cannot reside on the same node. In case of fallback or fallover the resource group with High Priority takes precedence and is processed before lower priority resource group. If a High Priority resource group is acquired on a node where a lower priority resource group is ONLINE, the lower priority resource group is released in favor of bringing the HIGH Priority resource group ONLINE on that node.Intermediate Priority resource group is brought ONLINE on the first available node with no HIGH priority resource group. In case of fallback or fallover the resource group with Intermediate Priority is handled after the HIGH Priority resource group and before the LOW Priority resource group is released in favor of bringing the Intermediate resource group ONLINE that node.The resource group with Lower Priority is acquired on nodes where a no higher priority resource group is ONLINE. On fallback and fallover the resource group(s) with High and Intermediate Priority take precedence. If a higher priority resource group is acquired on a node, the LOW priority resource group is moved or taken OFFLINE and higher priority resource group is brought ONLINE. 
When acquired, the resource groups with equal priority cannot come ONLINE on the same node.Selected list of Resource Groups to be ONLINE on the same site.Start After dependency establishes a processing order so that the target resource group is always ONLINE before the source resource group. The target contains resources necessary for the source to start, but is not required to continue running once the source is started.Stop After dependency establishes a processing order so that the target is always brought OFFLINE before the source. The target contains resources necessary for the source to stop, but is not required to be running when the source is started.Select this menu item to configure Start after dependency between two resource groups. You will be asked to first choose a source resource group and then a target.Select this menu item if you want to show, or to change an already configured Start after dependency between two resource groups.Select this menu item if you want to delete the Start after dependency between two resource groups.Select this menu item if you want to display an overall view of all Start after dependencies currently configured.Select this menu item to configure Stop after dependency between two resource groups. You will be asked to first choose a source resource group and then a target.Select this menu item if you want to show, or to change an already configured Stop after dependency between two resource groups.Select this menu item if you want to delete the Stop after dependency between two resource groups.Select this menu item if you want to display an overall view of all Stop after dependencies currently configured.The source resource group depends on services provided by the target resource group only during acquiring the resource group. Once source resource group becomes online, it no longer depends on target resource group. When acquiring, the target resource group will be acquired before source resource group is acquired. there will be no dependency while releasing resource groups.The Target resource group provides services another resource group, i.e source resource group, depends on target only during acquiring source resource group. When acquiring, the Target resource group will be acquired before the Source resource group is acquired. When releasing, There will be no special order while releasing resource groups.Select the resource group in target role. The target resource group provides services another resource group depends on during acquiring. When acquiring, the target resource group is guaranteed to be acquired before the source resource group is acquired. When releasing, there will be no dependency.Select the resource group in source role. The source resource group depends on services another resource group provides during acquiring. When acquiring, the target resource group is guaranteed to be acquired before the source resource group is acquired. When releasing, there will be no dependency.The stopafter source resource group will be released only after taget resource group is brought offline.The Target resource group will be brought offline before source resource groupSelect the resource group in source role. During acquiring there will be no dependency between source and target. During release target resource group will be released first and then source.Select the resource group in target role. During acquiring there will be no dependency between source and target. 
During release target resource group will be released first and then source.Select a Start After Resource Group Dependency set to modifySelect a Start After Resource Group Dependency set to removeSelect a Stop After Resource Group Dependency set to modifySelect a Stop After Resource Group Dependency set to DeleteWhen acquired, the resource groups will be brought ONLINE on the Different node. On fallback and fallover, the resource groups will be processed simultaneously and brought ONLINE on the Different node.Change/Show An Online on Different Nodes Dependency policy so that the selected resource groups are started up and kept ONLINE on different nodes.Remove An Online on Different Nodes Dependency policy.The Automatic Cluster Configuration Monitoring path allows for periodical automatic cluster verification run and configure its parameters.If this feature is enabled, HACMP will automatically verify the cluster configuration every 24 hours and report results throughout the cluster. Select the appropriate option using F4 or Tab.Select a cluster node to execute the cluster verification and report its results. This node will contain the full log report of the verification in clverify.log file. This field only has to be filled if Automatic Cluster Configuration Monitoring is enabled. By default, the first node in alphabetical order will verify the cluster configuration. This node will be determined dynamically every time the automatic cluster verification occurs.Enter the Hour at which Automatic Cluster Configuration Verification should occur.Allows to define the Connection Authentication ModeAllows to define the Message Authentication ModeAllows to change/show the message authentication security mode used by HACMP. Message authentication provides security for internode communications through digital signature and encryption.Allows user to generate and distribute a key of the following types: MD5_DES, MD5_3DES, MD5_AESAllows to enable/disable automatic key distribution. If you allow automatic key distribution you should be aware of the fact that the key can be intercepted by a malicious user and used to break into your system. Also a malicious user can send his own key and this key may be accepted by the communication daemon and later this user can break into your system by using this key.Allows to activate a new key on all HACMP cluster nodes so that the next communication session will use a new key. The key activation must be performed just after the key distribution (manual or automatic).Allows to choose the connection authentication mode. Standard: based on IP address and port verification. Kerberos: Kerberos based authentication. It is available on SP-systems only.Allows to turn on persistent labels use only. It can be used to secure all cluster communications via VPN tunnels. The VPN tunnels may be built between the persistent labels and the HACMP communication daemon will be using the persistent labels only if this option is set to Yes.Allows to choose message authentication mode: MD5_DES: MD5 hashing algorithm will be used for message digest(signature) and DES algorithm will be used for signature encryption. MD5_DES3: MD5 hashing algorithm will be used for message digest(signature) and DES3 algorithm will be used for signature encryption. MD5_AES: MD5 hashing algorithm will be used for message digest(signature) and AES algorithm will be used for signature encryption. Enables encryption. 
If this field is set to Yes, the messages will be encrypted by one of the following algorithms: DES, 3DES, or AES, depending on the Message Authentication field selection. If the Message Authentication selection was None, then this option has no effect. For the changes to take effect, the HACMP configuration must be synchronized.Allows you to generate a key for message authentication. By default, the field will be populated with the value selected in the Configure Message Authentication Mode menu and the corresponding key will be generated.If set to Yes, the corresponding key will be automatically distributed to all nodes in the cluster. If you choose Yes in this field, you must first enable Automatic Key Distribution on all nodes.Allows you to disable automatic key distribution. When set to Yes, the cluster communication daemon will reject a key distribution request. If you enable automatic key distribution, be careful, since it can be used to break into your system. See the user documentation for more details.Executing the clsnap command to collect cluster logs will execute commands on the local and remote nodes and collect a considerable amount of data. Before using this option, you should consult with IBM support personnel to understand all the actions that will be performed and how to plan accordingly. Select a resource group which is in the error state. A resource group is in the error state due to a failure in registering the persistent reserve key on one or more disks of at least one volume group that is part of the resource group. Please look in hacmp.out for detailed information on the failure of SCSI persistent reserves. Choose this option to configure Application Servers, Application Monitoring and Dynamic LPAR and Capacity Upgrade on Demand Resources for Applications.Choose this option to establish a communication path between a node, a Managed System, and one or more Hardware Management Consoles (HMCs). If a communication path is not established for a node, the node is considered not DLPAR capable.Choose this option to establish a communication path between a node, one or more Hardware Management Consoles, and a Managed System. If a communication path is not established for a node, the node is considered not DLPAR capable.Choose this option to configure CPU and memory resource requirements for an Application Server that runs in a cluster that uses DLPAR nodes.Choose this option to add Hardware Management Console (HMC) and Capacity Upgrade On Demand Console IP addresses for a node. The node will use this IP address to send DLPAR requests to the HMC.Choose this option to add Hardware Management Console (HMC) IP addresses for a node. The node will use this IP address to send DLPAR requests to the HMC.Choose this option to modify or view the Hardware Management Console (HMC) and Capacity Upgrade On Demand Console IP addresses for a node.Choose this option to modify or view the Hardware Management Console (HMC) IP addresses for a node.Choose this option to remove the Hardware Management Console (HMC) and Capacity Upgrade On Demand Console IP addresses for a node.Choose this option to remove the Hardware Management Console (HMC) IP addresses for a node.Select a node name to associate with one or more Hardware Management Console (HMC) IP addresses and a Managed System.Enter one or more space-separated IP addresses for the Hardware Management Console (HMC). If multiple HMCs are entered, HACMP will try to communicate with each HMC until a working path is found.
Once found, the dynamic logical partition commands will be executed on that HMC.Enter the name of the managed system that runs the logical partition that represents the node.Enter the IP address for the Capacity Upgrade on Demand console.Choose this option to configure the CPU and memory Dynamic LPAR and CUoD requirements for an Application Server.Choose this option to modify or view the CPU and memory Dynamic LPAR and CUoD requirements for an Application ServerChoose this option to remove the CPU and memory Dynamic LPAR and CUoD requirements for an Application Server This is the application server for which you will configure Dynamic LPAR and CUoD resource provisioning.Enter the minimum number of CPUs to acquire when the Application Server starts. If this number of CPUs cannot be acquired, HACMP will take resource group recovery actions to move the resource group with this application to another node.Enter 'yes' if the application server should not start when there are insufficient CPUs available. Enter 'no' if it should start regardless of whether enough CPUs are available.Enter the maximum number of CPUs HACMP will attempt to allocate to the node before starting this application. HACMP may allocate fewer CPUs if there are not enough available.Enter the amount of memory to acquire when the Application Server starts. If this amount of memory cannot be acquired, HACMP will take resource group recovery actions to move the resource group with this application to another node. Enter the value in multiples of 256 MB. For example, 1024 would represent 1 GB.Enter 'yes' if the application server should not start when there is insufficient memory available. Enter 'no' if it should start regardless of whether enough memory is available.Enter the maximum amount of memory HACMP will attempt to allocate to the node before starting this application. HACMP can allocate less memory if there is not enough available. Enter the value in multiples of 256 MB. For example, 1024 would represent 1 GB.Enter 'yes' to have HACMP use Capacity Upgrade on Demand (CUoD) to obtain enough resources to fulfill the minimum amount requested. Using CUoD requires an activation code to be entered on the Hardware Management Console (HMC) and may result in extra costs due to usage of the CUoD license.'Yes' must be entered for this field to acknowledge that you understand that there might be extra costs when using CUoD.Choose this option to configure Application Servers and Application Monitoring.Choose this option to configure communications paths to Hardware Management Consoles (HMC) and to configure available or "Capacity Upgrade on Demand" (CUoD) CPU and memory resources for Application Servers that run on dynamic logical partitions (DLPARs).This is the name of the network for which this preference will be applied. Only IP-based networks using IPAT via IP aliasing are supported. This field specifies the distribution preference to be used for managing all service labels on the selected network. Available preferences are: Anti-Collocation - this is the default - service labels are mapped across all available boot interfaces using a "least loaded" selection process. Collocation - All service labels will be mapped to the same physical interface. Collocation with Persistent Label - service labels will be mapped to the same physical interface that currently has the persistent IP label for this network.
Anti-Collocation with Persistent Label - service labels will NOT be mapped to the same physical interface that currently has the persistent IP label unless there is no other interface available. This selection allows you to specify a preference for how HACMP will manage alias service labels across the available physical interfaces. This selection allows you to specify a preference for how HACMP will manage cluster resources. Using Capacity on Demand Resources to Support Highly Available (HA) Applications: PowerHA SystemMirror allows you to use Capacity on Demand (CoD) resources to support application fallover to a system where insufficient resources are available. When you specify processor and memory resource requirements for an HA application (using the Configure HACMP for Dynamic LPAR and CUoD Resources SMIT menu), you can optionally specify that if available installed system resources are insufficient to support the application, CoD resources should be activated to satisfy the requirement. This assumes that CoD enablement keys have already been activated on the target system, and that those CoD resources are not currently activated or in use by any other logical partition. It is important to note that if CoD resources are currently activated or unreturned on the target managed system, activation of additional CoD resources to support an HA application will fail. This means that application fallover may be unsuccessful when CoD resources are requested, and any of the following are true: - Trial CoD resources are activated or unreturned - Utility CoD resources are activated or unreturned - Reserve CoD resources are activated or unreturned - On/Off CoD resources are activated or unreturned If you intend for CoD resources to be used to support application fallover in a PowerHA SystemMirror cluster, it is important that CoD resources are not used for any other purpose on the system.Enter the minimum number of processing units to acquire when the Application Server starts. Processing units are specified as a decimal number with two decimal places, ranging from 0.01 to 255.99. This value is only used on nodes which support allocation of processing units. If this amount of processing units cannot be acquired, HACMP will take resource group recovery actions to move the resource group with this application to another node.Enter the maximum amount of processing units HACMP will attempt to allocate to the node before starting this application. Processing units are specified as a decimal number with two decimal places, ranging from 0.01 to 255.99. This value is only used on nodes which support allocation of processing units. HACMP may allocate fewer processing units if there are not enough available.This field specifies the distribution preference to be used for managing all service labels on the selected network. Available preferences are: Anti-Collocation - this is the default - service labels are mapped across all available boot interfaces using a "least loaded" selection process. The first service label is the source address on an interface. Anti-Collocation with Source - service labels are mapped using the Anti-Collocation preference. If there are not enough adapters, more than one service label can be placed on one adapter. This choice will allow one label to be chosen as the source address for outgoing communication. Collocation - All service labels will be mapped to the same physical interface. Collocation with Source - service labels are mapped using the Collocation preference.
This choice will allow you to choose one service label as the source for outgoing communication. The service label chosen in the next field is the source address. Collocation with Persistent Label - service labels will be mapped to the same physical interface that currently has the persistent IP label for this network. The persistent label will be the source address. Anti-Collocation with Persistent Label - service labels will NOT be mapped to the same physical interface that currently has the persistent IP label unless there is no other interface available. The first label placed on the adapter will be the source address. Anti-Collocation with Persistent Label and Source - service labels will be mapped using the Anti-Collocation with Persistent preference. One service address can be chosen as a source address for the case when there are more service addresses than the boot adapters. This field allows you to choose a service or persistent address to be used as a source address on the selected network. All the service labels and persistent labels will be shown as choices. An appropriate choice must be made depending on the Distribution Preference chosen above. A service label must be chosen if the Distribution Preference is Collocation with Source or Anti-Collocation with Source. This field specifies the distribution preference to be used for managing all service labels on the selected network. Available preferences are: Anti-Collocation - this is the default - service labels are mapped across all available boot interfaces using a "least loaded" selection process. The first service label placed on the interface will be the source address for all outgoing communication on that interface. Anti-Collocation with Source - service labels are mapped using the Anti-Collocation preference. If there are not enough interfaces, more than one service label can be placed on one interface. This choice will allow one label to be chosen as the source address for outgoing communication. Collocation - All service labels will be mapped to the same physical interface. The first service label placed on the interface will be the source address for all outgoing communication on that interface. Collocation with Source - service labels are mapped using the Collocation preference. This choice will allow you to choose one service label as the source for outgoing communication. The service label chosen in the next field is the source address. Collocation with Persistent Label - service labels will be mapped to the same physical interface that currently has the persistent IP label for this network. The persistent label will be the source address. Collocation with Persistent Label and Source - service labels will be mapped to the same physical interface that currently has the persistent IP label for this network. This choice will allow you to choose one service label as the source for outgoing communication. The service label chosen in the next field is the source address. Anti-Collocation with Persistent Label - service labels will NOT be mapped to the same physical interface that currently has the persistent IP label unless there is no other interface available. The first service label placed on the interface will be the source address for all outgoing communication on that interface. Anti-Collocation with Persistent Label and Source - service labels will be mapped using the Anti-Collocation with Persistent preference. One service address can be chosen as a source address for the case when there are more service addresses than the boot interfaces.
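A minimal sketch, outside the product help text: once cluster services have placed the service labels according to the distribution preference described above, you can see how the aliases were actually spread across the boot interfaces with standard AIX commands. The interface name en0 below is an assumption used only for illustration.

    # List every address configured on every interface; service and
    # persistent aliases show up as additional address lines per interface.
    netstat -in
    # Show the boot address plus any service or persistent aliases that
    # were placed on a particular interface (en0 is an example name).
    ifconfig en0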
Specifies whether HACMP is to start at system restart, whether to broadcast that HACMP is starting, whether to start the Cluster Information daemon (clinfo), and whether or not to perform cluster verification prior to starting HACMP. Setting this value to true will start the daemons after a system reboot by adding an entry to the /etc/inittab file. Choosing false will remove the entry from the /etc/inittab file and will not automatically start cluster services at system restart. Uses the 'wall' command to broadcast information to the console indicating that cluster services are starting. Starts the Cluster Information daemon, which allows clstat and xclstat, or any user application written against the clinfo API, to read changes in cluster state. Setting this value to 'true', the default, means that prior to starting cluster services HACMP will synchronize if required and verify your cluster configuration automatically. It is recommended that this value always be set to 'true'. Setting this value to 'false' means verification and any necessary synchronization will not occur prior to starting cluster services. Setting 'Ignore verification errors' to 'false' (default) means that nodes attempting to start cluster services that have verification errors will not start cluster services. If the error is related to a node that is not attempting to start cluster services, then the verification error will be ignored regardless of the 'Ignore verification errors' setting. Setting 'Ignore verification errors' to 'true' will ignore any verification errors regardless of which node caused the error. Cluster services will start on all specified nodes. Select 'Interactively' to have verification prompt you to correct resulting verification errors. Selecting 'Yes' will correct reported verification errors without prompting. Only certain errors will be corrected when verifying the cluster configuration prior to starting cluster services. Select 'No' if you do not want verification errors to be corrected. This option is only available if the automatic verification and synchronization option has been enabled in the Extended Cluster Service Settings SMIT panel. Please refer to the HACMP documentation guide: Administration and Troubleshooting Guide. Stops the HACMP cluster daemons. Choosing 'system restart' will remove any entry from the /etc/inittab file and HACMP will not restart after reboot. Choosing 'both' will stop the daemons immediately AND remove any entry from /etc/inittab.Uses the 'wall' command to broadcast the stop.To add a custom volume group method, in SMIT select Extended Configuration > Extended Resource Configuration > HACMP Extended Resources Configuration > Configure Custom Volume Group Methods > Add Custom Volume Group Methods and press Enter. Then you can enter the various fields to define the Volume Group Methods. Enter the identifier for the particular volume group type. By default, it is 'logical_volume/vgsubclass/vgtype' for AIX/LVM Volume Groups. This is the value of the PdDvLn field of the CuDv entry for the volume group. By default, this is the following command: odmget -q 'name = ' CuDv. By default, this is the AIX 'lsvg' command. The user is expected to enter the full path name of the routine, or use F4 to select one of the built in methods. The method takes no input, and must output a list of the volume group names, one per line.
It is expected that the method returns '0' for success and non-'0' for failure.By default, the method used is to compare the Physical Volume Identifiers (PVID) as output by the AIX 'lspv' command, of the disks which compose the volume group. If the keys seen by one host match the keys as seen by other hosts, then the volume group is considered to be 'shared' among the hosts. The user is expected to enter the full path name of the routine, or use F4 to select one of the built in methods. It is expected that the method returns '0' for success and non-'0' for failure. The output of the method is the 'key(s)' to compare. The input of the method is the volume group name. By default, this is the AIX 'lspv' command. This is used to determine which disks must be made available to the host prior to bringing the volume group online. The method takes as input the volume group name, and must output a list of the hdisk names, one per line. The user is expected to enter the full path name of the routine, or use F4 to select one of the built in methods. It is expected that the method returns '0' for success and non-'0' for failure.By default, this is the AIX 'varyonvg' command. The user is expected to enter the full path name of the routine, or use F4 to select one of the built in methods. The method takes as input the volume group name. It is recommended that this method encapsulate the code to 'force' the volume group online, if such a feature exists and is desired. It is expected that the method returns '0' for success and non-'0' for failure.By default, this is the AIX 'lsvg -o' command. The user is expected to enter the full path name of the routine, or use F4 to select one of the built in methods. The method takes as input the volume group name. It is expected that the method returns '0' for offline, '1' for online, and '2' for command failure.By default, this is the AIX 'varyonvg' command. The command is normally run as a background process. The method takes as input the volume group name. The user is expected to enter the full path name of the routine, or use F4 to select one of the built in methods. It is expected that the method returns '0' for success and non-'0' for failure.By default, this is the AIX 'varyoffvg' command. The method takes as input the volume group name. It is recommended that this method encapsulate the code to 'force' the volume group offline, if such a feature exists and is desired. The user is expected to enter the full path name of the routine, or use F4 to select one of the built in methods. It is expected that the method returns '0' for success and non-'0' for failure.By default, HACMP provides verification of configuration for AIX LVM volume groups only. The user is expected to enter the full path name of the routine, or use F4 to select one of the built in methods. This method will be run on each cluster node. It is expected that the method returns '0' for success and non-'0' for failure. One or more space-separated directories may be listed. If found, the files in each directory will be copied by the HACMP 'snap -e' command for troubleshooting. To Change/Show Custom Volume Group Methods, in SMIT, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resources Configuration > Configure Custom Volume Group Methods > Change/Show Custom Volume Group Methods and press enter. Then select the name of a particular volume group type and press enter. 
SMIT will display the current information. You can then view the current information or enter new information. To remove Custom Volume Group Methods, in SMIT select Extended Configuration > Extended Resources Configuration > Configure Custom Volume Group Methods > Remove Custom Volume Group Methods. SMIT displays a list. You can then select a type/method that you want to remove and press Enter. HACMP offers Smart Assists to help you rapidly configure HACMP for use with DB2, WebSphere and Oracle; however, the fileset containing these Assistants is not installed. If you have purchased the Standard or Enterprise Version, simply install the cluster.es.assist fileset from the installation media. The two-node configuration assistant helps you configure a basic two-node cluster which can keep an application and an IP address highly available. You will need to supply a few basic pieces of information and HACMP will automatically set up the cluster for you. Enter the identifier for the particular filesystem type. By default, it is the value of the VFS field of /etc/filesystems for the volume. This value can be retrieved from /etc/filesystems with the following command: 'lsfs'. By default, this is the AIX 'lsfs' command. This method takes no input and must output a list of the names of filesystems of the given type. The user is expected to enter the full path name of the routine, or use F4 to select one of the built in methods. It is expected that the method returns '0' for success and non-'0' for failure. By default, the method used is to compare the AIX logical volume associated with the filesystem with the AIX logical volume that is part of the volume group. The user is expected to enter the full path name of the routine, or use F4 to select one of the built in methods. The method takes as input the filesystem name, and must output a list of the volume group names, one per line. It is expected that the method returns '0' for success and non-'0' for failure. By default, this is the AIX 'mount' command. The method takes as input the filesystem name. It is recommended that this method encapsulate the code to 'force' the filesystem online, if such a feature exists and is desired. The user is expected to enter the full path name of the routine, or use F4 to select one of the built in methods. It is expected that the method returns '0' for success and non-'0' for failure.By default, this is the AIX 'unmount' command. The method takes as input the filesystem name. It is recommended that this method encapsulate the code to 'force' the filesystem offline, if such a feature exists and is desired. The user is expected to enter the full path name of the routine, or use F4 to select one of the built in methods. It is expected that the method returns '0' for success and non-'0' for failure.By default, this is the AIX 'mount' command. The user is expected to enter the full path name of the routine, or use F4 to select one of the built in methods. The method takes as input the filesystem name. It is expected that the method returns '0' for offline, '1' for online, and '2' for command failure.By default, HACMP provides verification of configuration for AIX filesystems only. This method will be run on each cluster node. The user is expected to enter the full path name of the routine, or use F4 to select one of the built in methods. It is expected that the method returns '0' for success and non-'0' for failure.
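As an illustration of the return-code conventions described in the custom volume group and filesystem method entries above, the following is a minimal sketch of a custom filesystem "status" method. It is not a method shipped with the product; the script itself and its reliance on the layout of the AIX 'mount' output for locally mounted filesystems are assumptions.

    #!/bin/ksh
    # Hypothetical custom filesystem "status" method.
    # Input:  the filesystem (mount point) name as $1.
    # Output: exit '0' if offline, '1' if online, '2' on command failure.
    FS=$1
    [ -n "$FS" ] || exit 2              # no argument given: command failure
    # For locally mounted filesystems the second column of 'mount' output
    # is the mount point; match it exactly against the requested filesystem.
    if mount | awk '{print $2}' | grep -qx "$FS" ; then
        exit 1                          # filesystem is currently online
    else
        exit 0                          # filesystem is offline
    fi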
To add a Custom Filesystem Method, in SMIT select Extended Configuration > Extended Resource Configuration > HACMP Extended Resources Configuration > Configure Custom Filesystems Methods > Add Custom Filesystems Methods and press Enter. Then enter the various fields that define the filesystem. To change a Custom Filesystem Method, in SMIT select Extended Configuration > Extended Resource Configuration > HACMP Extended Resources Configuration > Configure Custom Filesystem Methods > Change/Show Custom Filesystems Methods and press Enter. When the list is displayed, changes can be made. To remove a Custom Filesystem Method, in SMIT select Extended Configuration > Extended Resource Configuration > HACMP Extended Resources Configuration > Configure Custom Filesystem Methods > Remove Custom Filesystem Methods and press Enter. SMIT displays a list. You can then remove the Filesystem Methods that you want. Use this option to use HACMP Smart Assists to make your applications highly available. Base HACMP supplies a general-purpose Smart Assist. If you have installed the Smart Assists for DB2, Oracle, WebSphere or other applications, you can access them from this menu as well. This is the application installed on one or more nodes of the cluster you wish to make highly available. Show or Modify the HACMP configuration that makes your application highly available. Stop your application from being made highly available with HACMP. If the Smart Assist used to make your application highly available has any unique configuration tasks, they will be found here. Add resources such as filesystems and volume groups to a resource group. These resources will always be acquired and released as a single entity. If it is desired for a set of resources to be acquired by one node and another set acquired by a different node, create separate resource groups for each. Test the availability of your application using the Cluster Test Tool. Enter the primary node for the application to run. Enter the takeover node(s) for the application. The General Application Smart Assist will configure an HACMP cluster to make any application highly available. Before you start, complete the following tasks: - Connect and configure all IP network interfaces. - Install and configure the application to be made highly available. - Add the application's service IP label to /etc/hosts on all nodes. - Configure the volume groups that contain the application's shared data on disks that are attached to both nodes. Before you start, collect the following information: - A unique name to identify the application to be made highly available. - The full path to the application's start and stop scripts. - The application's service IP label. The cluster created by this assistant will contain a single resource group configured to come online on the local node at cluster start, to fall over to the remote takeover node if the local node fails, and to remain on the remote takeover node when the local node rejoins the cluster. The General Application Smart Assist will configure an HACMP cluster to make any application highly available. The cluster created by this assistant will contain a single resource group configured to come online on the local node at cluster start, to fall over to the remote takeover node if the local node fails, and to remain on the remote takeover node when the local node rejoins the cluster. This option is to use the NFS Configuration Assistant. Its Add menu option configures a new resource group with NFS exports.
Its Change/Show menu option offers an interface to display or modify attributes of a resource group. Its Delete menu option removes a resource group with NFS exports. The primary node for the application to run. The takeover node(s) for the application. This will help you to configure the selected application for high availability by reading environment details about the application from a supplied XML file. Each Smart Assist will provide a template XML file which can be found under the config folder at /usr/es/sbin/cluster/sa/. This template file will help you to figure out what environment details need to be provided. Smart Assist ID of the previously selected Smart Assist.Absolute path to the XML file containing environment details about the application. Show or Modify the SAP Instance specific attributes and SAP Globals that are configured. You can configure most applications manually using an XML file. The field above has been pre-filled with the path to a sample XML file provided with the product. You can edit the sample file or rename it, then fill in the specifics of your application. To proceed with the manual configuration, you must enter the full path to the file in the field above. Specify the logical host name that is used to configure this application server instance. This IP label will be added as a service IP alias to PowerHA SystemMirror based on the network selection. If you want SystemMirror to keep the IP label highly available, then choose one of the SystemMirror networks from the list. The IP label will be added as a service IP label in the SystemMirror configuration. If you do not want SystemMirror to keep the IP label highly available, then choose the keyword "LOCAL" from the list. If you choose "LOCAL", the IP label will not be added as a service IP and will not be part of the resource group. This menu allows you to stop the RSCT services on the local node. Please note that HACMP automatically manages the RSCT services. You may proceed to stop the RSCT services using this menu only when suggested by IBM's technical support. Stop RSCT services.Stop RSCT services on the local node. Note that you may stop RSCT services only if the local node is stopped by selecting the "Unmanage" resource groups option. Select "yes" if you want to stop the RSCT services even if the node has enhanced concurrent volume groups active. Note that enhanced concurrent mode volume groups use the RSCT services. Thus, stopping RSCT services will vary off the enhanced concurrent volume groups. Allows the user to configure applications, application monitoring and dependencies between the applications. Allows the user to configure the applications and their dependent resources. Choose this option to add, change, view or remove user-defined Application Monitors. Custom monitoring lets you define your own method to test the status of your application. To simply monitor whether an application is running, use a Process Monitor. Allows the user to configure parent/child dependencies between the applications. Allows the user to start/stop an application or an application dependency group. Allows the user to update the configuration. This option should be used when the user makes changes to the topology configuration, such as adding or removing a network interface. Allows the user to add an application to the configuration and add dependent resources and monitors. Allows the user to change/show configured applications and their dependent resources.
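The application entries above and below refer to start scripts, stop scripts and custom monitor methods that you supply yourself. The following is a minimal sketch of such scripts, assuming a hypothetical application daemon named myappd installed under /opt/myapp; none of these paths, names or flags come from the product.

    #!/bin/ksh
    # Hypothetical application start script (e.g. /usr/local/ha/start_myapp).
    # Run when the application is brought online; exit 0 on success.
    /opt/myapp/bin/myappd -config /opt/myapp/etc/myapp.conf
    exit $?

    #!/bin/ksh
    # Hypothetical custom application monitor method
    # (e.g. /usr/local/ha/monitor_myapp). Invoked at a regular interval;
    # exit 0 if the application is healthy, non-zero to trigger recovery.
    if ps -eo comm | grep -qx myappd ; then
        exit 0
    else
        exit 1
    fi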
Allows the user to remove a configured application.Choose this option to configure a "Custom" type Application Monitor. Custom Application Monitors let you define your own monitoring method - the system invokes your method at a regular interval, and when your method indicates a problem, the system responds by recovering the application. Choose this option to modify or view a Custom Application Monitor.Choose this option to select a Custom Application Monitor to remove.Select this menu item to configure a dependency between two applications. You will be asked to first choose a parent application and then a child.Select this menu item if you want to show or change an already configured dependency between two applications.Select this menu item if you want to delete the dependency between two applications.Allows the user to start an application and its dependencies.Allows the user to stop an application and its dependencies.Allows the user to start an application dependency group.Allows the user to stop an application dependency group.A unique name for the application, up to 64 alphanumeric characters in length.The full path to the script or executable to start the application.The full path to the script or executable to stop the application.Enter the name of the Custom Application Monitor. These are the monitors defined in the "Custom Application Monitors" section.The application start script will launch one or more processes. Enter the names of the processes here. Each process name must be unique.The username of the owner of the processes (usually root).The number of instances of a process to monitor. The default is 1. If more than one process is listed in the Process to Monitor, this must be 1.The name of the script to run to stop the application. The default is the application server stop method.The name of the script which starts the application. The default is the application server start method.Enter the dependent network name. The network name has the following format: NetworkType_interface1_interface2, etc., where interface1 and interface2 are the interfaces on the same physical network.The IP label (hostname) of this adapter, associated with the numeric IP address in the /etc/hosts file.Enter the names of the volume groups containing raw logical volumes or raw volume groups that are varied on before the application gets started.Enter the mount points of the filesystems which should be mounted before the application gets started. If this field is left blank and volume groups have been specified in the "Dependent Volume Groups" field in this SMIT panel, then all filesystems that exist on the volume groups will be mounted before the application gets started. This default behavior of mounting all filesystems is only assumed if volume groups have been explicitly specified as resources for this application. For any filesystem that is specified in this field, the volume group on which it resides will be mounted before the application gets started.A new unique name for the application, up to 64 alphanumeric characters in length.Monitor Mode: Select the mode in which the application monitor monitors the application: - STARTUP MONITORING. In this mode, the application monitor checks that the application server has successfully started within the specified stabilization interval. Select this mode if you are configuring an application monitor for a parent application. - LONG-RUNNING MONITORING. In this mode, the application monitor periodically checks that the application server is running.
The checking starts after the specified stabilization interval has passed. This mode is the default. - BOTH. In this mode, the application monitor checks that within the stabilization interval the application server has started successfully, and periodically monitors that the application server is running after the stabilization interval has passed. Stabilization Interval: Specify the time (in seconds) in this field. Depending on which monitor mode is selected, the system uses the stabilization period in different ways: - If you select the LONG-RUNNING mode for the monitor, the stabilization interval is the period during which the system waits for the application to stabilize, before beginning to monitor that the application is running successfully. For instance, with a database application, you may wish to delay monitoring until after the start script and initial database search have been completed. Experiment with this value to balance performance with reliability. - If you select the STARTUP MONITORING mode, the stabilization interval is the period within which the system monitors that the application has successfully started. When the time expires, the system terminates the monitoring of the application startup and begins the recovery process. The number of seconds you specify should be approximately equal to the period of time it takes for the application to start. This depends on the application you are using. - If you select BOTH as a monitor mode, the application monitor uses the stabilization interval to wait for the application to start successfully. It uses the same interval to wait until it starts checking periodically that the application is successfully running on the node. In most cases, the value should NOT be zero. This is the unique name for the custom application monitor.Select the application in the parent role. The parent application provides services that another application depends on. When starting, the parent application is guaranteed to be started before the child application is started. When stopping, the child application will be stopped before the parent application is stopped.Select the application in the child role. The child application depends on services that another application provides. When starting, the parent application is guaranteed to be started before the child application is started. When stopping, the child application will be stopped before the parent application is stopped.Enter the name of the application dependency group. All applications in the group can be started together; however, they will be started in order according to their parent/child dependencies.Select an application dependency to change/show.Select this menu item if you want to show or change an already configured dependency between two applications.The parent application provides services that another application, the child, depends on. When starting, the parent application will be started before the child application is started. When stopping, the child application will be stopped before the parent application is stopped.The child application depends on services that another application, the parent, provides. When starting, the parent application will be started before the child application is started.
When stopping, the child application will be stopped before the parent application is stopped.Select this menu item if you want to delete the dependency between two applications.Allows the user to start an application.Allows the user to stop an application.Select the name of the application to start.Select the name of the application to stop.Allows the user to start an application dependency group.Allows the user to stop an application dependency group.Select the name of the application dependency group to start.Select the name of the application dependency group to stop.Select this option to add a new Resource Group with NFS exports. Select this option to Change/Show a Resource Group with NFS exports. Select this option to Remove a Resource Group with NFS exports. Use this option when you have a volume group already configured to HACMP that you want to use for multi-node disk heartbeat. This option will create the multi-node disk heartbeat network on the selected volume group.This option lets you configure a volume group and logical volume from scratch and use it for multi-node disk heartbeat.Use this option when you have a volume group already configured to HACMP that you want to use for multi-node disk heartbeat. This option lets you add a logical volume to the volume group and use that volume for multi-node disk heartbeat.This option displays information about all the configured multi-node disk heartbeat networks.This option will remove a multi-node disk heartbeat network from the selected volume group and volume.Use this option to select the action to take when a node loses access to the multi-node disk heartbeat network. The available options are: Halt the node - HACMP will halt the node when access is lost. Bring the Resource Group offline - HACMP will take the resource group and all resources offline on this node. Move the Resource Group to a backup node - HACMP will move the resource group to another node in the node list.Use this option to manage the nodes which participate in the multi-node disk heartbeat network.Data divergence is a state where each site's disks contain data updates that have not been mirrored to the other site. In other words, each site's copy of the data reflects logical volume writes that are missing from the other site's copy of the data. When you configured your asynchronous geographically mirrored volume group and added it to a resource group, you selected whether or not to allow data divergence. If you selected to allow data divergence to occur, you can now choose to keep the data that exists at one site and back out (in other words, throw away) the non-mirrored updates that occurred at the other site, before the entire volume group can be merged back together. If you believe data divergence has occurred, and you wish to override the default site for recovery, specify the site name here. Data at the selected site will be preserved, and any non-mirrored updates at the other site will be thrown away in order to bring the copies of the volume group back together.Answer "no" if you are not willing to allow data divergence to occur. If the varyonvg command determines that no non-mirrored data updates were in the production site cache at the time of the site failure, then the volume group will be varied online, and you will not need to worry about data divergence.
However, if the varyonvg command determines that non-mirrored data updates may have been in the production site's cache, then the varyonvg command will fail, in which case you will need to recover the production site in order to avoid data divergence. Answer "yes" if you are willing to allow data divergence to occur. Just keep in mind this can allow the data at each site to reflect transactions that are not mirrored to the other site. Later, after you recover the production site, the data divergence recovery processing will only allow you to keep the data that resides at one site, and any non-mirrored transactions that are at the other site will be lost. However, if you are certain that the data on the production site disks has been destroyed, then you should answer "yes" because any non-mirrored transactions that were on the production site disks are lost, so whatever data resides on the disaster recovery site disks is now the latest surviving copy of the data.List, create, change and remove volume groups and logical volumes used for multi-node disk heartbeat networks.Use this option to select the action to take when a node loses access to the Critical Volume Group. The available options are: Notify Only - PowerHA SystemMirror will not take any action on nodes except notification. Halt the node - PowerHA SystemMirror will halt the node when access is lost. Fence the node - PowerHA SystemMirror will fence this node from the disks. Shutdown Cluster Services and bring all Resource Groups Offline - PowerHA SystemMirror will stop the cluster services and bring all resources offline on this node.Specifies the user whose attributes you want to change or view. The user must already exist on the system. To change a user's attributes, you must have the correct access privileges. Type in the name of an existing user, or use the List box and select a user from the choices displayed. When you select the Do button or press Enter, the user's attributes are displayed. Note: You cannot change a user's name in this attribute dialog. Defines a unique decimal integer string to associate with this user account on the system. It is strongly recommended to let the system generate the user's ID to incorporate all the security restrictions and conventions that may apply to your system. To have the system generate the ID, leave this field blank. Select "true" if this user will be an administrative user. Otherwise, select "false". The group name the user will have when the user logs in for the first time. Groups are collections of users that can share access authority to protected resources. Groups can be formed for users who access the same applications or hardware resources, perform similar tasks, or have similar needs for information. A user can be a member of up to 64 groups. However, you can specify only one primary group for a user. To specify the primary group name, type the name of an existing group, or use the List box and select the group from the choices displayed. If you do not specify or list any groups, the system assigns the user to the primary default group specified in the /usr/lib/security/mkuser.default file. Note: To make this user a member of other groups, use the Group Set field. The groups in which the user is a member. Groups are collections of users that can share access authority to protected resources. Groups can be formed for users who access the same applications or hardware resources, perform similar tasks, or have similar needs for information. A user can be a member of up to 64 groups.
The groups must already exist on the system. To specify the user's group set, type a string containing the group names (each group name can contain one to eight bytes; separate the names with commas), or use the List box and select the names from the choices displayed (as you select, the string of names is displayed in the field in the correct format). If you do not specify any groups, the system assigns the user to the default groups specified in the /usr/lib/security/mkuser.default file. Specifies the nonadministrative groups for which the user is an administrator. The attributes of a nonadministrative group can be modified by its administrators and the root user. This is different from the attributes of an administrative group, which can be modified only by the root user. A user can be the administrator of more than one group. The groups must already exist on the system. To enter the groups, type in a string containing the group names (each group name can contain one to eight bytes; separate the names with commas), or use the List box and select the names from the choices displayed (as you select, the string of names is displayed in the field in the correct format). If you do not enter or list any groups, the system checks the defaults in the /etc/security/user file for default administrator definitions. It is possible that the user may not be made an administrator of any groups. Specifies the user's ability to su to another user's account. Select "true" if the user should be authorized to su to another user's account. Select "false" if the user is not authorized to su to another user's account. Specifies the groups that can su (switch user) to this user's account. You may want groups, such as a group with administrative privileges, to be able to access this user's account to update the user's system configuration or change some attribute values, such as print queues or host names. More than one group can su to a user account. The groups must already exist on the system. To enter su groups, type in a string containing the group names (each group name can contain one to eight bytes; separate the names with commas), or use the List box and select the names from the choices displayed (as you select, the string of names is displayed in the field in the correct format). You can specify the keyword ALL to indicate all groups, or place an exclamation point (!) in front of a group name listed in the field to exclude a specific group. If you do not enter or list any groups, the system allows the groups listed as default su groups in the /etc/security/user file to switch to this user account with the su command. Specify the full path to the user's home directory. Specify the initial program to execute when the user performs a login. This is typically a shell program, such as /bin/ksh. Specify descriptive information about the user.Specifies an expiration date for the user's password. When the expiration date is reached, the user will be required to change their password. Indicates whether the user's account is locked, which prevents the user from logging in. True indicates the account is locked and the user cannot log in. False indicates the account is not locked and the user can log in. This option will not unlock a user's account that was locked as a result of too many failed login attempts. Note: To unlock a user's account that was locked because of too many failed logins, the system administrator can use the Reset User's Failed Login Count menu item under the Users menu item of the Security & Users menu.
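The user attributes described in the entries above correspond to standard AIX user attributes; these SMIT panels manage them across the cluster nodes, but the same attributes are accepted by the AIX mkuser and chuser commands on a node. A minimal sketch, with an assumed user name dbadmin and purely illustrative values:

    # Create a user with a primary group, group set, home directory,
    # initial program, su permission and su groups (values are examples).
    mkuser pgrp=staff groups=staff,dba home=/home/dbadmin \
           shell=/usr/bin/ksh gecos="DB administrator" \
           su=true sugroups=system dbadmin
    # Unlock the account, limit consecutive failed logins, allow all ttys.
    chuser account_locked=false loginretries=5 ttys=ALL dbadmin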
Indicates whether the user can log into the system with the login command. This field is displayed with True or False as its value. True indicates that the user can log in to the system. To change this value, use the Tab key to toggle the True/False value. Indicates whether the user can log into the system remotely. This field is displayed with True or False as its value. True indicates that the user can log in to the system remotely. To change this value, use the Tab key to toggle the True/False value. Specifies the time of day and days of the week the user is allowed to log in to the system. Any attempt to access the system outside of these times is not allowed. The value is a comma-separated list of day and time periods. An ! (exclamation point) in front of the time indicates the user is not allowed to log in during that time. If this attribute is not specified, the user can log in at all times. Refer to the chuser command documentation for details. The number of consecutive unsuccessful login attempts the user is allowed. If this number is exceeded, the account is locked and the user cannot log in. If 0 is specified, this feature is disabled. Note: To unlock a user's account that was locked because of too many failed logins, the system administrator can use the Reset User's Failed Login Count menu item under the Users menu item of the Security & Users menu. Specifies the methods through which the user must authenticate successfully before gaining access to the system. The default word compat indicates that normal login procedures will be followed. Therefore, compat allows local and NIS users access to the system. Specifies the list of terminals that can access this user account. When a user tries to access the account, the system attempts to match the terminal from which the access request is made with a terminal listed in this field. The system works through the list of ttys in the order specified in this field and grants access to the account to the first tty that it matches. If the system cannot find a match, the user cannot log in to the account from the terminal. To enter a list of valid ttys for this user account, type in the full path names to each terminal (separating each path name with a comma). Note: As shortcuts, type in the keyword ALL to indicate that all ttys known to the system are valid for the account, or prefix a tty's path with an ! (exclamation point) to exclude it from a list of entries. You can even combine the two shortcuts. For example, !/dev/tty0,ALL means that all ttys available to the system can access this user account except for tty0. If you do not enter a list of valid ttys, the system uses the defaults from the /etc/security/user file. Specifies the number of days prior to the expiration of the user's password when a warning message is issued. The value is a decimal integer string. The message appears each time the user logs in during this warning period, and gives the date when the user's password expires. List of administrator-supplied methods for checking the user's new password during a password change. The value is a comma-separated list of program names, which must be specified using absolute pathnames or a path relative to /usr/lib. If the password does not meet the requirements of all the methods specified, the password change will not be allowed. List of dictionary files containing words that cannot be used as passwords. The value is a comma-separated list of files, which must be specified using absolute pathnames.
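A minimal sketch relating the password warning period, password check methods and dictionary list described above to their AIX user attribute names (pwdwarntime, pwdchecks, dictionlist); the user name dbadmin and the dictionary file path are assumptions, not defaults.

    # Warn the user 7 days before password expiration and use a dictionary
    # file of disallowed passwords (the path shown is only an example).
    chuser pwdwarntime=7 dictionlist=/usr/share/dict/words dbadmin
    # Verify the settings.
    lsuser -a pwdwarntime pwdchecks dictionlist dbadmin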
The number of previous passwords that the user will not be able to reuse. The value is a decimal integer string. The interpretation of this value may depend on the value of the WEEKS before password reuse attribute. If 0 is specified, any previous password can be reused as long as the WEEKS before password reuse time has elapsed. The number of weeks that must pass before a user is able to reuse a password after it has been selected as the user's current password. The value is a decimal integer string, and the recommended number of weeks is 26 (six months). The interpretation of this value may depend on the value of the NUMBER OF PASSWORDS before reuse attribute. If 0 is specified, any previous password can be reused as long as the NUMBER OF PASSWORDS before reuse attribute has been satisfied. The number of weeks after the user's password expires (reaches its maximum age) during which the user can still change the password. If this time period passes without a password change, the user account no longer allows logins until an administrator resets the password. The value is a decimal integer string. If 0 is specified, logins will be prevented at the time the password expires. If -1 is specified, this feature is disabled. If Password MAX. AGE is 0, any value entered here is ignored. Defines the maximum age (in weeks) for the user's password. When the password reaches this age, the system requires it to be changed before the user can login again. The value is a decimal integer string. If you specify "0", this feature is disabled. You can specify a number from 0 to 52. Defines the minimum age (in weeks) for the user's password before it can be changed. The value is a decimal integer string. If you specify "0", the password can be changed at any time. You can specify a number from 0 to 52. The minimum number of characters that the user's password must contain. The value is a decimal integer string. If you select "0" there is no minimum length. You can select a number from 0 to 8. The minimum number of alphabetic characters that must be included in the user's password. The value is a decimal integer string. If you select "0", no minimum number of alphabetic characters is required. You can select a number from 0 to 8. The minimum number of characters other than alpha characters that must be included in the user's password. The value is a decimal integer string. If you specify "0", no minimum number of other characters is required. You can select a number from 0 to 8. The maximum number of times that a character can be repeated within the user's password. The value is a decimal integer string. If you specify "8", any number of characters can be repeated. You can select a number from 0 to 8. The minimum number of characters required in the user's new password that were not in the old password. The value is a decimal integer string. If you specify "0", no minimum number of different characters is required. You can select a number from 0 to 8. Specifies the authentication mechanism through which the user is administered. It is used to resolve a remotely administered user to the locally administered domain. This situation might occur when network services unexpectedly fail or network databases are replicated locally. Select from the possible values "files" or Defines the largest soft file size, in 512-byte blocks, that a process invoked by this user can create or extend. A user can change this value, but not beyond the hard limit value. 
To enter the file size, type in a decimal integer string for the appropriate number of blocks. The minimum value is 8192 blocks. The default value is set in the /etc/security/limits file. Defines the largest soft value of system unit time (in seconds) that a user's process can use. A user can change this value, but not beyond the hard limit value. To enter the CPU time, type in a decimal integer string for the appropriate number of seconds. The default value is set in the /etc/security/limits file. Defines the largest soft data segment size, in 512-byte blocks, for a user's process. A user can change this value, but not beyond the hard limit value. To enter the segment size, type in a decimal integer string for the number of blocks. The minimum value is 1272 blocks. The default value is set in the /etc/security/limits file. Defines the largest soft process stack segment size, in 512-byte blocks, for a user's process. A user can change this value, but not beyond the hard limit value. To enter the stack size, type in a decimal integer string for the number of blocks. The default value is set in the /etc/security/limits file. Defines the largest soft core file size, in 512-byte blocks, that a user's process can create. A core file contains a memory image of a terminated process. The system creates core files in the current directory when certain system errors (commonly called core dumps) occur. A user can change this value, but not beyond the hard limit value. To enter the core file size, type in a decimal integer string for the number of blocks. The default value is set in the /etc/security/limits file. Defines the UMASK to use when creating files. Specifies a variable that defines the user's access to the trusted path. The system uses this variable when the user tries to invoke the trusted shell or a trusted process, or enters the secure attention key (SAK) sequence. The system recognizes the following values: o nosak - SAK is disabled for all processes run by the user. Select this option if this user transfers binary data that may contain the SAK sequence. o notsh - User cannot invoke the trusted shell on a trusted path and is logged off if the user enters the SAK during the current session. o always - User can only execute trusted processes (and implies the user's initial program is the trusted shell or another trusted process). o on - User has standard trusted path characteristics and can invoke the trusted shell with the SAK. This field is displayed with one of the values in place. Use the List feature to get a list of valid options. A login name that identifies this user account on the system. The system uses the user name to set the correct environment and access privileges for the user during login. Information in this field is required. A user name is specified as a string. The maximum length depends on the configuration of the individual nodes, and can be queried using the lsattr command to view the max_logname attribute of the sys0 device, but all AIX systems will accept a length of up to 8 characters. See the documentation on the AIX mkuser command for more information. You can use letters, numbers, and some special characters in the name. The string cannot start with a hyphen (-), plus (+), tilde (~), or at sign (@).
The string cannot contain any spaces or any of the following characters: colon (:), double quote ("), pound sign (#), comma (,), asterisk (*), single quote ('), equal sign (=), newline (\n), tab (\t), back slash (\), forward slash (/), question mark (?), back quote (`), or the key words "ALL" or "default". Each authorized user has a login name and password to access a user account. One person can have several authorized user accounts on a system but each account must be identified with a unique login name to preserve a secure environment. It is a good idea to use names that are meaningful to the users on the system. For example, using actual names helps users identify each other for electronic mail, or using a task name helps identify the user account with its purpose.Specifies the user you want to remove from the system. The user must already exist on the system and you must have the correct access privileges to remove the user. When you remove this user, the system deletes the user's attributes from the user files, but does not remove the user's home directory or files the user owns. The system does not remove the user's password and other user authentication information unless you answer Yes in the Remove Authentication Information? field. Type in the name of an existing user account, or use the List box and select a user from the choices displayed.Indicates if the system should delete the user's password and other user authentication information from the /etc/security/passwd file. This field is displayed with Yes or No as its value. Yes instructs the system to remove password and other user authentication information. To change this value, use the Tab key to toggle the Yes/No values.One or more system users who can access and work with protected resources. Information in this field is required. The system uses groups to control access to files and resources by users who do not own them. When a user starts a process, the system associates the process with the user's ID and the group IDs of the groups the user belongs to. If the user owns the resource or is a member of a group that can access it, the system grants read write, or execute access to it according to the access control list of the resource or file. A group name is specified as a string. The maximum length depends on the configuration of the individual nodes, and can be queried using the lsattr command to view the max_logname attribute of the sys0 device, but all AIX systems will accept a length of up to 8 characters. See the documentation on the AIX mkuser command for more information. You can use letters, numbers, and some special characters in the name. The string cannot start with a hyphen (-), plus (+), tilde (~), or at sign (@). The string cannot contain any spaces or any of the following characters: colon (:), double quote ("), pound sign, comma (,), asterisk (*), single quote ('), equal sign (=), newline (\n), tab (\t), back slash (\), forward slash (/), question mark (?), back quote (`), or the key words "ALL" or "default". Information in this field is required.Indicates if the group is an administrative group. Only the root user can modify the attributes of an administrative group. This field is displayed with False or True as its value. True indicates that group is an administrative group. False indicates that it is a nonadministrative group (its attributes can be modified by the group's specified administrators and the root user). 
To change this value, use the Tab key to toggle the True/False values.The system assigns a unique ID associated with the group name. The group IDs are stored in the /etc/group file.Specifies the names of the users that belong to this group. The members of a group can access (that is, read, write, or execute) a resource or file owned by another member of the group as specified by the resource's access control list. To enter the user members of this group, type in their names (separated by commas), or use the List box and select the users from the choices displayed (the users are displayed in the field in the correct format). Note: A user cannot be removed from the user's primary group unless you first redefine the user's primary group (use the Change/Show Characteristics of a User option, which alters this information in the /etc/passwd file).Specifies the members that can work with the group attributes (for example, add new members to the group or remove members from it) if the group is a nonadministrative group. Note: The group attributes of an administrative group can be modified by only the root user; so if the group is an administrative group (specified in the ADMINISTRATIVE group attribute), no administrators can be defined in this field. To enter the administrators, type in their user names (separated by commas), or use the List box and select the users from the choices displayed (the users are displayed in the field in the correct format).Specifies the group whose attributes you want to change or view. The group must already exist on the system. To change a group's attributes, you must have the correct access privileges. Type in an existing group name, or use the List box and select a group from the choices displayed. When you select the Do button, the group's attributes are displayed.Specifies the group you want to remove from the system. The group must already exist on the system and you must have the correct access privileges to remove groups from the system. When you remove this group, the system deletes the group's attributes from the group files, but does not remove the users (who are members of the group) from the system. Type in the name of an existing group, or use the List box and select a group from the choices displayed.Defines the largest hard file size, in 512-byte blocks, that a process invoked by this user can create or extend. A user cannot change this value.Defines the largest hard value of system unit time (in seconds) that a user's process can use. A user cannot change this value.Defines the largest hard data segment size, in 512-byte blocks, for a user's process. A user cannot change this value.Defines the largest hard process stack segment size, in 512-byte blocks, for a user's process. A user cannot change this value.Defines the largest hard core file size, in 512-byte blocks, that a user's process can create. A core file contains a memory image of a terminated process. The system creates core files in the current directory when certain system errors (commonly called core dumps) occur. A user cannot change this value.The volume group list can be restricted to only those that are currently varied on, on some node in the cluster, or include all volume groups known across the cluster, whether or not they are active.The "AIX Tracing for Cluster Resources" Facility enables you to collect AIX trace data for cluster resources during event script execution on the local node.
AIX kernel trace data, and if applicable, component trace data can be collected.Enables AIX Tracing for Cluster Resources. AIX Trace data collection will be performed while event scripts are active on the local node.Disables AIX Tracing for Cluster Resources. Trace data collection will stop immediately if currently active.A Command Group for AIX Tracing for Cluster Resources is a set of AIX Trace IDs, Event Groups and methods to enable component tracing for a cluster resource type such as LVM.Displays a list of Command Groups that are defined.Creates a Command Group with the name, description, event groups and IDs and commands for component tracing that you specify. If you wish, you can choose a template Command Group from which to base your new Command Group.Shows the attributes of a Command Group, i.e. name, description, event groups, event IDs and commands to enable and disable component tracing. If the Command Group is not reserved, you can change the attributes for the Command Group.Removes a Command Group from the cluster. You cannot remove reserved Command Groups.A Command Group is a collection of AIX trace event group IDs and event IDs as well as methods to enable or disable AIX component tracing related to a cluster resource type, such as LVM.Type a maximum duration in hours, greater than zero and no larger than 24. Enablement of trace data collection will persist until disabled by a command or up to the specified maximum duration. Trace data will be collected only while event scripts are active on the local node. Enablement of trace data collection will not persist across reboot or a complete stop of the clstrmgrES subsystem. If event IDs or groups are specified, the trace daemon will be started in the mode of delayed start of trace data collection which will be enabled while event scripts are active on the local node.Specify Event Group IDs in addition to the ones in specified Command Groups for AIX tracing.Specify Event IDs in addition to the ones in specified Command Groups for AIX tracing.Specify Event Group IDs to be excluded from tracing.Specify Event IDs to be excluded from tracing.Type the name of an existing Command Group to use as the template from which to base your new Command Group, or select it from the List box. Select the Do button or press the Enter key to display the Command Group creation template. Information in this field is optional. If you specify nothing or if you specify a Command Group that does not exist, a template is not provided.Type a name for the Command Group. You can use letters, numbers, and some special characters. The name cannot contain underscores ("_").Type a string of characters that describes this Command Group. You can use letters, numbers, and some special characters. Blanks or commas are not allowed.Specify Event Group IDs to be part of the Command Group. Separate the IDs with spaces. You can also use the List key to select the IDs from the displayed choices.Specify Event IDs to be part of the Command Group. Separate the IDs with spaces. You can also use the List key to select the IDs from the displayed choices.Specify a command to enable a trace method for this command group. The full pathname of the script or executable to enable tracing needs to be specified.Specify a command to disable a trace method for this command group. The full pathname of the script or executable to disable tracing needs to be specified.The Command Group ID whose attributes you want to change or view. The Command Group must already exist in the cluster. 
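Outside of SMIT, the AIX trace facility that a Command Group drives can also be run by hand; a hedged sketch follows, where the event group name and file paths are placeholders rather than values taken from any shipped Command Group:

    trace -a -J tidhk -o /tmp/cluster_trace.raw    # start asynchronous tracing for one event group
    trcstop                                        # stop trace data collection
    trcrpt -o /tmp/cluster_trace.txt /tmp/cluster_trace.raw   # format the raw trace for reading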
You cannot change a reserved command group. You can type the name of an existing Command Group or select it from the List box. When you select the button or press the Enter key, the Command Group's name, description, Events Group and IDs and the commands to enable and disable AIX component tracing are displayed.The name of the Command Group that you are changing or viewing. You cannot change the displayed information from this SMIT Screen.Type the name of an existing Command Group that you want to remove from the system, or select a Command Group from the List box. You cannot specify or select a reserved Command Group to remove. Select the Do button or press the Enter key. If you confirm your choice in the displayed confirmation box, the Command Group is removed from the system. If you do not confirm your choice, you return to this SMIT Screen.Environment variable assignments for enablement of debug data collection that will be set in event scripts.Type a list of space-separated environment variable assignments in format "environmentVariable=value".The mount point, which is the directory where the file system is available or will be made available. Sets the permissions for the file system. The security-related mount options. You can specify the following values: o nosuid - Prevents execution of setuid and setgid programs on this mount. o nodev - Prevents open system calls of devices on this mount. Enables disk accounting on this file system. Specify the file system fragment size in bytes. Decreasing the fragment size below a full block (4096 bytes) allows partial blocks to be allocated at the end of a file. You can set the fragment size to 512, 1024, 2048, or 4096 bytes. If a file system is compressed, you must specify a fragment size of 512, 1024, or 2048 bytes. The ratio of file system size, in bytes, to the number of i-nodes. Increasing the number of bytes per i-node (NBPI) decreases the total number of i-nodes in a file system. The Allocation Group size determines the allowed range of NBPI values for a file system. See the AIX documentation for the allowable combinations of NBPI, allocation group size and resulting filesystem size. Compression algorithm The compression algorithm for the file system. You can select the following choices: no - Creates a file system that does not use data compression. The default is no. LZ - Create a file system in which all data is automatically compressed using LZ compression before being written to disk and is automatically uncompressed when read from disk. Requires a fragment size of less than 4096 bytes. NOTE: A file system that has been created using LZ cannot be changed to no. The name of the file system, expressed as a mount point. Selecting yes removes the mount point (directory) where the file system is normally made available. NOTE: The directory is removed only if it is empty. Removes a file system, any logical volume on which it resides, and the associated stanza in the /etc/filesystems file. PowerHA SystemMirror is used to keep applications highly available on a group of two or more POWER systems running the AIX operating system.Use this option to access Application Configuration Assistants (Smart Assists). If you have installed the Smart Assist software for DB2, Oracle, WebSphere, or other applications, you can access those Smart Assists from this menu.Add a cluster by specifying the cluster name. 
You will have to add at least one node, network, adapter, and a repository disk to be able to verify and synchronize the configuration.Configure the PowerHA SystemMirror cluster, nodes, and networks using the menus and dialogs in this SMIT path.Use typical or common settings and smart defaults to initially configure a cluster from this menu.This dialog allows you to create a cluster with smart defaults and network configuration discovery. Networks and nodes will be named automatically. Cluster nodes will be given their hostname as a node name (with some modification if necessary to avoid special characters not allowed for node names). If you would like to create a cluster without defaults or discovery, use the custom initial setup path instead.Define the repository disk and cluster IP address in this dialog. The repository disk is a dedicated disk that is shared by all nodes in the cluster and is not used for any other purpose. The cluster IP address is a multicast address that is used for internal cluster communication and monitoring.This set of menus allows you to manage the cluster. You can show the cluster topology, change or show the cluster attributes, or remove the cluster definition.Add a node to the cluster. A new node should share access to the repository disk if one has already been specified from the Define Repository Disk and Cluster IP Address dialog.Use the dialogs under this menu to add, change, show, or remove a cluster node. You will also find the option to add persistent IP node labels/addresses under this menu path.The repository disk is used to store cluster information and must be accessible by all nodes in the cluster. It may not be used for any other purpose and cannot belong to an existing volume group. It is strongly recommended that the repository disk is RAID protected when possible. See the Planning Guide for more considerations and best practices associated with the repository disk.The cluster IP address is a multicast address used for internal cluster communication and monitoring. This address must be in the multicast range, 224.0.0.0 - 239.255.255.255. By default, AIX will generate an appropriate address for you if you do not specify one here. An address will be chosen which does not currently have any network traffic (and therefore is assumed to be unused by any other application in your environment). You should only specify an address manually if you have an explicit reason to do so, but are cautioned that this address cannot be changed once the configuration is synchronized. See the Planning Guide for more information concerning the cluster multicast IP address.Add, change, show, or remove persistent IP labels and addresses for nodes in the cluster.Manage networks and network interfaces from these menus.This dialog provides some information about how the repository disk and cluster IP address are used. For detailed information you should consult the PowerHA SystemMirror Planning Guide.This cluster has already been synchronized and the values shown on this dialog can only be viewed and not changed. Use the Manage menus to add, remove, or change the nodes, networks and interfaces in the cluster. This is the name specified for this cluster. This value cannot be changed because synchronization has already been performed. You must recreate the cluster if you wish to change the name of the cluster. This is the current node list for the cluster. 
If you wish to add or remove nodes from the synchronized cluster, use the Manage Nodes menu, found under the Cluster Nodes and Networks menu. This is the disk used for the cluster repository. You may not change this value because the cluster has been synchronized. This is the multicast IP address used for the cluster . You may not change this value because the cluster has been synchronized. You can choose to remove the cluster definition from all nodes in the cluster with one operation by selecting Yes for this option. If you select Yes, the cluster definition will be completely removed from all nodes. Alternatively, you can choose to only remove the cluster definition from this node. The cluster definition will still exist on other nodes in the cluster and you can synchronize from one of those remaining nodes to recreate the cluster definition on this node. If you only remove the cluster definition from this node, the node still belongs to the cluster according to the other nodes where the cluster definition still exists. Define the repository disk and cluster IP address in this dialog. The repository disk is a dedicated disk that is shared by all nodes in the cluster and is not used for any other purpose. The cluster IP address is a multicast address that is used for internal cluster communication and monitoring. The cluster IP address must be specified as an IPv4 adddress. If IPv6 is in use, CAA will derive an IPv6 multicast address from the supplied IPv4 address. This is the multicast IP address used for the cluster . You may not change this value because the cluster has been synchronized. If IPv6 is in use, CAA will derive an IPv6 multicast address from this address. You can see all multicast addresses in use with the lscluster -i command. This determines whether CAA will use unicast or multicast heartbeat on the cluster. If multicast is chosen, then an IP can be specified for use with multicast heartbeat. If unicast is chosen, then any supplied IP address is ignored. Use this option to access Configuration Assistants and Smart Assists. If you have installed the Smart Assist software for DB2, Oracle, WebSphere, or other applications, you can access those Smart Assists from this menu as well.Use this option to use Smart Assists to make your applications highly available. If you have installed the Smart Assists for DB2, Oracle, WebSphere or other applications, you can access them from this menu. Find out what an application controller is and how application controller scripts are used.Choose this option to configure communications paths to Hardware Management Consoles (HMC) and to configure available or "Capacity on Demand" (CoD) CPU and memory resources for cluster applications that run on logical partitions (LPARs) that support dynamic resource allocation (DLPAR).Manage Application Controllers for all defined nodes. Define the name of the application controller as known to PowerHA SystemMirror. Define the scripts which are called to start and stop this application controller.Use this menu to initially set up a cluster or to manage an existing cluster with custom options.From this set of dialogs you can create a cluster a piece at a time, providing a name for each node and network, and adding only the desired network adapters. You will be required to add the cluster first, then the nodes, then the repository and optionally specified cluster IP address. 
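For readers who prefer the command line, the same initial setup sequence can be sketched with the clmgr utility (networks and interfaces, described next, are still required before verification will succeed). This is only an illustration; attribute names such as REPOSITORY are assumptions that should be checked against the clmgr built-in help for your release:

    clmgr add cluster my_cluster NODES=nodeA,nodeB    # define the cluster and its nodes (names are examples)
    clmgr modify cluster my_cluster REPOSITORY=hdisk3 # assumed attribute name for the repository disk
    clmgr verify cluster                              # verify the configuration
    clmgr sync cluster                                # synchronize it to all nodes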
You must also add at least one network and at least one network adapter per node and network before you can successfully verify and synchronize the configuration.Add, change, show, or remove network interfaces.Add a network interface to the cluster configuration.Change or show the current configuration for a network interface.Remove a network interface from the cluster configuration.Specifies whether SystemMirror is to start at system restart, whether to broadcast startup, whether to start the cluster information daemon (clinfo), and whether or not to perform cluster verification prior to starting PowerHA SystemMirror. Access menus to customize resources and resource processing. This includes configuring custom disk, volume group and file system methods, as well as customizing the selective fallover policy for resources.Customize cluster event processing by adding pre and post event commands, remote notification methods, configuring user-defined events, or changing the cluster time until warning value.Manage custom cluster settings such as the startup options and resetting the cluster tunables.Manage the responses to system events detected by the cluster manager from this SMIT menu.Select from the list of system events that are monitored by the cluster manager and change or show the specified response to the event.Choose the event for which you want to show or change the current response.Choose the response you would like to have when this event is detected by the cluster manager. All events will be logged, but you can choose an additional defined response here. If you only wish for the event to be logged, choose "Only log the event".Select "Yes" to run custom verification checks.When Active is "Yes" this system event will be monitored.Use the Custom Cluster Configuration menus to perform advanced configuration and cluster management operations.Specify the actions that PowerHA SystemMirror should take when the cluster is split, and when the split portions of the cluster merge again. A split occurs when failures in heartbeat paths leave one subset of cluster nodes unable to communicate to the remaining cluster nodes. A merge occurs when the heartbeat paths are re-established.A Tie Breaker disk can be selected for use with the Tie Breaker policy. In the event of a split or merge for which the Tie Breaker policy is selected, the portion of the cluster that can first access the tie breaker disk will continue.Use F4 to get a list of the disks that are available for use as a Tie Breaker. In the event of a split or merge for which the Tie Breaker policy is selected, the portion of the cluster that can first access the tie breaker disk will continue. Select the "None" option to have the Tie Breaker specification removed. If a Tie Breaker is selected, but neither split nor merge policy uses a Tie Breaker, the specification is accepted, but unused. Once a disk has been specified to be a Tie Breaker, it is not available for use in a volume group, or for any other purpose.The action plan describes the action taken on the nodes in the portion of a cluster that has split, or has merged after a split, and which has not been selected to continueThe Merge Handling Policy specifies the mechanism to identify which part of a split cluster will continue after the merge.The Split Handling Policy specifies the response when a split has been detected in a cluster. 
A choice of "None" takes no action; the other choices select a mechanism to identify which portion of the cluster will continue.The following options apply only if "manual" has been selected as "Split Handling" or "Merge Handling" policy.A method to be invoked in addition to a message to /dev/console to inform the operator of the need to chose which site will continue after a split or merge.The frequency of the notification - time, in seconds, between messages - to inform the operator of the need to chose which site will continue after a split or merge.The maximum number of times that PowerHA SystemMirror will prompt the operator to chose which site will continue after a split or merge.In the event that the operator has not responded to a request for a manual choice of surviving site on a "split" or "merge", this site will be allowed to continue. The other site will take the action chosen under "Action Plan". The time the operator has to respond is "Notify Interval" times "Maximum Notifications+1".The Split Handling Policy specifies the response when a split has been detected in a cluster. A choice of "None" takes no action; the other choices select a mechanism to identify which portion of the cluster will continue. A choice of "Tiebreaker" indicates that a tie breaker disk will be specified; ownership of the tie breaker disk determines which part of the cluster will continue in the event of a split. A choice of "Manual" indicates that PowerHA SystemMirror should prompt to determine which part of the cluster will continue in the event of a split.A method to be invoked in addition to a message to /dev/console to inform the operator of the need to chose which site will continue after a split or merge. The method is specified as a path name, followed by optional parameters. When invoked, the last parameter will be either "split" or "merge" to indicate the event.This determines if the manual response on a split also applies to those storage replication recovery mechanisms that provide an option for "Manual" recovery. If "Yes" is selected, then the partition that was selected to continue on a split will proceed with takeover of the storage replication recovery.This allows you to check if there is a split or a merge event that is waiting for a manual response, and to provide that response.This allows you to specify the response to a split or merge event. That is, you can indicate whether the current partition - the one that the current node is part of - will "Continue" or "Recover". Once a response has been entered, the opposite response must be entered on a node on the other partition.Use F4 to get a list of choices that may be selected. You can chose to have the current site (or partition) continue, or to recover. Once a choice has been made, the opposite choice must be made on the other site or partition.This allows you to check if there is a split or merge event that is waiting for a manual responseSpecify the actions that PowerHA SystemMirror should take when the cluster is split. A split occurs when failures in heartbeat paths leave one subset of cluster nodes unable to communicate to the remaining cluster nodes.Specify the actions that PowerHA SystemMirror should take when the split portions of the cluster merge again. 
A merge occurs when the heartbeat paths are re-established.The cluster-wide Quarantine policy informs PowerHA SystemMirror about the method it should use to isolate an unresponsive node that was previously hosting the Critical Resource Group.A Resource Group for which the selected Quarantine Policy would be applicable. All the resources associated with this Resource Group would be handled by the Quarantine Policy that was configured.Configuration required when a Disk TieBreaker is selected for handling Split/Merge.Configuration required when an NFS TieBreaker is selected for handling Split/Merge.This policy will halt the unresponsive node before acquiring the Critical Resource group and helps to avoid data corruption.This policy will remove the disk access from the unresponsive node before acquiring the Critical Resource group and helps to avoid data corruption.Configure the HMC information related to the cluster nodes through which the Active Node Halt Policy can check the status of the unresponsive node before halting it.The hostname of the NFSv4 server that is used. Enter the Fully Qualified Domain Name. Example: NodeA.in.ibm.com.Enter the Absolute Path of the Local node directory, which is used by the NFS TieBreaker Server as a Local Mount Point.Enter the Absolute Path of the NFS TieBreaker Server directory, which is exported by the nfsQuorumServer.Specify the actions that PowerHA SystemMirror should take when the cluster is split. A split occurs when failures in heartbeat paths leave one subset of cluster nodes unable to communicate to the remaining cluster nodes. A merge occurs when the heartbeat paths are re-established. The appropriate merge policy is selected to match the selected split policy. The valid Split-Merge combinations are: 1. None-Majority 2. TieBreaker-TieBreaker 3. NFS-NFS 4. Manual-Manual Split-Merge Action Plan: 1. Reboot - Nodes on the losing partition will be rebooted. 2. Disable Applications Auto-Start and Reboot - Nodes on the losing partition will be rebooted; the Resource Groups cannot be brought online until the merge is finished. 3. Disable Cluster Services Auto-Start and Reboot - Nodes on the losing partition will be rebooted. CAA will not be started. Once the split condition is healed, clenablepostsplit has to be run to bring the cluster back to a stable state.A choice of split policy "None" and merge policy "Majority" are chosen. Only the split merge action plan "Reboot" is allowed.A choice of split policy "Disk Tie-Breaker" and merge policy "Disk Tie-Breaker" are chosen. A choice of split policy "Disk Tie-Breaker" indicates that ownership of the tie breaker disk determines which partition will continue in the event of split and merge.A choice of split policy "NFS Tie-Breaker" and merge policy "NFS Tie-Breaker" are chosen. A choice of split policy "NFS Tie-Breaker" indicates that ownership of the nfs file determines which partition will continue in the event of split and merge.A choice of split policy "Manual" and merge policy "Manual" are chosen. A choice of split policy "Manual" indicates that PowerHA SystemMirror should prompt to determine which partition will continue in the event of split and merge.A choice of split policy "Manual" and merge policy "Manual" are chosen. A choice of split policy "Manual" indicates that PowerHA SystemMirror should prompt to determine which partition will continue in the event of split and merge.The Split Handling Policy specifies the response when a split has been detected in a cluster.
A choice of "None" takes no action; the other choices select a mechanism to identify which partition will continue. A choice of "Tiebreaker" indicates that a tie breaker disk or nfs file will be specified; ownership of the tie breaker disk or nfs file determines which partition will continue in the event of a split. A choice of "Manual" indicates that PowerHA SystemMirror should prompt to determine which partition will continue in the event of a split.Once the split condition is healed CAA should be started to bring the cluster back to stable state.Split Merge Action Plan Reboot-Nodes on the loosing partition will be rebooted.A split policy of "Cloud" and merge policy of "Cloud" are chosen. A choice of split policy "Cloud" indicates that one of the cluster nodes will upload a file to cloud; the ownership of the cloud tiebreaker file determines which partition will continue in the event of a split.The Split Handling Policy specifies the response when a split has been detected in a cluster. A choice of "None" takes no action; the other choices select a mechanism to identify which partition will continue. A choice of "Tiebreaker" indicates that a tie breaker disk or nfs file will be specified; ownership of the tie breaker disk or nfs file determines which partition will continue in the event of a split. A choice of "Manual" indicates that PowerHA SystemMirror should prompt to determine which partition will continue in the event of a split. A choice of "Cloud" indicates that a file is uploaded to provided cloud service; ownership of the file determines which partition will continue in the event of a split.During split and merge scenario, PowerHA SystemMirror tries to upload the cloud tiebreaker file to the provided bucket.Cloud service provider. It can be either IBM or AWS.If this is set to yes, then the cloud tiebreaker file will upload to existing bucket which is provided. If this is set to no, then new bucket will be created with the provided bucket name and upload the file to new bucket. If yes is chosen and given bucket name does not exist in cloud storage then it throws an error.Specify the actions that PowerHA SystemMirror should take when the cluster is split. A split occurs when failures in heartbeat paths leave one subset of cluster nodes unable to communicate to the remaining cluster nodes. A merge occurs when the heartbeat paths are re-established. Appropriate merge policy is selected as per the selected split policy. Here are the valid Split-Merge combinations 1.None-Majority 2.TieBreaker-TieBreaker 3.NFS-NFS 4.Manual-Manual 5.Cloud-Cloud Select this option to define and work with user defined resource type and resources.Select this option to add/change/remove the user defined resource type.Select this option to add/change/remove the user defined resources.Select this option to import User defined resource type and resource configuration from a xml file. Selecting this option will prompt you to enter a filename in which user defined resource type configuration is added. Refer sample xml file /usr/es/sbin/cluster/etc/udrt_sample.xml.Select this option to define user defined resource type. Appropriate methods should be provided to manage the resources of the specified resource type.Once after defining the resource type, use "Add User Defined Resource" option to define resource instance for this resource type.Select this option to modify the existing user defined resource type. 
On selecting this option, you will be prompted to choose the resource type to modify.Select this option to remove the existing user defined resource type. On selecting this option, you will be prompted to choose the resource type to remove. You cannot remove a user defined resource type if there are user defined resource instances configured of this type.Select this option to add a user defined resource. On selecting this option you will be prompted to choose a resource type. This resource instance will be managed by the methods provided for the chosen resource type. If the resource type has monitor methods defined, then a custom resource monitor will be added automatically for the resource. If there is no monitor method defined for the resource type, then the resources of that particular resource type will not be monitored by PowerHA SystemMirror.Select this option to change a user defined resource configuration. On selecting this option you will be prompted to choose an existing user defined resource.Select this option to remove the existing user defined resource. Removing a user defined resource will also remove the associated monitors.Select this option to change the user defined resource monitor attributes.Enter the user defined resources that will be started. These are the resources defined in the "Configure User Defined Resources" section.This is the symbolic name for a User Defined resource Type.Select one of the existing resource types from the pick list. All the user defined resources of this resource type will be acquired after the chosen resource type and will be released before the chosen resource type while processing the resources in the resource group. On selecting "FIRST", the resources will be acquired first and released at the end. Enter the name of the Verification method which will be called during cluster verification.Enter the verification method type.The full path to the script or executable associated with starting the resources of the defined resource type you wish to keep highly available. This executable will be run whenever the required resources become available, such as when a node joins the cluster.The full path to the script or executable associated with stopping the resources of the defined resource type you wish to keep highly available. This executable will be run whenever the required resources need to be released.The full path to the script or executable to check the status of the user defined resource.The full path to the script or executable to stop the resource. The default is the stop method.The full path to the script or executable to start the resource. The default is the start method.The full path to the script or executable to perform notification when a monitored resource fails. This method will execute each time a resource is restarted, fails completely, or falls over to the next node in the cluster. Configuring this method is strongly recommended.Enter a comma-separated list of attribute names. These attributes must be filled in with a value while creating a user defined resource. Example: name,id,value1,value2Enter a comma-separated list of attribute names. These attributes are optional. These can be filled in with a value while creating a user defined resource.
Example: name,id,value1,value2Enter a description for the resource type.Select the user defined resource type to modify.Select the user defined resource type to delete.Enter the XML file name in which the user defined resource type and resource configuration is added.Select a user defined resource type for which you want to create user defined resources. These resources will be managed by the methods defined for the chosen user defined resource type configuration.Chosen user defined resource type.This is the symbolic name for a User Defined resource.Enter a space-separated list of attribute=value pairs. The list of attributes should match the list of "required attributes" and "Optional attributes" of the chosen resource type. A value must be provided for the required attributes. For example: attr1="value1" attr2="value2"Select an already configured user defined resource to modify.Select a user defined resource to delete.Select the user defined resource monitor to change the attributes associated with it.Use PF4 to generate a list, then select the name of one or more User Defined Resources to Monitor.Configures the LDAP server and client for the PowerHA SystemMirror cluster environment. This LDAP will be used as the central repository for implementing most of the security features.Configures the LDAP server for the cluster. The server can be an existing or a new one. Configures the LDAP client on all the nodes of the cluster.Add an existing (IBM or non-IBM) LDAP server to the cluster. It simply uses the existing configuration based on the inputs provided. If remote shell services are working, the schema will be loaded automatically; otherwise, the user may have to load the schema manually to make it work. Refer to the administrator guide to load the schemas manually.Configures a new peer-to-peer LDAP server for the cluster. The required filesets should already be installed. A peer-to-peer configuration is limited to a maximum of 4 nodes for better scalability.Display the LDAP server configuration with respect to the cluster. Displays the parameter and attribute values used to configure the server.Deletes the LDAP server configuration from the cluster. Only the ODM entries are deleted; the data will still be available in the LDAP server. A user can reuse the data already available in case of a re-setup.LDAP server(s) - The server name is the hostname of the machine where the LDAP server is already configured and is used to contact the LDAP server. If replica, referral, or proxy servers are used, a comma-separated list of the respective hostnames is required.The LDAP distinguished name (DN) used to bind to the LDAP server. The DN you specify must exist on the LDAP server. The ability to perform operations on entries in the LDAP server database from the LDAP client is dependent on the access permissions granted to the bind DN on the LDAP server. Examples: cn=admin cn=proxy,o=ibm cn=user,ou=people,cn=aixdataThe text-only password for the distinguished name (DN) used to bind to the LDAP server. The password must match the password on the LDAP server for the specified DN.Suffix / base DN - The suffix or base distinguished name (DN) to search on the LDAP server for users, groups, and other network information entities. Examples: cn=aixdata,o=ibmThe port number on the LDAP server to connect to. The default is port number 636 (SSL enabled).The full path to the SSL key database. It should be in .kdb format and must already exist.The password for the SSL Key.Hostname of the machine where the LDAP server is already configured and used to contact the LDAP server.
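For orientation only: the AIX-level LDAP client configuration that these panels ultimately drive is normally performed with the mksecldap command, which PowerHA invokes through its own wrappers. A hedged sketch, with host names, DNs, passwords, and key paths purely illustrative:

    mksecldap -c -h ldapsrv1.example.com -a cn=admin -p adminpwd -d cn=aixdata,o=ibm -k /etc/security/ldap/client.kdb -w keypwd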
If replica, referral, or proxy servers are used, a comma-separated list of the respective hostnames is required. The hostnames can be selected by pressing F4.The LDAP server administrator distinguished name (DN). Examples: cn=admin cn=administrator cn=userThe text-only password for the administrator distinguished name (DN).The LDAP schema used to represent user/group entries in the LDAP server. rfc2307aix - Sets up the LDAP server using RFC 2307 and the auxiliary AIX schema. It is recommended that you use this schema because of its interoperability through RFC 2307 and full AIX attribute support. This is the default value and is not editable.The suffix or base distinguished name (DN) to search on the LDAP server for users, groups, and other network information entities. Examples: cn=aixdata,o=ibmThe port number on the LDAP server to connect to. The default is port number 636 (SSL enabled).The full path to the Server SSL key database. It should be in .kdb format. If it does not exist, it will be created.The password for the SSL Key.Display the current LDAP fileset versions; the minimum LDAP version required is 6.2.0.0.Configure the LDAP client on all the nodes of the cluster based on the server configured already.Show the LDAP client configuration.Deletes the LDAP client configuration from the cluster (all nodes).The server name is the hostname of the machine where the LDAP server is configured and is used to contact the LDAP server.The LDAP distinguished name (DN) used to bind to the LDAP server. The DN specified must exist on the LDAP server. The ability to perform operations on entries in the LDAP server database from the LDAP client is dependent on the access permissions granted to the bind DN on the LDAP server. Examples: cn=admin cn=proxy,o=ibm cn=user,ou=people,cn=aixdataThe text-only password for the distinguished name (DN) used to bind to the LDAP server. The password must match the password on the LDAP server for the specified DN. Information in this field is required.The authentication mechanism used to authenticate users. Valid values are: unix_auth - Retrieve the user's password from the LDAP server and perform the comparison locally. ldap_auth - Send the user's password in clear text to the LDAP server and allow LDAP to perform the comparison. The default value is ldap_auth. Note: If ldap_auth is selected, it is recommended to use SSL to protect the clear text password from exposure.The suffix or base distinguished name (DN) to search on the LDAP server for users, groups, and other network information entities. If this suffix is not specified, the entire database on the LDAP server is searched and the first set of recognized data used. Examples: cn=aixdata,o=ibmThe port number on the LDAP server to connect to. The default is port number 636 (SSL enabled). The full path to the client SSL key database. It should be in .kdb format.The password for the SSL Key.LOCAL (FILES) - Specifies that all user and group information and credentials are stored locally in files. This is the common and normal approach used by AIX for user and group administration. LDAP - Specifies that user and group management is done through LDAP, with all information and credentials stored in LDAP.Indicates that, in LDAP mode, the changes will be effective from and to all the nodes of the cluster.Specify the user name to be created; this should be unique across all the LDAP / LOCAL users in the cluster.Specify at least one PowerHA SystemMirror role to be assigned to the user. Press F4 to get the list of all available roles for PowerHA SystemMirror.
Specifying roles for a user adds an extra layer of security to differentiate the tasks for administrators.Specifies the methods through which the user must authenticate successfully before gaining access to the system. The default word LDAP indicates that procedures will be followed through LDAP. Registry specifies the registry where authentication information and credentials are to be stored.The key store will allow the user to utilize files in the Encrypted File System. The selection of file will create a key store file associated with this user. The default value LDAP indicates that the keystore attributes are stored in LDAP.Enable the EFS cluster wide with a specific mode.Change and view the mode, the Volume group for the filesystem, and the service IP.Deletes the keystore from the cluster. The EFS filesystem cannot be managed by PowerHA SystemMirror once deleted.Manages the EFS keystore cluster wide. Mode for keystore management (ldap, shared fs). F4 will display the available list.Volume group where the keystore FS should reside. It should be an Enhanced Concurrent VG. In case of ldap mode, this is invalid.Service IP to be used in case the shared FS mode uses nfs. In case of ldap mode, this is invalid.Mode for keystore management (ldap, shared fs). F4 will display the available list; the user can switch from shared fs to ldap.Volume group where the keystore FileSystem should reside. It should be an Enhanced Concurrent VG. The user can change the keystore VG. In case of ldap mode, this is invalid.Service IP to be used in case the shared FS mode uses nfs; the user can change the IP. In case of ldap mode, this is invalid.Select yes to enable EFS on the filesystem.The Directory instance also creates a DB2 database instance, which requires a password to be given by the user. Specify a password for the DB2 instance.LDAP server creation also creates key stash files, which require an encryption seed of at least 12 characters to be given by the user. Specify the encryption seed to encrypt the key stash files.A keystore admin password is used to manage EFS. Specify a password to manage keys and permissions.A keystore admin password is used to manage EFS. Specify a password for changing the mode to LDAP.Use this option to compare the active configuration with the default configuration. The active configuration exists only when cluster services are active. This option helps you compare and identify any changes in the default configuration before incorporating them into the active configuration. You can further tailor the comparison by using cl_dare_compare from the command line. Use this option to replace the disk used for the cluster repository. This operation can be used to replace a failed disk or move the repository to another shared disk available on the system. Changes will be updated on all the nodes in the cluster. Be sure to have an unused shared disk available to be used as the repository. Select a Disk to use as the new cluster repository. F4 will display a list of available shared disks. Use this option to enable Heartbeat via SAN on your cluster. This operation will let you choose a Fibre Channel adapter for each node in the cluster and enable it for SAN communication. You can enable more than one Fibre Channel adapter on your nodes to create more heartbeat paths. Select the Fibre Channel adapter to use for heartbeat via SAN. F4 will display a list of supported Fibre Channel adapters. The Failure Cycle will determine the frequency of the heartbeat. Enter a value between 1 and 20. The Grace period is the amount of time in seconds the node will wait before marking the remote node as DOWN. Enter a value between 5 and 30.
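The current values and permitted ranges of the CAA tunables behind these heartbeat settings can also be inspected from the command line, as noted later in this section; for example:

    clctrl -tune -L node_timeout    # current value plus MIN/MAX, reported in milliseconds
    clctrl -tune -L network_fdt

Remember that clctrl reports values in milliseconds while the SMIT fields here take seconds.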
Use this option to tune the Cluster heartbeat settings. There are two cluster wide tunable parameters. The Failure cycle determines the frequency of the heartbeat. The Grace period is the amount of time in seconds the node will wait before marking the remote node as DOWN. Select UUID/Physical Volumes of raw disks from the picklist. Raw Disk PVIDs cannot be combined with UUIDs in a Resource Group. Select YES or NO to indicate whether PowerHA SystemMirror should manage disk failure events for the selected disks or not. Choosing YES requests PowerHA SystemMirror to initiate PowerHA SystemMirror events in response to disk errors. Choose NO for error logging only. Standard Cluster Deployment is applicable to a single site data center type of environment. In this case, High Availability through clustering protects against local failures of resources within the data center. Multi Site Cluster Deployment lets you deploy a PowerHA SystemMirror configuration where the cluster spans multiple data centers, either within the same campus (for stretched clusters) or between centers that are geographically separated by a distance which exceeds the capability of a SAN (linked clusters). Linked clusters are primarily used in conjunction with a disaster recovery solution like those available with PowerHA SystemMirror Enterprise Edition. Multi Site clusters can be deployed in one of two modes: Stretched Cluster: Cluster resources reside in separate physical data centers which are connected by a common SAN and can share a common repository disk. A typical example would be a campus level distance between data centers. Linked Cluster: In this mode the cluster resources reside in data centers which are physically separated by a distance that exceeds the ability to use a common SAN. In this case, two different repository disks are required: one per site. Linked clusters are typically used in conjunction with a disaster recovery technology such as those available with SystemMirror Enterprise Edition. The cluster has already been defined in AIX. Basic elements like the cluster name and multicast address can no longer be changed unless you remove the cluster first. Use the Custom configuration SMIT options to add and remove nodes and manage the repository. After the cluster has been created, multiple disks can be added as Backup Repository disks. This information will be stored in the database, but the disk will not be used. Space has been reserved in the database to define up to 100 disks. If there is a need to change the Repository disk (because of Disk/SAN failures or user causes), the first disk in the Backup list will be used. A disk can be added to the PowerHA SystemMirror configuration to be used as a backup Repository disk with this option. A disk previously defined as a backup repository disk can be removed from the PowerHA SystemMirror configuration. This option will display the current disks defined in the PowerHA SystemMirror configuration as Repository disks. Use this option to tune the Cluster heartbeat settings. There are four cluster wide tunable parameters. Node Failure Detection Timeout: This is the time in seconds that the health management layer waits before starting to prepare to declare a node failure indication. Failure Indication Grace Time: This is the time that the node will wait after the Node Failure Detection Timeout before actually declaring that a node in the same site has failed.
So a node is declared failed if no communication happens on any of the available channels for a period (in seconds) of: Node Failure Indication Time = Node Failure Detection Timeout + Failure Indication Grace Time. Link Failure Detection Timeout: This is the time in seconds that the health management layer will wait before declaring the inter-site link to have failed. A link failure detection could drive the cluster to switch to another link and continue the communication. If all the links have failed, then the preparation for declaring the site to have failed would start. So Site failure could be declared when the last of the links fails, potentially after up to the following time in seconds has elapsed: Site Failure Indication Time = Node Failure Detection Timeout + Failure Indication Grace Time + Link Failure Detection Timeout. Site HB Cycle is a numeric factor between 1 and 10 which controls the heartbeat between the sites. Link Failure Detection Timeout: This is the time in seconds that the health management layer will wait before declaring the inter-site link to have failed. A link failure detection could drive the cluster to switch to another link and continue the communication. If all the links have failed, then the preparation for declaring the site to have failed would start. Site HB Cycle is a numeric factor between 1 and 10 which controls the heartbeat between the sites. Select a Disk to be added as a backup repository. F4 will display a list of available candidate disks. Select a Disk to be removed as a backup repository. F4 will display a list of currently defined backup repository disks. Options for managing the repository disks for the cluster.Name of the site where the backup repository disk will be used. No backup repository disks are currently defined. You can add a backup repository disk using the Add a Repository Disk SMIT option. Enter one resolvable IP label (for example, the hostname), an IP address, or a Fully Qualified Domain Name for the node(s) at this site. This will be used for initial communication with the node(s). Examples are: NodeA 10.11.12.13 NodeC.ibm.com. Options for managing the site definitions, including adding and removing nodes, and changing site attributes. Node Failure Detection Timeout: This is the time in seconds that the health management layer waits before starting to prepare to declare a node failure indication. Failure Indication Grace Time: This is the time in seconds that the node will wait after the Node Failure Detection Timeout before actually declaring that a node has failed. Name of the site where the primary repository disk is being changed. Select a Disk to be used as the new Primary Repository disk. F4 will display a list of currently defined backup repository disks. If you do not have any Backup repository disks defined, a disk name can be entered in this field. Use this option to add site definitions for a stretched cluster. Use this option to change and show the site definitions in a stretched cluster. Use this option to remove site definitions from a stretched cluster. Use this option to learn more about working with stretched and linked clusters in PowerHA SystemMirror. PowerHA SystemMirror supports 2 types of cluster configurations with sites: A stretched cluster is much like a local cluster in that it has a single repository disk shared between all nodes. Sites can be added to a standard cluster to create a stretched cluster, and sites can be removed from a stretched cluster to revert to a standard cluster.
A linked cluster has 2 distinct physical sites with a repository disk at each site. Linked clusters have specific configuration dependencies on Cluster Aware AIX, and because the site definitions are so intrinsic to CAA, the site configuration cannot be added or changed without first removing the CAA cluster. Show the site, node and network topology# # There are already 2 sites defined. # SystemMirror only supports 2 sites. ## # When you first created the cluster you selected a Linked # Cluster type configuration. Linked Clusters require sites # to be defined. # You can remove sites but you will not be able to Verify and # Synchronize unless you add new ones. # To use either a Stretched Cluster or a cluster with no sites # you must delete the current configuration and start again # with the desired cluster type. #When you first created the cluster you did not select a Linked Cluster type configuration. To create a Linked Cluster at this point you must add all existing nodes to the first site, then add new nodes and add them to a second site. Automatically adding the following nodes to site %1$s: When you first created the cluster you selected a Linked Cluster type configuration. Linked Clusters do not support moving nodes between sites in a single operation. If you want to move a node to another site, you must first remove the node, then verify and synchronize before adding the node to the other site. Use this option to tune the Cluster heartbeat settings. There are four cluster wide tunable parameters. Node Failure Detection Timeout: This is the time in seconds that the health management layer waits before starting to prepare to declare a node failure indication. Node Failure Detection Grace Period: This is the time that the node will wait after the Node Failure Detection Timeout before actually declaring that a node in the same site has failed. So a node is declared failed if no communication happens on any of the available channels for a period (in seconds) of: Node Failure Indication Time = Node Failure Detection Timeout + Node Failure Detection Grace Period. Link Failure Detection Timeout: This is the time in seconds that the health management layer will wait before declaring the inter-site link to have failed. A link failure detection could drive the cluster to switch to another link and continue the communication. If all the links have failed, then the preparation for declaring the site to have failed would start. So Site failure could be declared when the last of the links fails, potentially after up to the following time in seconds has elapsed: Site Failure Indication Time = Node Failure Detection Timeout + Node Failure Detection Grace Period + Link Failure Detection Timeout. Site HB Cycle is a numeric factor between 1 and 10 which controls the heartbeat between the sites. Node Failure Detection Timeout: This is the time in seconds that the health management layer waits before starting to prepare to declare a node failure indication. Enter a value between 1 and 20. Node Failure Detection Grace Period: This is the time in seconds that the node will wait after the Node Failure Detection Timeout before actually declaring that a node has failed. Enter a value between 5 and 30. Use this option after running configuration discovery to add newly discovered interfaces to the SystemMirror configuration. Select a Disk to be removed as a backup repository. F4 will display a list of currently defined backup repository disks.
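When deciding which disk to add or remove as a backup repository, it can help to cross-check what AIX and CAA already know about the shared disks; a short sketch using standard commands:

    lspv            # list physical volumes and their PVIDs on this node
    lscluster -d    # show the storage devices known to the CAA cluster, including the repository disk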
If you want to replace the active repository, use the Problem Determination Tools -> Replace the Primary Repository Disk SMIT screen. Node Failure Detection Timeout: This is the time in seconds that the health management layer waits before it starts preparing to declare a node failure indication. Enter a value between 10 and 600. Node Failure Detection Grace Period: This is the time in seconds that the node will wait after the Node Failure Detection Timeout before actually declaring that a node has failed. Enter a value between 5 and 600. You can change basic aspects of the cluster configuration prior to creating the cluster in AIX. Once the AIX (CAA) cluster is created you can no longer change the cluster definition. You must delete the existing cluster and redefine it. An error occurred while checking the configuration. There appears to be a cluster defined to AIX (CAA) but no corresponding SystemMirror cluster definition. The existing AIX cluster definition will need to be removed before the SystemMirror cluster can be configured again. Please report this error and the following information to IBM support. Specifies the number of megabytes in each physical partition, where the Size value is expressed in units of megabytes from 1 through 131072. The Size value must be equal to a power of 2 (for example 1, 2, 4, 8). The default value is 4 megabytes. The default number of physical partitions per physical volume is 1016. Thus, a volume group that contains a physical volume larger than 4.064 gigabytes must have a physical partition size greater than 4 megabytes. Split and Merge policies apply to Site events. You must first configure a Stretched or Linked cluster with sites before configuring Split and Merge policies. # # No resource groups are defined with site policies. # You can move any resource group to any node using the # SMIT option for moving a group to another node. # Use this option to display the installed version of SystemMirror software on all nodes in the cluster. Use this option to read about support services available from IBM. To learn about available fixes for this product, visit the IBM Support Portal at http://ibm.co/MyNeSupport Among the many resources available from the Portal is FixCentral - a source for viewing and downloading fixes for all IBM products: http://www.ibm.com/support/fixcentral/ FixCentral is also the source for retrieving fixes for CAA and RSCT. You can also subscribe to receive notifications whenever new fixes become available. - Subscribe or Unsubscribe - https://www.ibm.com/support/mynotifications - Feedback - https://www.ibm.com/x_dir/xfeedback.nsf/feedback?OpenForm To ensure proper delivery please add mynotify@stg.events.ihost.com to your address book. Important information about each service pack and the fixes contained therein can be found in the README file located under /usr/es/sbin/cluster. When reporting a problem to IBM, you may be asked to provide information about the installed version of code or to collect log files for problem determination. There are SMIT options for these tasks under the Problem Determination Tools top level SMIT menu. If specified, this timeout value (in seconds) will be used during a Live Partition Mobility (LPM) operation instead of the Node Failure Detection Timeout value. You can use this option to increase the Node Failure Detection Timeout for the duration of the LPM operation to ensure it is greater than the LPM freeze duration, in order to avoid any risk of unwanted cluster events. Enter a value between 10 and 600.
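The following is a minimal illustration, not part of PowerHA SystemMirror, of how the failure indication times described above are derived from the heartbeat tunables. The function names and the example values are hypothetical; only the two formulas come from the help text.

    # Illustrative only: derives the failure indication times described in the
    # help text above from the cluster heartbeat tunables. The function names
    # and example values are hypothetical, not PowerHA or CAA interfaces.

    def node_failure_indication_time(node_failure_detection_timeout, grace_period):
        # Node Failure Indication Time = Node Failure Detection Timeout + Grace Period
        return node_failure_detection_timeout + grace_period

    def site_failure_indication_time(node_failure_detection_timeout, grace_period,
                                     link_failure_detection_timeout):
        # Site Failure Indication Time also includes the Link Failure Detection
        # Timeout, because the site is declared failed only after the last
        # inter-site link has failed.
        return (node_failure_detection_timeout + grace_period
                + link_failure_detection_timeout)

    # Example values in seconds, chosen within the documented ranges where given.
    print(node_failure_indication_time(20, 10))      # 30 seconds
    print(site_failure_indication_time(20, 10, 30))  # 60 seconds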
Specifies the action to be taken on the node during a Live Partition Mobility operation. If "unmanage" is selected, the cluster services are stopped with the 'Unmanage Resource Groups' option for the duration of the LPM operation. Otherwise, PowerHA SystemMirror will continue to monitor the resource group(s) and application availability. If specified, this timeout value (in seconds) will be used during a Live Partition Mobility (LPM) operation instead of the Node Failure Detection Timeout value. You can use this option to increase the Node Failure Detection Timeout for the duration of the LPM operation to ensure it is greater than the LPM freeze duration, in order to avoid any risk of unwanted cluster events. Enter a value between 1 and 20. Network Failure Detection Time: This is the time in seconds that the health management layer waits before declaring a network failure indication. Enter a value between 0 and 590. Split and Merge policies apply to PowerHA SystemMirror events. You must first configure a PowerHA SystemMirror cluster before configuring Split and Merge policies. Network Failure Detection Time: This is the time in seconds that the health management layer waits before declaring a network failure indication. The minimum and maximum values are enforced at the CAA layer and can be seen by running the following command: clctrl -tune -L network_fdt. Consider the MIN and MAX values for network_fdt; the values shown in the command output are in milliseconds, but the values on this screen are entered in seconds. Node Failure Detection Timeout: This is the time in seconds that the health management layer waits before it starts preparing to declare a node failure indication. The minimum and maximum values are enforced at the CAA layer and can be seen by running the following command: clctrl -tune -L node_timeout. Consider the MIN and MAX values for node_timeout; the values shown in the command output are in milliseconds, but the values on this screen are entered in seconds. Node Failure Detection Grace Period: This is the time in seconds that the node will wait after the Node Failure Detection Timeout before actually declaring that a node has failed. The minimum and maximum values are enforced at the CAA layer and can be seen by running the following command: clctrl -tune -L node_down_delay. Consider the MIN and MAX values for node_down_delay; the values shown in the command output are in milliseconds, but the values on this screen are entered in seconds. Split and Merge policies apply to Site events with the current AIX level. You must first configure a Stretched or Linked cluster with sites before configuring Split and Merge policies. Controls the behaviour of the CAA deadman timer. Valid values: Assert, Event. Default value: Assert. When the value is set to Assert, the node will crash upon the deadman timer expiration. When the value is set to Event, an AHAFS event is generated. Controls the node behaviour when cluster repository disk access is lost. Valid values: Assert, Event. Default value: Event. When the value is set to Assert, the node will crash upon losing access to the cluster primary repository without moving to backup repositories. When the value is set to Event, an AHAFS event is generated. Specifies the CAA config timeout for configuration change. A positive value indicates the maximum number of seconds CAA will wait on the execution of client side callouts, including scripts and CAA configuration code. A value of zero disables the timeout.
Default value: 240 seconds. Range: 0 - 2147483647. To enable or disable the CAA PVID based identification when UUID based authentication fails. Default value: 1. Valid values: 0, 1. When the tunable is set to 0, PVID based identification will be disabled. When the tunable is set to 1, PVID based identification will be enabled. Select a Disk to be used as the cluster repository disk. F4 will display a list of currently defined backup repository disks and the available shared disks, if any. If no backup repository disk is defined, define one or select an available shared disk on the system. Use this option to replace the disk used for the cluster repository. This operation can be used to replace a failed disk or to move the repository to another shared disk available on the system or to a defined backup repository disk. Changes will be propagated to all the nodes in the cluster. Be sure to have an unused shared disk or a backup repository disk available to be used as the repository. This option displays the disks and corresponding PVIDs which are common to all the nodes of the local site. Select as many disks as needed to be kept in the VG from this site. Make sure that you do not select disks which are common to both sites. This lists the disks and corresponding PVIDs which are common to all the nodes of the remote site. You need to select as many disks as were selected from the local site. Also, make sure that you do not select disks which are common to both sites. Enter the size for the cache LV. This size is entered in Logical Partition units. This option helps you configure an Asynchronous Geographically Mirrored Volume Group. GMVG enables creation of a Volume Group over geographically separated sites which are part of a cluster. An Asynchronous GMVG is a VG in which replication of data happens asynchronously. First the data is stored on the local site in a cache, which is copied over time to the remote site. This option helps you configure a Synchronous Geographically Mirrored Volume Group. In a Synchronous GMVG, data is written to both local and remote sites synchronously; it is not cached. Only after a write operation has completed on both sites can the next write operation take place. This option enables you to remove the definition of RPVClients and RPVServers, from all the nodes of the cluster, which are part of the selected GMVG. The VG will not be removed. The name of the VG can be an alphanumeric value of up to 15 characters. Make sure that the name entered is not already a VG in the cluster. This option lists the GMVGs present in the cluster. RPVServers and RPVClients which are part of the selected GMVG will be deleted. This option will help you configure GLVM. In order to use this assistant you need to make sure of the following: 1. The cluster is configured. 2. The repository disk or disks have been added. 3. Persistent IPs have been defined. Make sure there are as many persistent IPs defined on the XD_data network on each node as there are nodes on the other site. Choose this option to configure Resource Optimized High Availability (ROHA). ROHA performs dynamic management of hardware resources (memory, CPU) for use by PowerHA SystemMirror. This dynamic management of resources uses three types of mechanisms: the DLPAR mechanism, the On/Off CoD mechanism, and the Enterprise Pool CoD mechanism. If the resources available on the CEC are not sufficient and cannot be obtained through a DLPAR operation, it is possible to draw on external pools of resources provided by CoD: either On/Off or Enterprise Pool.
On/Off CoD may result in extra costs, and formal agreement from the user is required. The user must configure the Hardware Management Consoles (HMC) to contact for the actual acquisition and release of resources. Using On/Off CoD requires an activation code to be entered on the Hardware Management Console (HMC) and may result in extra costs due to usage of the On/Off CoD license. Please note that Enterprise Pool CoD resources can be used instead of On/Off CoD resources, or as a complement to On/Off CoD resources, to perform a DLPAR operation on your node. Enter 'yes' to have PowerHA SystemMirror use Capacity on Demand (CoD) On/Off resources to perform DLPAR operations on your nodes. Using On/Off CoD requires an activation code to be entered on the Hardware Management Console (HMC) and may result in extra costs due to usage of the On/Off CoD license. Please note that Enterprise Pool CoD resources can be used instead of On/Off CoD resources, or as a complement to On/Off CoD resources, to perform a DLPAR operation on your node. If you agree to use On/Off CoD, you must ensure that On/Off CoD enablement keys are activated; the On/Off CoD license key needs to be entered into the HMC before PowerHA SystemMirror can activate this type of resource. You must also ensure that these resources, which are left available on your CEC, are not used for any other purpose and are kept only for PowerHA SystemMirror use. Choose this option to configure the Hardware Management Consoles (HMC) used by your cluster configuration, and optionally to associate an HMC with your cluster nodes. If no HMC is associated with a node, PowerHA SystemMirror will use the default cluster configuration. Choose this option to change or show CPU and memory resource requirements for any Application Controller that runs in a cluster that uses DLPAR, CoD and/or Enterprise Pool CoD capable nodes. Choose this option to modify or view the DLPAR, CoD and Enterprise Pool CoD configuration parameters. Choose this option to modify or view the policy concerning automatic operations on secondary LPARs in the CEC. Secondary LPARs are partitions that do not support critical workloads. These LPARs consume resources that may be needed by LPARs supporting PowerHA nodes. Choose this option to add a Hardware Management Console (HMC) and its communication parameters, and add this new HMC to the default list. All the nodes of the cluster will use these HMC definitions by default to perform DLPAR operations, unless you associate a particular HMC with a node. Choose this option to modify or view a Hardware Management Console (HMC) hostname and communication parameters. Choose this option to remove a Hardware Management Console (HMC), and then remove it from the default list. Choose this option to modify or view the list of Hardware Management Consoles (HMC) of a node. Choose this option to modify or view the HMC default communication tunables. Choose this option to modify or view the default HMC list used by default by all nodes of the cluster. Nodes which define their own HMC list will not use this default HMC list. Enter the hostname for the Hardware Management Console (HMC). An IP address is also accepted here; IPv4 and IPv6 addresses are supported. Enter a timeout in minutes for DLPAR commands executed on an HMC (-w parameter). This -w parameter only exists on the chhwres command, when allocating or releasing resources. It is adjusted according to the type of resources (for memory, one minute per gigabyte is added to this timeout).
Setting no value means you use the default value, which is defined in the "Change/Show Default HMC Tunables" panel. When -1 is displayed in this field, it indicates the default value is used. Enter the number of times one HMC command is retried before the HMC is considered non-responding. The next HMC in the list will be used after this number of retries has failed. Setting no value means you use the default value, which is defined in the "Change/Show Default HMC Tunables" panel. When -1 is displayed in this field, it indicates the default value is used. Enter a delay in seconds between two successive retries. Setting no value means you use the default value, which is defined in the "Change/Show Default HMC Tunables" panel. When -1 is displayed in this field, it indicates the default value is used. Enter the list of nodes which use this HMC. Enter the sites which use this HMC. (All nodes of the sites will then use this HMC by default, unless the node defines an HMC at its own level.) Select Yes to check communication links between nodes and the HMC. Select No to force the operation even if the connectivity check fails. This is the node name to associate with one or more Hardware Management Consoles (HMC). Define the precedence order of the Hardware Management Consoles (HMC) used by this node. The first in the list is tried first, then the second, and so on. You cannot add or remove any HMC; you can only modify the order of the HMCs already set. Enter a timeout in minutes for DLPAR commands executed on an HMC (-w parameter). This -w parameter only exists on the chhwres command, when allocating or releasing resources. It is adjusted according to the type of resources (for memory, one minute per gigabyte is added to this timeout). Enter the number of times one HMC command is retried before the HMC is considered non-responding. The next HMC in the list will be used after this number of retries has failed. Enter a delay in seconds between two successive retries. This is the list of Hardware Management Consoles (HMC) you are using for all non-specifically configured nodes in the cluster. Choose this option to add CPU and/or memory resource provisioning through DLPAR, CoD and/or Enterprise Pool CoD operations to an application controller, which is going to be selected. Choose this option to modify or view the CPU and memory resource provisioning of an application controller, which is going to be selected. Choose this option to remove the CPU and memory resource provisioning from an application controller, which is going to be selected. This is the application controller for which you will configure DLPAR and CoD resource provisioning. Enter 'yes' if you only want the LPAR hosting your node to reach the level of resources indicated by the desired level of the LPAR profile. By choosing 'yes', you trust the desired level of the LPAR profile to fit the needs of your application controller. Enter 'no' if you prefer to enter exact optimal values for memory, CPU, and/or processing units, which match the needs of your application controller, and to better control the level of resources to be allocated to it. No default value. For all application controllers having this tunable set to 'Yes', the allocation performed will let the LPAR reach the desired value of the LPAR profile.
If there is a mix of settings, in which some application controllers have this tunable set to 'Yes' and other application controllers have this tunable set to 'No' with some optimal level of resources set, the allocation performed will let the LPAR reach the desired value of the profile added to the optimal values. Enter the amount of memory PowerHA SystemMirror will attempt to acquire for the node before starting this application controller. This 'Optimal amount of gigabytes of memory' value can only be set if the 'Use desired level from the LPAR profile' value is set to 'No'. Enter the value in multiples of .25, .5, .75, 1 GB. For example, 1 would represent 1 GB or 1024 MB, 1.25 would represent 1.25 GB or 1280 MB, 1.50 would represent 1.50 GB or 1536 MB, 1.75 would represent 1.75 GB or 1792 MB. If this amount of memory is not satisfied, PowerHA SystemMirror will take resource group recovery actions to move the resource group with this application to another node, or PowerHA SystemMirror may allocate less memory depending on the 'Start RG even if resources are insufficient' cluster tunable. Enter the number of processors PowerHA SystemMirror will attempt to allocate to the node before starting this application controller. This attribute is only for nodes running on an LPAR with Dedicated Processing Mode. This 'Optimal number of dedicated processors' value can only be set if the 'Use desired level from the LPAR profile' value is set to 'No'. If this number of CPUs is not satisfied, PowerHA SystemMirror will take resource group recovery actions to move the resource group with this application to another node, or PowerHA SystemMirror may allocate fewer CPUs depending on the 'Start RG even if resources are insufficient' cluster tunable. Enter the number of processing units PowerHA SystemMirror will attempt to allocate to the node before starting this application controller. This attribute is only for nodes running on an LPAR with Shared Processing Mode. This 'Optimal number of processing units' value can only be set if the 'Use desired level from the LPAR profile' value is set to 'No'. Processing units are specified as a decimal number with two decimal places, ranging from 0.01 to 255.99. This value is only used on nodes which support allocation of processing units. If this number of processing units is not satisfied, PowerHA SystemMirror will take resource group recovery actions to move the resource group with this application to another node, or PowerHA SystemMirror may allocate fewer PUs depending on the 'Start RG even if resources are insufficient' cluster tunable. Enter the number of virtual processors PowerHA SystemMirror will attempt to allocate to the node before starting this application controller. This attribute is only for nodes running on an LPAR with Shared Processing Mode. This 'Optimal number of dedicated or virtual processors' value can only be set if the 'Use desired level from the LPAR profile' value is set to 'No'. If this number of CPUs is not satisfied, PowerHA SystemMirror will take resource group recovery actions to move the resource group with this application to another node, or PowerHA SystemMirror may allocate fewer CPUs depending on the 'Start RG even if resources are insufficient' cluster tunable. Enter 'Yes' to have PowerHA SystemMirror start Resource Groups even if resources are insufficient. This may occur when the total requested resources exceed the LPAR profile maximum or the combined available resources. In that case, a best-can-do allocation is performed.
Enter 'No' to prevent starting Resource Groups with insufficient resources. Resource Groups may move to error state if resources are insufficient. Default is 'No'. Enter 'Yes' to authorize PowerHA SystemMirror to dynamically change the maximum Shared-Processors Pool boundary. Only if it is necessary, the allocation process will increase the maximum limit for the duration of the processing unit allocation. Enter 'yes' to have PowerHA SystemMirror release CPU and memory resources synchronously, for example, if you need to free resources on one side before they can be used on the other side. By default, PowerHA SystemMirror automatically detects the resource release mode by checking whether the Active and Backup nodes are on the same or different CECs. Best practice is to have asynchronous release in order not to delay the takeover. Enter 'Yes' to have PowerHA SystemMirror use On/Off Capacity On Demand (On/Off CoD) to obtain enough resources to fulfill the optimal amount requested. Using On/Off CoD requires an activation code to be entered on the Hardware Management Console (HMC) and may result in extra costs due to usage of the On/Off CoD license. Enter the number of activation days for On/Off CoD requests. If the requested available resources are insufficient for this duration, then a longest-can-do allocation is performed: the amount of resources requested is allocated for the longest possible duration. To do that, the overall available resources are considered: this is the sum of the On/Off CoD resources already activated but not yet consumed and the On/Off CoD resources not yet activated. Enter 'Yes' to authorize PowerHA SystemMirror to dynamically operate CEC LPARs (not VIOS) which are not in the cluster. Enter 'Minimize' to have PowerHA SystemMirror bring secondary LPARs to their profile minimum if there are not enough resources at takeover. Enter 'Shutdown' to have PowerHA SystemMirror shut down secondary LPARs if there are not enough resources at takeover. Enter a threshold to determine which LPARs the secondary LPARs policy applies to. Partitions having an 'Availability Priority' value higher than the threshold will continue, while the secondary LPARs policy will apply to partitions with lower values. This allows the policy to be applied first to lower priority partitions. The policy does not apply to VIOS partitions. Define the precedence order of the Hardware Management Consoles (HMC) used by this site. The first in the list is tried first, then the second, and so on. You cannot add or remove any HMC; you can only modify the order of the HMCs already set. This is the application controller from which you will remove DLPAR and CoD resource provisioning. Cluster configuration changes must be propagated to all nodes in the cluster. This process requires that all nodes be running AIX and the clcomd subsystem so that verification checks can be performed and the changes propagated. If there are any nodes that are not reachable, synchronization will fail and you may not be able to start cluster services or make subsequent changes. Depending on the nature of the failure, the synchronization may be incomplete and there may be different versions of the configuration on different nodes. If synchronization fails because of unreachable nodes, you will need to make those nodes accessible and try again. If you have a situation where you cannot make all nodes accessible, you can choose to ignore synchronization errors by selecting this option.
Please be aware that ignoring these errors will lead to different versions of the configuration on different nodes, which you will need to correct manually later, before attempting to bring those node(s) back into the cluster. This is the site name to associate with one or more Hardware Management Consoles (HMC). Choose this option to modify or view the list of Hardware Management Consoles (HMC) of a site. Enter the new IP label or address. You must also update /etc/hosts with the new information. Learn more about cluster events. Use this option to learn more about SystemMirror cluster events. Note that you must have the message catalog fileset cluster.msg installed for your preferred locale. To use this feature you must have the message catalog fileset cluster.msg installed for your preferred locale. Select an event to learn more about. You can specify the order in which resources are allocated when SystemMirror activates ROHA resources for an application. 1. Free Pool First - DLPAR, EPCoD, On/Off CoD. 2. Enterprise Pool First - EPCoD, DLPAR, On/Off CoD. Enter 'Yes' to have PowerHA SystemMirror start Resource Groups even if resources are insufficient. This may occur when the total requested resources exceed the LPAR profile maximum or the combined available resources. In that case, a best-can-do allocation is performed. Enter 'No' to prevent starting Resource Groups with insufficient resources. Resource Groups may move to error state if resources are insufficient. Default is 'Yes'. Specify the connection type for PowerHA SystemMirror to connect with the HMC for DLPAR, EPCoD and On/Off CoD operations. The default value for this parameter is SSH. Enter the HMC user name for the REST API connection. First time users will be asked to provide the password in the next screen. Enter the HMC user name and password for the REST API connection. User Name and Password options are applicable only when the HMC connection type is set to REST API based connection. Choose this option to configure the NovaLink used by your cluster configuration, and associate a NovaLink with cluster nodes. Choose this option to add a NovaLink and its communication parameters. Associate the NovaLink with a specific node. Choose this option to modify or view a NovaLink hostname and communication parameters. Choose this option to remove a NovaLink and then remove it from the PowerHA SystemMirror cluster. Choose this option to modify or view the NovaLink default communication tunables. Enter the hostname or IP address of the NovaLink; IPv4 and IPv6 addresses are supported. Enter a user name to use with REST API or SSH to establish a connection with the NovaLink. You must specify a user name in this field. The password will be asked for in the next screen if REST API is enabled. Enter a timeout in minutes for DLPAR commands executed on a NovaLink. It is adjusted according to the type of resources (for memory, one minute per gigabyte is added to this timeout). Setting no value means you use the default value, which is defined in the Change/Show Default NovaLink Tunables panel. When -1 is displayed in this field, it indicates the default value is used. Enter the number of times one NovaLink command is retried before the NovaLink is considered non-responding. Setting no value means you use the default value, which is defined in the Change/Show Default NovaLink Tunables panel. When -1 is displayed in this field, it indicates the default value is used. Enter a delay in seconds between two successive retries.
Setting no value means you use the default value, which is defined in the Change/Show Default NovaLink Tunables panel. When -1 is displayed in this field, it indicates the default value is used. Select Yes to perform an immediate, one-time check of connectivity. You can specify the connection type for PowerHA SystemMirror cluster node(s) to connect with the NovaLink for dynamic resource adjustments. Only the SSH connection type is supported. You can specify the order in which resources are allocated when SystemMirror activates ROHA resources for an application. 1. Free Pool before Enterprise Pool - DLPAR, EPCoD, EPCoD from other CECs, On/Off CoD. 2. Enterprise Pool before Free Pool - EPCoD, DLPAR, EPCoD from other CECs, On/Off CoD. 3. All Enterprise Pool before Free Pool - EPCoD, EPCoD from other CECs, DLPAR, On/Off CoD. Provides menus for managing Mirror Pools for Volume Groups. For more information about Mirror Pools, visit https://www.ibm.com/support/knowledgecenter/en/ssw_aix_71/com.ibm.aix.osdevice/mirrorpools.htm Displays information about all Mirror Pools for all Volume Groups. Displays information about all Mirror Pools for the selected Volume Group. Changes the characteristics of the selected Mirror Pool. Adds a new disk to the selected Mirror Pool. Removes a disk from the selected Mirror Pool. Rename a selected Mirror Pool. Delete a selected Mirror Pool. Use this option to change the log file maintenance strategy. Log files are wrapped when they reach a certain size and the oldest version of the log is discarded. If the log size is set too small, this process may inadvertently result in discarding logs that had information critical for problem determination. During verification, PowerHA will give suggestions for appropriate log file sizes based on your specific configuration. You can change those values here or with the clmgr command. Change or show the maximum size for PowerHA log files. Specify the maximum size for this log file in megabytes. If the log size is set too small, it may inadvertently result in discarding logs that had information critical for problem determination. During verification, PowerHA will give suggestions for appropriate log file sizes based on your specific configuration. Network instability occurs when an excessive number of events is received over a period of time. The unstable threshold defines the number of events which must be received within the unstable period in order for the network to be declared unstable. Provide an integer value between 1 and 99. Network instability occurs when an excessive number of events is received over a period of time. The unstable period defines the period of time used to determine instability. If the threshold number of events is received within the unstable period, the network is declared unstable. Provide an integer value between 1 and 120 seconds. Choose this option to configure the Cloud Backup Configuration. The PowerHA SystemMirror Cloud Backup Configuration collects a flash copy of the application data. The flash copy data will be mirrored to the target storage or copied to the cloud storage. When data corruption or data loss occurs, you can recover a copy from the backup. Use this option to define Backup Profile configuration settings for the volume groups defined in a resource group and the backup schedule.
Data for these volume groups will be copied either to a remote disk or to the cloud storage according to the defined backup schedule. Choose this option to add a Backup Profile and the backup schedule. Choose this option to modify or view the Backup Profile and the backup schedule. Choose this option to remove the Backup Profile. Choose this option to enable or disable the Backup Settings for Volume Groups defined in a Resource Group. Choose this option to enable the Backup method: Remote Storage: Back up the data to remote storage through mirroring. Cloud: Back up the data to the cloud storage. Select the Resource Group(s) for which the selected Backup Profile will be applicable. Select the Volume Group(s) for which the current Backup Profile will be applicable. The volume group must be part of a selected Resource Group. Define the Cloud service provider where the data should be backed up. The compression algorithm for the backup data. You can select the following choices: Enable: Compression is enabled, and all the backup data is automatically compressed. Disable: Compression is disabled. Define the frequency in days at which PowerHA SystemMirror collects the full data backup. Allowed values for Backup frequency are 0 to 999 days. Enter the time at which the backup should start. If it is empty, the backup will start at 12 AM by default. Allowed values for Backup Schedule are 00:00 to 23:59. Define the frequency in hours at which PowerHA SystemMirror collects the incremental data backup. The Incremental backup frequency duration should not be more than the Backup Frequency duration. Allowed values for Incremental backup frequency are 0 to 999 hours. A custom method (script) to be invoked when the backup operation starts. The backup operations will also be logged. You must specify a full path name to the method. Select the directory as the target location to store the backup file. Please make sure you have adequate free space to hold the content of all volume groups present in the selected resource group. Enables encryption. If this field is set to Yes, the data will be encrypted by one of the following algorithms: KMS, AES. If it is set to Disable, there will be no encryption for the data. Define the Replicated Resource(s) for the flash copy backup. Choose this option to remove the backup setting for the selected Resource Group. Select the volume group(s) for which the current Backup Profile will be applicable. Use this option to provide the storage configuration. Use this option to add a storage configuration. Use this option to modify or view a storage configuration. Use this option to remove the storage configuration. Use this option to provide the storage name. For a remote copy PPRC relationship, you should provide the SVC cluster name. Use this option to provide the storage type. Use this option to provide the user name. Use this option to provide the IP address. Use this option to remove the storage configuration for the selected storage name. Use this option to clear the failed copies of a backup. Use this option to cancel the running backup. This node will be recovered from the event script failure. After an event script fails, PowerHA will by default resume event processing with the next event specified in the recovery program. Alternatively, you can choose to cancel all remaining event processing. If you cancel further event processing, any resource groups in indeterminate states will be moved to the ERROR state. No matter what option you select, you will need to manually verify that all resources and resource groups are in the expected state once the cluster stabilizes.
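As an illustration of how the Backup frequency and Backup Schedule fields described above determine when the next full backup runs, the following is a minimal sketch; it is not part of PowerHA SystemMirror, and the function name and example values are hypothetical.

    # A minimal sketch, not PowerHA SystemMirror code: computes when the next
    # full backup would run from a hypothetical profile's Backup frequency
    # (days) and Backup Schedule (HH:MM, defaulting to 12 AM when left empty).
    from datetime import datetime, timedelta

    def next_full_backup(last_backup, frequency_days, schedule="00:00"):
        if not 0 <= frequency_days <= 999:
            raise ValueError("Backup frequency must be 0 to 999 days")
        hour, minute = (int(part) for part in (schedule or "00:00").split(":"))
        next_day = last_backup + timedelta(days=frequency_days)
        return next_day.replace(hour=hour, minute=minute, second=0, microsecond=0)

    # Example: with a 1-day frequency and a 23:30 schedule, a full backup taken
    # on 2024-01-01 would be followed by one at 2024-01-02 23:30.
    print(next_full_backup(datetime(2024, 1, 1, 23, 30), 1, "23:30"))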
Option to list the backup files in the cloud. Bucket Name to list the files in the bucket. For AWS, the bucket name refers to the S3 bucket. Use this option to list the files specific to a resource group. Use this option to list the files which are uploaded from the start time. The format for the start time is yyyy-mm-ddThh. Use this option to list the files which are uploaded from the start time to the end time. If the end time is not provided, it will list all the files from the start time to the current time. The format for the end time is yyyy-mm-ddThh. Use this option to change the storage system name. Bucket name to keep backup files in the cloud. For AWS, the bucket name refers to the S3 bucket. Check the storage connectivity for the configured cloud backup profiles from the local node. You can configure pre and post event commands to run before and after the main event command. By default, the exit status returned by these commands is ignored and does not affect the exit status of the main event. If you select 'yes' for this option, any non-zero exit status from a pre or post event command will be treated like a failure of the main event. Further, if the pre event command fails, the main event and post events will not be called. And if the main event fails, the post event will not be called. In all cases the notify command will be called after a failure. Choose this option to configure the Cloud Backup Configuration. The PowerHA SystemMirror Cloud Backup Configuration will back up the data to the cloud storage or to a remote storage. In case of data corruption or data loss, you can recover your data either from the cloud copy or from the remote storage. Choose this option to add a Backup Profile and the backup attributes. You can select any of the configured resource groups or rootvg_profile as the backup profile name. You can choose "ALL" to back up all configured resource groups. For a "rootvg" backup, the backup profile name would be "rootvg_profile". Selected Backup Profile. Select the Volume Group(s) for which the current Backup Profile will be applicable. The volume group must be part of a selected Resource Group. You can choose "ALL" to back up all volume groups in a selected resource group. For a "rootvg" backup, the backup profile name would be "rootvg_profile". Select the directory as the target location to store the backup files. Please make sure you have adequate free space to hold the content of all volume groups present in the selected backup profile. Use this option to list the files specific to a backup profile. Enables encryption. If this field is set to Yes, the data will be encrypted by one of the following algorithms: KMS, AES. The KMS encryption algorithm is only supported with the AWS cloud service. If it is set to Disable, there will be no server-side encryption for the data. Bucket Name to list the files in the cloud storage bucket. Define the Replicated Resource in the storage for the backup operation. In the case of SVC, the replicated resources are: 1. Flash Copy Consistency Group for the Cloud Backup Method 2. SVC PPRC Consistency Group for the Remote Storage Backup Method. Use this option to search for and navigate to SMIT panels for PowerHA. You can use the SMIT search facility to enter the name or feature you are looking for. If the title of the SMIT panel is not preceded by the "#" symbol, you can press Enter to navigate directly to that panel. PowerHA will use the Cloud Backup Configuration to back up the data to the cloud storage. If this backup process is successful, it will change the backup date of the regular backup.
The next backup will be triggered after the number of days from now that is configured as the backup frequency for the provided backup profile. Select this option to compare cluster snapshots or compare a snapshot to the Active Configuration Directory (ACD) or Default Configuration Directory (DCD). Use this entry field to select the first configuration to compare. Use F4 to generate a list of options to select from, or enter a full path to a cluster snapshot file. Use this entry field to select the second configuration to compare. Use F4 to generate a list of options to select from, or enter a full path to a cluster snapshot file. Use these entry fields to select the two configurations for comparison. The default is to use the Default Configuration Directory (DCD) as the first configuration and the Active Configuration Directory (ACD) as the second configuration. You may also select a snapshot name from the list of snapshots provided using F4 or enter a full path to a snapshot file. Select this option to diagnose common problems with network interfaces. Select this option to show the current state of network interfaces.
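As an illustration only, a line-by-line comparison of two saved snapshot files could be sketched as below, assuming the snapshots are plain text files; the actual SMIT comparison option produces its own report format, and the file paths shown are hypothetical.

    # Illustration only: compares two saved cluster snapshot files using the
    # standard difflib module, assuming they are plain text. This is not the
    # PowerHA snapshot comparison tool; the file paths below are hypothetical.
    import difflib

    def compare_snapshots(path_a, path_b):
        with open(path_a) as first, open(path_b) as second:
            diff = difflib.unified_diff(first.readlines(), second.readlines(),
                                        fromfile=path_a, tofile=path_b)
        return "".join(diff)

    # Example usage with hypothetical snapshot file names:
    # print(compare_snapshots("/tmp/before_change.snapshot", "/tmp/after_change.snapshot"))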