PowerHA SystemMirror
Cluster System Management
Cluster Configuration
Cluster Topology
Cluster Resources
Cluster Snapshots
Cluster Verification
Cluster Services
Cluster Recovery Aids
Clear SSA Disk Fence Registers
Recover From Script Failure
RAS Support
Restore System Default Configuration from Active Configuration
Configure Cluster
Configure Nodes
Configure Communication Interfaces
Configure Network Modules
Show Cluster Topology
Synchronize Cluster Topology
Add a Cluster Definition
Change/Show Cluster Definition
Remove Cluster Definition
Add a Cluster Node
Configure Topology Services and Group Services
Configure Global Networks
Associate a Communication Interface with a Cluster Node
Change Cluster Node/Network Adapter Attributes
Remove a Cluster Node
Dissociate a Network Adapter from a Cluster Node
Select Application Server
Remove an Application Server
Server Name
Add an Application Server
New Server Name
Change/Show an Application Server
Start Script
Stop Script
Start Cluster Services
Show Resource Information by Node
Show Resource Information by Resource Group
Stop Cluster Services
Refresh Cluster Services
Show Cluster Services
Change/Show Cluster Lock Manager Resource Allocation
Show Cluster Topology
Show Cluster Definition
Show Topology Information by Node
Show Topology Information by Network
Show Topology Information by Communication Interface
Show All Clusters Defined
Select a Cluster to Show
Show All Nodes
Select a Node to Show
Show All Networks
Select a Network to Show
Show All Communication Interfaces
Select a Communication Interface to Show
Show Cluster Resources
Remove Node Environment
Change/Show Cluster Events
Archive File Name
Error Count
PowerHA SystemMirror Verification Methods (Pre-Installed, none)
Configure Node Environment
Change/Show Run Time Parameters
Configure Owned Resources
Configure Take Over Resources
Remove Node Environment
View PowerHA SystemMirror Log Files
Trace Facility
Error Notification
Enable/Disable Tracing of PowerHA SystemMirror Daemons
Start/Stop/Report Tracing of PowerHA SystemMirror Services
Add/Change/Show/Remove a Notify Method
Add a Notify Method
Change/Show a Notify Method
Remove a Notify Method
Scan the PowerHA SystemMirror Scripts Log File
Watch the PowerHA SystemMirror Scripts Log File
Scan the PowerHA SystemMirror System Log File
Watch the PowerHA SystemMirror System Log File
Scan the C-SPOC System Log File
Select Communication Interface to show
Watch the C-SPOC System Log File
Select node to show
Select network to show
Select cluster to remove
Select node to remove
Select cluster to change
Select a Node ID to Add to Cluster
Select a Node ID to which to add a Network Adapter
Select a Node Name to Configure
Select a Shared IP Label to Configure
Select Owner Node ID to Configure
Select node to change
Select Take Over Node ID to Configure
Select node Id to remove its environment
Select a Communications Interface to change
Select network adapter to dissociate
Configure Node Environment
Select Node Role
Select Notification Object Name
Select Scripts Log File Name
Cluster Node Name to Change/Show
Start Cluster Services
Stop Cluster Services
Show Cluster Services
Change/Show Cluster Lock Manager Resource Allocation
Set Cluster Identification Values
Change Cluster Identification Values
Remove a Cluster Definition
Add a Node or Network Adapter to the Cluster
Change Attributes of Cluster Node or Adapter
Remove a Node from the Cluster
Remove a Network Adapter from the Cluster
Show Cluster Definition
Show Definitions for All Clusters
Show Definition for Selected Cluster
Show Information for All Networks
Show Information for Selected Network
Show Information for All Communication Interfaces
Show Information for Selected Communication Interfaces
Show Cluster Node Attributes
Show Cluster Node Attributes
Remove Node Environment
Add/Change/Show Hot Standby Configuration (Active Node)
Add/Change/Show Hot Standby Configuration (Standby Node)
Add/Change/Show Rotating Standby Configuration
Add/Change/Show One-Sided Takeover Configuration (Primary Active Node)
Add/Change/Show One-Sided Takeover Configuration (Secondary Active/Standby Node)
Add/Change/Show Mutual Takeover Configuration (Primary Active Node)
Add/Change/Show Mutual Takeover Configuration (Secondary Active Node)
Add/Change/Show Third-Party Takeover Configuration (Primary Active Node)
Add/Change/Show Third-Party Takeover Configuration (Secondary Active Node)
Add/Change/Show Third-Party Takeover Configuration (Standby Node)
Show Current Resource Group and Application State
BROADCAST message at startup?
Startup Cluster Manager?
Startup Cluster Lock Services?
Startup Cluster Information Daemon?
BROADCAST cluster shutdown?
Lock Entries Per Segment
Lock Resource Entries Per Segment
Pre-Allocated Resource Segments
PROMPT for confirmation before cluster shutdown?
Lock Tuning Statistic Recalculation Rate
Lock Tuning Statistic Decay Rate
How many seconds to delay before shutdown? (Enter 00 for IMMEDIATE shutdown)
**NOTE: Cluster Manager MUST BE RESTARTED in order for changes to be acknowledged.**
Node Adapter IP Label
Command
Arguments
Cluster ID
Cluster Name
Name of cluster.cf file If not specified, default cluster.cf shown
Cluster Networks
Network to show
Cluster Network Entry
Pre-event Command
Description
Event Command
Notify Command
Select Event Name to Change
Post-event Command
Event Name
Recovery Command
Recovery Counter
Node Name
Shared IP Label
Select Owner Node Id
Owner Node ID
Take Over Node ID
Adapter IP label
Select Take Over Node Id
Node ID to Remove environment
Select Node Name
Select Resource Group Name
Adapter function
Network name
Network attribute
Adapter IP address
Node ID to be changed
Adapter to be changed
New Node ID
New Adapter IP label
Node ID to be deleted
Node ID
Cluster Nodes
Node to show
Cluster Node Entry
Adapter to be dissociated
Adapter IP Label to dissociate
Network Adapters
Adapter to show
Cluster Adapter Entry
ID of cluster to be changed
Cluster ID
New Cluster ID
Cluster Name
New Cluster name
Cluster ID to delete
Cluster ID
Cluster Definitions
Configure Node Environment
Select Node Role
Node ID for local node
Node ID for remote node
Concurrent Volume groups
Node ID for local (active) node
Node ID for local (standby) node
Node ID for remote (standby) node
Node ID for remote (active) node
Node ID for active primary node
Node ID for active secondary node
Service IP label for local node
Volume groups owned by active server
Service IP label for active server
Disks owned by active server
Disks
Disks owned by active primary node
Disks owned by remote node
Disks owned by active secondary node
Volume groups
Service IP label
Volume groups owned by remote node
Volume groups owned by active secondary node
Filesystems
Filesystems owned by active primary node
Filesystems owned by remote node
Filesystems owned by active secondary node
Filesystems to be exported
Filesystems to be exported by primary node
Filesystems to be exported by remote node
Filesystems to be exported by secondary node
Filesystems to be NFS mounted
Filesystems mounted by primary from secondary
Filesystems mounted by secondary from primary
Filesystems owned by active server
Filesystems to be exported by active server
Number of times to retry an NFS mount
The following four entries are required
The following five entries are required
The following six entries are required
The following seven entries are required
The following eight entries are required
only for IP Address Takeover / Reintegration:
Participate in IP Address Takeover?
Service interface for local node
Standby interface to masquerade as primary
Standby interface to masquerade as secondary
Boot IP label for local node
Service IP label for active primary node
Service IP label for remote node
Service IP label for active secondary node
Standby IP label for local node
Service IP label for standby node
Standby interface for local node
Standby IP label to masquerade as primary
Standby IP label to masquerade as secondary
Netmask for local node
Turn on disk fencing?
Debug Level
Takeover for inactive node
Host uses NIS or Name Server
PowerHA SystemMirror Security Mode
Start Cluster Lock Manager at cluster start?
Start PowerHA SystemMirror demo software at cluster start?
Directory containing images for PowerHA SystemMirror demo
Concurrent Access Volume Groups
HTY Service Label
Dynamic Node Priority
Formatting options for hacmp.out
Default (None),Standard,Html (Low),Html (High)
Notification Object Name
Process ID for use by Notify Method
Persist across system restart?
Match Alertable errors?
Select Error ID
Select Error Label
Notify Method
Resource / Monitor Type
Select Error Class
Select Error Type
Resource Name
Resource Class
Resource Type
Notification Object Name
Process ID for use by Notify Method
Persistence across system restart?
Select Error Class
Select Error Type
Match Alertable errors?
Select Error ID
Select Error Label
Resource Name
Resource Class
Resource Type
Notify Method
false,true
yes
High,Standard
high,low
Standard,Kerberos
service,standby,boot,shared
public,private,serial
now,restart,both
graceful,takeover
Yes,No
Ignore,All,True,False
None,All,Hardware,Software,Errlogger
None,All,PEND,PERM,PERF,TEMP,UNKN
Pre-Installed,none
Address,Device
Fast,Normal,Slow
Actual,Emulate
IP,ether,fddi,hps,rs232,tmscsi,token
Configure Custom Verification Method
Verify Cluster
Add a Custom Verification Method
Change/Show a Custom Verification Method
Remove a Custom Verification Method
Select a Custom Verification Method
Log File to store output
Verification Method Name
Verification Method Description
Verification Script Filename
New Verification Method Name
Cluster Custom Modification
Custom Defined Verification Methods
Automatic Cluster Configuration Monitoring
Automatic cluster configuration verification
Node name
HOUR (00 - 23)
Default(None)
Default
Enabled,Disabled
Add a Custom Snapshot Method
Change/Show a Custom Snapshot Method
Remove a Custom Snapshot Method
Select a Custom Snapshot Method
Custom Snapshot Method Name
Custom Snapshot Method Description
Custom Snapshot Script Filename
New Custom Snapshot Method Name
Change/Show a Cluster Log Directory
Select a Cluster Log Directory
Cluster Log Name
Cluster Log Description
Log Destination Directory
Cluster Log Management
Default Log Destination Directory
Allow Logs on Remote Filesystems
Change All Cluster Logs Directory
Logs Destination Directory
PowerHA SystemMirror Security
Show PowerHA SystemMirror Security
Change/Show PowerHA SystemMirror Security
Synchronize PowerHA SystemMirror Security
WARNING: The /usr/es/sbin/cluster/etc/rhosts file must be removed from ALL nodes in the cluster when the security mode is set to 'Kerberos'. Failure to remove this file makes it possible for the authentication server to become compromised. Once the server has been compromised, all authentication passwords must be changed.
Changes to the cluster security mode setting alter the cluster topology configuration, and therefore need to be synchronized across cluster nodes. Since cluster security mode changes are seen as topology changes, they cannot be performed along with dynamic cluster resource reconfigurations.
List Error Notify Methods for Cluster Resources
Add Error Notify Methods for Cluster Resources
Remove Error Notify Methods for Cluster Resources
Configure Automatic Error Notification
Add a Site
Change/Show a Site
Remove a Site
Site Name
Site Nodes
Site Name to Change/Show
New Site Name
Cluster Site Name to Remove
Configure Sites
Site Dominance
Site Backup Communications Type
Configure Networks
Cluster to re-acquire resources after forced down?
PowerHA SystemMirror Documentation Bookshelf
Concepts and Facilities Guide
Who should use this guide?
Index
PDF Version
Planning and Installation Guide
Administration and Troubleshooting Guide
Programming Client Applications
UP
DOWN
UNKNOWN
ONLINE
OFFLINE
JOINING
LEAVING
State:
Network:
UNSTABLE
STABLE
RECONFIG
All Resource Groups
Resource Group:
Location:
What would you like to do with
What would you like to do with network
What would you like to do with boot address
Show Details
Unable to communicate with the Cluster SMUX Peer Daemon
Close
Expand All
Collapse All
Cluster
SubState
Back
Master Glossary
Show Cluster Applications
Show Summary of Cluster Events
What would you like to do with service address
Move it to a different interface
Bring Resource Group On-line
Bring Resource Group Off-line
Cluster Snapshot Name
File Name
Description
Automatically,Manually
no,yes
Planning Guide
Installation Guide
Administration Guide
Troubleshooting Guide
View the HTML version of the "%1$s" document.
View the PDF version of the "%1$s" document.
View the index for the "%1$s" document.
View the intended audience for the "%1$s" document.
Global Logical Volume Manager (GLVM)
Peer-to-Peer Remote Copy (PPRC)
PowerHA SystemMirror Smart Assist for DB2
PowerHA SystemMirror Smart Assist for Oracle
PowerHA SystemMirror Smart Assist for SAP
PowerHA SystemMirror Smart Assist for WebSphere
PowerHA SystemMirror Smart Assist Developers Guide
No documentation is currently available for this topic.
Please refer to the Online resources, below, for more options.
Automatically,Manually,Manual with NFS crossmounts
Verification Type
# No candidate nodes were found.
# Check /etc/hosts and /etc/cluster/rhosts on all nodes.
Hot Standby
Rotating Standby
One-Sided Takeover
Mutual Takeover
Third-Party Takeover
Active Node
Standby Node
Primary Active Node
Secondary Active/Standby Node
Primary Active/Standby Node
Secondary Active Node
Network Type
New Network Name
Node Names
New Node Name
Network Module Name
Address Type
Path
Parameters
Configure Dynamic Node Priority Policies
Add a Resource Group
Remove a Resource Group
Startup Policy
Resource Group Name
Fallover Policy
Node Relationship
Site Relationship
Participating Node Names / Default Node Priority
Fallback Policy
New Resource Group Name
Volume Groups
Raw Disk PVIDs
Application Servers
PPRC Replicated Resources
Miscellaneous Data
ERCMF Replicated Resources
SVC PPRC Replicated Resources
WPAR Name
SRDF(R) Replicated Resources
Filesystems mounted before IP configured
TRUECOPY(R) Replicated Resources
GENERIC XD Replicated Resources
Filesystems/Directories to Export
Filesystems/Directories to Export (NFSv2/3)
Filesystems/Directories to Export (NFSv4)
Stable Storage Path (NFSv4)
Grace Period
Failure Cycle
Failure Detection Rate
Clear SSA Disk Fence Registers
Disk Fencing Activated
Connections Services
Fast Connect Services
Network For NFS Mount
Supports gratuitous arp
Entry type
Next generic type
Next generic name
Supports source routing
Use forced varyon of volume groups, if necessary
TRUECOPY Replicated Resources
User Defined Resources
Create a Snapshot of the Cluster Configuration
Change/Show a Snapshot of the Cluster Configuration
Remove a Snapshot of the Cluster Configuration
Restore the Cluster Configuration from a Snapshot
Configure Custom Snapshot Method
Convert Existing Snapshot For Online Planning Worksheets
Cluster Snapshot Name
Cluster Snapshot Description
New Cluster Snapshot Name
Force apply if verify fails?
WARNING: Applying a Cluster Snapshot while any participating node is currently running the PowerHA SystemMirror Cluster Manager or associated daemons may cause unrecoverable data loss and/or cluster integrity may be compromised.
Custom Defined Snapshot Methods
Cluster Snapshot to Change/Show
Cluster Snapshot to Remove
Cluster Snapshot to Restore
Resource Group to Bring Online
Resource Group to Bring Offline
Resource Group(s) to be Moved
Destination Node
Node On Which to Bring Resource Group Offline
Select a Destination Node
Select an Online Node
Select a Destination Site
Node on Which to Bring Resource Group Online
Destination Site
Select Resource Group(s)
Select Resource Group(s) to Move To Another Site
Select a Destination Site
Move Resource Group(s) to Another Site
Move Secondary Instance(s) of Resource Group(s) to Another Site
Resource Group(s) to be Moved
Destination Site
Ignore Cluster Verification Errors?
Un/Configure Cluster Resources?
Emulate or Actual?
Skip Cluster Verification
Verification Mode
Verbose Output
normal,modified
off,on
NOTE: If the Cluster Manager is active on this node, synchronizing the Cluster Topology will cause the Cluster Manager to make any changes take effect once the synchronization has successfully completed.
No,Yes
Yes,No
fsck,logredo
sequential,parallel_rawname
None,All,TRUE,FALSE
true,false
enable,disable
Fast Disk Takeover,Concurrent Access,no
Online On Home Node Only,Online On First Available Node,Online Using Node Distribution Policy,Online On All Available Nodes
Fallover To Next Priority Node In The List,Fallover Using Dynamic Node Priority,Bring Offline (On Error Node Only)
Fallback To Higher Priority Node In The List,Never Fallback
ignore,Prefer Primary Site,Online On Either Site,Online On Both Sites
node,network
#
# This option specifies how the Group will be managed when you
# start cluster services.
#
# If you want the group to only come up on a specific node, select
Online On Home Node Only
#
# To have the group start on any node select
Online On First Available Node
#
# To start an instance of the group on every node select
Online On All Available Nodes
#
# The node distribution policy means only one instance of
# the group will be started per node
Online Using Node Distribution Policy
#
# This option specifies how the Group will be managed when a
# failure occurs.
#
# If you want the group to move to the next node in the list, select
Fallover To Next Priority Node In The List
#
# To simply have the group move off line use
Bring Offline (On Error Node Only)
#
# PowerHA SystemMirror can dynamically determine which node to move the group
# to based on free memory, disk, or cpu. Select this option, then
# specify the policy you want in the list of resources for this group.
Fallover Using Dynamic Node Priority
#
# This option specifies how the Group will be managed when you
# start cluster services after a fallover.
#
# If you want the group to move back to the node where it was
# active before the fallover, select
Fallback To Higher Priority Node In The List
#
# If you want the group to simply stay where it is, select
Never Fallback
Online On Home Node Only
Online On First Available Node
Online Using Node Distribution Policy
Online On All Available Nodes
Fallover To Next Priority Node In The List
Fallover Using Dynamic Node Priority
Bring Offline (On Error Node Only)
Fallback To Higher Priority Node In The List
Never Fallback
Fallover To Next Priority Node In The List,Bring Offline (On Error Node Only)
immediate,abort,normal,transactional
Roundrobin,Favor Copy,Site Affinity
public
private
serial
service
standby
boot
shared
rotating
concurrent
ignore
custom
high
low
Default (none)
Standard
Html (Low)
Html (High)
true
false
Kerberos
Standard
now
restart
both
public
private
unknown
standby
boot
service
persistent
serial
IP
atm
ether
fcs
fddi
hps
rs232
socc
slip
tmscsi
token
Network to Remove
Node Adapter Label
Select a Resource Group
Remove a Cluster Network Module
Change/Show a Resource Group
Node Service Adapter IP Label
Physical Volume Identifier
WARNING: Only execute this command in emergency situations where you are certain that a fencing problem exists. Clearing fence registers may lead to data integrity problems in concurrent access environments.
Network Module to Remove
Select a Resource Group
Application Server to Remove
Filesystems Consistency Check
Filesystems Recovery Method
Select Local Network for Global Network Modification
Global Network Name
Change/Show a Global Network
Local Network Name
**NOTE: This will remove all cluster and node configuration information.**
Node Up Event
Node Down Event
Network Up Event
Network Down Event
Fail Standby Event
Join Standby Event
Swap Adapter Event
Node Down Mode (graceful, graceful with takeover, forced)
Node Down Mode
Change/Show Topology and Group Services Configuration
Interval between Heartbeats (seconds)
Topology Services log length (lines)
Group Services log length (lines)
Select a Custom Cluster Event Method
Custom Event Name to Remove
Cluster Event Name
New Cluster Event Name
Cluster Event Description
Cluster Event Script Filename
Select a Custom Cluster Event
Add a Custom Cluster Event
Change/Show a Custom Cluster Event
Remove a Custom Cluster Event
Remove a Communications Adapter
Change/Show Communications Adapter
Remove a Communications Link
Listing Communication Adapters
Select Communication Adapter to Remove
Select Communication Adapter to Change/Show
Name
Node
Device
Multiple Links Allowed
Current Name
New Name
Allowable Links
Port
Address/NUA
Network ID
Country Code
Application Service File
Adapter Name(s)
DLC Name
Name
X.25 Port
X.25 Address/NUA
X.25 Network ID
X.25 Country Code
Application Service File
X.25 Adapter Name(s)
SNA DLC
SNA Port(s)
SNA Link Station(s)
Listing Communication Links
Select Communication Link to Remove
Add Highly Available X.25 Link
Add Highly Available SNA-over-LAN Link
Add Highly Available SNA-over-X.25 Link
List All Highly Available Communication Links
Change/Show Highly Available X.25 Link
Change/Show Highly Available SNA-over-LAN Link
Change/Show Highly Available SNA-over-X.25 Link
Select X.25 Communication Link to Change/Show
Select SNA Communication Link to Change/Show
Select SNA over X.25 Communication Link to Change/Show
Configure Communication Adapters for PowerHA SystemMirror
Configure SNA Communication Links
Configure X.25 Communication Links
Port Number
Configure Highly Available Communication Links
Add Highly Available Communication Link
Change/Show Highly Available Communication Link
Remove a Highly Available Communication Link
DLC
Link Station(s)
Port(s)
Communication Links
Emulate Error Log Entry
Swap Communication Interfaces
Available Service/Communication Interfaces
Swap onto Communication Interface
Network Name
Service IP Label/Communication Interface
Swap to IP Label
Change/Show I/O pacing
Change/Show syncd frequency
syncd frequency (in seconds)
Add a Process Application Monitor
Add a Custom Application Monitor
Change/Show Process Application Monitor
Change/Show Custom Application Monitor
Suspend an Application Monitor
Resume an Application Monitor
Remove a Process Application Monitor
Remove a Custom Application Monitor
Application Server to Monitor
Application Server Name
Invocation
Monitor Mode
Processes to Monitor
Process Owner
Instance Count
Restart Count
Restart Interval
Action on Application Failure
Cleanup Method
Restart Method
Monitor Method
Monitor Interval
Hung Monitor Signal
Stabilization Interval
Application Monitor to Suspend
Monitor Name
Application Monitor to Resume
Custom Application Monitor to Remove
Process Application Monitor to Remove
Application Monitor to Change
Configure Process Application Monitor or an Application Startup Monitor
Configure Process Application Monitors
Configure Custom Application Monitor and Application Startup Monitor
Configure Custom Application Monitors
Suspend Application Monitoring
Resume Application Monitoring
Select an Application Server to Monitor
Select a Process Application Monitor
Application Monitor to Remove
Suspend/Resume Application Monitoring
Monitor Name(s)
Application Monitor Name(s)
Application Server(s) to Monitor
Application Server Name(s)
Startup Monitoring,Long-running monitoring,Both
Suspend Application Monitor(s) for Application Server
Resume Application Monitor(s) for Application Server
Long-running monitoring,Startup Monitoring,Both
Application startup mode
background,foreground
Suspend Application Monitor(s) for Application Controller
Resume Application Monitor(s) for Application Controller
Monitor Retry Count
Enable CPU usage statistics
Process to monitor CPU usage
CPU usage monitor interval
Enable Availability Metrics Logging
Configure Tape Resources
Add a Tape Resource
Change/Show a Tape Resource
Remove a Tape Resource
Select Tape Resource
Change Tape Resource
Tape Resource Name
Tape Device Name
Tape Resources
Tape Resource to Remove
Change/Show a Custom Remote Notification Method
Remove a Custom Remote Notification Method
Add/Remove the nodes/ports
Nodename
TTY
Method name
Description
Add a Dynamic Node Priority Policy
Change/Show a Dynamic Node Priority Policy
Remove a Dynamic Node Priority Policy
Dynamic Node Priority Policy Name
Dynamic Node Priority Policy Description
Resource Variable
Condition
New Dynamic Node Priority Policy Name
largest,smallest
Select a Dynamic Node Priority Policy to Change/Show
Select a Dynamic Node Priority Policy to Remove
All_Networks
# Network_Name List_of_Subnets
NON-IP
public,private
serial
The following line is not valid for Add a Network:
The selected subnets should belong to the same type of network.
Change/Show an IP-based Adapter
Change/Show a Non IP-based Adapter
Adapter Label
New Adapter Label
New Adapter IP Label
Configure IP-based Networks
Discover IP Topology
Show Networks
Cluster-wide Configuration
Local Configuration
Add an IP-based Network
Subnet
Choose subnets for the new network
Select an IP-based Network to change
Change an IP-based Network
Subnets
Add Subnets
Remove Subnets
Choose additional subnets for the network
Choose subnets for the new network
Choose subnets to remove from the network
Select an IP-based Network to show
Select an IP-based Network to Remove
Configure Non IP-based Networks
Add a Non IP-based Network
Select a Non IP-based Network to Change
Change a Non IP-based Network
Select a Non IP-based Network to show
Device Name
New Device Name
Select a Non IP-based Network to Remove
Adapters on IP-based network
Adapters on Non IP-based network
Configure IP-based Adapters
Choose Network for new Adapter
Add an Adapter on a new Network
Add an IP-based Adapter
Adapter IP address
Add an IP Adapter
Configure Non IP-based Adapters
Add a Non IP-based Adapter
Change a Non IP-based Adapter
Initializing..
Collecting information from local node...
Processing...
Storing information in file
# No subnets are known. You can discover the cluster
# configuration by selecting Discover Current Network
# Configuration.
# Network_Name List_Of_Subnets
%1$s: Problem accessing the configuration file %2$s.
Discover Current Volume Group Configuration
Automatically Import Volume Groups
No volume group information is known. You can discover the volume groups by going to the 'Discover PowerHA SystemMirror-related Information from Configured Nodes' panel, or type in the name of desired volume group.
Storing the following information in file %1$s
Select Custom Disk Methods
Add Custom Disk Methods
Change/Show Custom Disk Methods
Remove Custom Disk Methods
Break reserves in parallel
Custom Disk Methods
Disk Type (PdDvLn field from CuDv)
Method to break a reserve
Method to determine if a reserve is held
Method to identify ghost disks
Method to make the disk available
New Disk Type
graceful,takeover,forced
Select an Action on Resource Groups
PCI Hot Plug Replace Network Adapters
Available PCI Hot Plug Network Adapters Managed by PowerHA SystemMirror
Network Adapter to Replace
Shutdown PowerHA SystemMirror and Move Resource Group(s) if needed?
false,true
Event name
Recovery program path
Resource name
Selection string
Expression
Rearm expression
Select Custom User Defined Events
Custom User Defined Events
Add a Network Module
Change a Network Module using Predefined Values
Change a Network Module using Custom Values
Show a Network Module
Remove a Network Module
Network Module to Change
Network Module to Show
Change/Show PowerHA SystemMirror Workload Manager Run-time Parameters
Primary Workload Manager Class
Secondary Workload Manager Class
Workload Manager Configuration
No Workload Manager Classes defined.
Change/Show Time Until Warning
NOTE: Changes made to this panel must be propagated to the other nodes by Verifying and Synchronizing the cluster
Max. Event-only Duration (in seconds)
Max. Resource Group Processing Time (in seconds)
minutes and
seconds
Total time to process a Resource Group event before a warning is displayed
Show Event Summaries
Save Event Summaries to a file
Clear Event Summary History
Save to path / file
IP Label
Network
Discovered-Network
Subnet
Change/Show Resource Group Processing Order
Resource Groups Acquired in Parallel
Serial Acquisition Order
New Serial Acquisition Order
Resource Groups Released in Parallel
Serial Release Order
New Serial Release Order
Update PowerHA SystemMirror Communication Interface with Operating System Settings
Begin analysis on YEAR (1970-2038)
Begin analysis at HOUR (00-23)
End analysis on YEAR (1970-2038)
End analysis at HOUR (00-23)
Application Availability Analysis
Select an Application
New Tape Resource Name
Configure GPFS
Configure a GPFS Cluster
List All GPFS Filesystems
Add a GPFS Filesystem
Remove GPFS Filesystem
Remove GPFS Cluster
Configure GPFS Cluster
GPFS Filesystem Name
GPFS Filesystem Mount Point
hdisk(s)
Force Create Filesystem
GPFS Filesystem Name(s)
Force Remove GPFS Filesystem(s)
Remove existing GPFS filesystems
NOTE: Setting this option to 'true' will delete all existing GPFS filesystems and all the data in the GPFS filesystems will be lost.
Change/Show Daily Fallback Timer Policy
Change/Show Monthly Fallback Timer Policy
Change/Show Specific Date Fallback Timer Policy
Change/Show Weekly Fallback Timer Policy
Change/Show Yearly Fallback Timer Policy
Configure Daily Fallback Timer Policy
Configure Monthly Fallback Timer Policy
Configure Specific Date Fallback Timer Policy
Configure Weekly Fallback Timer Policy
Configure Yearly Fallback Timer Policy
Remove Fallback Timer Policy
Select Fallback Timer Policy to Change
Select Fallback Timer Policy to Remove
Select Recurrence for Fallback Timer Policy
Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec
Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday
Recurrence for Fallback Timer Policy
Name of the Fallback Policy
Day of Month (1-31)
Week Day (Sun - Sat)
HOUR (0-23)
MINUTES (0-59)
MONTH (Jan - Dec)
YEAR
Daily
Weekly
Monthly
Yearly
Specific Date
Add a Delayed Fallback Timer Policy
Change/Show a Delayed Fallback Timer Policy
Configure Delayed Fallback Timer Policies
Remove a Delayed Fallback Timer Policy
NOTE: PowerHA SystemMirror must be RESTARTED on all nodes in order for change to take effect
NOTE: All user-configured PowerHA SystemMirror information WILL BE DELETED by this operation.
Add Discovered Communication Interface and Devices
Add Pre-defined Communication Interfaces and Devices
Communication Interfaces
Communication Devices
# Discovery last performed: (%s)
Not Performed
Discovering Volume Group Configuration
Rotating
Concurrent
Custom
Discovered IP-based Network Types
Discovered Serial Device Types
Pre-defined IP-based Network Types
Pre-defined Serial Device Types
Select Resource to Customize
Action on Resource Failure
notify,fallover
Customize Resource Recovery
Customize Resource Group and Resource Recovery
Customize Inter-Site Resource Group Recovery
Action on Resource Group Failure
Configurable on Multiple Nodes
Bound to a Single Node
Network Interfaces
RS-232 Devices
Target-Mode SCSI Devices
Target-Mode SSA Devices
X.25 Communication Interfaces
SNA Communication Links
Physical Disk Devices
Select a Node
Node IP Label/Address
Select Node IP Label/Address
New Node IP Label/Address
Select a Node and IP Label/Address
Select a category
Select the Pre-Defined Communication type
Select one or more Discovered Communication Interfaces to Add
Select Point-to-Point Pair of Discovered Communication Devices to Add
IP Label/Address
Network Interface
Device Path
Communication Path to Node
Select a Node to Change/Show
Persistent Node IP Label/Address
Select one or more nodes to delete
Select a Resource Group Management Policy
Inter-Site Management Policy
Participating Node Names
Select a Resource Group to Change/Show
Change/Show Resources and Attributes for a Resource Group
Resource Group Management Policy
Inter-site Management Policy
Dynamic Node Priority (Overrides default)
Service IP Labels/Addresses
Filesystems (empty is ALL for volume groups specified)
Filesystems/Directories to NFS Mount
Inactive Takeover Applied
Workload Manager Class
Concurrent Volume Groups
Select a Communication Interface/Device to Change/Show
Select a Service IP Label/Address type
Alternate HW Address to accompany IP Label/Address
Select a Service IP Label/Address to Change/Show (extended)
New IP Label/Address
New Nodes (via selected communication paths)
Currently Configured Node(s)
Select a Service IP Label/Address to Change/Show
Select Service IP Label(s)/Address(es) to Remove
Select a Fallover/Fallback Policy
Participating Nodes (Default Node Priority)
Service IP Label/Addresses
Select a Standard Resource Group to Remove
Dominance
Backup Communications
Select one or more Communication Interfaces/Devices to Remove
Select a Network Type
Enable IP Address Takeover via IP Aliases
IP Address Offset for Heartbeating over IP Aliases
Select a Network to Change/Show
Select a Network to Remove
Emulate or Actual
Force synchronization if verification fails?
Verify changes only?
Logging
Verify, Synchronize or Both
Cluster File
Start Processing Synchronously
Stop Processing Synchronously
SNA DLC Name
HIGH water mark for pending write I/Os per file
LOW water mark for pending write I/Os per file
New Event Name
Add/Remove the nodes/ports
Nodename
TTY
Method name
Nodename(s)
Number to dial or Cell Phone Address
Filename
Cluster event(s)
Retry counter
TIMEOUT
Change/Show the Notification Method
Choose a Remote Notification Method
Remove the Notification Method
EVENTNAME
Startup Policy
Fallover Policy
Fallback Policy
Filesystems (empty is ALL for VGs specified)
Fallback Timer Policy (empty is 
immediate)Select Fallback Timer PolicySettling Time (in Seconds)Resource Group NameSelect a Node on which to Open a SMIT Session Boot-time IP Label of available Network InterfaceService IP Label to MoveError ClassClass TypeError Label to EmulateError Label NameSelect a Device NetworkSelect a NetworkDynamic Node Priority PolicyAutomatically correct errors found during verification? Standard,VerboseBoth,Synchronize,VerifyNo,Interactively,YesParticipating Nodes from Primary SiteParticipating Nodes from Secondary SiteNew method nameAssociated SiteNetmask(IPv4)/Prefix Length(IPv6)DNP Script pathDNP Script timeout valueEnable AIX Live Update operationDetailed ChecksDB Instance Shutdown OptionAdd a Persistent Node IP Label/AddressChange/Show a Persistent Node IP Label/AddressAdd a Communication InterfaceAdd a Communication DeviceDiscover ConfigurationAdd a Node to the PowerHA SystemMirror ClusterChange/Show a Node in the PowerHA SystemMirror ClusterAdd/Change/Show a PowerHA SystemMirror ClusterRemove a PowerHA SystemMirror ClusterChange/Show All Resources and Attributes for a Resource GroupAdd a Resource Group (extended)Change/Show a Resource GroupChange/Show a Communication InterfaceAdd a Service IP Label/Address configurable on Multiple Nodes (extended)Add a Service IP Label/Address bound to a single node (extended)Add Application ServerChange/Show Application ServerConfigure Nodes to a PowerHA SystemMirror Cluster (standard)Add a Service IP Label/Address (standard)Change/Show a Service IP Label/Address (standard)Remove Service IP Label(s)/Address(es)Remove a Communication Interface/DeviceChange a Cluster Network Module using Pre-defined ValuesChange a Cluster Network Module using Custom ValuesShow a Cluster Network ModuleAdd an IP-Based Network to the PowerHA SystemMirror ClusterAdd a Serial Network to the PowerHA SystemMirror ClusterChange/Show an IP-Based Network in the PowerHA SystemMirror ClusterChange/Show a Serial Network in the PowerHA SystemMirror 
ClusterPowerHA SystemMirror Verification and Synchronization (Active Cluster Nodes Exist)PowerHA SystemMirror Verification and SynchronizationPowerHA SystemMirror Verification and Synchronization (Active Cluster)Add Communication AdapterList All Communication AdaptersList All Communication LinksAdd Highly Available SNA-over-LAN LinkRemove a Custom Cluster EventAdd a Custom User-Defined EventChange/Show a Custom User-Defined EventRemove a Custom User-Defined EventConfigure Node/TTY pairsAdd a Custom Remote Notification MethodChange/Show a Custom Remote Notification MethodRemove a Custom Remote Notification MethodSend a Test Remote MessageChange/Show a Service IP Label/Address (extended)Change/Show a Node-Bound Service IP Label/Address (extended)Configure Settling Time for Resource GroupsChange/Show a Resource Group (standard)Add a Resource GroupPowerHA SystemMirror ConfigurationVerify and Synchronize PowerHA SystemMirror ConfigurationRelease Lock Set By Dynamic ReconfigurationEmulate Node Up EventEmulate Node Down EventEmulate Network Up EventEmulate Network Down EventEmulate Fail Standby EventEmulate Join Standby EventEmulate Swap Adapter EventConfigure Distribution Policy for Resource GroupsResource Group Distribution PolicyConfigure Resource Group DistributionCustomAdd Communication Interfaces/DevicesChange/Show Communication Interfaces/DevicesRemove Communication Interfaces/DevicesUpdate PowerHA SystemMirror Communication Interface with Operating System SettingsRemove a Node from the PowerHA SystemMirror ClusterChange / Show a Persistent Node IP Label/AddressRemove a Persistent Node IP Label/AddressAdd a Resource Group (standard)Change/Show a Resource Group (standard)Remove a Resource Group (standard)Change/Show Resources for a Resource Group (standard)Add a Service IP Label/AddressChange/Show a Service IP Label/AddressConfigure Resource Group Processing OrderingConfigure PowerHA SystemMirror Workload Manager ParametersConfigure Application ServersConfigure 
Service IP Labels/AddressesConfigure Volume Groups, Logical Volumes and FilesystemsConfigure Concurrent Volume Groups and Logical VolumesPowerHA SystemMirror for AIXDiscover PowerHA SystemMirror-related Information from Configured NodesExtended Topology ConfigurationExtended Resource ConfigurationExtended Event ConfigurationExtended Verification and SynchronizationExtended Performance Tuning Parameters ConfigurationSecurity and Users ConfigurationSnapshot ConfigurationConfigure Pre/Post-Event CommandsChange/Show Pre-Defined PowerHA SystemMirror EventsConfigure User-Defined EventsConfigure Remote Notification MethodsPowerHA SystemMirror Extended Resources ConfigurationConfigure Resource Group Run-Time PoliciesPowerHA SystemMirror Extended Resource Group ConfigurationConfigure a PowerHA SystemMirror ClusterConfigure PowerHA SystemMirror NodesConfigure PowerHA SystemMirror SitesConfigure PowerHA SystemMirror NetworksConfigure PowerHA SystemMirror Communication Interfaces/DevicesConfigure PowerHA SystemMirror Persistent Node IP Label/AddressesConfigure PowerHA SystemMirror Global NetworksConfigure PowerHA SystemMirror Network ModulesConfigure Topology Services and Group ServicesShow PowerHA SystemMirror TopologyChange/Show Resources and Attributes for a Resource GroupRemove a Resource GroupShow All Resources by Node or Resource GroupConfigure PowerHA SystemMirror Service IP Labels/AddressesConfigure PowerHA SystemMirror Application ServersConfigure PowerHA SystemMirror Application MonitoringConfigure PowerHA SystemMirror Tape ResourcesConfigure PowerHA SystemMirror Communication Adapters and LinksConfigure Custom Disk MethodsInitialization and Standard ConfigurationExtended ConfigurationSystem Management (C-SPOC)Problem Determination ToolsVerify PowerHA SystemMirror ConfigurationAdd Nodes to a PowerHA SystemMirror ClusterConfigure Resources to Make Highly AvailableConfigure PowerHA SystemMirror Resource GroupsVerify and Synchronize PowerHA SystemMirror 
ConfigurationDisplay PowerHA SystemMirror ConfigurationPowerHA SystemMirror VerificationView Current StatePowerHA SystemMirror LogsRecover From PowerHA SystemMirror Script FailureRestore PowerHA SystemMirror Configuration Database from Active ConfigurationRelease Locks Set By Dynamic ReconfigurationPowerHA SystemMirror Trace FacilityPowerHA SystemMirror Event EmulationPowerHA SystemMirror Error NotificationOpen a SMIT Session on a NodeAdd a Network to the PowerHA SystemMirror ClusterChange/Show a Network in the PowerHA SystemMirror ClusterRemove a Network from the PowerHA SystemMirror ClusterChange/Show Topology and Group Services configurationAdd a Tape ResourceChange/Show Communication AdapterRemove a Communication AdapterChange Highly Available SNA-over-LAN LinkChange Highly Available SNA-over-X.25 LinkAdd Custom User-Defined Events Change/Show Custom User-Defined Events Remove Custom User-Defined Events Configure a Node/Port PairChange/Show Custom Remote Notification MethodRemove Custom Remote Notification MethodEnable/Disable Tracing of PowerHA SystemMirror for AIX daemonsStart/Stop/Report Tracing of PowerHA SystemMirror for AIX ServicesRemove a Node/Port PairCollect Cluster log files for Problem ReportingExport Definition File for Online Planning WorksheetsConfigure a PowerHA SystemMirror Cluster and NodesImport Cluster Configuration from Online Planning Worksheets FileCan't find what you are looking for?Not sure where to start?Primary / Local NodePowerHA for AIXDon't see the resource(s) you want to work with?Volume Groups for ReplicationThese options let you (optionally) specify additional cluster resources: An application server is used to start and stop an application: PowerHA SystemMirror can keep an IP address highly available: Press enter to verify and synchronize the clusterGLVM Cluster Configuration Assistant# # An error occurred while trying to discover the configuration. 
# See the log file /tmp/2siteglvm_configassist_failures.log # for more details. # # Press F3 to return. ## # Configured %1$s as the local node. # # Select the name of the remote node and press enter # to run discovery on that node. If you don't see the # system you want listed here, add the hostname to /etc/hosts # and run this assistant again. # Enter path to Backup / Remote Node# # The glvm fileset glvm.rpv is not installed on the remote node. # Install GLVM on the remote node and then re-run this assistant. # # Press F3 to return to the main menu. # # # GLVM requires volume groups that are defined as "scalable" # and are not rootvg. You must first define a suitable # volume group using C-SPOC, then return to this assistant. # Press F3 to return. # # The following volume groups are not viable candidates for the # reason listed: # # # This configuration assistant is for configuring a PowerHA SystemMirror cluster # with GLVM from scratch; however, it appears the cluster is already # configured with GLVM. # You can use the regular SMIT menus to manage your cluster. # # Press F3 to return to the main menu.# # This configuration assistant aids in configuring a PowerHA SystemMirror cluster # with GLVM from scratch. The glvm fileset glvm.rpv is not installed # on this node. Install GLVM before using this assistant. # # Press F3 to return to the main menu. This configuration assistant aids in configuring a PowerHA SystemMirror cluster with GLVM from scratch. It appears that there are RPVs already configured, and this assistant will help you configure a PowerHA SystemMirror cluster to keep the RPVs highly available. Press enter to proceed with discovery of the configuration. This configuration assistant aids in configuring a PowerHA SystemMirror cluster with GLVM. A PowerHA SystemMirror cluster is already configured, so your next step is to select the volume group(s) to use for GLVM. Press enter to proceed. 
This configuration assistant aids in configuring a PowerHA SystemMirror cluster with GLVM from scratch. Press enter to proceed with discovery of the configuration on the local node. After that you will be prompted to select the remote node and the volume group(s) to use for GLVM. Press enter to proceed with discovery of the configuration. A failure occurred while trying to configure a PowerHA SystemMirror cluster from the local node and the remote node (found from the target of the configured rpv server). See the log file /tmp/2siteglvm_configassist_failures.log for more details. Cluster Configuration AssistantEnter path to Backup / Remote Node(s)# # This configuration assistant is for configuring a PowerHA SystemMirror cluster # from scratch; however, it appears the cluster is already configured. # You can use the regular SMIT menus to manage your cluster. # # Press F3 to return to the main menu. # This configuration assistant aids in configuring a PowerHA SystemMirror cluster from scratch. Press enter to proceed with discovery of the configuration on the local node. After that you will be prompted to select the remote node and other information about the cluster. Press enter to proceed with discovery of the configuration.What is an Application Server?PowerHA SystemMirror provides high availability for applications by detecting and recovering from failures of resources the application depends on - like disks and network adapters - as well as the application itself. To define the application to PowerHA SystemMirror you create an "Application Server" and then add the Application Server to a resource group. PowerHA SystemMirror will start and stop the application by calling the scripts you provide. If you use a Configuration Assistant, PowerHA SystemMirror provides the scripts for you. Resource Group BasicsA resource group defines all the resources needed by an application, for example, a network interface, disks, and the application software itself. 
PowerHA SystemMirror manages these resources as a unit so the entire application can be started, restarted and moved as failures occur. The Basic Properties of the group are the name, the list of nodes that can host the group, and the different policies for how you want the group to respond at startup, failure, and recovery. The Change/Show Resources option lets you change the individual resources - the Service IP, Storage and Applications - that are managed by the Resource Group. # # There are no resource groups defined. # That's OK, you can define your resource groups later # then add the resources to the respective group. ## # There are no application servers defined. Follow the # SMIT paths to configure your Application Server(s) first # before adding them to a Resource Group. ## # There are no networks defined. Configure the Cluster, nodes # and networks first - PowerHA SystemMirror will discover almost everything automatically - # then configure your service IP labels. ## # PowerHA SystemMirror will use the interfaces on the selected network to keep # the Service IP available. # # Press F3 to return.Discovery failed with exit code %1$d What's the difference?Failed creating Application Server %1$s Successfully added Application Server %1$s You must add this Application Server to a Resource Group for PowerHA SystemMirror to keep it highly available. Visit the SMIT panel for Change/Show All Resources and Attributes for a Resource Group Failed adding Application Server %1$s to Resource Group %2$sSuccessfully added Application Server %1$s to Resource Group %2$s This change will take effect after you Verify and Synchronize the configuration Failed creating Service Label %1$s Check the spelling and/or format of the input and make certain the address is defined in /etc/hostsSuccessfully added Service Label %1$sYou must add this Service Label to a Resource Group for PowerHA SystemMirror to keep it highly available. 
Visit the SMIT panel for Change/Show All Resources and Attributes for a Resource Group Failed adding Service Label %1$s to Resource Group %2$sSuccessfully added Service Label %1$s to Resource Group %2$s This change will take effect after you Verify and Synchronize the configuration What is an Application Server anyway?Change/Show Basic Attributes of a Resource GroupChange/Show Resources in a Resource GroupWhat is a Resource Group anyway?Configure Nodes to a PowerHA SystemMirror Cluster What is a Service Label anyway?Getting StartedPowerHA SystemMirror provides high availability by monitoring and responding to failures of specific resources. These resources are combined into groups which comprise all the resources associated with a specific application. PowerHA SystemMirror manages a resource group as a unit - if one or more resources in the group fails, PowerHA SystemMirror recovers them or moves the entire group / application to a backup node. Configuring PowerHA SystemMirror consists of defining the set of nodes and networks where PowerHA SystemMirror will run, defining the resources and resource groups PowerHA SystemMirror is to monitor and recover, verifying the configuration and correcting any problems, then synchronizing the configuration to all nodes in the cluster. Once configured, you will start cluster services to have PowerHA SystemMirror begin managing and recovering the resources. The most commonly used options for configuring PowerHA SystemMirror are found under the 'Initialization and Standard Configuration' SMIT path. Advanced configuration options are found under the 'Extended Configuration' SMIT path. 'System Management Tools (C-SPOC)' provides tools for configuring, changing and managing the cluster, including starting and stopping PowerHA SystemMirror. 'Problem Determination Tools' provides tools for diagnosing and correcting any problems. Use the Configuration Assistants to quickly configure PowerHA SystemMirror with DB2, WebSphere or Oracle. 
If you are not using these applications or prefer to set up PowerHA SystemMirror directly, use the other options here. Configuring PowerHA SystemMirror consists of defining the set of nodes and networks where PowerHA SystemMirror will run, defining the resources and resource groups PowerHA SystemMirror is to monitor and recover, verifying the configuration and correcting any problems, then synchronizing the configuration to all nodes in the cluster. Once configured, you will start cluster services to have PowerHA SystemMirror begin managing and recovering the resources. Configuration Assistants for specific ApplicationsBasic Configuration of Cluster Nodes and NetworksConfigure Resource GroupsVerify and Synchronize Cluster ConfigurationDisplay Cluster ConfigurationPowerHA SystemMirror provides high availability for applications by detecting and recovering from failures of resources the application depends on - like disks and network adapters - as well as the application itself. These resources are managed as a group so that the entire application can be recovered as a unit. The first step is to create the group and define the basic attributes like the nodes that can host it, and how you want it to behave at startup and when a failure occurs. Once you have defined the basics you can begin to add resources - service IPs, applications and storage - into the group. PowerHA SystemMirror provides high availability for applications by detecting and recovering from failures of resources the application depends on, like disks and network adapters, as well as the application itself. The service IP is the address a client outside the cluster would use to access the application running on the cluster nodes. PowerHA SystemMirror keeps this address highly available by moving it between network interfaces as those interfaces fail. The options listed here are the most basic and most common for configuring typical clusters. 
For advanced configuration options, navigate to the 'Extended Configuration' SMIT path. There you will find all possible resources and resource group options. Deciding on shared storage optionsPowerHA SystemMirror provides high availability for applications by detecting and recovering from failures of resources the application depends on - like disks and network adapters - as well as the application itself. PowerHA SystemMirror can manage the storage between nodes, whether that storage is direct-attached disks, a SAN or NFS.# # Once you have configured the cluster, this option will # verify the configuration and propagate it to all nodes # in the cluster. # Define your cluster first before using this option. # # Press F3 to return to the main menu. # Any time you change the cluster configuration, those changes must be propagated to all nodes in the cluster. PowerHA SystemMirror will also perform a set of verification checks to make sure the configuration is viable. Depending on the nature of the changes, this operation can disrupt the applications currently under PowerHA SystemMirror control. Press enter when you are ready to begin.Failed adding resource group %1$sAdded resource group %1$s You can now add resources to the group using the 'Change/Show Resources for a Resource Group' SMIT path.Cluster Test ToolUsing the Cluster Test Tool The cluster test tool executes a predefined set of tests to help verify the setup and behavior of your cluster. These tests are disruptive and should not be run on a cluster running a production application! Press enter when you are ready to begin or PF3 to return to the menu # # The cluster test tool executes a predefined set of tests # to help verify the setup and behavior of your cluster. # To use the test tool you must first configure the cluster. # # Press F3 to return to the main menu. ## # There are no resource groups defined. # Define your resources and resource groups # then add the resources to the respective group. 
#The cluster, nodes and networks must be defined before you can add resources and resource groups. Use F3 to return to the menu.There are no resource groups defined. Define your resources and resource groups then add the resources to the respective group. There are no resource groups defined. # # There are no application servers defined ## # The cluster, nodes and networks must be defined before you can # add resources and resource groups. Use F3 to return to the menu. ## # There are no service labels defined. ## # Select from the list below (taken from /etc/hosts) or type in a value # # # Service IPs are not supported with resource groups # using the Online on All Available Nodes startup policy. # That's OK, you can define your resource groups later # then add this Service Label to the respective group. # The Resource Group has been removed. The resources that used to be in this group still exist. You can now remove them or add them to a different group.This configuration assistant aids in configuring a PowerHA SystemMirror cluster from scratch. On the next screen you will be prompted for the local and remote cluster node names as well as some information about an Application Server and a Service IP. These resources will be added to a Resource Group that PowerHA SystemMirror will keep highly available. After the initial configuration is complete you can use the rest of the menus to configure and manage your cluster. You can use F1 for more help on any entry. Press enter to proceed. Don't see the application you are interested in?Successfully updated resource group %1$s This change will take effect after you Verify and Synchronize the configuration There is no cluster defined. You must first define the cluster, nodes and networks (PowerHA SystemMirror will discover everything automatically for you) before using this assistant to add an NFS resource group.This configuration assistant helps you configure a resource group with NFS crossmounts. 
PowerHA SystemMirror will keep the crossmounts available by recovering the service IP you specify and re-mounting the NFS directories after a failure. Fill in all required fields on the next screen (Hint: use F1 for help on any field) and when you are done, PowerHA SystemMirror will complete the configuration process.This utility lists all the SMIT paths available with PowerHA SystemMirror. You can search for the function you are looking for, and if the option is not preceded by a '#' you can select it to go directly to that option. The SMIT fastpath is listed next to each option.# Select this next line (yes, it's blank) to leave this field empty. # You can add this resource to a resource group later. Manage Cluster Services and Resource Groups# # There are no application controllers defined ## Don't see what you are looking for? # To add a node, you must first create entries in # /etc/hosts and /etc/cluster/rhosts. # The following entries in /etc/hosts do not exist # in /etc/cluster/rhosts: # Change/Show Group Services Log File SizeRecover Resource Group From SCSI Persistent Reserve ErrorSelect the RG in ERROR stateSelect a Recurrence for Fallback Timer PolicySelect the NetworkSelect the NodeSelect the InterfacePowerHA SystemMirror ServicesCommunication InterfacesResource Group and ApplicationsSecurity and UsersLogical VolumesPowerHA SystemMirror Concurrent Logical Volume ManagementPhysical VolumesMove a Resource Group to Another NodeConfigure Communication Interfaces/Devices to the Operating System on a NodeSwap IP Addresses between Communication InterfacesPCI Hot Plug Replace a Network Interface CardList All Concurrent Volume GroupsCluster Disk ReplacementCluster Data Path Device ManagementEnhanced Journaled File SystemsView/Save/Remove PowerHA SystemMirror Event SummariesView Detailed PowerHA SystemMirror Log FilesChange/Show PowerHA SystemMirror Log File ParametersView Event SummariesRemove Event Summary HistoryScan the PowerHA SystemMirror for AIX Scripts 
log.Watch the PowerHA SystemMirror for AIX Scripts log.Scan the PowerHA SystemMirror for AIX System log.Watch the PowerHA SystemMirror for AIX System log.Change/Show PowerHA SystemMirror Security ModeUsers in a PowerHA SystemMirror clusterGroups in a PowerHA SystemMirror clusterPasswords in a PowerHA SystemMirror clusterInterface IP LabelWARNING: Only use this option at the direction of IBM Support PersonnelCollection pass numberNodes to collect data fromDebugCollect rsct log filesSave Cluster Log Files in snapshotReset Cluster TunablesMessage LevelCreate cluster snapshot firstSynchronize Cluster ConfigurationFile NameCluster NotesPowerHA SystemMirror Cluster SecurityConfigure Connection Authentication ModeConfigure Message Authentication Mode and Key ManagementConfigure Message Authentication ModeGenerate/Distribute a KeyEnable/Disable Automatic Key DistributionActivate the new key on all PowerHA SystemMirror cluster nodesMessage Authentication ModeEnable EncryptionType of Key to GenerateDistribute a KeyEnable/Disable Key DistributionActivate the key on all PowerHA SystemMirror cluster nodesConnection Authentication ModeUse Persistent Labels for VPN Tunnelsmd5_des,md5_3des,md5_aes,noneChange/Show Cluster Manager Log File ParametersStorageTwo-Node Cluster Configuration AssistantCommunication Path to Takeover NodeApplication Server NameApplication Server Start ScriptApplication Server Stop ScriptService IP LabelPowerHA SystemMirror Two-Node Cluster Configuration AssistantInitializing Configuration AssistantAssistant Initialization FAILED!Step 1 of 4: Topology ConfigurationConfiguring TopologyTopology Configuration FAILED!Step 2 of 4: Application Server ConfigurationConfiguring Application ServerApplication Server Configuration FAILED!Step 3 of 4: Resource ConfigurationConfiguring ResourcesResource Configuration FAILED!Step 4 of 4: Verification and SynchronizationVerifying and Synchronizing Cluster Configuration.Please wait. This may take some time.Congratulations! 
Cluster Configuration is Now Complete!Verification and Synchronization FAILED!PowerHA SystemMirror Cluster ConfigurationExit<< BackNext >>StartFinishedOKConfiguration AssistantsApplication NameGLVM Cluster Configuration AssistantPersistent IP for Local NodePersistent IP for Takeover NodePowerHA SystemMirror can keep an IP address highly available: Consider specifying Service IP labels and Persistent IP labels for your nodes. PowerHA SystemMirror File Collection ManagementFile CollectionsManage Files in File CollectionsPropagate Files in File CollectionsAdd a File CollectionChange/Show a File CollectionRemove a File CollectionChange/Show Automatic Update TimeAdd Files to a File CollectionRemove Files from a File CollectionFile Collection NameFile Collection DescriptionPropagate files during cluster synchronization?Propagate files automatically when changes are detected?Select a File CollectionNew File Collection NameCollection filesNew FileSelect one or more files to remove from this File CollectionAutomatic File Update Time (in minutes)PowerHA SystemMirror Cluster Test ToolExecute Automated Test ProcedureExecute Custom Test ProcedureExecute Automated Test Procedure (standard)Execute Automated Test Procedure (extended)Verbose LoggingCycle Log FileAbort On ErrorTest PlanVariables FileConfigure Dependencies between Resource GroupsAdd Parent/Child Dependency between Resource GroupsChange/Show Parent/Child Dependency between Resource GroupsRemove Parent/Child Dependency between Resource GroupsDisplay All Parent/Child Resource Group DependenciesSelect the Parent Resource GroupSelect the Child Resource GroupSelect a Parent/Child Resource Group Dependency to Change/ShowSelect a Parent/Child Resource Group Dependency to DeleteSelect a Base for DisplayingDisplay per ParentDisplay per ChildAdd Online on the Same Node Dependency Between Resource GroupsChange/Show Online on the Same Node Dependency Between Resource GroupsRemove Online on the Same Node Dependency Between Resource 
GroupsDisplay All Resource Group Dependencies per ParentDisplay All Resource Group Dependencies per ChildParent Resource GroupChild Resource GroupResource Groups to be Online on the same siteNew Resource Groups to be Online on the same siteOld Parent Resource GroupNew Parent Resource GroupOld Child Resource GroupNew Child Resource GroupSelect a Base for DisplayingConfigure Parent/Child DependencyConfigure Online on the Same Node DependencyConfigure Online on Different Nodes DependencyConfigure Online on the Same Site DependencyAdd Online on the Same Site Dependency Between Resource GroupsChange/Show Online on the Same Site Dependency Between Resource GroupsRemove Online on the Same Site Dependency Between Resource GroupsResource Groups to be Online on the same nodeSelect Online on the same node Dependency to Change/ShowNew Resource Groups to be Online on the same nodeSelect Online on the same node Dependency to DeleteHigh Priority Resource Group(s)Intermediate Priority Resource Group(s)Low Priority Resource Group(s)Select Online on the same site Dependency to Change/ShowSelect Online on the same site Dependency to DeleteSource Resource GroupTarget Resource GroupSelect a Start After Resource Group Dependency to Change/ShowOld Source Resource GroupNew Source Resource GroupOld Target Resource GroupNew Target Resource GroupSelect a Start After Resource Group Dependency to RemoveSourceTargetDisplay per SourceDisplay per TargetSelect a Stop After Resource Group Dependency to Change/ShowSelect a Stop After Resource Group Dependency to DeleteAdd Online on Different node Dependency between Resource GroupsChange/Show Online on Different node Dependency between Resource GroupsRemove Online on Different node Dependency between Resource GroupsSelect Online on Different node Dependency to Change/ShowSelect Online on Different node Dependency to DeleteRemove Online on Different node Dependency between Resource GroupsSelect a Resource Group Dependency to Change/ShowSelect a 
Resource Group Dependency to DeleteConfigure PowerHA SystemMirror ApplicationsConfigure PowerHA SystemMirror for Dynamic LPAR and CUoD ResourcesConfigure Communication Path to HMC and CUoD serversConfigure Communication Path to HMCConfigure Dynamic LPAR and CUoD Resources for ApplicationsAdd HMC/CUoD IP addresses for a nodeAdd HMC IP addresses for a nodeChange/Show HMC/CUoD IP addresses for a nodeChange/Show HMC IP addresses for a nodeRemove HMC/CUoD IP addresses for a nodeRemove HMC IP addresses for a nodeHMC IP Address(es)Managed System NameCUoD Console IP AddressAdd Dynamic LPAR and CUoD Resources for ApplicationsChange/Show Dynamic LPAR and CUoD Resources for ApplicationsRemove Dynamic LPAR and CUoD Resources for ApplicationsSelect a Node to Change/Show the HMC IP AddressesSelect an Application Server to Configure ProvisioningMinimum number of CPUsEnforce Minimum CPU AvailabilityDesired number of CPUs Minimum amount of memory (in megabytes)Enforce Minimum Memory AvailabilityDesired amount of memory (in megabytes)Use CUoD if resources are insufficient?I agree to use CUoD resources (Using CUoD may result in extra costs)Configure PowerHA SystemMirror Dynamic LPAR and CUoD ResourcesConfigure Service IP Labels/Address Distribution PreferenceDistribution PreferenceSelect the Network to Change Service Label Distribution PreferenceConfigure Resource Distribution PreferencesAnti-Collocation,Collocation,Collocation with Persistent Label,Anti-Collocation with Persistent Label Minimum number of processing units Desired number of processing unitsAnti-Collocation,Anti-Collocation with Source,Collocation,Collocation with Source,Collocation with Persistent Label,Anti-Collocation with Persistent Label,Anti-Collocation with Persistent Label and SourceSource IP Label for outgoing packetsParameters for Shared Processor partitionsExtended Cluster Service SettingsStart PowerHA SystemMirror at system restart?BROADCAST message at startup?Startup Cluster Information Daemon?Verify Cluster 
Prior to Startup?Ignore verification errors?Automatically correct errors found during cluster start? Interactively,Yes,NoCluster Services SettingsCluster Event ConfigurationPerformance Tuning ParametersSecurity and UsersConfigure Custom Volume Group MethodsAdd Custom Volume Group MethodsVolume Group TypeMethod to List Volume Group NamesMethod for determining shared volume groups in a clustered environmentMethod to determine physical disks (hdisks) comprising the volume groupMethod for bringing volume group onlineMethod for forcing volume group onlineMethod for bringing volume group offlineMethod for verifying volume group configurationDirectories containing log informationChange/Show Custom Volume Group MethodsChoose A Volume Group TypeRemove Custom Volume Group MethodsChoose A Volume Group TypeConfigure Custom Filesystem MethodsAdd Custom Filesystem MethodsFilesystem TypeMethod for listing filesystem namesMethod for listing volume groups hosting a specified filesystemMethod for bringing filesystem onlineMethod for bringing filesystem offlineMethod for forcing the filesystem onlineMethod for performing a 'health check' on the filesystemMethod for verifying filesystem configurationChange/Show Custom Filesystem MethodsChoose A Filesystem TypeRemove Custom Filesystem MethodsChoose A Filesystem TypeMethod for determining volume group statusMethod for determining filesystem statusNew Volume Group TypeNew Filesystem TypeBring a Resource Group OnlineBring a Resource Group OfflineMove a Resource Group to Another Node / SiteMove Resource Groups to Another NodeMove Resource Groups to Another SiteMove Resource Group(s) to Another NodeMove Secondary Instance(s) of Resource Group(s) to Another NodeMove Resource Group(s) to Another SiteMove Secondary Instance(s) of Resource Group(s) to Another SiteShow the Current State of Applications and Resource GroupsMake Applications Highly Available (Use Smart Assists)Add an Application to the PowerHA SystemMirror ConfigurationChange/Show 
an Application's PowerHA SystemMirror ConfigurationRemove an Application from the PowerHA SystemMirror ConfigurationManage Your ApplicationsChange/Show the Resources Associated with Your ApplicationTest Your Application for AvailabilityConfigure PowerHA SystemMirror Cluster and NodesDefine SitesSelect an Application from the List of Discovered Applications BelowSelect the Specific Configuration You Wish to CreateEnter Communication Path to NodesEnter nodes for (primary) site "A"Select a Resource Group To Change its Resources and AttributesPrimary NodeTakeover NodesGeneral Application Smart AssistEnter nodes for (primary) site "B"NFS Export Configuration AssistantAutomatic Discovery And ConfigurationManual ConfigurationSelect Configuration ModePath to Smart Assist Configuration FileImport Smart Assist Configuration From an XML FileSmart Assist IDUnable to read the configuration file. Please ensure the path is correctNo Manual Configuration command available for this Smart AssistChange/Show the SAP Instance AttributesSelect the specific configuration you wish to createRun the registration scriptNote: The default file shown above is a sample file, with sample values. Enter the full path to an XML file which mimics that sample file, but has been populated with valid values.Cluster Services are Active,cannot remove an application Manage RSCT ServicesStop RSCT ServicesStop RSCT ServicesStop RSCT Services on the local node NOTE: Do not stop RSCT services unless IBM support suggests this. 
Stopping RSCT services disrupts other system services, such as enhanced concurrent mode volume groups, which may impact applications.ForceConfigure Application Monitoring and DependenciesConfigure ApplicationsConfigure Application Custom MonitorConfigure Parent/Child Application DependenciesApplication Dependency Group ManagementUpdate the configurationAdd ApplicationChange/Show ApplicationRemove ApplicationAdd Custom Application MonitorChange/Show Custom Application MonitorRemove a Custom Application MonitorAdd Parent/Child Dependency between ApplicationsChange Parent/Child Dependency between ApplicationsRemove Parent/Child Dependency between ApplicationsBring Application OnlineBring Application OfflineBring Application Dependency Group OnlineBring Application Dependency Group OfflineAdd ApplicationApplication NameStart Application MethodStop Application MethodApplication Custom Monitor NameProcesses to MonitorProcess OwnerInstance CountCleanup MethodRestart MethodDependent NetworkDependent IP Labels/AddressesDependent Volume GroupsDependent File SystemsSelect an ApplicationApplication to ChangeChange an ApplicationApplication NameNew Application NameStart Application MethodStop Application MethodMonitor NameProcesses to MonitorProcess OwnerInstance CountCleanup MethodRestart MethodDependent NetworkDependent IP Labels/AddressesDependent Volume GroupsDependent File SystemsSelect an ApplicationApplication to RemoveRemove an ApplicationApplication NameAdd Custom Application MonitorSelect a Process Application MonitorApplication Monitor to ChangeChange/Show Custom Application MonitorApplication Monitor to RemoveCustom Application Monitor to RemoveRemove a Custom Application MonitorMonitor ModeStabilization IntervalRestart CountRestart IntervalCustom Monitor MethodCustom Monitor IntervalCustom Monitor Hung SignalApplication Monitor to ChangeProcess Stabilization IntervalMonitor MethodMonitor IntervalHung Monitor SignalSelect the Parent ApplicationSelect the Child 
ApplicationAdd Parent/Child Dependency between ApplicationsParent ApplicationChild ApplicationDependency Group NameSelect a Parent/Child Application Dependency to Change/ShowSelect a Parent/Child Application Dependency to Change/ShowChange/Show Parent/Child Dependency between ApplicationsOld Parent ApplicationNew Parent ApplicationOld Child ApplicationNew Child ApplicationSelect a Parent/Child Application Dependency to DeleteRemove Parent/Child Dependency between ApplicationsUpdate the configurationStart ApplicationApplication to StartStop ApplicationApplication to StopStart Application Dependency GroupApplication Dependency Group to StartStop Application Dependency GroupApplication Dependency Group to StopAPPLICATION PROCESS MONITOR ATTRIBUTES:ParentChildAdd a Resource Group with NFS exportsChange/Show a Resource Group with NFS exportsRemove a Resource Group with NFS exportsSelect a Resource Group to RemoveAdd a Concurrent Logical Volume for Multi-Node Disk HeartbeatShow Volume Groups in use for Multi-Node Disk HeartbeatStop using a Volume Group for Multi-Node Disk HeartbeatConfigure failure action for Multi-Node Disk Heartbeat Volume GroupsAdd or Remove Nodes from a Multi-Node Disk Heartbeat NetworkSelect Logical Volume to use for HeartbeatPhysical Volume to reserve for HeartbeatSelect a Volume Group for Multi-Node Disk HeartbeatSelect Multi-Node Disk Heartbeat Network to RemoveSelect Multi-Node Disk Heartbeat Volume GroupExisting Logical Volume to reserve for heartbeatRemove Logical Volume from Volume Group?Optional notification methodOn loss of accessHalt the nodeBring the Resource Group offlineMove the Resource Group to a backup nodeManage Concurrent Access Volume Groups for Multi-Node Disk HeartbeatUse an existing Logical Volume and Volume Group for Multi-Node Disk HeartbeatCreate a new Volume Group and Logical Volume for Multi-Node Disk HeartbeatSoft FILE sizeSoft CPU timeSoft DATA segmentSoft STACK sizeSoft CORE file sizeHard FILE sizeHard CPU timeHard 
DATA segmentHard STACK sizeHard CORE file sizeERROR: The version of snapshot [%1$s] does not match the installed software version. You must use clconvert_snapshot first to convert the snapshot to be compatible with the installed version. See the man page for clconvert_snapshot for further information. AIX Tracing for Cluster ResourcesEnable AIX Tracing for Cluster ResourcesDisable AIX Tracing for Cluster ResourcesManage Command Groups for AIX Tracing for Cluster ResourcesList Command Groups for AIX Tracing for Cluster ResourcesAdd a Command Group for AIX Tracing for Cluster ResourcesChange / Show a Command Group for AIX Tracing for Cluster ResourcesRemove Command Groups for AIX Tracing for Cluster ResourcesSelect a Template Command Group for AIX Tracing for Cluster ResourcesSelect a Command Group for AIX Tracing for Cluster ResourcesCommand GroupsMaximum DurationADDITIONAL EVENT GROUPS to traceADDITIONAL event IDs to traceEvent Groups to EXCLUDE from traceEvent IDs to EXCLUDE from traceProcess IDs to TracePropagate Tracing toTrace MODESTOP when log file full?LOG FILEOmit PS/NM/LOCK HEADER to log file?Omit DATE-SYSTEM HEADER to log file?Trace BUFFER SIZE in bytesLOG FILE SIZE in bytesBuffer AllocationWPAR names to TraceSave WPAR's CID in trace entries?Command Group ID (optional) If none, no template command group is used.Command Group IDCommand Group DescriptionEvent GroupsEvent IDsCommand to Enable Component TraceCommand to Disable Component Trace24new processes and threads,new threads,nothingalternate,single,circularyes,nono,yesautomatic,kernel heap,separate segmentsThere are no Composite Groups defined (according to symcg). That's ok - you can type the name here manually then create the Composite Group later. There are no Device Groups defined (according to symdg). That's ok - you can type the name here manually then create the Device Group later. # No Fallback Timer Policies configured. 
Enhanced Journaled File System Standard Journaled File System Compressed Journaled File System Large File Enabled Journaled File System PowerHA SystemMirrorPowerHA SystemMirror provides high availability by monitoring and responding to failures of specific resources. These resources are combined into groups which comprise all the resources associated with a specific application. PowerHA SystemMirror manages a resource group as a unit - if one or more resources in the group fails, PowerHA SystemMirror recovers them or moves the entire group / application to a backup node. Configuring PowerHA SystemMirror consists of defining the set of nodes and networks where PowerHA SystemMirror will run, defining the resources and resource groups PowerHA SystemMirror is to monitor and recover, verifying the configuration and correcting any problems, then synchronizing the configuration to all nodes in the cluster. Once configured, you will start cluster services to have PowerHA SystemMirror begin managing and recovering the resources. The most commonly used options for configuring PowerHA SystemMirror are found under the top level menus in SMIT. For example, options for configuring and managing the cluster, nodes, and networks (also known as the 'topology' components of the cluster) are found under the 'Cluster Nodes and Networks' SMIT path. Options for configuring and managing applications, resources and resource groups are found under the 'Cluster Applications and Resources' SMIT path. Advanced or custom configuration options are found under the 'Custom Cluster Configuration' SMIT path. 'System Management Tools (C_SPOC)' provides tools for configuring, changing and managing the cluster, including starting and stopping PowerHA SystemMirror. 'Problem Determination Tools' provides tools for diagnosing and correcting any problems. 
Cluster Nodes and NetworksInitial Cluster Setup (Typical)Setup a Cluster, Nodes and NetworksDefine Repository Disk and Cluster IP AddressManage the ClusterRemove the Cluster DefinitionManage NodesAdd a NodeChange/Show a NodeRemove a NodeManage Networks and Network InterfacesNetworksNetwork InterfacesAdd a NetworkChange/Show a NetworkMove an Adapter to Another NetworkRemove a NetworkShow Topology Information by Network InterfaceAdd a Network InterfaceChange/Show a Network InterfaceRemove a Network InterfaceRepository DiskCluster IP AddressShow All Network InterfacesSelect a Network Interface to ShowShow Information for All Network InterfacesShow Information for Selected Network InterfaceSelect network interface to showConfigure Persistent Node IP Label/AddressesNetwork interface to showWhat are the repository disk and cluster IP address?PowerHA SystemMirror uses a central repository disk to store its configuration. This disk must be shared by all nodes in the local cluster. When sites are defined for PowerHA SystemMirror Enterprise Edition, a unique repository disk must be specified for each site in the cluster. The disk specified does not have to share the same hdisk name on each node, but you should verify that it is a disk with a common PVID defined on each node according to lspv output. Specify the hdisk name of the disk on this node, from which you are creating the cluster. The cluster IP address is a multicast IP address that is used for internal cluster communication and monitoring. This multicast IP address will be generated automatically if one is not provided, and you should only provide one if it is necessary in your environment. If you do provide one, it should be in the multicast address range, 224.0.0.0 - 239.255.255.255. Neither the repository disk nor the cluster IP address can be changed once you have synchronized the cluster configuration. 
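The PVID check described in the repository-disk help text above can be sketched in shell. The lspv output and PVID below are hypothetical samples; on a real cluster you would capture the actual lspv output from every node and confirm the same PVID appears on each before using the disk as the repository.

```shell
# Hypothetical sample of "lspv" output from one node (columns: hdisk
# name, PVID, volume group, state). The PVID is made up for illustration.
pvid=00f6050515a4a9e3
lspv_output='hdisk0 00f605052915f49a rootvg active
hdisk1 00f6050515a4a9e3 None'
# Find which hdisk on this node carries the candidate repository PVID.
repo_disk=$(printf '%s\n' "$lspv_output" | awk -v p="$pvid" '$2 == p {print $1}')
echo "PVID $pvid is on: ${repo_disk:-NOT FOUND}"
```

Repeating this on each node shows that the hdisk name may differ per node while the PVID stays constant, which is exactly what the help text asks you to verify.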
Discover Network Interfaces and Disks This operation will discover configured interfaces and disks on the nodes currently defined to the cluster. Press enter to continue with this operation, or F3 to cancel. No nodes are currently defined for the cluster. Define at least one node, and ideally all nodes, prior to defining the repository disk and cluster IP address. It is important that all nodes in the cluster have access to the repository disk and can be reached via the cluster IP address, therefore you should define the nodes in the cluster first. Remove definition from all nodesSelect one or more network interfaces to change/showSelect one or more network interfaces to removeUpdate Network Interface with Operating System SettingsCluster Aware AIX uses a central repository disk to store its configuration. This disk must be shared by all nodes. When sites with linked clusters are defined, a unique repository disk must be specified for each site. The disk specified does not have to share the same hdisk name on each node, but you should verify that it is a disk with a common PVID defined on each node according to lspv output. Specify the hdisk name of the disk on the node where you are configuring the cluster. You can change the repository disk later if needed. The cluster IP address is a multicast IP address that is used for internal cluster communication and monitoring. This multicast IP address will be generated automatically if one is not provided. If you do provide one, it must be in the multicast address range 224.0.0.0 - 239.255.255.255. If you have IPv6 addresses configured, CAA will also use IPv6 multicast. The IPv6 multicast address cannot be specified directly: it is derived by OR'ing the standard prefix of 0xFF05 with the (hex converted) IPv4 multicast. 
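As a rough sketch (not part of the product), the OR'ing just described can be reproduced in portable shell: convert each octet of the IPv4 multicast address to hex, then join the result onto the FF05:: prefix.

```shell
# Derive the CAA IPv6 multicast address from an IPv4 multicast address,
# per the rule above: the 0xFF05:: prefix combined with the hex form of
# the IPv4 address.
ipv4=228.8.16.129
saved_IFS=$IFS; IFS=.
set -- $ipv4                                            # split into four octets
IFS=$saved_IFS
hex=$(printf '%02X%02X%02X%02X' "$1" "$2" "$3" "$4")    # E4081081
ipv6_mc="FF05::${hex%????}:${hex#????}"                 # FF05::E408:1081
echo "$ipv6_mc"
```

For the source's own example, 228.8.16.129 converts to 0xE4081081, yielding FF05::E408:1081.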
Example: Assume IPv4 address is 228.8.16.129 or 0xE4081081 Transformation by OR'ing bits (0xFF05:: | 0xE4081081) The resulting IPv6 multicast address will be 0xFF05::E408:1081 The cluster IP address cannot be changed later without recreating the CAA cluster.Learn more about repository disk and cluster IP addressConfigure Cluster Split and Merge PolicySplit Handling PolicyMerge Handling PolicySplit and Merge Action PlanSelect Tie BreakerNone,Tie Breaker,Manual Majority,Tie Breaker,Priority,Manual Reboot None Configure Split and Merge Policy for a Stretched ClusterConfigure Split and Merge Policy for a Linked ClusterNone,Tie BreakerMajority,Tie BreakerManual Choice Options Notify Method Notify Interval (seconds) Maximum Notifications Default Surviving Sitesplitmergesplit or mergeA cluster %1$s has been detected. You must decide if this side of the partitioned cluster is to continue. To have it continue, enter /usr/es/sbin/cluster/utilities/cl_sm_continue To have the recovery action - %2$s - taken on all nodes on this partition, enter /usr/es/sbin/cluster/utilities/cl_sm_recover Current Policies are: Tie BreakerNo communications path was specified and node name "%1$s" was not resolved by the "host" command. Either provide a communications path, or specify a node name that resolves to an IP address. Communications path "%1$s" is not valid. It cannot be resolved by the "host" command. Node name "%1$s" which resolves to IP address "%2$s" is not consistent with communications path "%3$s" Node name "%1$s" which resolves to IP address "%2$s" is not consistent with communications path "%3$s" which resolves to IP address "%4$s" %1$s - select this item to remove the current Tie Breaker, %2$sReboot Apply to Storage Replication RecoveryManual Response to Split or MergeDisplay any needed Manual ResponseProvide a Manual ResponseThis site shouldContinue Recover You have chosen for this side of the partitioned cluster to continue. 
You must now log in to a node on the other side of the partitioned cluster and use smit 'cl_sm_manual_menu_dmn' to specify the appropriate response ('recover') for PowerHA SystemMirror to completely recover from the split or merge. You have chosen for this side of the partitioned cluster to recover. You must now log in to a node on the other side of the partitioned cluster and use smit 'cl_sm_manual_menu_dmn' to specify the appropriate response ('continue') for PowerHA SystemMirror to completely recover from the split or merge. Add a Node to a Linked ClusterContinue,RecoverNone,Tie Breaker,ManualMajority,Tie Breaker,Priority,ManualReboot,Restart Cluster ServicesNFS OptionsNFS Export ServerLocal Mount DirectoryNFS Export DirectoryNone,Tie Breaker,Manual,NFS Majority,Tie Breaker,Priority,Manual,NFS Select TieBreaker TypeCritical Resource GroupDisk TieBreaker ConfigurationNFS TieBreaker ConfigurationSplit Management PolicyMerge Management PolicyConfigure Active Node Halt PolicyDisk FencingQuarantine PolicyActive Node Halt PolicyPriority Majority Manual TieBreaker Disk NFS Split and Merge Management PolicyMajority,Tie Breaker,Manual,NFSReboot,Disable Applications Auto-Start and Reboot,Disable Cluster Services Auto-Start and RebootRebootNone,Tie Breaker,Manual,NFSNoneMajorityTieBreakerNFSManualquorumStart CAA on Merged NodeNone,Majority,TieBreaker Disk,TieBreaker NFS,ManualNone,TieBreaker Disk,TieBreaker NFS,ManualNone,Majority,TieBreaker Disk,TieBreaker NFS,Manual,CloudNone,TieBreaker Disk,TieBreaker NFS,Manual,CloudBucket nameCloud serviceUse existing bucketAdd a Node to a Stretched ClusterCluster Applications and ResourcesResourcesResource GroupsApplication Configuration Assistants (Smart Assists)Configure User Applications (Scripts and Monitors)Configure Tape ResourcesApplication Controller ScriptsApplication MonitorsAdd Application Controller ScriptsChange/Show Application Controller ScriptsSelect Application ControllerRemove Application Controller 
ScriptsApplication Controller to RemoveWhat is an "Application Controller" anyway?What is an "Application Controller"?PowerHA SystemMirror provides high availability for applications by detecting and recovering from failures of resources on which an application depends, such as disks that host the volume groups and file systems for an application, and network adapters that host application IP addresses. In order to define an application to the cluster, you create "Application Controller" scripts that are used to start and stop the application. From the "Add Application Controller Scripts" menu, you give the Application Controller itself a name and specify the path to the start and stop scripts that control the application, then add this Application Controller to a resource group. SystemMirror will start and stop the application by calling the scripts you provide. If you use a Configuration Assistant, SystemMirror provides the scripts for you. It is important that the start and stop scripts you specify for the Application Controller are available in the same path on all nodes that can host the resource group (according to the resource group definition), and that the scripts are executable on those nodes.Configure Application for Dynamic LPAR and CoD ResourcesConfigure Dynamic LPAR and CoD Resources for ApplicationsAdd Dynamic LPAR and CoD Resources for ApplicationsChange/Show Dynamic LPAR and CoD Resources for ApplicationsRemove Dynamic LPAR and CoD Resources for ApplicationsSelect an Application Controller to Configure ProvisioningConfigure Start After Resource Group DependencyConfigure Stop After Resource Group DependencyAdd Start After Resource Group DependencyChange/Show Start After Resource Group DependencyRemove Start After Resource Group DependencyDisplay Start After Resource Group DependenciesAdd Stop After Resource Group DependencyChange/Show Stop After Resource Group DependencyRemove Stop After Resource Group DependencyDisplay Stop After Resource Group 
DependenciesChange/Show Nodes and Policies for a Resource GroupSelect the Source Resource GroupSelect the Target Resource GroupDisplay All Start After Resource Group Dependencies per SourceDisplay All Start After Resource Group Dependencies per TargetApplication Controller(s) to MonitorApplication Controller NameApplication Controllers# # There are no application servers defined. Follow the # SMIT paths to configure your Application Controller(s) first # before adding them to a Resource Group. #Application Controller Start ScriptApplication Controller Stop ScriptDisaster RecoverySitesSite ResourcesCustom Cluster ConfigurationInitial Cluster Setup (Custom)Add/Change/Show ClusterNodesVerify and Synchronize Cluster Configuration (Advanced)Cluster Startup SettingsCluster Events NOTE: Cluster services must be RESTARTED on all nodes in order for change to take effect NOTE: All user-configured cluster information WILL BE DELETED by this operation.Custom Volume Group MethodsCustom File System MethodsAdd Custom File System MethodsChange/Show Custom File System MethodsRemove Custom File System MethodsEventsSystem EventsPre/Post-Event CommandsChange/Show Pre-Defined EventsUser-Defined EventsRemote Notification MethodsChange/Show Event ResponseEvent NameResponseSelect a Monitored System EventInclude custom verification library checksActiveConfigure User Defined Resources and TypesConfigure User Defined Resource TypesConfigure User Defined ResourcesAdd a User Defined Resource TypeChange/Show a User Defined Resource TypeRemove a User Defined Resource TypeAdd a User Defined ResourceChange/Show a User Defined ResourceRemove a User Defined Resource Import User Defined Resource Types and Resources Definition from XML fileChange/Show User Defined Resource MonitorUser Defined Resource Monitor to change# #There are no user defined resource monitors configured #Resource Type NameProcessing orderVerification Method Verification TypeStart MethodStop MethodMonitor MethodCleanup 
MethodRestart MethodFailure Notification MethodRequired AttributesOptional AttributesDescriptionSelect a Resource TypeNew Perform AfterFile nameSelect a User Defined Resource TypeResource NameAttribute dataSelect a User Defined ResourceNew Resource NameLog event and reboot,Only log the eventAdd Stop After Dependency between Resource GroupsResource to Monitoracquire first and release lastacquire after %1$s and release before %2$s# Select FIRST to acquire the resource first and release it last# Selecting a resource type from the following list # will cause the new resource type to be acquired after the # selected type, and released before it# Selecting a resource type from the following list # will cause this new resource type to be acquired after the # selected type, and released before it Learn more about Application Controllers Learn more about Resource Groups Learn more about Service LabelsCompare Active and Default ConfigurationsFibre channel interfaceSelect a new Cluster repository diskCluster heartbeat settingsHeartbeat Via SAN interfacesAdd a SAN Heartbeat interfaceFibre channel interfaceRemove a SAN Heartbeat interfacePowerHA SystemMirror Log Viewing and ManagementRaw Disk UUIDs/hdisksDisk Error Management?Standard Cluster DeploymentMulti Site Cluster DeploymentSite 1 NameSite 2 NameSite PriorityCluster TypeSite Multicast AddressSelect Site PrioritySite Grace PeriodSite Heartbeat CycleSetup Cluster, Sites, Nodes and NetworksManage Sites1,2,ManualStretched Cluster,Linked ClusterNo nodes are currently defined for the cluster. Define at least one node, and ideally all nodes, prior to defining the repository disk and cluster IP address for each site. It is important that all nodes in a site have access to the repository disk and can be reached via the cluster IP address, therefore you should define the nodes and sites in the cluster first. Multi Site with Linked Clusters ConfigurationInitial site configuration has been saved. 
You can now go on to complete the rest of the configuration, including adding backup repository disks (recommended), custom event notifications, resource groups and applications, etc. When you have entered all the basic information you can then use Verification and Synchronization to verify the configuration and distribute it to all cluster nodes. Default Multicast IP address will be used for site %1$s and will be assigned during Synchronization. The cluster has already been defined in AIX and basic elements can no longer be changed unless you remove the cluster first. Use the Custom configuration SMIT options to add and remove nodes. The basic cluster configuration is as follows: # # You must define a repository disk before using this option. # Add a Repository DiskRemove a Repository DiskShow Repository DisksManage Repository DisksRepository Disk to Remove# # There are currently no backup repository disks defined. # Press F3 to return. # Select a SiteShow Topology Information by SiteShow All SitesSelect a Site to ShowSite to ShowReplace the Primary Repository DiskNode Failure Detection TimeoutFailure Indication Grace TimeLink Failure Detection TimeoutAdd a Site to a Stretched ClusterChange/Show a Site in a Stretched ClusterRemove a Site from a Stretched ClusterI want to work with Sites in a Linked ClusterInvalid repository disk specified: PVID %1$s could not be found in CuAt. Stretched ClusterLinked ClusterSelect a new repository diskERROR: This cluster is defined as a Linked Cluster. You cannot manage the cluster sites through these smit panels. Please remove the existing cluster first if you wish to change the cluster sites or configure the cluster as a stretched cluster instead. # ERROR: This cluster is defined as a Linked Cluster. # You cannot manage the cluster sites through these smit panels. # Please remove the existing cluster first if you wish to change # the cluster sites or configure the cluster as a stretched # cluster instead. 
Show Site Topology InformationThere are no sites definedSite Multicast IPSite Repository Disk (pvid)Node Failure Detection Grace Period ERROR: the repository disk specified for site "%1$s", with a PVID of "%2$s", is ambiguous since it has multiple matching entries in CuAt on node "%3$s". Only disks with a unique PVID should be specified for use as a repository. PVID "%2$s" maps to the following disks on node "%3$s": %4$s To replace the current repository for site "%1$s", either use the following command: Or use the SMIT fast path, "%1$s", which will open directly to the "%2$s" panel, which can also be navigated to under the top-level "%3$s" panel. Warning: you are removing a site which will leave only 1 site configured. You will not be able to verify and synchronize these changes until you add site(s) for a total of 2 sites or delete all sites. Warning: you are removing sites from a Linked Cluster. You will not be able to verify and synchronize these changes until you add site(s) for a total of 2 sites. Node Failure Detection Timeout during LPMLPM Node Policymanage,unmanageNo repository disk specified. Network Failure Detection TimeNo Site Cluster WARNING: The backup repository disk specified for node "%1$s", with a PVID of "%2$s", does not exist on node "%1$s". If the PVID "%2$s" is not valid in the current system configuration, then use the following command to remove the PVID from the backup repository list. /usr/es/sbin/cluster/utilities/clmgr remove repository Network failure detection time is not modified as PowerHA supports this tunable only from AIX 7.1 TL4 onward. Config timeoutDeadman ModeRepository ModeDisaster RecoveryAssert,EventCritical Daemon Restart Grace PeriodSuccessfully configured the cluster level RSCT Critical Resource Daemon Grace Period. Tie Breaker and other RSCT services will not work during this time. After waiting %1$d seconds, if the daemons have not been restarted, cluster nodes will be halted. 
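The start and stop scripts described earlier under 'What is an "Application Controller"' can be sketched as a pair of shell functions. The application command and paths below are hypothetical, not part of the product; PowerHA only requires that the scripts exist at the same path on every node that can host the resource group and that they exit 0 on success.

```shell
# Minimal sketch of Application Controller start/stop logic. APP_CMD and
# PIDFILE are made-up names for illustration; a non-zero return tells
# PowerHA the operation failed, triggering event error processing.
APP_CMD=${APP_CMD:-/usr/local/bin/myapp}   # hypothetical application launcher
PIDFILE=${PIDFILE:-/var/run/myapp.pid}

app_start() {
    "$APP_CMD" &                 # launch the application in the background
    echo $! > "$PIDFILE"         # record its PID for the stop script
}

app_stop() {
    [ -f "$PIDFILE" ] || return 0                  # nothing to stop
    kill "$(cat "$PIDFILE")" 2>/dev/null || true   # ignore already-dead process
    rm -f "$PIDFILE"
}
```

In a real deployment each function body would live in its own script file, since the Application Controller definition takes separate start and stop script paths.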
ERROR: Installed RSCT version "%1$s" is not supported for CRIT_DAEMON_RESTART_GRACE_PERIOD attribute. Minimum RSCT version required to support CRIT_DAEMON_RESTART_GRACE_PERIOD is "%2$s"The optional cluster IP address input lets you specify a multicast IP address for use with multicast heartbeat. The IP address you supplied is ignored because you selected unicast heartbeat. Heartbeat MechanismCluster Multicast Address (Used only for multicast heartbeat)Unicast Multicast This cluster uses unicast heartbeat Site Multicast Address (used only for multicast heartbeat)Default Multicast IP address will be assigned during synchronization Each site in a linked cluster must have a different repository Choose a different repository disk for each site Multicast with IP address %1$s Cluster configuration is unchanged Cluster heartbeat mechanism has been changed to %1$s. Use the Verification and Synchronization operation to make this change effective. Note: Once a cluster has been defined to AIX, all that can be modified is the Heartbeat MechanismUnicast,MulticastCurrent cluster configuration follows: Note: Once a cluster has been defined to AIX, it cannot be modified Press enter to see a display of the current configurationDefault Multicast IP address will be used for cluster %1$s and will be assigned during Synchronization Multicast with IP address %1$s on %2$s and IP address %3$s on %4$s Add Discovered Network Interfaces to the ConfigurationA node name could not be determined from the supplied communication path. Either supply a node name, or correct the specification of the communication path.Either a node name or a communication path must be suppliedCluster is already defined to AIX. To make changes, you must first delete the existing cluster.Press enter to display the current configuration.Learn more about available updates and fixes Note: Modification of Heartbeat Mechanism is disabled. A possible reason is that the network is down and the cluster is running over the repository disk. 
Please bring the network up and try to change the Heartbeat Mechanism. Note: Modification of the Heartbeat Mechanism is disabled. A possible reason is that the network is down on one of the cluster nodes. Please bring the network up and try to change the Heartbeat Mechanism.Select disks to be mirrored from the local siteSelect disks to be mirrored from the remote siteEnter the size of the ASYNC cacheRemove GLVM ConfigurationCheck Whether Cluster ExistsCreate GMVG with Asynchronous Mirror PoolsConfigure Asynchronous GMVGConfigure Synchronous GMVGCreate GMVG with Synchronous Mirror PoolsEnter the name of the VGRemove GLVMSelect GMVG to be unconfiguredGLVM Configuration AssistantThere are no sites defined. Define a multi-site cluster, which is needed to configure GLVM, prior to defining the repository disks and cluster IP address. It is important that all nodes in the cluster have access to the repository disk or respective repository disks (in case of a linked cluster) and can be reached via the cluster IP addresses. Enter the unit of ASYNC cache sizeGigabytes,Megabytes,KilobytesResource Optimized High AvailabilityOn/Off CoD AgreementResources Optimized High Availability management can take advantage of On/Off CoD resources. On/Off CoD use would incur additional costs. 
Do you agree to use On/Off CoD and be billed for extra costs?HMC ConfigurationHardware Resource Provisioning for Application ControllerChange/Show Default Cluster TunablesChange/Show Secondary LPARs PolicyAdd HMC DefinitionChange/Show HMC DefinitionRemove HMC DefinitionChange/Show HMC List for a NodeChange/Show Default HMC TunablesChange/Show Default HMC ListHMC nameDLPAR operations timeout (in minutes)Number of retriesDelay between retries (in seconds)Change/Show HMC List for a SiteCheck connectivity between HMC and nodesSelect an HMCHMC listAdd Hardware Resource Provisioning to an Application ControllerChange/Show Hardware Resource Provisioning of an Application ControllerRemove Hardware Resource Provisioning from an Application ControllerUse desired level from the LPAR profileOptimal amount of gigabytes of memoryOptimal number of dedicated processorsOptimal number of processing unitsOptimal number of virtual processorsDynamic LPAR Start Resource Groups even if resources are insufficient Adjust Shared Processor Pool size if required Force synchronous release of DLPAR resourcesOn/Off CoD I agree to use On/Off CoD and be billed for extra costs Number of activating days for On/Off CoD requestsEnable secondary LPARs managementSecondary LPARs policyLPAR Availability Priority ThresholdMinimize,ShutdownIgnore errors if nodes are unreachable?Enterprise Pool Resource Allocation orderFree Pool First,Enterprise Pool First Always Start Resource GroupsConnection TypeSSH,REST APIUser NameChange/Show HMC CredentialsNovaLink ConfigurationAdd NovaLink DefinitionChange/Show NovaLink DefinitionRemove NovaLink DefinitionChange/Show Default NovaLink TunablesNovaLink nameCheck connectivity between NovaLink and nodesSelect a NovaLinkChange/Show PowerHA Log File SizeSelect the Log File to change, or ALL to change all logsLog FileWrap when it reaches this size (megabytes)ALL - Use the same value for all logs The changes made to log file %1$s will take effect when cluster services are 
restarted. Log File size in MBUnstable ThresholdUnstable Period (seconds)Free Pool before Enterprise Pool,Enterprise Pool before Free Pool,All Enterprise Pool before Free PoolCloud Backup ConfigurationBackup ProfilesAdd Backup ProfileChange/Show Backup ProfileRemove Backup ProfileEnable BackupBackup MethodResource Group(s)Volume Group(s)Cloud ServiceAWSCompressionBackup Frequency (in days)Backup ScheduleIncremental Backup Frequency (in hours)Select a Backup Enabled Resource GroupTarget LocationSelect a Backup TypeCloud Remote Storage CloudRemote StorageReplicated ResourcesStorage ConfigurationEncryption AlgorithmDISABLE,KMS,AESAdd Backup Profile for CloudAdd Backup Profile for Remote StorageChange/Show Backup Profile for CloudChange/Show Backup Profile for Remote StorageAdd Storage ConfigurationChange/Show Storage ConfigurationRemove Storage ConfigurationStorage NameStorage TypeIP AddressSelect a Storage TypeSelect a Storage NameCancel BackupClear BackupSelect a Node to recover from event script failure# No nodes found that currently have event script failures. # Check the status of all nodes to determine if any # further recovery actions are required. Node to recover from event script failureCancel remaining event processing?Event processing resumed. Event processing cancelled. There was a problem attempting to recover from event script failure. If this problem persists, please report it to IBM support. List Cloud BackupsBucket NameStart TimeEnd Time# No cloud resources are configuredNew Storage NameFail event if pre or post event fails?Cluster services are not stable. Additional event script errors may have occurred. Check hacmp.out for errors and run recovery again until cluster services stabilize. Verify that all resources and resource groups are in the expected state. Take any necessary actions to recover resource groups that are in ERROR state. 
The following events have been cancelled: Storage Connectivity CheckSelect a resource for backupSelect a Backup ProfileBackup Profile# No volume groups are configured for the selected resource group.# Cluster is not configured, configure cluster to proceed.# There are no resource groups configured, # at least one resource group should be configured.# Backup is defined for all the configured resource groups, # please configure new resource groups to proceed.# No volume groups are configured for configured resource groups.Search for PowerHA SMIT panelsLearn more about initial cluster configuration# If you do not see a selection you were expecting # consider the following: Create Cloud BackupCompare Cluster Snapshots and ConfigurationSnapshot name, file, or Configuration Directory# Enter the first configuration to use in the comparison. # The default is to use the Active Configuration Directory (ACD). # You may also select the Default Configuration Directory (DCD), # a snapshot name from the list of snapshots below # or enter a full path to a snapshot file. # Enter the second configuration to use in the comparison. # The default is to use the Default Configuration Directory (DCD). # You may also select the Active Configuration Directory (ACD), # a snapshot name from the list of snapshots below # or enter a full path to a snapshot file. First configuration to compareSecond configuration to compareDiagnose problems with network interfacesShow current state of a network interface