Usage: celpp [-i filename] [-o filename] [[-I include path] ...] celpp: Cannot allocate space for include path celpp: Debug level set to %1$s celpp: Cannot open input file %1$s celpp: Input file is %1$s celpp: Output file will be %1$s celpp: Unable to open output file '%1$s' : %2$s celpp: Unrecognized argument '%1$s'. celpp: Error - No include path specified celpp: Error - No input file specified celpp: Error - No output file specified celpp: Error - No debug level specified celpp: Error - Bad option Value file %1$s, line %2$ld: %3$s line %1$ld: %2$s celpp: malloc error (includeFile) celpp: Line %1$ld longer than %2$ld characters celpp: Space buffer overflow at line %ld. illegal character: 0x%ld >>> returning type %1$ld, value %2$s >>> returning %ld freeing subtree type %ld (not really) YUCK! Unexpected Subtree type %1$s yesno invalid string No disk name entered, exiting! VGDA read failed, exiting! clgetvg {-l | -f } FS %1$s not found Usage: cdsh [-q[1|2]] [args] %1$s: Command '%2$s' not found in path. %1$s: No available nodes found! 
%1$s: Unable to connect to node %2$s LV %1$s not found clresactive {-v ,-l ,-f ,-u ,-g ,-V ,-v must be root (UID=0) to run clupdatevgts can't read time stamp from %1$s, disk reserved on another node? don't know about vg %1$s updating local VG info for %1$s, this may take a while recovery of VG %1$s in progress Can't exportvg: %1$s Can't importvg: %1$s Can not disable auto varyon of VG %1$s Can't update timestamp for %1$s varyonvg %1$s failed usage: clvaryonvg [-F] [-f] [-n] [-p Number] [-s] [-o] must be root (UID=0) to run clvaryonvg don't know about vg %1$s The replay file failed, manual intervention needed can't clvaryonvg a vg which is already varied on %1$s instances of clvgdats failed not found Can not disable quorum on VG %1$s Usage: climportvg [-V MajorNumber] -y VolumeGroup [-f] [-c] [-x] PhysicalVolume Physical volume %1$s does not exist! importvg -y %1$s %2$s failed! importvg -L %1$s failed! Usage: clsynclvodm Error executing synclvodm %1$s! Usage: clupdatevg Error executing %1$s %1$s %1$s! Error executing clupdatevgts %1$s! Volume group %1$s does not exist! Usage: clisconvg Usage: cexec [args] %1$s: Unable to determine node list for the cluster. %1$s: Unable to determine target node list! %1$s: _get_rgnodes: A resource group must be specified. %1$s: Invalid C-SPOC flag [%2$s] specified. %1$s: Option [%2$s] requires an argument. %1$s: Option [%2$s] does not take an argument. %1$s: Invalid option [%2$s]. %1$s: Mandatory option [%2$s] not specified. %1$s: C-SPOC options '%2$s' and '%3$s' are mutually exclusive. %1$s: Unable to open file: %2$s %1$s: C-SPOC only supports up to 8 node clusters. %1$s: The node [%2$s] is not a part of this cluster. %1$s: Unable to verify PowerHA SystemMirror Version on node [%2$s]. %1$s: %2$s is not running PowerHA SystemMirror version %3$s or higher %1$s: Resource group %2$s not found. %1$s: Unable to connect to node [%2$s]. 
%1$s: All C-SPOC commands require the user either to be root or to have PowerHASM.admin authorization %1$s: 2 nodes have the same serial number, probably due to IPAT. %1$s: C-SPOC -g flag is not allowed for this command. %1$s: C-SPOC -n flag is not allowed for this command. %1$s: Either the '-g' or the '-n' C-SPOC flag must be specified. %1$s: C-SPOC -g and -n flags are mutually exclusive. %1$s: C-SPOC -g flag is required. %1$s: C-SPOC -n flag is required. %1$s: Can't reach %2$s, continuing anyway %1$s: Error executing clgetvg %2$s %3$s on node %4$s %1$s: Error executing clvaryonvg %2$s on node %3$s %1$s: Error attempting to locate volume group %2$s on %3$s %1$s: No node has access to volume group %2$s %1$s: Error executing clupdatevgts %2$s on node %3$s %1$s: Error executing varyoffvg %2$s on node %3$s %1$s: Error executing %2$s %3$s %4$s on node %5$s %1$s: Error executing %2$s %3$s on node %4$s %1$s: VG %2$s is concurrent %1$s: Cannot determine C-SPOC request mode! %1$s: This operation is not currently supported for non-concurrent volume groups. %1$s: Operation not allowed on mixed volume group (RAID and non-RAID disks). %1$s: Error executing varyonvg -n -b -u %2$s on node %3$s %1$s: Error executing varyonvg -n %2$s on node %3$s %1$s: Error executing clupdatevg %2$s %3$s on node %4$s %1$s: Do you wish to continue? y(es) n(o)? %1$s: Reference node %2$s does not have access to volume group %3$s %1$s: This operation is not allowed on a RAID device. %1$s: Physical volumes %2$s not available on node %3$s %1$s: Physical volumes %2$s are not allocated to volume group %3$s %1$s: The -R switch is required when providing physical volumes. %1$s: No disks provided. Ignoring -R option. %1$s: Error executing lspv on node %2$s %1$s: No node has volume group %2$s varied on in concurrent mode %1$s: Volume group %2$s is not a member of any PowerHA SystemMirror resource group %1$s: Warning, all data contained on physical volumes (%2$s) will be destroyed. 
%1$s: Operation is not allowed because %2$s is a RAID concurrent volume group. %1$s: Volume group %2$s is not defined in cluster. 
Auto-select
%1$s: Cannot obtain the list of physical volumes for volume group %2$s %1$s: Cannot obtain the VG IDENTIFIER for volume group %2$s %1$s: Cannot obtain the volume group %2$s Varyon mode %1$s: Volume group %2$s has the disks in missing state. Trying to activate them %1$s: Map file %2$s does not exist on node %3$s %1$s: Logical volume name %2$s already exists on node %3$s %1$s: No volume group was specified on the command %1$s: Given disk name %2$s could not be associated with a volume group Enter device name: %1$s: wrong number of arguments %1$s: Error executing varyonvg -c -P %2$s on node %3$s WARNING: %1$s is a %2$s Replicated Resource. Since the cluster is NOT active on node %1$s with %2$s active, the CSPOC operation may not succeed on the remote peers. Verifying %1$s pair state ... Failed to complete mirror pool configuration. Please contact IBM Support. %1$s: Error executing varyonvg -n -c -P %2$s on node %3$s The PowerHA SystemMirror configuration has been changed. Mirror pool %1$s has been renamed to %2$s. The configuration must be synchronized to make this change effective across the cluster. The PowerHA SystemMirror configuration has been changed. Mirror pool %1$s removed from the volume group %2$s. The configuration must be synchronized to make this change effective across the cluster. The PowerHA SystemMirror configuration has been changed. The preferred storage location for mirror pool %1$s has been set to %2$s. The configuration must be synchronized to make this change effective across the cluster. The PowerHA SystemMirror configuration has been changed. LVM Preferred Read for volume group %1$s has been set to %2$s. The configuration must be synchronized to make this change effective across the cluster. 
%1$s: No filesystem given %1$s: %2$s is not a valid filesystem name %1$s: Error executing chfs %2$s on node %3$s Usage: cl_chfs [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-m newmtpt] [-u mtgrp] [-A {yes | no}] [-p {ro | rw}] [-a attr1=val1] [-d attr] [-t {yes | no}] FileSystem %1$s: Filesystem %2$s is configured as a PowerHA SystemMirror resource %1$s: File system mount point %2$s is in use on node(s) %3$s Error detail: %1$s: No logical volume given %1$s: Error executing chlv %2$s on node %3$s Usage: cl_chlv [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-a Position] [-b BadBlocks] [-d Schedule] [-e Range] [-L label] [-p Permission] [-r Relocate] [-s Strict] [-t Type] [-u Upperbound] [-v Verify] [-w MirrorWriteConsistency] [-x Maximum] LogicalVolume ... cl_chlv [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] -n NewLogicalVolume LogicalVolume Usage: cl_chlv [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-a Position] [-b BadBlocks] [-d Schedule] [-e Range] [-L label] [-p Permission] [-r Relocate] [-s Strict] [-t Type] [-u Upperbound] [-v Verify] [-w MirrorWriteConsistency] [-x Maximum] [-U userid] [-G groupid] [-P modes] LogicalVolume ... cl_chlv [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] -n NewLogicalVolume LogicalVolume %1$s: "LC_ALL=C lslpp -lcqOr bos.rte.lvm" failed on node %2$s %1$s: The rename of logical volume %2$s in volume group %3$s failed because node %4$s cannot perform the rename function. To rename a logical volume on a concurrent volume group, fileset bos.rte.lvm must be at least at level %5$s. The level of fileset bos.rte.lvm on %1$s is %2$s. 
%1$s: Filesystem %2$s is currently configured as a PowerHA SystemMirror resource %1$s: Error executing rmfs %2$s on node %3$s Usage: cl_rmfs [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-r] FileSystem
%1$s: Warning: mount point %2$s could not be removed from node %3$s %1$s: Error executing lsfs /dev/%2$s on node %3$s %1$s: Filesystem %2$s (contained within logical volume %3$s) is configured as a PowerHA SystemMirror resource
Warning, all data contained on logical volume %1$s will be destroyed. %1$s: Error executing rmlv %2$s on node %3$s Usage: cl_rmlv [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-f] LogicalVolume
cl_rmlv: Do you wish to continue? y(es) n(o)? Usage: cl_lsuser [-cspoc "-f [-g ResourceGroup | -n NodeList]"] [-c | -f] [-a List] {"ALL" | Name [,Name] ...}
Usage: cl_lsgroup [-cspoc "-f [-g ResourceGroup | -n NodeList]"] [ -c | -f ] [-a attr attr .. ] { "ALL" | group1,group2 ... }
Usage: cl_chuser [-cspoc "-f [-g ResourceGroup | -n NodeList]"] Attribute=Value ... Name
%1$s: User %2$s does not exist on node %3$s %1$s: User id %2$s already exists on nodes %3$s Usage: cl_chgroup [-cspoc "-f [-g ResourceGroup | -n NodeList]"] Attribute=Value ... Name
%1$s: Group %2$s does not exist on node %3$s %1$s: Group id %2$s already exists on nodes %3$s Usage: cl_chgrpmem [-cspoc "-f [ -g ResourceGroup | -n NodeList ]"] [-R load_module] [ { -a | -m } { + | - | = } User ... ] Group
Usage: cl_mkuser [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-a] [Attribute=Value ...] Name
%1$s: User %2$s already exists on node %3$s Usage: cl_mkgroup [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-a] [-A] [Attribute=Value ...] 
Group
%1$s: Group %2$s already exists on node %3$s Usage: cl_rmuser [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-p] Name
Usage: cl_rmgroup [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] Name
Usage: cl_rc.cluster [-cspoc "[-f] [-n NodeList | -g ResourceGroup]"] [-boot] [-l] [-i | -I] [-b] [-N | -R | -B] [-M | -A]
Usage: cl_rc.cluster [-cspoc "[-f] [-n NodeList | -g ResourceGroup]"] [-boot] [-l] [-i | -I] [-b] [-N | -R | -B] [-M | -A] [-r] [-x]
Usage: cl_rc.cluster [-cspoc "[-f] [-n NodeList | -g ResourceGroup]"] [-boot] [-l] [-i | -I] [-b] [-N | -R | -B] [-M | -A] [-r] [-x] [-v] [-C interactive|yes]
Usage: cl_clstop [-cspoc "[-f] [-n NodeList | -g ResourceGroup]"] -f cl_clstop [-cspoc "[-f] [-n NodeList | -g ResourceGroup]"] -g [-s] [-y] [-N | -R | -B] cl_clstop [-cspoc "[-f] [-n NodeList | -g ResourceGroup]"] -gr [-s] [-y] [-N | -R | -B] cl_clstop: ERROR: Please specify one or more nodes to stop with takeover. cl_clstop: ERROR: Node "%1$s" has %2$d active event(s), as reported by "lssrc -ls clstrmgrES", and cannot be stopped until all those events have completed. Therefore, the stop request has been aborted for all nodes. Please wait for all nodes to stabilize before attempting to stop cluster services again. cl_clstop: ERROR: Node "%1$s" is not stable. It is in state "%2$s", as reported by "lssrc -ls clstrmgrES". Therefore, the stop request has been aborted for all nodes. Please wait for all nodes to stabilize before attempting to stop cluster services again. cl_clstop: Successfully ran varyonvg -S for volume group "%1$s". 
%1$s: No logical volume given %1$s: Error executing lslv %2$s on node %3$s %1$s: Error attempting to locate lv %2$s on node %3$s Usage: cl_lslv [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-l | -m] LogicalVolume cl_lslv: Can't reach $node, continuing anyway
%1$s: Error executing clfiltlsvg %2$s %3$s on node %4$s Usage: cl_lsvg [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-o] | [-i] [-l|-M|-p] [VolumeGroup ...]
%1$s: no volume groups found %1$s: an error occurred running cllsvg %1$s: can't locate VG %2$s %1$s: The cluster does not appear to be configured - no nodes are defined Configure the cluster, nodes and networks then try this operation again. %1$s: No volume groups were found Use CSPOC to configure volume groups first, then try this operation again %1$s: No shared volume groups were found that could be marked as CRITICAL. A volume group must be part of an OAAN resource group to be eligible to be marked as CRITICAL %1$s: No critical volume groups found %1$s: No active volume groups found. %1$s: Error executing clfiltlsfs %2$s %3$s on node %4$s Usage: cl_lsfs [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-q] [-c|-l] [FileSystem] ... %1$s: no filesystems found %1$s: an error occurred running cllsfs %1$s: can't locate FS %2$s Node: Name Nodename Mount Pt VFS Size Options Auto Accounting Usage: cl_updatevg -cspoc "[-f] -g ResourceGroup" VolumeGroup %1$s: Error attempting to locate VG %2$s on %3$s %1$s: Can't reach %2$s, continuing anyway %1$s: VG %2$s found active on %3$s %1$s: Error executing clvaryonvg %2$s on node %3$s no VG given error attempting to locate vg %1$s on node %2$s can't reach %1$s, continuing anyway no node has access to VG %1$s error executing clvaryonvg %1$s on node %2$s VG %1$s varied on on node %2$s, chvg will run there error executing "chvg %1$s" on node %2$s error executing clupdatevgts %1$s on node %2$s Error: Volume group %1$s is varied-on on one or more nodes: %2$s. Volume group must be varied-off on all nodes. 
Error: Volume group %1$s is varied-on concurrently on one or more nodes: %2$s. Volume group must be varied-off on all nodes. Warning: One or more nodes are unreachable, or the cluster manager is not running. Manual intervention may be necessary on those nodes. Do you wish to continue? y(es) n(o)? Warning: The volume group must be varied off on all nodes in the cluster before the conversion to enhanced concurrent mode. This can be done by bringing the resource group that the volume group belongs to offline. After conversion, the resource group can be brought online, which will vary on the volume group.
Error: Volume group type changes cannot be combined with any other operation 
Error: Volume group %1$s is varied-on on one or more nodes: %2$s. Volume group must be varied-off on all nodes. %1$s: Marking volume group %2$s in resource group %3$s as CRITICAL failed %1$s: The PowerHA SystemMirror configuration has been changed - Volume Group %2$s is no longer marked CRITICAL. The configuration must be synchronized to make this change effective across the cluster %1$s: The default action for Critical Volume Group %2$s on loss of quorum is to halt the node. The SMIT panel to 'Configure failure action for CRITICAL Volume Groups' can be used to change this action %1$s: The HACMP configuration has been changed - a custom failure action has been specified for CRITICAL Volume Group %2$s. The configuration must be synchronized to make this change effective across the cluster Warning: Marking volume group %1$s as critical is not allowed on a non-concurrent resource group. To mark VG as critical, add VG to a concurrent resource group. 
Error: Updating Preferred Read %1$s for volume group %2$s failed. Contact IBM Support. cannot mix C-SPOC's -g and -n flags C-SPOC's -g flag requires an argument cannot mix C-SPOC's -g and -n flags C-SPOC's -n flag requires an argument invalid C-SPOC flag %1$s this C-SPOC command does not take a nodelist argument _cspoc_verify can only be called after _cspoc_init C-SPOC only supports 8 nodes at this release error checking PowerHA SystemMirror version on %1$s "-cspoc" requires an argument can only have a single "-cspoc" on command line cl_lslv: Can't reach $node, continuing anyway Usage: cl_nodecmd [-cspoc "[-q] [-f] [ -g ResourceGroup | -n NodeList ]"] command args USAGE: nls_msg [-0 | -1 | -2] [-l logfile] USAGE: %1$s logfile, messageset, messageid [default_message] ERROR: Must specify a stdout or stderr flag. USAGE: %1$s { -1 | -2 } [default_message]
Usage: cl_mklvcopy [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-a Position] [-e Range] [-k] [-u UpperBound] [-s Strict] [-m MapFile] [-R ReferenceNode] LogicalVolume Copies [PhysicalVolume...] %1$s: Error executing mklvcopy %2$s on node %3$s Usage: cl_mklv [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-a Position] [-b BadBlocks] [-c Copies] [-d Schedule] [-e Range] [-i] [-L Label] [-r Relocate] [-s Strict] [-t Type] [-u UpperBound] [-v Verify] [-w MirrorWriteConsistency] [-x MaxLPs] [-y NewLogicalVolume | -Y Prefix] [-S StripeSize] [-R ReferenceNode] VolumeGroup NumberOfLPs [PhysicalVolume...] %1$s: Error executing mklv %2$s on node %3$s Usage: cl_mklv [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-a Position] [-b BadBlocks] [-c Copies] [-d Schedule] [-e Range] [-i] [-L Label] [-r Relocate] [-s Strict] [-t Type] [-u UpperBound] [-v Verify] [-w MirrorWriteConsistency] [-x MaxLPs] [-y NewLogicalVolume | -Y Prefix] [-S StripeSize] [-R ReferenceNode] [-U userid] [-G groupid] [-P modes] VolumeGroup NumberOfLPs [PhysicalVolume...] 
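The -cspoc messages above describe a two-level option convention: C-SPOC wrapper options travel inside a single quoted -cspoc argument, separate from the wrapped LVM command's own arguments, and at most one -cspoc is allowed. A minimal, hypothetical sketch of that splitting (parse_cspoc is an invented helper name, not the real C-SPOC implementation):

```shell
# Hypothetical sketch (not the real C-SPOC code): split a single
# -cspoc "..." argument from the wrapped command's own arguments,
# and reject the flag combinations the messages above describe.
parse_cspoc() {
    CSPOC_OPTS=""
    CMD_ARGS=""
    while [ $# -gt 0 ]; do
        case "$1" in
        -cspoc)
            if [ -n "$CSPOC_OPTS" ]; then
                echo 'can only have a single "-cspoc" on command line' >&2
                return 1
            fi
            if [ $# -lt 2 ]; then
                echo '"-cspoc" requires an argument' >&2
                return 1
            fi
            CSPOC_OPTS=$2
            shift 2
            ;;
        *)
            CMD_ARGS="$CMD_ARGS $1"
            shift
            ;;
        esac
    done
    # -g (resource group) and -n (node list) are mutually exclusive
    case " $CSPOC_OPTS " in
    *" -g "*" -n "* | *" -n "*" -g "*)
        echo "cannot mix C-SPOC's -g and -n flags" >&2
        return 1
        ;;
    esac
    echo "cspoc:$CSPOC_OPTS cmd:${CMD_ARGS# }"
}
```

The sketch only illustrates the convention; the real wrappers also validate node names, cluster membership, and software levels, as the surrounding messages show.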
Usage: cl_mklv [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-a Position] [-b BadBlocks] [-c Copies] [-d Schedule] [-e Range] [-i] [-L Label] [-m MapFile] [-r Relocate] [-s Strict] [-t Type] [-u UpperBound] [-v Verify] [-w MirrorWriteConsistency] [-x MaxLPs] [-y NewLogicalVolume | -Y Prefix] [-S StripeSize] [-R ReferenceNode] [-U userid] [-G groupid] [-P modes] VolumeGroup NumberOfLPs [PhysicalVolume...] Enter physical disk names WARNING: Encryption for volume group "%1$s" is enabled, but the logical volume "%2$s" is not encrypted. To enable encryption for the logical volume, you can run "%3$s %2$s [...]" or use "Change a Logical Volume" from the %4$s menu. Usage: cl_extendvg [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-f] -R ReferenceNode VolumeGroup PhysicalVolume...
%1$s: Error executing extendvg %2$s on node %3$s 
Physical Volume IDs
Disk %1$s with PVID %2$s is not valid on all nodes Disk %1$s is already in volume group %2$s ERROR: Failed to register and reserve disk %1$s using SCSI persistent reserve from node %2$s.
Usage: cl_extendvg [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-f] [-p MirrorPool] [-t StorageLocation] VolumeGroup PhysicalVolume...
Usage: cl_mirrorvg [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-S | -s] [-Q] [-c Copies] [-m] [-R ReferenceNode] VolumeGroup [PhysicalVolume...]
%1$s: Error executing mirrorvg %2$s on node %3$s %1$s: Error executing lsvg -l %2$s on node %3$s %1$s: No logical volumes in the volume group %2$s to mirror %1$s: Number of copies must be either 2 or 3 Usage: cl_unmirrorvg [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-c Copies] [-R ReferenceNode] VolumeGroup [PhysicalVolume...]
%1$s: Error executing unmirrorvg %2$s on node %3$s Usage: cl_unmirrorvg [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-c Copies] [-R ReferenceNode] [-p MirrorPoolName] VolumeGroup [PhysicalVolume...]
Usage: cl_rmlvcopy [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-R ReferenceNode] LogicalVolume Copies [PhysicalVolume...] 
Usage: cl_rmlvcopy [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-R ReferenceNode] [-p MirrorPoolName] LogicalVolume Copies [PhysicalVolume...] Usage: cl_reducevg [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-f] -R ReferenceNode VolumeGroup PhysicalVolume...
%1$s: Error executing reducevg %2$s on node %3$s %1$s: Could not export volume group %2$s %1$s: Error executing reducevg -d %2$s %3$s %4$s on node %5$s %1$s: The reducevg command cannot be completed because %2$s is not valid %1$s: The reducevg command cannot be completed because %2$s is in use for logical volume(s) %3$s. Either remove all logical volumes from disk %4$s or specify the force flag Usage: cl_importvg [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] -R ReferenceNode [-V MajorNumber] -y VolumeGroup [-c] [-x] PhysicalVolume %1$s: Error executing importvg %2$s on node %3$s %1$s: Volume group %2$s has been imported. %1$s: Volume group %2$s has been updated. %1$s: Volume group %2$s is up-to-date. No action taken. %1$s: Volume group %2$s is varied on in concurrent mode on nodes %3$s. Volume group %1$s has been imported. Volume group %1$s has been updated. Volume group %1$s is up-to-date. No action taken. %1$s: This operation is not permitted on an SP with usermgmt_config set to 'true'. %1$s: User id %2$s already exists on nodes %3$s. %1$s: IDs above %2$s are not supported. %1$s: Next available id (%2$s) is greater than the supported limit (%3$s). %1$s: Error executing clchkspuser on node %2$s %1$s: Error executing lsuser -a id ALL %1$s: Error executing lsgroup -a id ALL %1$s: Group id %2$s already exists on nodes %3$s Usage: cl_syncvg [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-f] [-i] [-H] [-P NumParallelLps] {-l|-v|-p} Name Usage: cl_extendlv [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-a Position] [-e Range] [-u UpperBound] [-s Strict] [-R ReferenceNode] LogicalVolume Partitions [PhysicalVolume...] 
Usage: cl_extendlv [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-a Position] [-e Range] [-u UpperBound] [-s Strict] [-m MapFile] [-R ReferenceNode] LogicalVolume Partitions [PhysicalVolume...] Usage: cl_lspvsmit [-cspoc "[-f] [-g ResourceGroup]"] [-a] [-s] {-l|-v} Name Usage: cl_mkvg [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-d MaxPVs] [-B] [-G] [-f] [-c | -C] [-l true | false] [-x] [-i] [-s PPsize] [-n] [-m MaxPVsize | -t factor] [-r ResourceGroup] [-E] [-V MajorNumber] -f -y VGname PhysicalVolumes ... Invalid list of nodes. Error getting PVIDs from nodes. %1$s: Invalid PVID %2$s - the PVID either does not exist, or is part of an existing volume group Volume Group Name not valid. No free disks found. Unable to reach one of the nodes. Unable to reach one of the nodes. Continuing... Maximum of 8 nodes allowed for disk operations. Selected PVIDs require that an Enhanced Concurrent Mode volume group be created. Change the Enhanced Concurrent Mode option on the previous screen to 'true'. %1$s: Volume Group Name %2$s already exists on nodes %3$s %1$s: An error occurred executing mkvg on node %2$s %1$s: An error occurred executing mkvg on node %2$s ..Continuing.. %1$s: Discovering Volume Group Configuration... 
%1$s: Volume Group Discovery failed
all cluster nodes
There are no disks available for disk heart beat
%1$s: claddres -g %2$s failed.
%1$s on all selected nodes
%1$s on nodes %2$s at site %3$s
%1$s on nodes %2$s
%1$s on node %2$s at site %3$s
%1$s on node %2$s
all selected nodes
%1$s: Unable to obtain volume group names from cluster node %2$s %1$s: Volume group created may not have a unique name %1$s: Error attempting to add a concurrent volume group %2$s to a non-concurrent Resource Group %3$s %1$s: Error attempting to add a concurrent volume group %2$s to a concurrent Resource Group %3$s %1$s: An error occurred executing mkvg %2$s on node %3$s %1$s: Add of volume group %2$s to resource group %3$s failed %1$s: The PowerHA SystemMirror configuration has been changed - %2$s %3$s has been added. The configuration must be synchronized to make this change effective across the cluster %1$s: Cross site mirroring set up failed for volume group %2$s in %3$s 
Legacy Original Big Scalable
%1$s: Set forced varyon for resource group %2$s failed 
Check each node to see if any disks need to have PVIDs allocated %1$s: Volume group %2$s fence height could not be set to %3$s %1$s: Marking volume group %2$s in resource group %3$s as CRITICAL failed 
%1$s on all nodes at site %2$s
%1$s: Volume group %2$s fence height could not be set to %3$s on node %4$s %1$s: Unable to vary on volume group %2$s on node %3$s %1$s: Unable to determine the logical volumes in volume group %2$s %1$s: Unable to determine the disks in volume group %2$s ERROR: Failed to register and reserve volume group %1$s using SCSI persistent reserve. Usage: cl_mkvg [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-d MaxPVs] [-B] [-G] [-f] [-c | -C] [-l true | false] [-x] [-i] [-s PPsize] [-P MaxPPs] [-n] [-m MaxPVsize | -t factor] [-r ResourceGroup] [-E] [-V MajorNumber] -f -y VGname PhysicalVolumes ... 
Usage: cl_mkvg [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-d MaxPVs] [-B] [-G] [-f] [-c | -C] [-l true | false] [-x] [-i] [-s PPsize] [-P MaxPPs] [-n] [-m MaxPVsize | -t factor] [-r ResourceGroup] [-E] [-V MajorNumber] -f [-M y|s] [-p mirror pool name ] [-T StorageLocation] -y VGname PhysicalVolumes ... ERROR: Cannot add more than %1$s volume groups to a resource group: %2$s ERROR: Configuration is in migration and cannot be modified until migration is complete. INFO: The following default policies are used for the resource group during volume group creation. You can change them using the modify resource group policy option. Startup Policy as '%1$s'. Fallover Policy as '%2$s'. Fallback Policy as '%3$s'. %1$s: Adding concurrent access policy volume group %2$s to non-concurrent resource group %3$s is not supported, so concurrent access is being removed and Fast Disk Takeover enabled to match the resource group startup policy. %1$s: Adding Fast Disk Takeover policy volume group %2$s to concurrent resource group %3$s is not supported, so Fast Disk Takeover is being removed and concurrent access enabled to match the resource group startup policy. Usage: cl_mkvg [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-d MaxPVs] [-B] [-G] [-c | -C] [-l true | false] [-x] [-i] [-s PPsize] [-P MaxPPs] [-n] [-m MaxPVsize | -t factor] [-r ResourceGroup] [-E] [-V MajorNumber] -f [-M y|s] [-p mirror pool name ] [-T StorageLocation] [-k {y|n}] -y VGname PhysicalVolumes %1$s: claddgrp failed - could not create resource group %2$s Usage: cl_lsfreelvs [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] Error getting free filesystems from nodes. No free logical volumes of type %1$s found No free logical volumes found There are no free %1$s logical volumes Usage: cl_crfs [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] -v VfsType {-g VolumeGroup | -d Device} -m MountPoint [-u MountGroup] [-A {yes|no}] [-t {yes|no}] [-p {ro|rw}] [-l LogPartitions] [-n NodeName] [[-a Attribute=Value]...] 
Invalid list of nodes. Failed to get a list of free logical volumes. No free logical volumes to create a filesystem. Cannot find parent volume group of logical volume [%1$s]. Invalid volume group [%1$s]. Error locating volume group %1$s on node %2$s. Cannot reach node %1$s. Continuing anyway. No node has access to volume group %1$s. Error executing "%1$s" on node %2$s. Failed to create a filesystem on node %1$s. Missing volume group name. Failed creating logical volume on volume group %1$s. Failed to obtain partition size of volume group %1$s. Not enough space in volume group %1$s. SIZE of file system (in 512-byte blocks) must be specified. Invalid file system size. Failed to get a list of volume groups. No volume groups. Invalid volume group [%1$s]. Missing logical volume name. No valid cluster nodes. cl_crfs: logical volume %1$s is in use on node %2$s. Usage: cllsdev -cspoc "[-f] [-n NodeList]" Usage: cl_rmdisk [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] -l PVID [-d] %1$s: %2$s: %3$s does not have a Defined disk associated with it. %1$s: %2$s: Can not remove %3$s, it is part of a volume group. Usage: cl_mkdisk -cspoc "[-f] [-n NodeList]" -c disk -t -s -p -w or Usage: cl_mkdisk -cspoc "[-f] [-n NodeList]" -c disk -t -s -p -w -a cl_mkdisk: Invalid list of nodes cl_mkdisk: Invalid disk subclass. Only 'scsi' and 'ssa' are supported Usage: cl_crlvfs -cspoc "[-f] [-n Nodelist | -g ResourceGroup]" -v Vfs -g VolumeGroup -m Mountpoint -a size=Value [[-a Attribute=Value]...] 
[-u MountGroup] [-A {yes|no}] [-t {yes|no}] [-p {ro|rw}] [-l Logpartitions] Volume Group %1$s not found on nodes %2$s Volume Group name %1$s not unique among nodes, or volume group descriptions not consistent across nodes: %2$s Unable to reach all cluster nodes Logical volume(s) created may not have unique name(s) Volume Group descriptions will not be imported on nodes: %1$s Error: Volume group %1$s varied-on on multiple nodes: %2$s Volume group %1$s not varied-on on an accessible node Unable to obtain logical volume names from cluster node %1$s Failure obtaining varyon status from node %1$s Unable to determine volume group status on nodes: %1$s No node available for varying on volume group %1$s Error varying-on volume group %1$s Error getting volume group information %1$s Error creating log logical volume %1$s Error formatting log logical volume %1$s Error creating logical volume %1$s Error creating filesystem Error unlocking volume group %1$s Error importing volume group %1$s Error varying-off volume group %1$s Error re-locking volume group %1$s Exiting due to errors. User action required to correct or complete changes. This command must be run from a node with an active Concurrent Logical Volume Manager (gsclvmd) subsystem. Select a %1$s logical volume from the list below Or leave blank to have a new one chosen automatically Volume group %1$s has no %2$s logical volumes Leave blank to have a new one chosen automatically %1$s: File system size %2$s is invalid. It must be a number, optionally followed by 'M' or 'G' %1$s: Mount point %2$s is already in use on node %3$s %1$s: The key storage mechanism for PowerHA SystemMirror support of encrypted file systems has not been configured. File system %2$s cannot be created as an encrypted file system. Configure the key storage mechanism, and retry the operation. %1$s: The key storage mechanism for PowerHA SystemMirror support of encrypted file systems has been set to 'LDAP' but no LDAP server has been configured. 
File system %2$s cannot be created as an encrypted file system. Configure an LDAP server, and retry the operation. %1$s: The key storage mechanism for PowerHA SystemMirror support of encrypted file systems has been set to 'Shared File System'. This requires that any resource group that holds an encrypted file system be brought online after the resource group 'EFS_KeyStore'. However, %2$s, the volume group holding %3$s, is not a member of a resource group. Add %4$s to some resource group, and retry the operation. %1$s: The resource group %2$s containing %3$s and %4$s could not be set to start after 'EFS_KeyStore'. %1$s: The configured key storage mechanism is not valid. PowerHA SystemMirrorManage Cluster EnvironmentStart Cluster Services on these nodesManage Node EnvironmentShow EnvironmentStop Cluster Services on these nodesManage Cluster ServicesRecover From Script FailureCluster RAS SupportManage Application ServersApplication ServersAdd a Cluster DefinitionChange a Cluster DefinitionDelete a Cluster DefinitionAdd a Cluster NodeAssociate a Network Adapter with a Cluster NodeChange Cluster Node/Network Adapter AttributesDelete a Cluster NodeDissociate a Network Adapter from a Cluster NodeSelect Application ServerShow Application ServersServer NameAdd Application ServerNew Server NameChange Application ServerStart ScriptRemove Application ServerStop ScriptStart Cluster ServicesStop Cluster ServicesShow Cluster ServicesChange Resource AllocationShow Cluster TopologyShow Cluster DefinitionsShow Cluster EnvironmentShow Topology Information by NodeShow Topology Information by Network NameShow Topology Information by Network AdapterShow All Clusters DefinedSelect a Cluster to ShowShow All NodesSelect a Node to ShowShow All NetworksSelect a Network to ShowShow All AdaptersSelect an Adapter to ShowShow Node EnvironmentVerify EnvironmentRemove Node EnvironmentChange/Show Cluster EventsArchive File NameError CountVerify Cluster Networks, Resources, or BothConfigure Node 
EnvironmentConfigure Run Time ParametersConfigure Owned ResourcesConfigure Rotating ResourcesConfigure Take Over ResourcesDelete Node EnvironmentView PowerHA SystemMirror Log FilesTrace FacilityError NotificationEnable/Disable Tracing of PowerHA SystemMirror DaemonsStart/Stop/Report Tracing of PowerHA SystemMirror ServicesAdd/Change/Show/Delete a Notify MethodAdd a Notify MethodChange/Show a Notify MethodDelete a Notify MethodScan the PowerHA SystemMirror Scripts Log FileWatch the PowerHA SystemMirror Scripts Log FileScan the PowerHA SystemMirror System Log FileWatch the PowerHA SystemMirror System Log FileSynchronize all Cluster NodesSelect adapter to showSelect node to showSelect network to showSelect cluster to deleteSelect node to deleteSelect cluster to changeSelect a Node ID to Add to ClusterSelect a Node ID to which to add a Network AdapterSelect a Node ID to ConfigureSelect a Shared IP Label to ConfigureSelect Owner Node ID to ConfigureSelect node to changeSelect Take Over Node ID to ConfigureSelect node Id to delete its environmentSelect network adapter to changeSelect network adapter to dissociateConfigure Node EnvironmentSelect Node RoleSelect Notification Object NameSelect Scripts Log File NameStart Cluster ServicesStop Cluster ServicesShow Cluster ServicesChange Resource AllocationSet Cluster Identification ValuesChange Cluster Identification ValuesDelete a Cluster DefinitionAdd a Node or Network Adapter to the ClusterChange Attributes of Cluster Node or AdapterRemove a Node from the ClusterRemove a Network Adapter from the ClusterShow Cluster DefinitionShow Definitions for All ClustersShow Definition for Selected ClusterShow Information for All NetworksShow Information for Selected NetworkShow Information for All Network AdaptersShow Information for Selected Network AdapterShow Cluster Node AttributesShow Cluster Node AttributesDelete Node EnvironmentAdd/Change/Show Hot Standby Configuration (Active Node)Add/Change/Show Hot Standby Configuration 
(Standby Node)Add/Change/Show Rotating Standby ConfigurationAdd/Change/Show One-Sided Takeover Configuration (Primary Active Node)Add/Change/Show One-Sided Takeover Configuration (Secondary Active/Standby Node)Add/Change/Show Mutual Takeover Configuration (Primary Active Node)Add/Change/Show Mutual Takeover Configuration (Secondary Active Node)Add/Change/Show Third-Party Takeover Configuration (Primary Active Node)Add/Change/Show Third-Party Takeover Configuration (Secondary Active Node)Add/Change/Show Third-Party Takeover Configuration (Standby Node)Start on local node, remote node, or both Start now, on system restart or bothStop on local node, remote node, or both Select nodes on which to run command Select nodes by Resource Group *** No selection means all nodes! *** BROADCAST message at startup?Startup Cluster Manager?Startup Cluster Lock Services?Startup Cluster Information Daemon?Stop now, on system restart or bothBROADCAST cluster shutdown?Lock Segment SizeResource Segment SizePre-Allocated Resource SegmentsPROMPT for confirmation before cluster shutdown?Shutdown mode (graceful or graceful with takeover)How many seconds to delay before shutdown? 
(Enter 00 for IMMEDIATE shutdown)**NOTE: Cluster Manager MUST BE RESTARTED in order for changes to be acknowledged.** Node Adapter IP LabelCommandArgumentsCluster IDCluster NameName of cluster.cf file If not specified, default cluster.cf shownCluster NetworksNetwork to showCluster Network EntryPre-event CommandDescriptionEvent CommandNotify CommandSelect Event Name to ChangePost-event CommandEvent NameEvent Recovery CommandRecovery CounterNode IDShared IP LabelSelect Owner Node IdOwner Node IDTake Over Node IDAdapter IP labelSelect Take Over Node IdNode ID to delete environmentSelect Node IdAdapter functionNetwork nameNetwork attributeAdapter IP addressNode ID to be changedAdapter to be changedNew Node IDNew Adapter IP labelNode ID to be deletedNode IDCluster NodesNode to showCluster Node EntryAdapter to be dissociatedAdapter IP Label to dissociateNetwork AdaptersAdapter to showCluster Adapter EntryID of cluster to be changedCluster IDNew Cluster IDCluster nameNew Cluster nameCluster ID to deleteCluster IDCluster DefinitionsConfigure Node EnvironmentSelect Node RoleNode ID for local nodeNode ID for remote nodeConcurrent Volume groupsNode ID for local (active) nodeNode ID for local (standby) nodeNode ID for remote (standby) nodeNode ID for remote (active) nodeNode ID for active primary nodeNode ID for active secondary nodeService IP label for local nodeVolume groups owned by active serverService IP label for active serverDisks owned by active serverDisksDisks owned by active primary nodeDisks owned by remote nodeDisks owned by active secondary nodeVolume groupsService IP labelVolume groups owned by remote nodeVolume groups owned by active secondary nodeFilesystemsFilesystems owned by active primary nodeFilesystems owned by remote nodeFilesystems owned by active secondary nodeFilesystems to be exportedFilesystems to be exported by primary nodeFilesystems to be exported by remote nodeFilesystems to be exported by secondary nodeFilesystems to be NFS mountedFilesystems 
mounted by primary from secondaryFilesystems mounted by secondary from primaryFilesystems owned by active serverFilesystems to be exported by active serverNumber of times to retry an NFS mountThe following four entries are requiredThe following five entries are requiredThe following six entries are requiredThe following seven entries are requiredThe following eight entries are requiredonly for IP Address Takeover / Reintegration:Participate in IP Address Takeover?Service interface for local nodeStandby interface to masquerade as primaryStandby interface to masquerade as secondaryBoot IP label for local nodeService IP label for active primary nodeService IP label for remote nodeService IP label for active secondary nodeStandby IP label for local nodeService IP label for standby nodeStandby interface for local nodeStandby IP label to masquerade as primaryStandby IP label to masquerade as secondaryNetmask for local nodeTurn on disk fencing?Debug LevelTakeover for inactive nodeHost uses NIS or Name ServerStart Cluster Lock Manager at cluster start?Start PowerHA SystemMirror/6000 demo software at cluster start?Directory containing images for PowerHA SystemMirror/6000 demoConcurrent Access Volume GroupsNotification Object NameProcess ID for use by Notify MethodPersist across system restart?Match Alertable errors?Select Error IDSelect Error LabelNotify MethodSelect Error ClassSelect Error TypeResource NameResource ClassResource TypeNotification Object NameProcess ID for use by Notify MethodPersistence across system restart?Select Error ClassSelect Error TypeMatch ALERTable errors?Select Error IDSelect Error LabelResource NameResource ClassResource TypeNotify Methodfalse,trueyeshigh,medium,lowservice,standby,boot,sharedpublic,private,serialnow,restart,bothlocal,remote,bothtruegraceful,takeoverYes,NoIgnore,All,True,FalseIgnore,All,Hardware,Software,ErrloggerIgnore,All,PEND,PERM,PERF,TEMP,UNKNnetworks,resources,bothAdd a User to the ClusterChange / Show Characteristics of a 
User in the ClusterRemove a User from the ClusterList Users in the ClusterAdd a Group to the ClusterRemove a Group from the ClusterList Groups in the ClusterList All Cluster UsersAdd a User to the ClusterUser NAMEADMINISTRATIVE User?Change / Show Characteristics of a User in the ClusterUser IDLOGIN User?PRIMARY GroupGroup setADMINISTRATIVE GroupsSU GroupsHOME DirectoryInitial PROGRAMUser informationAnother user CAN SU to user?User CAN RLOGIN?User CAN TELNET?Trusted path?Valid TTYsAUDIT classesPRIMARY authentication methodSECONDARY authentication methodMax FILE SizeMax CPU TimeMax DATA SegmentMax STACK SizeMax CORE File SizeMax physical MEMORYFile creation MASKEXPIRATION date (MMDDhhmmyy)Remove a User from the ClusterRemove AUTHENTICATION information?List All Groups on the ClusterAdd a Group to the ClusterGroup NAMEADMINISTRATIVE group?Change Group Attributes on the ClusterGroup IDUSER listADMINISTRATOR listRemove a Group from the ClusterList All Volume Groups in the ClusterUser must change password on first login?Select nodes by resource group *** No selection means all nodes! 
*** Original AIX System Command,Link to Cluster Password Utility,Unable to determine state/bin/passwd utility isF4 lists all users defined to all cluster nodes (some system-defined users are not listed)Users allowed to change password cluster-wideChange Current Users PasswordModify System Password UtilityManage List of Users Allowed to Change PasswordList Users Allowed to Change PasswordChange/Show characteristics of a Volume GroupSelect the Volume Group to Change or ShowSelect the Volume Groups whose Logical Volumes will be ListedSelect the Volume Group to UpdateSelect the Volume Group that will hold the new Logical VolumeSelect the Volume Group to Enable for Fast Disk TakeoverFile containing ALLOCATION MAPSerialize I/O?Make first block available for applications?Select the Volume Group that holds the Logical Volume to DisplaySelect the Volume Group that holds the Logical Volume to RenameSelect the Logical Volume to RenameSelect the Volume Group that holds the Logical Volume to ExtendSelect the Logical Volume to ExtendSelect the Physical Volumes onto which the Logical Volume is ExtendedSelect the File System to RemoveSelect the Volume Group that holds the Logical Volume to Add a CopySelect the Logical Volume to Add a CopySelect the Physical Volumes to Hold the New CopySelect the Volume Group that holds the Logical Volume to have a Copy RemovedSelect the Logical Volume to have a Copy RemovedSelect the Physical Volumes from which the Copy will be RemovedSelect the Volume Group that holds the Logical Volume to be ChangedSelect the Logical Volume to ChangeSelect the Volume Group that holds the Logical Volume to be RemovedSelect the Logical Volume to RemoveSelect the File System to Show or ChangeSelect the Volume Group to hold the new File SystemSelect the type of File System to AddSelect the Logical Volume to Hold the new File SystemAdd the new File SystemTo Add a new File System to volume group %1$s, you must either choose toCreate a new Logical Volume for this File 
SystemOr select an existing logical volume from the list belowSelect the Volume Group to MirrorSelect the Volume Group to UnmirrorSelect the Volume Group to SynchronizeSelect the Volume Group that holds the Logical Volume to SynchronizeSelect the Volume Group to ReduceSelect the Physical Volume to Remove# Reference Node Physical Volume Name Select the Volume Group to ImportSelect the Physical Volume to ImportSelect the Volume Group to ExtendSelect the Physical Volumes to hold the new Logical VolumeSelect the Physical Volumes to hold the new Volume Group MirrorsSelect the Physical Volumes to no longer hold the Volume Group MirrorsSelect the Logical Volume to SynchronizeSelect the Physical Volumes to Add to the Volume GroupLogical volume NAMELogical volume TYPEPOSITION on physical volumeRANGE of physical volumesMAXIMUM NUMBER of PHYSICAL VOLUMES to use for allocationNumber of copies allocated for each logical partitionAllocate each logical partition copy on a SEPARATE physical volume?RELOCATE the logical volume during reorganization?MAXIMUM NUMBER of LOGICAL PARTITIONSPERMISSIONSEnable BAD BLOCK relocation?SCHEDULING POLICY for writing logical partition copiesEnable WRITE VERIFY?Logical volume LABELRename a Logical VolumeCURRENT logical volume nameNEW logical volume nameno,yesChange Characteristics of a Physical VolumePhysical volume NAMEAllow physical partition ALLOCATION?Physical volume STATEactive,not activeChange a Volume GroupVOLUME GROUP nameActivate volume group AUTOMATICALLY at system restart?Copy a Logical VolumeSOURCE logical volume nameHow is the DESTINATION logical volume specified?DESTINATION logical volumeDestination VOLUME GROUP namenew logical volume name,overwrite existing logical volume,system assigned logical volume nameExport a Volume GroupIncrease the Size of a Logical VolumeLOGICAL VOLUME nameNumber of ADDITIONAL logical partitionsPHYSICAL VOLUME namesFile containing ALLOCATION MAPAdd a Physical Volume to a Volume GroupImport a Volume 
GroupACTIVATE volume group after it is imported?Show Characteristics of a Logical Volume in the ClusterList OPTIONstatus,physical volume map,logical partition mapList Contents of a Physical VolumePHYSICAL VOLUME nameSHARED VOLUME GROUP namestatus,logical volumes,physical partitionsList All Physical VolumesList Contents of a Volume Groupstatus,logical volumes,physical volumesList All Logical Volumes by Volume GroupList All Volume GroupsList only the ACTIVE volume groups?Move Contents of a Physical VolumeSOURCE physical volume nameDESTINATION physical volumesMove only data belonging to this LOGICAL VOLUME?Add a Logical VolumeNumber of LOGICAL PARTITIONSLogical volume name PREFIXNumber of COPIES of each logical partitionMAXIMUM NUMBER of LOGICAL PARTITIONSAdd Copies to a Logical VolumeNEW TOTAL number of logical partition copiesSYNCHRONIZE the data in the new logical partition copies?Add a Volume GroupPhysical partition SIZE in megabytesACTIVATE volume group after it is created?Volume Group MAJOR NUMBERRemove a Physical Volume from a Volume GroupFORCE deallocation of all partitions on this physical volume?Remove a Volume GroupRename a Logical VolumeCURRENT logical volume nameNEW logical volume nameReorganize a Volume GroupLOGICAL VOLUME namesRemove a Logical Volume from the ClusterRemove Copies from a Logical VolumeNEW maximum number of logical partition copiesDeactivate a Volume GroupPut volume group in SYSTEM MANAGEMENT mode?Activate a Volume GroupRESET device configuration database?RESYNCHRONIZE stale physical partitions?Activate volume group in SYSTEM MANAGEMENT mode?no,yes,on errorFORCE activation of the volume group? 
Warning--this may cause loss of data integrity.jfs,jfslog,paging,boot,copy,sysdumpouter_edge,outer_middle,center,inner_middle,inner_edgeminimum,maximumparallel,sequentialread/write,readonlyMirror Write Consistency?Synchronize a Volume GroupChange a Logical Volume on the ClusterCluster Concurrent Logical Volume ManagerVolume GroupsSynchronize LVM MirrorsList All Concurrent Volume Groups on the ClusterConcurrent Volume GroupsConcurrent Logical VolumesSynchronize Concurrent LVM MirrorsSet Characteristics of a Volume GroupImport a Volume GroupMirror a Volume GroupUnmirror a Volume GroupSynchronize LVM MirrorsSet Characteristics of a Concurrent Volume GroupImport a Concurrent Volume GroupMirror a Concurrent Volume GroupUnmirror a Concurrent Volume GroupAdd a Physical Volume to a Volume GroupRemove a Physical Volume from a Volume GroupAdd a Physical Volume to a Concurrent Volume GroupRemove a Physical Volume from a Concurrent Volume GroupAdd a Logical VolumeSet Characteristics of a Logical VolumeShow Characteristics of a Logical VolumeList All Concurrent Logical Volumes by Volume GroupShow Characteristics of a Concurrent Logical VolumeAdd a Concurrent Logical VolumeAdd a Copy to a Concurrent Logical VolumeRemove a Copy from a Concurrent Logical VolumeRemove a Concurrent Logical VolumeRename a Logical VolumeIncrease the Size of a Logical VolumeAdd a Copy to a Logical VolumeRemove a Copy from a Logical VolumeVolume Group NamesPhysical Volume NamesVolume group MAJOR NUMBERMake this VG Concurrent Capable?Make default varyon of VG Concurrent?Mirror sync modeKeep Quorum Checking On?Create Exact LV Mapping?Logical volume NAMEStripe Size?Logical Volume NamesPHYSICAL VOLUME name(s) to remove copies fromSet Characteristics of a Concurrent Logical VolumeConcurrent Logical Volume NamesSynchronize by Volume GroupSynchronize by Logical VolumeSynchronize LVM Mirrors by Volume GroupSynchronize LVM Mirrors by Logical VolumeSynchronize Concurrent LVM Mirrors by Volume GroupSynchronize 
Concurrent LVM Mirrors by Logical VolumeNumber of Partitions to Sync in ParallelSynchronize All PartitionsDelay Writes to VG from other cluster nodes during this SyncReference nodeConcurrent Volume Group Namesno,yes,superstrictEnable a Volume Group for Fast Disk Takeover or Concurrent AccessEnable Fast Disk Takeover or Concurrent AccessList all Logical Volumes by Volume GroupVolume Group TypeForce RemoveEnable LVM EncryptionAuth MethodMethod DetailsAuth Method NameRemove Auth Methodyes,noVerify a File SystemNAME of file system -OR-TYPE of file system FAST check?SCRATCH file (must not be on the file system being checked)List All Mounted File Systems on the ClusterMount a File SystemFILE SYSTEM nameDIRECTORY over which to mountFORCE the mount?Request an INHERITED mount?REMOTE NODE containing the file system to mountMount as a REMOVABLE file system?Mount as a READ-ONLY system?Unmount a File SystemNAME of file system to unmountREMOTE NODE containing the file system(s) to unmountTYPE of file systemUnmount ALL mounted file systems? (except /, /tmp, /usr) -OR-Unmount all REMOTELY mounted file systems? 
Mount a Group of File SystemsGROUP nameUnmount a Group of File SystemsGROUP nameList All File Systems on the ClusterAdd a Journaled File SystemVolume Group NameVolume group nameSIZE of file system Unit Size Number of UnitsMOUNT POINTMount AUTOMATICALLY at system restart?PERMISSIONSChange / Show Characteristics of a File System in the ClusterFile System NameFile system nameNEW mount pointMount GROUPVolume group nameRemove a Journaled File System from the ClusterAdd a CDROM File SystemDEVICE nameChange / Show Characteristics of a CDROM File SystemAdd a Journaled File System on a Previously Defined Logical VolumeLOGICAL VOLUME nameRemove a CDROM File Systemread only,read/writeMount OPTIONSDisallow DEVICE access via this mount?Disallow execution of SUID and SGID programs in this file system?Remove Mount PointFile SystemThere are no shared %1$s file systems File system %1$s not found Encrypted File SystemSystem ManagementInstallation and MaintenanceDevicesPhysical & Logical StorageCluster Users & GroupsPowerHA SystemMirror Cluster ServicesSpooler (Print Jobs)Problem DeterminationPerformance & Resource SchedulingSystem EnvironmentsApplicationsUsing SMIT (information only)Select the Keyboard Map for Next System RestartAssign the ConsoleChange / Show Date and TimeInstall and Update Optional Program ProductsInventory / Vital Product DataMaintenanceSet System Run LevelConfigure Devices Added After IPLManage Local PrintersTTYPTYFixed DiskCD ROM DriveDiskette DriveTape DriveCommunicationHigh Function Terminal (HFT)List All Supported DevicesList All Defined DevicesFile SystemsPaging SpaceCluster Logical Volume ManagerUsersGroupsChange User PasswordStart a Print JobCancel a Print JobShow the Status of Print JobsPrioritize a Print JobSchedule JobsManage Local Printers, Queues, and Queue DevicesManage Remote Printer SubsystemError LogTraceSystem DumpHardware DiagnosticsVerify an Optional Program ProductCurrent Shell Diagnostics This selection invokes the "diag" command in the 
current shell. No users or applications will be affected. If the device you want to test is not listed, or if you want to do further analysis, then select the Maintenance Shell Diagnostics. Maintenance Shell Diagnostics This selection invokes the "cd /;shutdown -m" command, which brings the system down to the Maintenance Shell. During a shutdown, all users are notified of the impending shutdown, and are given 60 seconds to log off. The following steps describe how to use the Maintenance Shell Diagnostics. * Login as "root" at the default console (/dev/console). * Enter the "diag" command. * Enter "Ctrl d" to return to the default user shell.Report System ActivityList All Scheduled JobsSchedule a JobRemove a Job from the ScheduleStop the SystemChange / Show Date, Time, and Time ZoneChange Language EnvironmentChange Number of Licensed UsersManage ProcessesBroadcast Message to all UsersList All Options on MediaList All Pending UpdatesInstall Optional Program Products with UpdatesInstall Optional Program ProductsCommit Verified Updates (Replace Previous Versions)Update Optional Program ProductsReject Updates and Use Previous VersionsList All Problems Fixed by UpdatesCreate Backup Format for Later InstallationClean up after Install FailureShow System LevelList All Defined DevicesList All Installed Optional Program ProductsShow History of an Optional Program ProductShow Prerequisites of an Optional Program ProductShow Dependencies of an Optional Program ProductShow ID of an Optional Program ProductShow Files Included in an Optional Program ProductPrinter/PlotterManage Local Printer SubsystemList All Defined TTYsAdd a TTYMove a TTY to Another PortChange / Show Characteristics of a TTYRemove a TTYConfigure a Defined TTYGenerate Error ReportTrace a TTYChange / Show Characteristics of the PTYRemove the PTY; Keep DefinitionConfigure the Defined PTYGenerate Error Report for the PTYTrace the PTYList All Defined DisksList All Supported DisksAdd a DiskChange / Show 
Characteristics of a DiskRemove a DiskConfigure a Defined DiskGenerate Error Report for a DiskTrace a DiskList All Defined CD ROM DrivesList All Supported CD ROM DrivesAdd a CD ROM DriveChange / Show Characteristics of CD ROM DriveRemove a CD ROM DriveConfigure a Defined CD ROM DriveGenerate an Error Report for a CD ROM DriveTrace a CD ROM DriveList All Defined Diskette DrivesAdd a Diskette DriveChange / Show Characteristics of a Diskette DriveRemove a Diskette DriveConfigure a Defined Diskette DriveGenerate Error Report for a Diskette DriveTrace a Diskette DriveList All Defined Tape DrivesList All Supported Tape DrivesAdd a Tape DriveChange / Show Characteristics of a Tape DriveRemove a Tape DriveConfigure a Defined Tape DriveGenerate an Error Report for a Tape DriveTrace a Tape DriveEthernet AdapterToken Ring AdapterMultiprotocol Adapter3270 Connection Adapter5085/86/88 Attachment AdapterX.25 AdapterKeyboardDisplaysFontsMouseTabletSpeakerVolume GroupsLogical VolumesPhysical VolumesList All File Systems by Volume GroupList All Mounted File Systems on the ClusterChange / Show Characteristics of a File SystemJournaled File SystemsAdd a Journaled File SystemAdd a Journaled File System on a Previously Defined Logical VolumeChange / Show Characteristics of a Journaled File System on the ClusterRemove a File SystemCDROM File SystemsAdd a CDROM File SystemChange / Show Characteristics of a CDROM File SystemRemove a CDROM File SystemMount a File SystemMount a Group of File SystemsUnmount a File SystemUnmount a Group of File SystemsBackup Files in a File SystemRestore Files in a File SystemVerify a File SystemList All Paging SpacesAdd Another Paging SpaceChange / Show Characteristics of a Paging SpaceRemove a Paging SpaceActivate a Paging SpaceList All Users in the ClusterAdd a User to the ClusterChange / Show Characteristics of a User in the ClusterRemove a User from the ClusterList All Groups in the ClusterAdd a Group to the ClusterChange / Show Characteristics of a 
Group in the ClusterRemove a Group from the ClusterGenerate an Error ReportClean the Error LogStart TraceStop TraceGenerate a Trace ReportChange the Primary Dump DeviceChange the Secondary Dump DeviceCopy a System Dump from a Dump Device to a FileFormat a System DumpShow All Current ProcessesRemove a ProcessList All Defined Printers/PlottersList All Supported Printers/PlottersAdd a Printer/PlotterMove a Printer/Plotter to Another PortChange / Show Characteristics of a Printer/PlotterRemove a Printer/PlotterConfigure a Defined Printer/PlotterGenerate an Error Report for Printers/PlottersTrace a Printer/PlotterVirtual PrintersLocal Printer QueuesQueue DevicesClient ServicesServer ServicesRemote Printer QueuesQueue DevicesHost Access for Printinglpd Remote Printer SubsystemList All QueuesAdd a Remote QueueChange / Show Characteristics of a QueueRemove a QueueList All Queue DevicesAdd Another Remote Queue Device to an Existing QueueChange / Show Characteristics of a Queue DeviceRemove a Queue DeviceList All Remote HostsAdd a Remote HostRemove a Remote HostStart Using the lpd SubsystemChange Restart Characteristics of the lpd SubsystemStop Using the lpd SubsystemStart Trace on a TTYStop Trace on a TTYGenerate a Trace Report for a TTYStart Trace on the PTYStop Trace on the PTYGenerate a Trace Report for the PTYStart Trace on a CD ROM DriveStop Trace on a CD ROM DriveGenerate a Trace Report for a CD ROM DriveStart Trace on a Diskette DriveStop Trace on a Diskette DriveGenerate a Trace Report for a Diskette DriveStart Trace on a Tape DriveStop Trace on a Tape DriveGenerate a Trace Report for a Tape DriveAdapterServicesUser ApplicationsData Link ControlAdd a Data Link ControlShow a Data Link ControlRemove a Data Link ControlSwap Available Keyboard MapsRemap a Key on the KeyboardList All Software Keyboard MapsAdd a New Keyboard MapChange the Keyboard RateChange the Keyboard ClickGenerate an Error Report for the KeyboardTrace the KeyboardAssign a Single Character to a 
KeyAssign a Function to a KeyAssign a String to a KeyAssign a Non-Spacing Character to a KeyMove This Virtual Terminal to Another DisplaySelect the Background & Foreground ColorsSelect the Palette ColorsSelect the Cursor ShapeSelect the Default DisplayGenerate Error Report on a DisplayTrace a DisplayList All Fonts in the SystemSelect the Active FontSelect the Font PaletteAdd a Font to the SystemGenerate Error Report on the MouseTrace the MouseGenerate an Error Report for the TabletTrace the TabletSet the Speaker VolumeGenerate Error Report on the SpeakerTrace the SpeakerList All Volume GroupsAdd a Volume GroupSet Characteristics of a Volume GroupList Contents of a Volume GroupRemove a Volume GroupActivate a Volume GroupDeactivate a Volume GroupList All Logical Volumes by Volume GroupAdd a Logical VolumeSet Characteristics of a Logical VolumeShow Characteristics of a Logical VolumeRemove a Logical VolumeCopy a Logical VolumeList All Physical Volumes in SystemChange Characteristics of a Physical VolumeMove Contents of a Physical VolumeStart Trace on a Printer/PlotterStop Trace on a Printer/PlotterGenerate a Trace Report for a Printer/PlotterAdd a Virtual PrinterChange / Show Characteristics of a Virtual PrinterRemove a Virtual PrinterStop a QueueStart a QueueAdd a Local QueueAdd a Device to an Existing QueueList All Ethernet AdaptersChange / Show Characteristics of an Ethernet AdapterGenerate an Error Report for an Ethernet AdapterTrace an Ethernet AdapterChange / Show Characteristics of a Network Interface DriverList All Token Ring AdaptersChange / Show Characteristics of a Token Ring AdapterGenerate an Error Report for a Token Ring AdapterTrace a Token Ring AdapterChange / Show Characteristics of a Network Interface DriverList All Defined Multiprotocol PortsAdd a Multiprotocol PortMove a Multiprotocol Port Definition to Another PortChange / Show Characteristics of a Multiprotocol PortRemove a Multiprotocol PortConfigure a Defined Multiprotocol PortGenerate an Error 
Report for a Multiprotocol PortTrace a Multiprotocol PortList All 3270 Connection AdaptersChange / Show Characteristics of a 3270 Connection AdapterGenerate an Error Report for a 3270 Connection AdapterTrace a 3270 Connection AdapterList All 5085/86/88 Attachment AdaptersAdd a 5085/86/88 Attachment AdapterChange / Show Characteristics of a 5085/86/88 Attachment AdapterRemove a 5085/86/88 Attachment AdapterConfigure a Defined 5085/86/88 Attachment AdapterGenerate an Error Report for a 5085/86/88 Attachment AdapterTrace a 5085/86/88 Attachment AdapterChange / Show an X.25 Adapter for InitializationList All X.25 AdaptersChange / Show Characteristics of an X.25 AdapterChange / Show Network ParametersChange / Show Packet ParametersChange / Show Frame ParametersChange / Show Default for Permanent Virtual Circuits (PVC)Change / Show a Specific Permanent Virtual Circuit (PVC)Change / Show General ParametersGenerate an Error Report for an X.25 AdapterTrace an X.25 AdapterStart Trace on the KeyboardStop Trace on the KeyboardGenerate a Trace Report for the KeyboardStart Trace on a DisplayStop Trace on a DisplayGenerate a Trace Report for a DisplayStart Trace on a DiskStop Trace on a DiskGenerate a Trace Report for a DiskStart Trace on the MouseStop Trace on the MouseGenerate a Trace Report for the MouseStart Trace on the TabletStop Trace on the TabletGenerate a Trace Report for the TabletStart Trace on the SpeakerStop Trace on the SpeakerGenerate a Trace Report for the SpeakerChange a Volume GroupAdd a Physical Volume to a Volume GroupRemove a Physical Volume from a Volume GroupReorganize a Volume GroupImport a Volume GroupExport a Volume GroupChange a Logical VolumeRename a Logical Volume on the ClusterIncrease the Size of a Logical Volume on the ClusterAdd a Copy to a Logical VolumeRemove a Copy from a Logical VolumeStart Trace on an Ethernet AdapterStop Trace on an Ethernet AdapterGenerate a Trace Report for an Ethernet AdapterStart Trace on a Token Ring AdapterStop Trace 
on a Token Ring AdapterGenerate a Trace Report for a Token Ring AdapterStart Trace on a Multiprotocol PortStop Trace on a Multiprotocol PortGenerate a Trace Report for a Multiprotocol PortStart Trace on a 3270 Connection AdapterStop Trace on a 3270 Connection AdapterGenerate a Trace Report for a 3270 Connection AdapterStart Trace on a 5085/86/88 Attachment AdapterStop Trace on a 5085/86/88 Attachment AdapterGenerate a Trace Report for a 5085/86/88 Attachment AdapterStart Trace on an X.25 AdapterStop Trace on an X.25 AdapterGenerate a Trace Report for an X.25 Adapter__ROOT__SYSTEM STARTUP MENU Your Base Operating System has been installed. You can now select any of the options below.Backup SystemOptional Program ProductsTCP/IPNFSStart NFSAdd a File System for MountingAdd a PrinterChange a PrinterList PrintersChange a TTYList TTYsList DisksList CD ROM DrivesList Diskette DrivesList Tape DrivesAdd a Communications AdapterChange a Communications AdapterList Communications AdaptersRemove a Communications AdapterList Volume GroupsReduce a Volume GroupAdd a File SystemChange a File SystemRemove a File SystemAdminister a QueueAdd a QueueAdd a Queue DeviceSystem Controlled ProcessesSelect the Display SizeStart NOWStart at Next System RESTARTStart BOTH Now and at System RestartStop NOWStop at Next System RESTARTStop BOTH Now and at System RestartList Contents of a Physical VolumeTCP/IP StartupAdd the Hostname for a Remote ServerShow Current Dump DevicesShow Information About the Previous System DumpStart a Dump to the Primary Dump DeviceStart a Dump to the Secondary Dump DeviceCopy a System Dump from a Dump Device to DisketteDefine a Fixed Disk to the Operating SystemAdd a Fixed Disk to an Existing GroupAdd a Fixed Disk Without Data to a New GroupAdd a Fixed Disk With DataConsoleAssign the ConsoleRedirect Console OutputSynchronize a Volume GroupAdd a Dials/LPFKeysChange / Show Characteristics of a Dials/LPFKeysRemove a Dials/LPFKeysConfigure a Defined Dials/LPFKeysGenerate 
an Error ReportTrace a Dials/LPFKeysDials/LPFKeysList All Defined Dials/LPFKeysSelect the Fonts to Load on Next System RestartTerminalsMultimediaList All Audio Capture & Playback AdaptersChange/Show Characteristics of an ACPAGenerate an Error ReportTrace an ACPASCSI AdapterSCSI Initiator DeviceList All SCSI AdaptersChange / Show Characteristics of a SCSI AdapterTrace a SCSI AdapterList All Defined SCSI Initiator DevicesList All Supported SCSI Initiator DevicesAdd a SCSI Initiator DeviceChange / Show Characteristics of a SCSI Initiator DeviceRemove a SCSI Initiator DeviceConfigure a Defined SCSI Initiator DeviceTrace a SCSI Initiator DeviceChange / Show Characteristics of Asynchronous I/ORemove Asynchronous I/O; Keep DefinitionConfigure Defined Asynchronous I/OAsynchronous I/OResource Status & MonitorsAnalysis ToolsResource ControlsShow Process StatusShow Virtual Memory StatisticsShow Input/Output StatisticsShow System Activity While Running a CommandTraceRemove a ProcessSet Initial Priority of a ProcessAlter the Priority of a Running ProcessMultimediaShow Characteristics of a Supported DeviceShow Characteristics of a Defined DeviceAudioVideoList All Audio Capture & Playback AdaptersChange / Show Characteristics of an ACPATrace an ACPAClean Up After a Failed InstallationCommit Software (Remove Previous Version)Copy Software to Hard Disk for Future InstallationDiskless Workstation ManagementEnable Software for a ClientInstall / Update SoftwareInstall SoftwareList All Installed SoftwareList All Installed UpdatesList All Problems Fixed by Software on Installation MediaList All Software on Installation MediaList All Uncommitted SoftwareList Dependents of a Software ProductList Files Included in a Software ProductList Prerequisites of a Software ProductManage Software InventoryReject Uncommitted Software (Use Previous Version)Show History of a Software ProductShow ID of a Software ProductSoftware Installation & MaintenanceSoftware InventoryStandard Installation & 
MaintenanceSystem MaintenanceVerify Consistent Installation LevelVerify a Software ProductList Devices370 Parallel ChannelList All 370 Parallel Channel AdaptersChange/Show Characteristics of a 370 Parallel Channel AdapterTrace a 370 Parallel Channel AdapterChange / Show Characteristics of the Error LogPrinter/Plotter DevicesLocal Queue DevicesAdd Another Local Queue Device to an Existing QueueComplete Initial Network Configuration of an X.25 AdapterTrace Asynchronous I/OBackup the SystemPasswordsRemote Printer Queue DevicesChange the Keyboard Map for the Next System RestartManage FontsConvert FilesOptional Software ProductsSoftware Configuration MigrationManage Language EnvironmentList Software ConfigurationSave Software ConfigurationRestore Software ConfigurationMove Configuration Files to Permanent DirectoryLocal Printer Queue DevicesChange / Show Restart Characteristics of the lpd SubsystemValidate SoftwareProcesses & SubsystemsProcessesConvert System MessagesConvert Flat FilesStop a Single SubsystemChange Initial Priority of a ProcessInstall / Maintain SoftwareList All Applied but Not Committed SoftwareCommit Applied Software (Remove Previous Version)Reject Applied Software (Use Previous Version)Install Software With UpdatesInstall Software Without UpdatesUpdate SoftwareFinish Incomplete Client InstallationList All Updates to a Software ProductAdd a Keyboard MapChange Number of Virtual Terminals at Next System RestartVirtual TerminalsList All Jobs ScheduledStart Daemons on ServerStart BOOTP DaemonConfigure NFS (if Not Already Configured)Manage Shared Product Object Trees (SPOTs)List All SPOTsList All Clients on a SPOTShow Characteristics of a SPOTAdd a SPOTRemove a SPOTManage ClientsAdd a Diskless ClientChange / Show Client SPOTChange / Show Characteristics of a ClientRemove a Diskless ClientList All Diskless ClientsSoftware ProductsFDDI AdapterList All FDDI AdaptersChange / Show Characteristics of an FDDI AdapterTrace an FDDI AdapterInstall Updates OnlyAdd a 
HostStart TFTP DaemonReject Applied Updates (Use Previous Version)Remove Applied Software ProductsRemote /usr Client ManagementInstall / Update This Client From Remote /usrPowerHA SystemMirror Cluster System ManagementChange/Show Characteristics of a File System in the ClusterSynchronize a Volume Group DefinitionCluster Resource Group ManagementBring a Resource Group OnlineBring a Resource Group OfflineMove a Resource GroupCreate a Volume GroupCreate a Concurrent Volume GroupPhysical VolumesNode NamesPVIDVolume Group NamePhysical Partition Size in MegabytesVolume Group Major Number Warning: Changing the volume group major number may result in the command being unable to execute successfully on a node that does not have the major number currently available. Please check for a commonly available major number on all nodes before changing this setting. Create a File SystemLogical Volume NamesChange / Show Characteristics of an Enhanced Journaled File SystemEnhanced Journaled File System Name and Resource GroupCreate a Volume Group with Data Path DevicesCreate a Concurrent Volume Group with Data Path DevicesEnhanced Concurrent ModeAdd a Volume to a Volume GroupAdd a Volume to a Concurrent Volume GroupRemove a Volume from a Volume GroupRemove a Volume from a Concurrent Volume GroupConvert a Concurrent Volume Group to Enhanced Concurrent ModeVOLUME namesAdd a Standard Journaled File SystemAdd a Compressed Journaled File SystemAdd a Large File Enabled Journaled File SystemCluster Physical Volume ManagerAdd a Disk to the ClusterRemove a Disk From the ClusterSelect Node(s) to Remove Disk FromSelect A Disk To RemoveNodesDiskKEEP definition in databaseForce the operation if system errors occur on other nodesCluster Data Path Device ManagementNode Name(s) to which Disk is AttachedDevice TypeNode-Parent Adapter PairsUsage: cl_chpasswd [-cspoc "-f [-g ResourceGroup | -n NodeList]"] [-k] UserName %1$s: Error: User %2$s does not exist on node %3$s %1$s: Error: User %2$s does not 
exist on local node %3$s, exiting! %1$s: Error: This operation is not permitted on an SP with usermgmt_config set to 'true'. %1$s: Error executing clchkspuser on node %2$s Error: Cannot retrieve password for User %1$s from local node %2$s %1$s: Error executing lsuser -a id ALL on node %2$s Change a User's Password in the Cluster%1$s: Error: Unable to edit passwd in /etc/passwd for user %2$s on node %3$s %1$s: Error: Unable to commit passwd in /etc/passwd for user %2$s on node %3$s %1$s: Error: Unable to edit passwd in /etc/security/passwd for user %2$s on node %3$s %1$s: Warning: Unable to clear "force change" flag in password file on local node. User will be required to change password on each node at next login.%1$s: ERROR! %2$s is either missing or not executable. %1$s: ERROR! The AIX "passwd" command is either missing or not executable and could not be restored from the intended backup copy %2$s. Usage: cl_harvestIP -cspoc "[-f] [-d 1..9]" [-Z] [-i|-h|-s|-c] [NetworkName]%1$s: Error executing %2$s on node %3$s # Configured Interfaces # Found in /etc/hosts # Subnet/Netmask_length (Network Type - NIM Type) # Subnet/Netmask_length # Subnet/Netmask_length (Network Type) Shutdown mode (graceful or graceful with takeover, forced)Bring Resource Groups Offline,Move Resource Groups,Unmanage Resource Groups%1$s: Error: This command can only be executed through the SMIT interface. %1$s: Error: Unable to reach node %2$s, exiting. %1$s: Warning: Unable to reach node %2$s ...continuing. Cluster Disk ReplacementSelect a Source Disk for ReplacementSelect a Destination Disk for ReplacementSOURCE DISKVolume GrouphdiskPVIDNodenameResource GroupDESTINATION DISKPVIDNode ListNot in a Resource GroupLogical Volume%1$s: Error: There are no Volume Groups in any Resource Group in the cluster. #Volume Group hdisk PVID Cluster Node #--------------------------------------------------------------------- %1$s: Error: Unable to retrieve the capacity of disk %2$s on node %3$s. No free disks found. 
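The volume group major number warning above asks for a major number that is free on every node. This is a minimal sketch of that check; the node names and the free-number lists are hypothetical sample values standing in for output collected per node with the AIX `lvlstmajor` command.

```shell
# Sketch: find a major number free on every cluster node before setting
# "Volume Group Major Number". The sample strings below stand in for real
# per-node `lvlstmajor` output (hypothetical values, not a real cluster).
free_nodeA="43 45 46 50"   # e.g. gathered with: ssh nodeA lvlstmajor
free_nodeB="45 47 50"      # e.g. gathered with: ssh nodeB lvlstmajor
common=""
for n in $free_nodeA; do
    case " $free_nodeB " in
        *" $n "*) common="$common $n" ;;   # keep majors present on both nodes
    esac
done
echo "Majors free on all nodes:$common"
# prints: Majors free on all nodes: 45 50
```

Any number in the resulting intersection can then be supplied as the major number when the volume group is created on all nodes.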
%1$s: Error: There are no suitable available disks for replacement. #hdisk Disk PVID Cluster Node #--------------------------------------------------------- hdisk Disk PVID Cluster Node --------------------------------------------------------- %1$s: Volume Group %2$s contains only one disk. Two or more disks are necessary for Cluster Disk Replacement. EncryptedERROR: Failed to register and reserve disk with PVID %1$s using SCSI Persistent Reserve from node %2$s.Collecting information from local node... Collecting information from all cluster nodes... %1$s: Could not collect information from node %2$s Display Data Path Device ConfigurationDisplay Data Path Device StatusDisplay Data Path Device Adapter StatusDefine and Configure all Data Path DevicesAdd Paths to Available Data Path DevicesConfigure a Defined Data Path DeviceRemove a Data Path DeviceConvert ESS hdisk Device Volume Group to an SDD VPATH Device Volume GroupConvert SDD VPATH Device Volume Group to an ESS hdisk Device Volume GroupUsage: cl_SDDinstalled -cspoc [ -g ResourceGroup | -n NodeList ]Usage: cl_lsvpcfg -cspoc [ -g ResourceGroup | -n NodeList ]Usage: cl_lsvpdevstatus -cspoc [ -g ResourceGroup | -n NodeList ]Usage: cl_lsadapterstatus -cspoc [ -g ResourceGroup | -n NodeList ]Usage: cl_vpalldefcfg -cspoc [ -g ResourceGroup | -n NodeList ]Usage: cl_addpaths -cspoc [ -g ResourceGroup | -n NodeList ]Usage: cl_vpgenlst -cspoc [ -g ResourceGroup | -n NodeList ]Usage: cl_vprmdev -cspoc [ -g ResourceGroup | -n NodeList ] -l -dUsage: cl_hd2vp -cspoc [ -g ResourceGroup | -n NodeList ] -v [-f | -b]Usage: cl_vpcfg -cspoc [ -g ResourceGroup | -n NodeList ] -lSelect Node(s)Select PVIDPVID%1$s: Can't reach %2$s, continuing anyway %1$s: Error attempting to verify the software on %2$s %1$s: Can't locate SDD software on %2$s %1$s: SDD software on %2$s is version %3$s. 
Required is %4$s or greater %1$s: Error attempting to retrieve Data Path configuration on %2$s %1$s: Error executing clmk command on %2$s %1$s: Error attempting to retrieve VPATHs and PVIDs configuration on %2$s %1$s: Error attempting to retrieve adapter status on %2$s %1$s: Error executing cfallvpath command on %2$s %1$s: Error executing addpaths command on %2$s %1$s: Error - participating volume group %2$s is online on node %3$s %1$s: Error attempting to retrieve Volume Group status on %2$s %1$s: Error executing clhd2vp command on %2$s %1$s: Error attempting to retrieve Data Path Device status on %2$s %1$s: Error attempting to remove a device on %2$s %1$s: Error attempting to configure a device on %2$s no Volume Group given error attempting to locate vg %1$s on node %2$s can't reach %1$s, continuing anyway Usage: cl_dpovgfix -cspoc "[-f] [ -g ResourceGroup | -n NodeList ]" VolumeGroup Unable to reach one of the nodes Unable to reach one of the nodes. Continuing... Usage: cl_mkvg4vp [-cspoc "[-f] [-g ResourceGroup | -n NodeList]" [-d MaxPVs] [-B] [-G] [-f] [-c] [-l true | false] [-x] [-i] [-s PPsize] [-n] [-m MaxPVsize | -t factor] [-r ResourceGroup] [-E] [-V MajorNumber] -f -y VGname PhysicalVolumes ... Invalid list of nodes. Error getting VPATHIDs from nodes. %1$s Invalid PVID - The VPATHID %2$s either does not exist or may be part of an existing volume group. Volume Group Name not valid. No free disks found. Unable to reach one of the nodes. Unable to reach one of the nodes. Continuing... Maximum of 8 nodes allowed for disk operations. cl_mkvg4vp: Selected PVIDs require that an Enhanced Concurrent Mode volume group be created. Change the Enhanced Concurrent Mode option on the previous screen to 'true'. %1$s: Volume Group Name %2$s already exists on nodes %3$s %1$s: An error occurred executing mkvg4vp on node %2$s %1$s: An error occurred executing mkvg4vp on node %2$s. Continuing... 
%1$s: Unable to obtain volume group names from cluster node %2$s %1$s: Volume group created may not have a unique name %1$s: Error attempting to add a concurrent volume group %2$s to a Resource Group %3$s %1$s: Error attempting to add a concurrent volume group %2$s to a concurrent Resource Group %3$s %1$s: An error occurred executing mkvg4vp %2$s on node %3$s %1$s: Add of volume group %2$s to resource group %3$s failed %1$s: The PowerHA SystemMirror configuration has been changed - %2$s %3$s has been added. The configuration must be synchronized to make this change effective across the cluster %1$s: Cross site mirroring setup failed for volume group %2$s in %3$s ERROR: Cannot add more than %1$s volume groups to a resource group: %2$s cldpovgfix Error performing dpovgfix on volume group %1$s. The volume group may contain both vpath and non-vpath capable disks. Please reference the Subsystem Device Driver User's Guide. All of the selected nodes must have SDD installed. clgetvgdisktype cllsvpathids {-n | -c} Maximum of 32 nodes allowed for disk operations. Usage: cl_lsfreedisks -cspoc "[-f] [ -g ResourceGroup | -n NodeList ]" VolumeGroup%1$s: Error executing clgetvgdiskmode on node %2$s %1$s: Error executing clgetvpathid on node %2$s Usage: cl_lsvpathids -cspoc "[-f] [ -g ResourceGroup | -n NodeList ]"cl_lsvpathids: Unable to reach one of the nodes. cl_lsvpathids: Unable to reach one of the nodes. Continuing... 
# Node / Network # Interface/Device IP Label/Device Path IP Address # Network / Node # Interface IP Label IP Address # Node Device Pvid Start now, on system restart or bothManage Resource GroupsMONTH (01-12)DAY (01-31)MINUTES (00-59)SECONDS (00-59)Select a Node on which to Configure a Network InterfaceSelect a Network Interface TypeStart Disk Accounting?Fragment Size (bytes)Number of bytes per inodeCompression algorithmRELOCATE the logical volume during reorganization?Mirror Sync ModeAllocation Group Size (MBytes) Volume group nameSelect A Disk To RemoveNode Name(s) to which disk is attachedDisk TypeDevice typeDisk typeDisk interfaceParentCONNECTION addressLocation LabelASSIGN physical volume identifierRESERVE disk on openQueue depthMaximum CoalesceBlock Size (bytes)Inline Log?Inline Log size (MBytes)Logical Volume nameCustom Log NameSelect nodes by resource group ADMINISTRATIVE USER?Primary GROUPGroup SETADMINISTRATIVE GROUPSAnother user can SU TO USER?SU GROUPSHOME directoryUser INFORMATIONIs this user ACCOUNT LOCKED?User can LOGIN?User can LOGIN REMOTELY?Allowed LOGIN TIMESLogin AUTHENTICATION GRAMMARDays to WARN USER before password expiresPassword CHECK METHODSPassword DICTIONARY FILESNUMBER OF PASSWORDS before reuseWEEKS before password reuseWeeks between password EXPIRATION and LOCKOUTPassword MAX. AGEPassword MIN. AGEPassword MIN. LENGTHPassword MIN. ALPHA charactersPassword MIN. OTHER charactersPassword MAX. REPEATED charactersPassword MIN. DIFFERENT charactersPassword REGISTRYMAX. FILE sizeMAX. CPU timeMAX. DATA segmentMAX. STACK sizeMAX. 
CORE file sizeFile creation UMASKTRUSTED PATH?Resource groupLVM Preferred ReadRemove a Volume from a Concurrent Volume GroupAdd a File SystemAdd a Compressed Journaled File SystemAdd a Large File Enabled Journaled File SystemAdd a Large File Enabled Journaled File SystemAdd an SSA Logical DiskAdd an Enhanced Journaled File SystemAdd an Enhanced Journaled File System on a Previously Defined Logical VolumeChange/Show Characteristics of an Enhanced Journaled File SystemScan the PowerHA SystemMirror for AIX Scripts Log FileWatch the PowerHA SystemMirror for AIX Scripts Log FileScan the PowerHA SystemMirror for AIX System Log FileWatch the PowerHA SystemMirror for AIX System Log FileChange User Attributes on the ClusterList Users on the ClusterList Groups on the ClusterChange / Show Group Attributes on the ClusterChange a User's Password in the ClusterChange Current User's PasswordManage List of Users Allowed to Change PasswordModify System Password Utility WARNING: The /usr/es/sbin/cluster/etc/rhosts file must be removed from ALL nodes in the cluster when the security mode is set to 'Enhanced'. Failure to remove this file makes it possible for the authentication server to become compromised. Once the server has been compromised, all authentication passwords must be changed. Changes to the cluster security mode setting alter the cluster topology configuration, and therefore need to be synchronized across cluster nodes. 
Since cluster security mode changes are seen as topology changes, they cannot be performed along with dynamic cluster resource reconfigurations.Select nodes by resource group ADMINISTRATIVE USER?Enable Cross-Site LVM MirroringEnable/Disable a Volume Group for Cross-Site LVM MirroringEnable/Disable a Concurrent Volume Group for Cross-Site LVM MirroringConfigure Disk/Site Locations for Cross-Site LVM MirroringAdd Disk/Site Definition for Cross-Site LVM MirroringChange/Show Disk/Site Definition for Cross-Site LVM MirroringRemove Disk/Site Definition for Cross-Site LVM MirroringSite NameDisks PVIDSite NamesSelect Disk/Site Relationship to Change/ShowSelect Disk/Site Relationship to RemoveVolume Group(s)Enable for Cross-Site LVM MirroringSelect Volume Groups to Enable/Disable for Cross-Site LVM Mirroring# There are no sites configured. Sites must be configured to use this function. There are no sites configured. Sites must be configured before setting "Enable Cross-Site LVM Mirroring" to true. Warning: When "Enable Cross-Site LVM Mirroring" is changed, select "Extended Configuration" and perform "Extended Verification and Synchronization". Warning: These menu selections will only function correctly if the "Disk Discovery File" reflects the current disk configuration. To update the "Disk Discovery File" select "Extended Configuration" and perform "Discover PowerHA SystemMirror-related Information from Configured Nodes". When changes are complete select "Extended Configuration" and perform "Extended Verification and Synchronization". The odmget command failed. The odmadd command failed. The odmdelete command failed. The odmchange command failed. Cannot get the list of physical volumes for volume group %1$s. Enable Cross-Site LVM Mirroring VerificationEnable/Disable a Volume Group for Cross-Site LVM Mirroring VerificationLink already exists from %1$s to %2$s on node: %3$s ERROR: Unable to create the configuration file, reverting to the AIX system password utility. 
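The security-mode warning above requires the cluster rhosts file to be removed from every node when the mode is set to 'Enhanced'. A minimal sketch of that cleanup follows; the node names are hypothetical, and the loop only echoes the command it would run (a dry run) rather than invoking ssh against a live cluster.

```shell
# Sketch, per the security-mode warning above: when switching the cluster
# security mode to 'Enhanced', /usr/es/sbin/cluster/etc/rhosts must be
# removed on ALL nodes. NODES is a hypothetical node list; the live form
# of each step would be:  ssh "$node" "rm -f $RHOSTS"
RHOSTS=/usr/es/sbin/cluster/etc/rhosts
NODES="nodeA nodeB"
removed=0
for node in $NODES; do
    echo "rm -f $RHOSTS   # on $node"   # dry run: print instead of execute
    removed=$((removed + 1))
done
echo "file would be removed on $removed node(s)"
```

After the file is gone on every node, the security-mode change still has to be synchronized across the cluster, since it counts as a topology change.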
WARNING: %1$s is not hard linked to %2$s as expected. Creating soft link from %3$s to %4$s. ERROR: Unable to move %1$s to backup location %2$s. Please check your filesystems for available space before attempting this operation again.Creating link from %1$s to %2$s on node: %3$s ERROR: Unable to link %1$s to HA clpasswd binary %2$s. Restoring %3$s to default location.ERROR: Unable to restore the AIX passwd utility %1$s, a backup copy is stored in %2$s. Restoring AIX password utility %1$s on node: %2$s ERROR: Unable to restore %1$s from backup location %2$s, on node %3$s. Please check your filesystems for available space before attempting this operation again.ERROR: The checksum on the file %1$s is invalid, the restore failed.AIX password utility %1$s is already restored on node: %2$s ERROR: Unable to create link on node: %1$s. ERROR: Unable to restore /usr/bin/passwd on node: %1$s, an error occurred while attempting to communicate with the node. Please check to ensure the node is available. ERROR: The SMIT menu chosen can only be accessed by the system administrator. ERROR: The checksum on the backup %1$s is invalid, the restore failed for file %2$s on node: %3$s. ERROR: The backup file %1$s is missing, or corrupt on node: %2$s. Unable to restore AIX passwd command from the backup location. Please revert to the base AIX installation copy of %3$s. ERROR: Unable to lock bos.security.rte, restoring original system passwd WARNING: No lock was found on bos.security.rte Link already exists from %1$s to %2$s on node: %3$s WARNING: %1$s is not hard linked to %2$s as expected. Creating soft link from %3$s to %4$s. ERROR: Unable to move %1$s to backup location %2$s. Please check your filesystems for available space before attempting this operation again.Creating link from %1$s to %2$s on node: %3$s ERROR: Unable to link %1$s to HA clpasswd binary %2$s. Restoring %3$s to default location.ERROR: Unable to restore the AIX passwd utility %1$s, a backup copy is stored in %2$s. 
Restoring AIX password utility %1$s on node: %2$s ERROR: Unable to restore %1$s from backup location %2$s, on node %3$s. Please check your filesystems for available space before attempting this operation again.AIX password utility %1$s is already restored on node: %2$s ERROR: The checksum on the backup %1$s is invalid, the restore failed for file %2$s on node: %3$s. ERROR: The backup file %1$s is missing, or corrupt on node: %2$s. Unable to restore AIX passwd command from the backup location. Please revert to the base AIX installation copy of %3$s. ERROR: Unable to open configuration file %1$s ERROR: Cluster user %1$s is not allowed to change their password cluster-wide. Please see your administrator for additional assistance. INTERNAL ERROR: Invalid program name %1$s used. ERROR: Unable to determine the local username. ERROR: Out of memory while attempting to allocate %d bytes. Usage: clpasswd cluster_user [ -g ResourceGroup ] Where: -g ResourceGroup - Update on participating nodes only Otherwise all cluster nodes receive update ERROR: The following node(s) are not available: %1$s. Please re-try when all nodes are accessible. ERROR: The PowerHA SystemMirror resource group: %1$s is not defined. INTERNAL ERROR: Unable to execute command: %1$s ERROR: Resource group: %1$s does not exist. ERROR: Only one user can be specified at a time. ERROR: Unable to obtain the local node name. ERROR: Unable to change password for user: %1$s on the local node. The local node does not participate in resource group: %2$s. Please use node: %3$s. ERROR: User %1$s cannot change password for user: %2$s ERROR: Unknown error while attempting to open configuration file %1$s (errno = %2$d) ERROR: Unable to change password for user: %1$s on the local node. The local node does not participate in resource group: %2$s. Please use node: %3$s. ERROR: User %1$s cannot change password for user: %2$s Allowing all cluster users to change password cluster-wide. 
Adding user %1$s Node %1$s could not be contacted; please ensure the clcomdES sub-system is running properly and that the /usr/es/sbin/cluster/etc/rhosts file is properly configured. Synchronized allowed user data to node: %1$s. ERROR: Please select one or more users to authorize password change across the cluster, or select only the ALL_USERS option to allow all cluster users to change their password cluster-wide. ERROR: Unable to open /etc/passwd; this operation can only be performed by a user with root privileges. Please log out and log back in as a user with such authority before trying this operation again, or contact your system administrator for further details. ERROR: Unable to open the allowed users database. This operation can only be performed by a user with root privileges. Please log out and log back in as a user with such authority before trying this operation again, or contact your system administrator for further details. No cluster nodes are defined on the local node. Please check your cluster configuration before attempting this operation again. Aborting: could not touch file %1$s. Aborting: could not change ownership of file %1$s. Aborting: could not change mode of file %1$s. ERROR: The SMIT menu chosen can only be accessed by the system administrator. Original AIX System Command,Link to Cluster Password Utility The Cluster Information Daemon is not active. Emulation requires the Cluster Information Daemon to be active on all nodes. Please start the Cluster Information Daemon on all cluster nodes. 
Usage: cl_disable_mndhb -cspoc "[-f] [-n NodeList | -g ResourceGroup]" [-e NetworkName] [-r yes|no] VolumeGroupName Usage: cl_enable_mndhb_vg -cspoc "[-f] [-d] [-g ResourceGroup | -n NodeList]" [-e NetworkName] LogicalVolume VGname Usage: cl_list_mndhb_vg -cspoc "[-f] [-d] [-g ResourceGroup]" VolumeGroupName Usage: cl_mk_mndhb_lv -cspoc "[-f] [-g ResourceGroup] [-n NodeList]" [-r ResourceGroup] [-V MajorNumber] [-L LVLabel] [-y VGname] [-e NetworkName] PVID Usage: cl_show_mndhb_vg -cspoc "[-c] [-e NetName] [-f VolumeGroup] [-g ResourceGroup] [-n NodeList] [-p LogicalVolume] [-s]" %1$s: A network name must be provided %1$s: A volume group name must be provided %1$s: Unable to find the logical volume associated with %2$s. It will not be removed. %1$s: Unable to examine logical volume %2$s for MNDHB network %3$s. Logical volume %4$s is not removed. %1$s: Unable to examine physical volume %2$s for MNDHB network %3$s %1$s: Unable to remove MNDHB network %2$s %1$s: Logical volume %2$s for MNDHB network %3$s removed %1$s: Unable to remove logical volume %2$s for MNDHB network %3$s %1$s: Disk %2$s removed from volume group %3$s %1$s: Unable to remove disk %2$s from volume group %3$s after MNDHB logical volume %4$s was removed %1$s: Unable to list the volume groups on node %2$s %1$s: Deleting PowerHA SystemMirror resource %2$s %1$s: error retrieving information via 'lsvg -l' from node %2$s on volume group %3$s %1$s: error retrieving information via '%2$s' from node %3$s on logical volume %4$s in volume group %5$s %1$s: error trying to run lspv on node %2$s %1$s: physical volume %2$s is already in use in volume group %3$s %1$s: physical volume %2$s is not known on node %3$s %1$s: volume group %2$s is not known on node %3$s %1$s: Unable to obtain logical volume names from cluster node %2$s %1$s: Logical volume %2$s already exists %1$s: Resource group %2$s is incorrectly defined for MNDHB %1$s: Unable to create volume group %2$s containing disk %3$s %1$s: Created volume group 
%2$s on physical disk %3$s %1$s: Unable to add disk %2$s to volume group %3$s %1$s: Unable to determine the characteristics of volume group %2$s on node %3$s %1$s: Unable to create logical volume %2$s on physical disk %3$s (%4$s) to hold MNDHB network %5$s %1$s: Created logical volume %2$s on physical disk %3$s (%4$s) to hold MNDHB network %5$s %1$s: odmadd failed - could not create resource group %2$s %1$s: odmadd failed - could not add volume group %2$s to resource group %3$s %1$s: Create failed for MNDHB network %2$s on logical volume %3$s in volume group %4$s and resource group %5$s %1$s: The reference node was not detected %1$s: PVID %2$s is used inconsistently %1$s: Network [%2$s] already exists %1$s: Error: Failed creating nim entry for diskhbmulti %1$s: Error: Could not create network [%2$s] %1$s: Error: Failed creating adapter entries for network [%2$s] %1$s: Error: Network [%2$s] does not exist %1$s: Error: Failed removing network [%2$s], exit code from clmodnetwork was %3$s %1$s: Error: Failed removing failure action for [%2$s], exit code from odmdelete was %3$s %1$s: _REFNODE was not detected %1$s: PVID %2$s is used inconsistently Name of site containing data to be preserved by data divergence recovery processing (Asynchronous GLVM Mirroring only)Volume Group Resource Group Node List Manage Mirror Pools for Volume GroupsShow all Mirror PoolsShow Mirror Pools for a Volume GroupChange/Show Characteristics of a Mirror PoolAdd Disks to a Mirror PoolRemove Disks from a Mirror PoolRename a Mirror PoolRemove a Mirror PoolRemove a Volume GroupShow Cluster Release LevelManage Mirror Pools for Volume GroupsList all shared Physical VolumesChange/Show Characteristics of a Physical VolumeRename a Physical VolumeShow UUID for a Physical VolumeRemove a Volume GroupChange/Show a Mirror PoolRename a Physical VolumeChange/Show Characteristics of a Physical VolumeShow all Mirror Pools for a Volume GroupRemove a Volume GroupShow Cluster Release LevelShow all Mirror 
PoolsChange/Show Characteristics of a Mirror PoolShow UUID for a Shared Physical VolumeTo create a new mirror pool in %1$s, specify the actual mirror pool name in place of '' Mirror Pool nameSelect the Volume Group to RemoveSelect the Mirror Pool to RemoveSelect the Mirror Pool to Change/ShowMirror PoolMirror Pool is Super StrictShared Physical Volume NameList all Shared Physical VolumesSelect the Mirror Pool to Remove DisksMirroring ModeForce Synchronous MirroringAsync Cache Logical VolumeAsync Cache High Water MarkPhysical Volumes in this Mirror PoolMirror Pool for New VolumesSelect Physical VolumePhysical Volume NameMirror Pool to RemoveDisks to Remove from the Mirror PoolDisks to Add to the Mirror PoolSelect the Mirror Pool to Add Disksactive,not active,varied offPhysical Volume IdentifierVolume Group IdentifierSelect the Volume Group to List Mirror PoolsChange all Physical Volumes with this PVID?Physical Volume is Known on these NodesCurrent Mirror PoolSet Mirror PoolChange Mirror Pool NameRemove from Mirror PoolVolume Group DescriptorsMaximum RequestPhysical Partition SizeTotal Physical PartitionsFree Physical PartitionsAllocated Physical PartitionsStale PartitionsFree Partition DistributionUsed Partition DistributionUsage: cl_rmvg -cspoc '[-f] [-g ResourceGroup | -n NodeList]' VG_name cl_rmvg: Volume group %1$s in resource group %2$s cannot be deleted while it is active and the resource group is online cl_rmvg: Volume group %1$s cannot be removed because it is in use on %2$s cl_rmvg: Volume group %1$s cannot be removed because it is in use on node %2$s cl_rmvg: Volume group %1$s could not be removed from the definition of resource group %2$s. 
The volume group will not be removed cl_rmvg: Volume group %1$s cannot be removed because the following file systems are still present: %2$s cl_rmvg: Volume group %1$s could not be removed on node %2$s cl_rmvg: Volume group %1$s has been removed on all nodes %1$s: The PowerHA SystemMirror configuration has been changed - volume group %2$s has been removed from resource group %3$s. The configuration must be synchronized to make this change effective across the cluster %1$s: 'LC_ALL=C lsvg -P %2$s' command failed on node %3$s - could not determine the disks in each mirror pool %1$s: chpv -P command failed on node %2$s - could not remove disks %3$s from %4$s cl_rendisk: %1$s cannot be renamed to %2$s on node %3$s because that name is already in use by another device cl_rendisk: %1$s cannot be renamed to %2$s on node %3$s because /dev/%4$s already exists Usage: cl_rendisk -cspoc '[-f] [-g ResourceGroup | -n NodeList]' -N new_disk_name [-Y PVID] hdisk_name cl_rendisk: rename of %1$s to %2$s (%3$s) failed on node %4$s cl_rendisk: Physical volume %1$s renamed to %2$s on %3$s %1$s: PVID %2$s is invalid %1$s: No valid PVIDs specified Usage: cl_mp_disks -cspoc '[-f] [-g ResourceGroup | -n NodeList]' [-p MirrorPool | -P] PVID [PVID ...]Usage: cl_lsmpvgs -cspoc '[-f] [-g ResourceGroup | -n NodeList]' [-l | -p | -v | -V | -P volumegroup | -q volumegroup | -m mirrorpool volumegroup] VG TypeMirror Pool Volume Group Resource Group Node List Volume Group Resource Group Node List VG Type %1$s: 'LC_ALL=C lsvg -P %2$s' command failed on node %3$s - could not determine the disks in each mirror pool All disks in %1$s are already in mirror poolsAdd new disks to %1$s, or remove some from existing mirror poolsAnd retry the operationNo disks found in mirror pool %1$s in volume group %2$sUsage: cl_lsmpdisks -cspoc '[-f] [-g ResourceGroup | -n NodeList]' [ -f | -m MirrorPool ] [-P] VolumeGroupUsage: cl_chshmp -cspoc '[-f] [-g ResourceGroup | -n NodeList] -d[1-9]' [-A|-S] [-h] [-f] [-c 
AIOcachelogicalvolume] -m MirrorPool VolumeGroup # Only physical volumes that are not part of any volume group # can be renamed. Select a disk from the list # Only physical volumes that are part of a shared volume group # have changeable characteristics. Select a disk from the list Select the Mirror Pool to RenameNew Mirror Pool Name%1$s: chpv -m command failed on node %2$s - could not rename mirror pool %3$s to %4$s %1$s: Mirror pool name %2$s is not valid - no disks found %1$s: Mirror pool name %2$s in volume group %3$s containing disks %4$s renamed to %5$s cl_rendisk: %1$s cannot be renamed to %2$s on nodes %3$s because those nodes do not support the device rename command. The device rename command is part of AIX 6.1.6 Volume group %1$s does not support mirror pools There are no shared volume groups that support mirror pools There are no mirror pools defined on any shared volume group ERROR: Failed to clear the SCSI persistent reservations and registrations from volume group %1$s.Storage locationUsage: cl_mp_disks -cspoc '[-f] [-g ResourceGroup | -n NodeList]' [-p MirrorPool | -P] -t StorageLocation PVID [PVID ...]ERROR: The given %1$s is not a valid storage location. Valid storage locations are: %2$s Usage: cl_chshmp -cspoc '[-f] [-g ResourceGroup | -n NodeList] -d[1-9]' [-A|-S] [-h] [-f] [-c AIOcachelogicalvolume] -m MirrorPool -t StorageLocation VolumeGroup %1$s: The PowerHA SystemMirror configuration has been changed - LVM Preferred Read for volume group %2$s has been removed. The configuration must be synchronized to make this change effective across the cluster Usage: cl_on_node [-cspoc '[-d DebugLevel] [-f] [-n node_name]'] [-V |-R ] commandERROR: Unable to determine the state of resource group %1$s ERROR: Resource group %1$s is not online on any node. ERROR: Volume group %1$s is not online on any node. Unable to identify a target node. The command is not run. 
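The cl_mp_disks and cl_rendisk usage strings above take PVID arguments, and the catalog carries the errors "%1$s: PVID %2$s is invalid" and "%1$s: No valid PVIDs specified". A pre-flight check can reject malformed PVIDs before the CSPOC command is ever invoked. This is a minimal sketch, assuming the AIX convention that a PVID is 16 hexadecimal characters; the helper names are hypothetical and not part of the product.

```shell
# Hypothetical pre-flight validation of PVID arguments before a CSPOC call
# such as: cl_mp_disks -cspoc '-g ResourceGroup' -p MirrorPool PVID ...
# Assumption: a PVID is exactly 16 lowercase hexadecimal characters.

is_valid_pvid() {
    case "$1" in
        # reject empty strings or anything containing a non-hex character
        *[!0-9a-f]*|"") return 1 ;;
    esac
    # require exactly 16 characters
    [ "${#1}" -eq 16 ]
}

check_pvids() {
    rc=0
    for pvid in "$@"; do
        if is_valid_pvid "$pvid"; then
            echo "PVID $pvid is valid"
        else
            echo "PVID $pvid is invalid" >&2
            rc=1
        fi
    done
    return $rc
}
```

For example, `check_pvids 00c8b12d4f3a9e01 badpvid` reports the second argument as invalid on stderr and returns nonzero, so a wrapper can skip the cl_mp_disks invocation entirely.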
LOCAL(FILES) LDAPLDAPLDAP ServerLDAP ClientAdd an existing LDAP serverConfigure a new LDAP serverShow LDAP server configurationDelete the LDAP serverLDAP server(s)Bind DNBind passwordSuffix / Base DNServer port numberSSL Key pathSSL Key passwordHostname(s)LDAP Administrator DNLDAP Administrator passwordSchema typeSuffix / Base DNServer port numberSSL Key pathSSL Key passwordVersionConfigure LDAP clientShow LDAP client configurationDelete the LDAP clientLDAP server(s)Bind DNBind passwordAuthentication typeSuffix / Base DNServer port numberSSL Key pathSSL Key passwordLogin Authentication GrammarRegistryKeystore AccessSelect an Authentication and registry modeAffected NodesChange / Show Characteristics of a User in the LDAPChange / Show Characteristics of a Group in the LDAPAdd a User to the LDAPChange LDAP User AttributesRemove an LDAP UserList all LDAP UsersAdd a Group to the LDAPChange / Show LDAP Group AttributesRemove an LDAP GroupList all LDAP GroupsChange an LDAP User's PasswordEnable EFS?Enable EFS KeystoreChange/Show EFS Keystore characteristicsDelete EFS KeystoreEFS keystore modeVolume group for EFS KeystoreService IPMode should be supplied. Invalid Mode. Service IP should be given for Shared FS mode. VG should be given for Shared FS mode. LDAP is already configured in the cluster; use LDAP mode. Nodes are not configured in the cluster. EFS VG does not exist on Node %1$s EFS Keystore FS does not exist on Node %1$s EFS Keystore mount point creation failed on Node %1$s %1$s creation failed. %1$s Addition of VG to RG failed. NFSv4 enablement for EFS RG failed %1$s ODM update failed. Removing EFS_KeyStore RG. EFS Keystore is enabled. EFS Keystore is already configured. EFS Keystore is not configured. Cannot remove EFS keystore. Already in LDAP mode. Cluster not configured, exiting... ERROR: %1$s failed with return code %2$s, terminating... The PowerHA SystemMirror configuration has been changed - %1$s has been done. 
The configuration must be synchronized to make this change effective across the cluster. Run verification and Synchronization. Verification and Synchronization was not run after the last configuration change. Please run it and then retry. %1$s added successfully WARNING: %1$s Usage: %1$s -h -a -w -s -d -p -S -W Failed on node %1$s, cleaning all... RBAC configuration failed, cleaning all... Usage: %1$s -h -a -w -s -d -p -S -W -V -X -E TDS setup failed, cleaning all... Usage: %1$s -h -a -w -d -p -S -W ITDS client version %1$s is compatible, continuing configuration... Incompatible ITDS client version installed! Assuming the IBM TDS server was not specified; configuration continues... RSH service failed with an error on %1$s; continuing on the assumption that the server is already updated with the relevant schemas and data... Keys and certificates exist... LDAP configuration failed, cleaning... Restarting server on node %1$s, please wait... Machine hardware is 64-bit. LDAP Server requires 64-bit hardware. Kernel is 64-bit enabled. LDAP Server requires a 64-bit kernel. DB2 Version %1$s installed on this system, continuing configuration... ITDS server version %1$s is compatible, continuing configuration... Incompatible ITDS server version installed! ITDS client version %1$s is compatible, continuing configuration... Incompatible ITDS client version installed! Increasing %1$s Filesystem size... EFS Keystore is changed. RG %1$s is not online; please make sure cluster services are up and stable to continue with the EFS configuration. Creating EFS Keystore FileSystem... DB2 instance passwordEncryption seed for Key stash filesEFS admin passwordUsage: %1$s {-a 1 -A } {-a 2 [-v ] [-s ]} Usage: %1$s {-a 1 -A } {-a 2 -A -v -s } INFORMATION: Volume group and Service IP are invalid and will be ignored in LDAP mode. LDAP not configured for PowerHA SystemMirror. EFS ManagementRolesno,yesLDAP,Shared FilesystemWARNING: This will remove the ldapdb2 instance completely from the machine. It may take quite a bit of time... 
WARNING: Either the LDAP client daemon is not running or the server is not accessible on node %1$s. Check and correct it. WARNING: LDAP client is not able to contact the server on node %1$s. Check and correct it. WARNING: Node %1$s has a directory instance/server running; configuration can continue only if the instance name is not 'ldapdb2'. However, this is not recommended. WARNING: LDAP server is not accessible on node %1$s. Check and correct it. INFO: Running ldap client configuration on %1$s, please wait... INFO: Running ldap server configuration on %1$s, please wait... INFO: Running mksecldap on %1$s, it may take quite a bit of time... INFO: Running RBAC configuration, it may take quite a bit of time, please wait... Only a mode change from Shared Filesystem to LDAP is allowed. Cluster services should be up and stable. Password should be supplied. An LDAP client is not defined. DARE operation failed; please check the logs and correct the failures. INFO: Taking EFS keystore backup in %1$s. If setup fails midway, use this backup file to restore the EFS keys. In case of failure, perform 'Delete EFS' and then 'Enable EFS'. Once enabled, untar the backup file to restore the keys. Restoring EFS keys failed; try manually. Nothing to change. An LDAP server is not defined. Key file path should be in '*.kdb' format. EFS Keystore with LDAP mode is configured; make sure to delete that first through 'smit sysmirror'. An LDAP server exists. Encryption seed should be a minimum of 12 characters. The number of servers should be at least 2 and at most 6. ITDS client filesets were not installed. The specified server %1$s is not valid or is inaccessible. Keys should exist on all nodes. GSKIT filesets are not installed. DB2 is not installed on this machine. Another %1$s instance 'ldapdb2' exists; configuration cannot be continued. ITDS server filesets were not installed. Try updating the ODM manually using odmadd %1$s; if that does not succeed, clean the configuration and try again. 
expect.base is not installed on this machine. LDAP server configuration failed, cleaning... EFS Keystore is already configured for PowerHA SystemMirror. EFS Keystore is not configured for PowerHA SystemMirror. Cannot remove EFS keystore from cluster. 'Expect' script not created in %1$s. Resource group "%1$s" is already online. It must be offline before clean up can proceed. The 'Key Storage' option is not yet configured. Key Storage is configured to use a shared volume group. The resource group 'EFS_KeyStore' is present. EFS KeyStore volume group %1$s is present. The volume group '%1$s' has been specified to PowerHA SystemMirror to hold the EFS keys, but has not been defined to AIX. EFS Keystore volume group is not defined. NFS server defined for resource group EFS_KeyStore. NFS server is not defined for resource group EFS_KeyStore. NFS cross mounts set up for resource group EFS_KeyStore. NFS cross mounts not set up for resource group EFS_KeyStore. The resource group 'EFS_KeyStore' is not present. Key Storage is configured to use LDAP. EFS is configured for every stanza in /etc/security/group. EFS is not configured in /etc/security/group. EFS is not configured for the following stanzas in /etc/security/group. EFS is configured by default in /etc/security/user. EFS is not configured in /etc/security/user. EFS is present in the Config_Rules. EFS rule is not present in Config_Rules. The EFS enablement flag, %1$s, is present in %2$s. EFS is not enabled in %1$s. %2$s is missing. EFS support is fully configured. EFS file system key store is not supported in a linked cluster. The given volume group for the EFS KeyStore, "%1$s", is already in use in resource group "%2$s". Please choose an existing volume group that is not in any resource group. The given volume group for the EFS KeyStore, "%1$s", is currently active on node "%2$s". Please choose an existing volume group that is not currently active. 
The given volume group for the EFS Keystore, "%1$s", does not exist in the cluster. Please either create the volume group, and re-enter the command, or choose an existing volume group. The given volume group for the EFS KeyStore, "%1$s", is not known on nodes %2$s. Please re-enter the command with a volume group that is available cluster wide. Creating the EFS KeyStore FileSystem, "%1$s". Creation of the EFS KeyStore FileSystem, "%1$s" failed. Creating the NFSv4 Stable Storage FileSystem, "%1$s". Creation of the NFSv4 Stable Storage FileSystem, "%1$s" failed. The EFS KeyStore Filesystem, "%1$s", does not exist on node(s) %2$s. The NFSv4 Stable Store Filesystem, "%1$s", does not exist on node(s) %2$s. Creation of the resource group "%1$s" failed. Reserved name "%1$s" is already used by an existing resource group, and cannot be used as the EFS Filesystem KeyStore. That resource group must be removed before the EFS Filesystem Keystore can be configured. Resource group "%1$s" is already online. It must be offline before the EFS FileSystem KeyStore can be configured. Addition of volume group "%1$s" to resource group "%2$s" failed. Enabling NFSv4 cross mounts for resource group "%1$s" failed. Failed to update the configuration. A Verification and Synchronization is required. Unable to access shared volume group "%1$s". File system "%1$s" not found in "%2$s". Unable to access shared file system "%1$s" in "%2$s". Enabling EFS support in AIX. Please wait... Unable to enable EFS support in AIX. Unable to release shared file system "%1$s" in "%2$s". Unable to release shared volume group "%1$s". Enabling EFS support in AIX on node "%1$s". Please wait... Copy of efsenable expect script to node "%1$s" failed. The 'expect' package is not present on node(s) "%1$s". This package must be present on all cluster nodes in order to configure EFS support. It can be downloaded from ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/tcltk Unable to enable EFS support in AIX on node "%1$s". 
The mode (the "-m" parameter) must be supplied. The mode must be "%1$s" for LDAP KeyStore or "%2$s" for filesystem KeyStore. The initial password (the "-A" parameter) must be supplied. A Service IP address is required for shared filesystem KeyStore. A volume group is required for shared filesystem KeyStore. The given mode, "%1$s", is not valid. The mode must be "%2$s" for LDAP KeyStore or "%3$s" for filesystem KeyStore. Nodes are not yet configured to PowerHA SystemMirror. The cluster must be configured, and the definition synchronized, before EFS can be enabled. Configuration of EFS KeyStore was not successful. See %1$s for more information. The resource group "%1$s" must be online prior to creating any EFS file systems. Failed to perform authinit operation for "%1$s". Failed to add "%1$s" authentication for "%2$s". WARNING: Failed to restore "%1$s" state after hdcryptmgr operations. Authentication methods are not configured for logical volume "%1$s". Failed to delete authentication method name(s) "%1$s" for logical volume "%2$s". 
Not in a Fence GroupRead/WriteRead/OnlyNo AccessFail AccessNo ReserveSingle PathExclusive Persistent ReserveReserveReleaseAutomatically Reacquire ReserveDo Not Allow BreakingLocal access onlyShared but not Concurrent accessConcurrent accessMember of rootvgMember of the RepositoryDisk State: %d Disk Type: %0#4x (%1$s) UnspecifiedReserve mode: %d (%1$s) Fence height: %d (%1$s) Disk device major/minor number: %d, %d Disk name: %1$s Disk UUID: %0.16llx %0.16llx Fence Group UUID: %0.16llx %0.16llx - %1$s Fence Group UUID: %0.16llx %0.16llx Not in a Fence Group known to PowerHA SystemMirrorFence Group %1$scl_vg_fence_init[%1$d]: Fence Group %2$s not created because disk %3$s does not have the PCM attribute cl_set_vg_fence_height[%1$d]: Volume group %2$s is not associated with a Fence Group known to PowerHA SystemMirror User System Cluster_RepositoryUserSystemCluster_RepositoryFailed to find a Resource group for Mirror group with name %1$s Unable to get the PPRC active path for Mirror group: %1$s Failed to find affiliated nodes for the mirror group: %1$s Invalid flagFailed to read ODM for the Mirror group: %1$s Failed to find storage system id corresponding to Mirror group: %1$s Failed to find the corresponding site name of the Mirror group Unable to refresh Mirror group %1$s Failed to find Mirror Group ID from odm: %1$s Failed to perform swap for Mirror group: %1$s Requested capability id=%1$d is not present. 
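The fence heights listed above (Read/Write, Read/Only, No Access, Fail Access) are the values reported by the "Fence height: %d (%1$s)" message and manipulated through cl_set_vg_fence_height. A small helper can map those descriptive labels to short codes; the rw/ro/na/ff abbreviations and the invocation shown in the note below are assumptions for illustration, not taken from this catalog.

```shell
# Hypothetical mapping from the fence-height labels in this catalog
# (Read/Write, Read/Only, No Access, Fail Access) to short codes.
# The rw/ro/na/ff codes are assumed conventions, not catalog content.

fence_height_code() {
    case "$1" in
        "Read/Write")  echo rw ;;
        "Read/Only")   echo ro ;;
        "No Access")   echo na ;;
        "Fail Access") echo ff ;;
        *)             echo "unknown fence height: $1" >&2; return 1 ;;
    esac
}
```

A hypothetical invocation might then look like `cl_set_vg_fence_height sharedvg "$(fence_height_code 'Read/Only')"`; consult the product documentation for the command's actual flags.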
Site capability is defined and globally available Site capability is defined but not globally available IPv6 capability is defined and globally available IPv6 capability is defined but not globally available Unicast capability is defined and globally available Unicast capability is defined but not globally available Hostname change capability is defined and globally available Hostname change capability is defined but not globally available Unknown capability %1$d is defined and globally available Unknown capability %1$d is defined but not globally available cl_get_capabilities[%1$d]: cl_get_capability error: %2$s cl_get_capabilities[%1$d]: malloc error: %2$s There are %1$d capabilities CAA Cluster services are not active CAA Cluster services are not active on this node because it has been STOPPED CAA Cluster services are active There are no capabilities to get CAA Cluster Capabilities Usage: cl_get_capabilities [-v] [-n] [-i n] Capability %1$d Unknown capability id: %d version: %d flag: %x Automatic Repository Replacement capability is defined and globally available Automatic Repository Replacement capability is defined but not globally available CAA Network Monitor capability is defined and globally available CAA Network Monitor capability is defined but not globally available Sub Cluster Split Merge capability is defined and globally available Sub Cluster Split Merge capability is defined but not globally available Kernel extension is not loaded. Kernel extension is loaded. Local node is a member of a cluster CAA DR capability is defined and globally available CAA DR capability is defined but not globally available CAA 4KDISK capability is defined and globally available CAA 4KDISK capability is defined but not globally available CAA COMDISK capability is defined and globally available CAA COMDISK capability is defined but not globally available %1$s[%2$s]: Sites are not defined. All nodes will be treated as being part of the local site. 
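The capability messages above follow two fixed patterns: "... capability is defined and globally available" and "... capability is defined but not globally available". A short filter can summarize a cl_get_capabilities report by counting each kind; this is a sketch, with the sample strings in the test reused verbatim from this catalog and the helper name hypothetical.

```shell
# Sketch: summarize cl_get_capabilities-style output by counting how many
# capabilities are globally available versus defined but local-only.
# Reads report text on stdin and prints a one-line summary.

summarize_capabilities() {
    awk '
        /capability is defined and globally available/     { avail++ }
        /capability is defined but not globally available/ { local_only++ }
        END { printf "globally available: %d, not globally available: %d\n", avail, local_only }
    '
}
```

Piping a report through `summarize_capabilities` yields a single line such as "globally available: 2, not globally available: 1", which is easier to scan than the full per-capability listing.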
%1$s[%2$s]: Unable to determine local and remote site membership. Consistency group information will not be propagated. %1$s[%2$s]: No Composite Groups were given or found in ODM %1$s[%2$s]: The %3$s command was not found on node %4$s. Consistency group information will not be propagated. %1$s[%2$s]: Unable to extract composite group information for nodes at the local site. Composite group information will not be propagated. %1$s[%2$s]: Unable to extract composite group information for nodes at the remote site. Composite group information will not be propagated. %1$s[%2$s]: Propagation of the composite group information to other nodes failed with return code %3$d %1$s[%2$s]: WARNING: symcg returned rc=%3$d Importing the composite group information from %4$s.local failed on node %5$s %1$s[%2$s]: WARNING: symcg returned rc=%3$d Importing the composite group information from %4$s.remote failed on node %5$s %1$s[%2$s]: Unable to determine local and remote site membership. Device group information will not be propagated. %1$s[%2$s]: No Device Groups were given or found in ODM %1$s[%2$s]: The EMC fileset SYMCLI.SYMCLI.rte was not found on node %3$s. %1$s[%2$s]: Unable to extract device group information for nodes at the local site. Device group information will not be propagated. %1$s[%2$s]: Unable to extract device group information for nodes at the remote site. Device group information will not be propagated. %1$s[%2$s]: Propagation of the device group information to other nodes failed with return code %3$d %1$s[%2$s]: WARNING: symdg returned rc=%3$d Importing the device group information from %4$s.local failed on node %5$s %1$s[%2$s]: WARNING: symdg returned rc=%3$d Importing the device group information from %4$s.remote failed on node %5$s %1$s: ERROR: could not assign a PVID to disk "%2$s". ERROR: "%1$s" cannot be used due to the current value of environment variable "CL_PVID_ASSIGNMENT" ("%2$s").